Add Batch ebe97fb2-d9f3-41ce-b216-0eb7c924c3ff
This view is limited to 50 files because it contains too many changes. See raw diff.
- .gitattributes +64 -0
- 2201.12xxx/2201.12987/e39d9397-7e7d-4466-995c-a7a5cf1cbcda_content_list.json +0 -0
- 2201.12xxx/2201.12987/e39d9397-7e7d-4466-995c-a7a5cf1cbcda_model.json +0 -0
- 2201.12xxx/2201.12987/e39d9397-7e7d-4466-995c-a7a5cf1cbcda_origin.pdf +3 -0
- 2201.12xxx/2201.12987/full.md +554 -0
- 2201.12xxx/2201.12987/images.zip +3 -0
- 2201.12xxx/2201.12987/layout.json +0 -0
- 2201.13xxx/2201.13078/10fd0fd5-66a6-4d87-8655-e3c0fe766d3f_content_list.json +0 -0
- 2201.13xxx/2201.13078/10fd0fd5-66a6-4d87-8655-e3c0fe766d3f_model.json +0 -0
- 2201.13xxx/2201.13078/10fd0fd5-66a6-4d87-8655-e3c0fe766d3f_origin.pdf +3 -0
- 2201.13xxx/2201.13078/full.md +610 -0
- 2201.13xxx/2201.13078/images.zip +3 -0
- 2201.13xxx/2201.13078/layout.json +0 -0
- 2201.13xxx/2201.13100/e7d5763b-dffe-40a2-928b-5f73672ed49e_content_list.json +0 -0
- 2201.13xxx/2201.13100/e7d5763b-dffe-40a2-928b-5f73672ed49e_model.json +0 -0
- 2201.13xxx/2201.13100/e7d5763b-dffe-40a2-928b-5f73672ed49e_origin.pdf +3 -0
- 2201.13xxx/2201.13100/full.md +491 -0
- 2201.13xxx/2201.13100/images.zip +3 -0
- 2201.13xxx/2201.13100/layout.json +0 -0
- 2201.13xxx/2201.13117/ef37afad-eb01-4970-bb57-94e62038b1d4_content_list.json +0 -0
- 2201.13xxx/2201.13117/ef37afad-eb01-4970-bb57-94e62038b1d4_model.json +0 -0
- 2201.13xxx/2201.13117/ef37afad-eb01-4970-bb57-94e62038b1d4_origin.pdf +3 -0
- 2201.13xxx/2201.13117/full.md +814 -0
- 2201.13xxx/2201.13117/images.zip +3 -0
- 2201.13xxx/2201.13117/layout.json +0 -0
- 2201.13xxx/2201.13125/22a9b67e-4248-4898-877b-81213525c31c_content_list.json +1346 -0
- 2201.13xxx/2201.13125/22a9b67e-4248-4898-877b-81213525c31c_model.json +1947 -0
- 2201.13xxx/2201.13125/22a9b67e-4248-4898-877b-81213525c31c_origin.pdf +3 -0
- 2201.13xxx/2201.13125/full.md +267 -0
- 2201.13xxx/2201.13125/images.zip +3 -0
- 2201.13xxx/2201.13125/layout.json +0 -0
- 2201.13xxx/2201.13143/abce0a26-20db-491d-836f-c008291aceaf_content_list.json +0 -0
- 2201.13xxx/2201.13143/abce0a26-20db-491d-836f-c008291aceaf_model.json +0 -0
- 2201.13xxx/2201.13143/abce0a26-20db-491d-836f-c008291aceaf_origin.pdf +3 -0
- 2201.13xxx/2201.13143/full.md +373 -0
- 2201.13xxx/2201.13143/images.zip +3 -0
- 2201.13xxx/2201.13143/layout.json +0 -0
- 2201.13xxx/2201.13148/2c84f44d-f098-4430-8e8c-b79d28977a5f_content_list.json +934 -0
- 2201.13xxx/2201.13148/2c84f44d-f098-4430-8e8c-b79d28977a5f_model.json +1156 -0
- 2201.13xxx/2201.13148/2c84f44d-f098-4430-8e8c-b79d28977a5f_origin.pdf +3 -0
- 2201.13xxx/2201.13148/full.md +170 -0
- 2201.13xxx/2201.13148/images.zip +3 -0
- 2201.13xxx/2201.13148/layout.json +0 -0
- 2201.13xxx/2201.13178/573f7739-27bd-47fa-a5f9-705c685effde_content_list.json +0 -0
- 2201.13xxx/2201.13178/573f7739-27bd-47fa-a5f9-705c685effde_model.json +0 -0
- 2201.13xxx/2201.13178/573f7739-27bd-47fa-a5f9-705c685effde_origin.pdf +3 -0
- 2201.13xxx/2201.13178/full.md +613 -0
- 2201.13xxx/2201.13178/images.zip +3 -0
- 2201.13xxx/2201.13178/layout.json +0 -0
- 2201.13xxx/2201.13182/dfa19251-da5b-41a6-8f0e-83972fee5c18_content_list.json +0 -0
.gitattributes
CHANGED

@@ -8748,3 +8748,67 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 2202.01xxx/2202.01288/840fb690-8481-4aae-8a04-6bc06eb14007_origin.pdf filter=lfs diff=lfs merge=lfs -text
 2202.03xxx/2202.03274/334b8934-ae0b-49eb-b15e-f2ac967651fc_origin.pdf filter=lfs diff=lfs merge=lfs -text
 2203.03xxx/2203.03540/ad3557be-9ad0-4c54-9213-fcc03072376e_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2201.12xxx/2201.12987/e39d9397-7e7d-4466-995c-a7a5cf1cbcda_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2201.13xxx/2201.13078/10fd0fd5-66a6-4d87-8655-e3c0fe766d3f_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2201.13xxx/2201.13100/e7d5763b-dffe-40a2-928b-5f73672ed49e_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2201.13xxx/2201.13117/ef37afad-eb01-4970-bb57-94e62038b1d4_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2201.13xxx/2201.13125/22a9b67e-4248-4898-877b-81213525c31c_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2201.13xxx/2201.13143/abce0a26-20db-491d-836f-c008291aceaf_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2201.13xxx/2201.13148/2c84f44d-f098-4430-8e8c-b79d28977a5f_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2201.13xxx/2201.13178/573f7739-27bd-47fa-a5f9-705c685effde_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2201.13xxx/2201.13182/dfa19251-da5b-41a6-8f0e-83972fee5c18_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2201.13xxx/2201.13256/853b530f-e2d4-4a27-a82a-61035a81dc5d_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2201.13xxx/2201.13259/53734a9e-747e-4cf8-8bc1-679b6b2b4fb8_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2201.13xxx/2201.13291/ecc8573e-ec3e-49ff-b85b-92e60258eef3_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2201.13xxx/2201.13320/d7bc00c8-8aa0-4526-8c89-18ae3f03f8ba_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2201.13xxx/2201.13323/b8b4ebac-e105-43ed-9a6e-e6a280f9fd49_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2201.13xxx/2201.13348/a24faa66-2256-4058-ae9b-e11699505751_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2201.13xxx/2201.13360/5e3715fa-3e84-404d-9b21-88cfba661037_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2201.13xxx/2201.13396/d161a330-dc99-44db-a3f4-4e5e357814e0_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2201.13xxx/2201.13409/7d4c0c1f-1b98-46ce-9f73-6bf14a233c1b_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2201.13xxx/2201.13425/8fe43f7a-09a3-4702-9362-19042e537936_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2201.13xxx/2201.13433/bb9cf04b-4540-4ed3-83c9-95e76363ae3c_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2202.00xxx/2202.00063/1d3fe777-0ffb-493e-a603-622021961cbd_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2202.00xxx/2202.00089/f2b74ccf-cd3d-471b-a22f-6b3cf0a096da_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2202.00xxx/2202.00120/a57c9dbd-610a-4e19-b0a7-f77afdc22bfa_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2202.00xxx/2202.00132/0f45a21f-f132-4ab5-b7b5-fd89b91e84d5_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2202.00xxx/2202.00155/91614c40-d00a-47fa-a2ba-57463a3fa787_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2202.00xxx/2202.00161/45973387-2743-4e75-83c2-26c8b4687dd0_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2202.00xxx/2202.00164/0d9b961c-d0f9-4eed-8eb0-88193f50ee26_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2202.00xxx/2202.00181/d2818ade-894a-4be2-ae26-69262861b602_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2202.00xxx/2202.00217/cfead4d5-2a19-43ac-bc7a-b97ac44fe16d_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2202.00xxx/2202.00273/6e001973-bfee-48f7-8e1e-fb1d800e7eff_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2202.00xxx/2202.00275/dea0d28c-9804-451f-854f-1c7ad0daaaf0_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2202.00xxx/2202.00379/0efa9a51-bf8b-4ed1-a71b-60cbfd5195ec_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2202.00xxx/2202.00433/8b67e1dc-f4d1-4704-bf1b-4b5c363f1fe4_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2202.00xxx/2202.00449/2332e109-7c18-4f13-b5d9-63f4bdc8b1d1_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2202.00xxx/2202.00455/39445e4f-98f8-4c8e-aa4b-bd57c3887af7_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2202.00xxx/2202.00529/19863caa-89ae-45e5-913d-6bf2324a7170_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2202.00xxx/2202.00580/0c51a83a-2e79-496a-b26c-9329ea954162_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2202.00xxx/2202.00622/37acfddc-621a-4b6a-acb9-e219c0753001_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2202.00xxx/2202.00645/2b604d0d-1bc4-4e7f-9e06-3fbc964344dd_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2202.00xxx/2202.00661/a6b55596-7872-4344-8419-b916cc013f6d_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2202.00xxx/2202.00665/23ceab7c-575c-4923-a207-7b8a879174db_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2202.00xxx/2202.00666/b998b3c0-248b-47b1-8445-ee461750b584_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2202.00xxx/2202.00667/b219ea4d-fb1b-4894-8cc1-b8188e542e86_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2202.00xxx/2202.00728/0c7e00ab-6c92-40e3-b4b2-da6be791c41f_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2202.00xxx/2202.00732/d49391fe-5e59-45e5-b600-90de41713c1e_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2202.00xxx/2202.00734/4d450da5-07ab-45d8-9692-3ccd04d0339d_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2202.00xxx/2202.00758/2836bca2-b79d-4898-8540-b36433e6ec2f_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2202.00xxx/2202.00787/9b890fb5-5703-4d9b-b272-ff7375b4298f_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2202.00xxx/2202.00807/588a000e-cf19-4fc3-847b-ce17e5cf6a45_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2202.00xxx/2202.00817/e208a4fd-92f9-43da-a026-392124b33f74_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2202.00xxx/2202.00821/d6e27894-a72a-46bb-827b-01ad39a60d8f_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2202.00xxx/2202.00842/d1a8dba7-a7f3-4726-b11f-5de1d93a8e46_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2202.00xxx/2202.00868/d39b6d52-10ce-4fa8-8215-e8575eebf7d1_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2202.00xxx/2202.00874/8a2c056b-b342-4f05-b474-a34e62936d8d_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2202.00xxx/2202.00881/ffd91f43-1462-48e4-a3eb-ec0a16940287_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2202.00xxx/2202.00914/8fa6d17c-493b-47fa-839e-a200d4f5dbc3_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2202.00xxx/2202.00961/5571755d-7db3-4825-a884-3d825e75f686_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2202.00xxx/2202.00972/80a992e8-14f3-4330-87f1-8aacefd9b67d_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2202.00xxx/2202.00973/655368e1-df77-4fe1-8f7e-ad5530fa8641_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2202.01xxx/2202.01079/8c8ab227-e1b2-4bcd-82e9-a035f41f61c2_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2202.01xxx/2202.01337/113fdb22-a21a-46c3-86a7-27255626c01e_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2202.01xxx/2202.01338/f89ef104-8da1-4b4d-9ab7-ebf9b1df6d34_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2202.01xxx/2202.01889/53552a2e-4228-423b-beef-977c874d5658_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2202.03xxx/2202.03133/720c3a3c-62ba-4d7e-a0bd-adc5fa5f08a4_origin.pdf filter=lfs diff=lfs merge=lfs -text
2201.12xxx/2201.12987/e39d9397-7e7d-4466-995c-a7a5cf1cbcda_content_list.json
ADDED
The diff for this file is too large to render. See raw diff.

2201.12xxx/2201.12987/e39d9397-7e7d-4466-995c-a7a5cf1cbcda_model.json
ADDED
The diff for this file is too large to render. See raw diff.

2201.12xxx/2201.12987/e39d9397-7e7d-4466-995c-a7a5cf1cbcda_origin.pdf
ADDED

@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:de148b899279bc08e9f7ae5655f5ccab5c86cfcc573ed93df82dbadcab654177
+size 1895234
2201.12xxx/2201.12987/full.md
ADDED

@@ -0,0 +1,554 @@
# Interpretable and Generalizable Graph Learning via Stochastic Attention Mechanism

Siqi Miao<sup>1</sup> Miaoyuan Liu<sup>2</sup> Pan Li<sup>1</sup>

# Abstract

Interpretable graph learning is needed, as many scientific applications depend on learning models to collect insights from graph-structured data. Previous works mostly focused on using post-hoc approaches to interpret pre-trained models (graph neural networks in particular). They argue against inherently interpretable models because the good interpretability of these models often comes at the cost of their prediction accuracy. However, those post-hoc methods often fail to provide stable interpretation and may extract features that are spuriously correlated with the task. In this work, we address these issues by proposing Graph Stochastic Attention (GSAT). Derived from the information bottleneck principle, GSAT injects stochasticity into the attention weights to block the information from task-irrelevant graph components while learning stochasticity-reduced attention to select task-relevant subgraphs for interpretation. The selected subgraphs provably do not contain patterns that are spuriously correlated with the task under some assumptions. Extensive experiments on eight datasets show that GSAT outperforms the state-of-the-art methods by up to $20\% \uparrow$ in interpretation AUC and $5\% \uparrow$ in prediction accuracy. Our code is available at https://github.com/Graph-COM/GSAT.

# 1. Introduction
Graph learning models are widely used in science, such as physics (Bapst et al., 2020) and biochemistry (Jumper et al., 2021). In many such disciplines, building more accurate predictive models is typically not the only goal. It is often more crucial for scientists to discover the patterns in the data that induce certain predictions (Cranmer et al., 2020). For example, identifying the functional groups in a molecule that yield certain of its properties may provide insights to guide further experiments (Wencel-Delord & Glorius, 2013).

$^{1}$ Department of Computer Science, Purdue University, West Lafayette, USA $^{2}$ Department of Physics and Astronomy, Purdue University, West Lafayette, USA. Correspondence to: Siqi Miao <miao61@purdue.edu>, Pan Li <panli@purdue.edu>.

Proceedings of the $39^{th}$ International Conference on Machine Learning, Baltimore, Maryland, USA, PMLR 162, 2022. Copyright 2022 by the author(s).

Recently, graph neural networks (GNNs) have become almost the de facto graph learning models due to their great expressive power (Kipf & Welling, 2017; Xu et al., 2019). However, their expressivity is often built upon a highly nonlinear entanglement of irregular graph features, so it is often quite challenging to figure out which patterns in the data GNNs use to make predictions.
Many works have recently been proposed to extract the data patterns critical for prediction by interpreting GNNs in post-hoc ways (Ying et al., 2019; Yuan et al., 2020a; Vu & Thai, 2020; Luo et al., 2020; Schlichtkrull et al., 2021; Yuan et al., 2021; Lin et al., 2021; Henderson et al., 2021). They work on a pre-trained model and propose different types of combinatorial search methods to detect the subgraphs of the input data that affect the model predictions the most.

In contrast to the above post-hoc methods, inherently interpretable models have rarely been investigated for graph learning tasks. There are two main concerns regarding such models. First, the prediction accuracy and inherent interpretability of a model often form a trade-off (Du et al., 2019). Practitioners may not allow sacrificing prediction accuracy for better interpretability. Second, the attention mechanism, a widely-used technique to provide inherent interpretability, often cannot provide faithful interpretation (Lipton, 2018). The rationale of the attention mechanism is to learn weights for different features during model training, and the rank of the learned weights can be interpreted as the importance of certain features (Bahdanau et al., 2015; Xu et al., 2015). However, recent extensive evaluations in NLP tasks (Serrano & Smith, 2019; Jain & Wallace, 2019; Mohankumar et al., 2020) have shown that the attention may not weigh the features that dominate the model output more than other features. In particular, for graph learning tasks, the widely-used graph attention models (Veličković et al., 2018; Li et al., 2016) seem unable to provide any reliable interpretation of the data (Ying et al., 2019; Yu et al., 2021).

Figure 1. The architecture of GSAT. $g_{\phi}$ encodes the input graph $G$ and learns stochastic attention $\alpha$ (from Bernoulli distributions) that randomly drops edges to obtain a perturbed graph $G_{S}$. $f_{\theta}$ encodes $G_{S}$ to make predictions. GSAT does not constrain the size of $G_{S}$ but injects stochasticity to constrain information. The subgraph of $G_{S}$ with learned reduced stochasticity (edges with $p_{e} \rightarrow 1$) provides interpretation. GSAT is a unified model, adopting just one GNN for both $g_{\phi}$ and $f_{\theta}$. GSAT can either be trained from scratch or start from a pre-trained GNN predictor $f_{\theta}$.

Along another line of research, invariant learning (Pearl et al., 2016; Arjovsky et al., 2019; Chang et al., 2020; Krueger et al., 2021) has been proposed to provide inherent interpretability and better generalizability. These works argue that models naively trained over biased data risk capturing spurious correlations between the input environment features and the labels, and thus suffer from severe generalization issues. So, they propose to train models that align with the causal relations between the signal features and the labels. However, such training approaches to match causal relations typically have high computational complexity.
In this work, we address the above concerns by proposing Graph Stochastic Attention (GSAT), a novel attention mechanism to build inherently interpretable and well generalizable GNNs. The rationale of GSAT is rooted in the notion of the information bottleneck (IB) (Tishby et al., 2000; Tishby & Zaslavsky, 2015). We formulate the attention as an IB by injecting stochasticity into the attention to constrain the information flow from the input graph to the prediction (Shannon, 1948). Such stochasticity over the label-irrelevant graph components is kept during training, while that over the label-relevant ones can automatically get reduced. This difference eventually provides model interpretation. By penalizing the amount of information from the input data, GSAT is also expected to be more generalizable.

Our study achieves the following observations and contributions. First, the IB principle frees GSAT from any potentially biased assumptions adopted in previous methods, such as size or connectivity constraints on the detected graph patterns. Even when those assumptions are satisfied, GSAT still works the best without using them, while when they are not satisfied, GSAT achieves significantly better interpretation. See the sampled interpretation result visualizations in Fig. 2 and Fig. 3. Second, from the perspective of IB, all post-hoc interpretation methods are suboptimal. They essentially optimize a model without any information control and then perform a single-step projection to an information-controlled space, which makes the final interpretation performance sensitive to the pre-trained models. Third, by reducing the information from the input graph, GSAT can provably remove spurious correlations in the training data under certain assumptions and achieve better generalization. Fourth, if a pre-trained model is provided, GSAT may further improve both its interpretation and prediction accuracy.

Figure 2. Visualizing attention (normalized to [0, 1]) of GSAT (second row) vs. masks of GraphMask (Schlichtkrull et al., 2021) (third row) on MNIST-75sp. The first row shows the ground truth. Different digit samples contain interpretable subgraphs of different sizes, while GSAT is not sensitive to such varied sizes.

Figure 3. Visualizing attention (normalized to [0, 1]) of GSAT (first row) and masks of GraphMask (Schlichtkrull et al., 2021) (second row) on a motif example, where graphs with three house motifs and graphs with two house motifs represent two classes. Samples may contain disconnected interpretable subgraphs, while GSAT detects them accurately. More details can be found in Appendix C.4.

We evaluate GSAT in terms of both interpretability and label-prediction performance. Experiments over eight datasets show that GSAT outperforms the state-of-the-art (SOTA) methods by up to $20\% \uparrow$ in interpretation AUC and $5\% \uparrow$ in prediction accuracy. Notably, GSAT achieves the SOTA performance on molhiv on OGB (Hu et al., 2020) among the models that do not use manually-designed expert features.
# 2. Preliminaries

As preliminaries, we define a few notations and concepts.

Graph. An attributed graph can be denoted as $G = (A, X)$, where $A$ is the adjacency matrix and $X$ includes node attributes. Let $V$ and $E$ denote the node set and the edge set, respectively. We focus on graph-level tasks: a training set of graphs with their labels $(G^{(i)}, Y^{(i)})$, $i = 1, \dots, n$, is given, where each sample $(G^{(i)}, Y^{(i)})$ is assumed to be IID sampled from some unknown distribution $\mathbb{P}_{\mathcal{Y} \times \mathcal{G}} = \mathbb{P}_{\mathcal{Y} | \mathcal{G}} \mathbb{P}_{\mathcal{G}}$.

Label-relevant Subgraph. A label-relevant subgraph refers to the subgraph $G_{S}$ of the input graph $G$ that most strongly indicates the label $Y$. For example, to determine the solubility of a molecule, the hydroxy group -OH is a positive-label-relevant subgraph, since if it is present, the molecule is often soluble in water. Finding label-relevant subgraphs is a common goal of interpretable graph learning.
Attention Mechanism. The attention mechanism has been widely used in interpretable neural networks for NLP and CV tasks (Bahdanau et al., 2015; Xu et al., 2015; Vaswani et al., 2017). However, GNNs with attention (Veličković et al., 2018) often generate low-fidelity attention weights. As such a model learns multiple weights for every edge, it is far from trivial to combine those weights with the irregular graph structure to perform label-relevant graph feature selection.

There are two types of attention models: one normalizes the attention weights to sum to one (Bahdanau et al., 2015), while the other learns weights in $[0,1]$ without normalization (Xu et al., 2015). As the counterparts in GNN models, GAT adopts the normalized one (Veličković et al., 2018) while GGNN adopts the unnormalized one (Li et al., 2016). Our method belongs to the second category.

Graph Neural Network. GNNs are neural network models that encode graph-structured data into node representations or graph representations. They initialize each node's feature representation with its attributes, $h_v^{(0)} = X_v$, and then gradually update it by aggregating representations from its neighbors, i.e., $h_v^{(l+1)} \gets q(h_v^{(l)}, \{h_u^{(l)} \mid u : (u, v) \in E\})$, where $q(\cdot)$ denotes a function implemented by NNs (Gilmer et al., 2017). Graph representations are often obtained via an aggregation (sum/mean) of node representations.
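To make the update rule concrete, here is a minimal sketch of one message-passing layer with a mean aggregator standing in for $q(\cdot)$; the function and variable names are illustrative and not tied to any particular GNN library.

```python
import torch

def message_passing_layer(h, edge_index, W_self, W_neigh):
    # h: [num_nodes, d] node representations h_v^{(l)}
    # edge_index: [2, num_edges], each column (u, v) is a directed edge u -> v
    src, dst = edge_index
    agg = torch.zeros_like(h)
    agg.index_add_(0, dst, h[src])                  # sum over in-neighbors
    deg = torch.zeros(h.size(0)).index_add_(0, dst, torch.ones(src.size(0)))
    agg = agg / deg.clamp(min=1).unsqueeze(-1)      # mean aggregation for q(.)
    return torch.relu(h @ W_self + agg @ W_neigh)   # h_v^{(l+1)}
```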
Learning to Explain (L2X). L2X (Chen et al., 2018) studies the feature selection problem in a regular feature space and proposes a mutual information (MI) maximization rule to select a fixed number of features. Specifically, let $I(a; b) \triangleq \sum_{a, b} \mathbb{P}(a, b) \log \frac{\mathbb{P}(a, b)}{\mathbb{P}(a) \mathbb{P}(b)}$ denote the MI between two random variables $a$ and $b$. Large MI indicates high correlation between the two random variables. Hence, with input features $X \in \mathbb{R}^F$, L2X searches for a $k$-sized set of indices $S \subseteq \{1, 2, \dots, F\}$, where $k = |S| < F$, such that the features in the subspace indexed by $S$ (denoted by $X_S$) maximize the mutual information with the labels $Y$, i.e.,

$$
\max_{S \subseteq \{1, 2, \dots, F\}} I(X_S; Y), \quad \text{s.t. } |S| \leq k. \tag{1}
$$

Our model is inspired by L2X. However, as graph features and their interpretable counterparts live in an irregular space without a fixed dimension, directly applying L2X may achieve subpar performance in graph learning tasks. We propose to use an information constraint instead in Sec. 3.1.

Later, we will also use the entropy defined as $H(a) \triangleq -\sum_{a} \mathbb{P}(a) \log \mathbb{P}(a)$ and the KL-divergence defined as $\mathrm{KL}(\mathbb{P}(a) \,||\, \mathbb{Q}(a)) \triangleq \sum_{a} \mathbb{P}(a) \log \frac{\mathbb{P}(a)}{\mathbb{Q}(a)}$ (Cover, 1999).
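As a quick numeric illustration of these definitions (our own example, not from the paper): the entropy of a fair coin is $\log 2$ nats, and the KL-divergence between two Bernoulli distributions is positive unless they coincide.

```python
import math

H = -sum(p * math.log(p) for p in (0.5, 0.5))                       # ~0.693 nats
KL = sum(p * math.log(p / q) for p, q in ((0.9, 0.5), (0.1, 0.5)))  # ~0.368
print(H, KL)
```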
Figure 4. Post-hoc methods just perform a one-step projection to the information-constrained space, which is always suboptimal, and the interpretation performance is sensitive to the pre-trained model.

# 3. Graph Learning Interpretation via GIB

In this section, we will first propose the GIB-based objective for interpretable graph learning and then point out the issues of post-hoc GNN interpretation methods.

# 3.1. GIB-based Objective for Interpretation
Finding label-relevant subgraphs in graph learning tasks has unique challenges. Due to the irregularity of graph structures, graph learning models often have to deal with input graphs of various sizes. The critical subgraph patterns may also be of different sizes and highly irregular. Consider the example of molecular solubility again: although the functional groups for positive solubility such as -OH and $-\mathrm{NH}_2$ are of similar sizes, those for negative solubility range from small groups (e.g., -Cl) to extremely large ones (e.g., $-\mathrm{C}_{10}\mathrm{H}_9$). Moreover, a molecule may contain multiple functional groups scattered across the graph that determine its properties. Given these observations, it is not proper to simply mimic the cardinality constraint used for a regular-dimension space (Eq. (1)) and select subgraphs of certain sizes, potentially with a connectivity constraint, as done in (Ying et al., 2019). Inspired by the graph information bottleneck (GIB) principle (Wu et al., 2020; Yu et al., 2021), we propose to use an information constraint instead to select label-relevant subgraphs, i.e., solving

$$
\max_{G_S} I(G_S; Y), \quad \text{s.t. } I(G_S; G) \leq \gamma, \; G_S \in \mathbb{G}_{\mathrm{sub}}(G) \tag{2}
$$

where $\mathbb{G}_{\mathrm{sub}}(G)$ denotes the set of subgraphs of $G$. Note that GIB does not impose any potentially biased constraints such as the size or connectivity of the selected subgraphs. Instead, GIB uses the information constraint $I(G_S; G) \leq \gamma$ to select a $G_S$ that inherits only the most indicative information from $G$ to predict the label $Y$ by maximizing $I(G_S; Y)$. Thus, $G_S$ provides model interpretation.

Yu et al. (2021) also considered using GIB to select subgraphs. However, we adopt a fundamentally different mechanism; we provide a detailed comparison in Sec. 4.4.
# 3.2. Issues of Post-hoc GNN Interpretation Methods

Almost all previous GNN interpretation methods are post-hoc, such as GNNExplainer (Ying et al., 2019), PGExplainer (Luo et al., 2020) and GraphMask (Schlichtkrull et al., 2021).

(a) Ba-2Motifs

(b) Mutag

Figure 5. Issues of post-hoc interpretation methods. All methods are trained with 10 random seeds; post-hoc methods are also provided with models pre-trained with different seeds. Interpretation performance and the training losses of Eq. (2) for GSAT and Eq. (4) for the others are shown. We guarantee that all the pre-trained models are well-trained in their pre-training stage (Acc. $\sim 100\%$ on Ba-2Motifs, $\sim 90\%$ on Mutag).

Given a pre-trained predictor $f_{\theta}(\cdot): \mathcal{G} \to \mathcal{Y}$, these methods try to find the subgraph $G_{S}$ that impacts the model predictions the most, while keeping the pre-trained model unchanged. This procedure essentially first maximizes the MI between $f_{\theta}(G)$ and $Y$ and obtains a model parameter
$$
\tilde{\theta} \triangleq \arg\max_{\theta} I(f_{\theta}(G); Y), \tag{3}
$$

and then optimizes a subgraph extractor $g_{\phi}$ via

$$
\tilde{\phi} \triangleq \arg\max_{\phi} I(f_{\tilde{\theta}}(G_S); Y), \quad \text{s.t. } G_S = g_{\phi}(G) \in \Omega, \tag{4}
$$

where $\Omega$ denotes a subset of the subgraphs $\mathbb{G}_{\mathrm{sub}}(G)$ that satisfy some constraints, e.g., the cardinality constraint adopted by GNNExplainer and PGExplainer. Let us temporarily ignore the differences between the constraints and just focus on the optimization objective. The post-hoc objective Eq. (4) and GIB (Eq. (2)) share a similar spirit. However, the post-hoc methods may not give or even approximate the optimal solution to Eq. (2) because $f_{\theta} \circ g_{\phi}$ is not jointly trained. From the optimization perspective, post-hoc methods just perform one single-step projection (see Fig. 4) from the model $f_{\tilde{\theta}}$ in an unconstrained space to $f_{\tilde{\theta}} \circ g_{\tilde{\phi}}$ in the information-constrained space $\Omega$, where the projection rule is that the induced MI decrease $I(f_{\tilde{\theta}}(G);Y) - I(f_{\tilde{\theta}}(g_{\tilde{\phi}}(G));Y)$ gets minimized.
In practice, such suboptimal behavior yields two undesired consequences. First, $f_{\tilde{\theta}}$ may not fully extract the information from $G_S = g_\phi(G)$ to predict $Y$ during the optimization of Eq. (4), because $f_{\tilde{\theta}}$ is originally trained to make $I(f_{\tilde{\theta}}(G);Y)$ approximate $I(G;Y)$, while $(G_S, Y) = (g_\phi(G), Y)$ follows a distribution different from $(G, Y)$. Therefore, $I(f_{\tilde{\theta}}(G_S);Y)$ may not well approximate $I(G_S;Y)$, which may mislead the optimization of $g_\phi$ and prevent $g_\phi$ from selecting a $G_S$ that indeed indicates $Y$. GNNExplainer suffers from this issue on Ba-2Motifs as shown in Fig. 5: the training loss $-I(f_{\tilde{\theta}}(G_S);Y)$ stays high and the interpretation performance is subpar. It is possible to further decrease the training loss via a more aggressive optimization of $g_\phi$. However, the models then risk overfitting the data, which yields the second issue.

An aggressive optimization of $g_{\phi}$ may give a large empirical MI $\hat{I}(f_{\tilde{\theta}}(g_{\phi}(G));Y)$ (or, equivalently, a small training loss) by selecting features that help to distinguish labels during training but are essentially irrelevant to the labels, or spuriously correlated with them, at the population level. Previous works have shown that label-irrelevant features can be discriminative enough to identify each graph in the training dataset, let alone the labels (Suresh et al., 2021). Empirically, we indeed observe such overfitting problems for all post-hoc methods on Mutag as shown in Fig. 5, especially PGExplainer and GraphMask. In the first 5 to 10 epochs, these two models succeed in selecting good explanations while having a large training loss. Further training successfully decreases the loss (after 10 epochs) but degrades the interpretation performance substantially. This might also be the reason why the original papers on these post-hoc methods suggest training over only a small number of epochs. In practical tasks, however, ground-truth interpretation labels are rarely available to verify the results and decide a trustworthy stopping criterion.

Another observation from Fig. 5 also matches our expectation: from the optimization perspective, post-hoc methods suffer from an initialization issue. Their interpretability can be highly sensitive to the pre-trained model $f_{\tilde{\theta}}$, as empirically demonstrated by the large variances in Fig. 5. Only if the pre-trained $f_{\tilde{\theta}}$ approximates the optimal $f_{\theta^*}$ can the performance be roughly guaranteed. So, joint training of $f_{\theta} \circ g_{\phi}$ according to the GIB principle (Eq. (2)) is typically needed.
# 4. Stochastic Attention Mechanism for GIB

In this section, we will first give a tractable variational bound of the GIB objective (Eq. (2)), and then introduce our model GSAT with the stochastic attention mechanism. We will further discuss how the stochastic attention mechanism improves both model interpretation and generalization.

# 4.1. A Tractable Objective for GIB

GSAT learns an extractor $g_{\phi}$ with parameter $\phi$ to extract $G_{S} \in \mathbb{G}_{\mathrm{sub}}(G)$. $g_{\phi}$ blocks the label-irrelevant information in the data $G$ via injected stochasticity while allowing the label-relevant information to be kept in $G_{S}$ to make predictions. In GSAT, $g_{\phi}(G)$ essentially gives a distribution over $\mathbb{G}_{\mathrm{sub}}(G)$. We also denote this distribution as $\mathbb{P}_{\phi}(G_S|G)$. Later, $g_{\phi}(G)$ and $\mathbb{P}_{\phi}(G_S|G)$ are used interchangeably.

Putting the constraint into the objective (Eq. (2)), we obtain the optimization of $g_{\phi}$ via GIB, i.e., for some $\beta > 0$,

$$
\min_{\phi} -I(G_S; Y) + \beta I(G_S; G), \quad \text{s.t. } G_S \sim g_{\phi}(G). \tag{5}
$$
Next, we follow Alemi et al. (2016); Poole et al. (2019); Wu et al. (2020) to derive a tractable variational upper bound of the two terms in Eq. (5). The detailed derivation is given in Appendix B. For the term $I(G_S;Y)$, we introduce a parameterized variational approximation $\mathbb{P}_{\theta}(Y|G_S)$ for $\mathbb{P}(Y|G_S)$. We obtain a lower bound:

$$
I(G_S; Y) \geq \mathbb{E}_{G_S, Y}\left[\log \mathbb{P}_{\theta}(Y \mid G_S)\right] + H(Y). \tag{6}
$$

Note that $\mathbb{P}_{\theta}(Y|G_S)$ essentially works as the predictor $f_{\theta}: \mathcal{G} \to \mathcal{Y}$ with parameter $\theta$ in our model. For the term $I(G_S; G)$, we introduce a variational approximation $\mathbb{Q}(G_S)$ for the marginal distribution $\mathbb{P}(G_S) = \sum_G \mathbb{P}_{\phi}(G_S|G)\mathbb{P}_{\mathcal{G}}(G)$, and we obtain an upper bound:

$$
I(G_S; G) \leq \mathbb{E}_{G}\left[\mathrm{KL}(\mathbb{P}_{\phi}(G_S \mid G) \,||\, \mathbb{Q}(G_S))\right]. \tag{7}
$$

Plugging in the above two inequalities, we obtain a variational upper bound of Eq. (5) as the objective of GSAT:

$$
\min_{\theta, \phi} -\mathbb{E}\left[\log \mathbb{P}_{\theta}(Y | G_S)\right] + \beta\, \mathbb{E}\left[\mathrm{KL}(\mathbb{P}_{\phi}(G_S | G) \,||\, \mathbb{Q}(G_S))\right], \quad \text{s.t. } G_S \sim \mathbb{P}_{\phi}(G_S \mid G). \tag{8}
$$

Next, we specify $\mathbb{P}_{\theta}$ (aka $f_{\theta}$), $\mathbb{P}_{\phi}$ (aka $g_{\phi}$) and $\mathbb{Q}$ in GSAT.
# 4.2. GSAT and Stochastic Attention Mechanism

For clarity, we introduced the predictor $f_{\theta}$ and the extractor $g_{\phi}$ separately. Actually, GSAT is a unified model, as $f_{\theta}$ and $g_{\phi}$ share the same GNN encoder except for their last layers.

Stochastic Attention via $\mathbb{P}_{\phi}$. The extractor $g_{\phi}$ first encodes the input graph $G$ via the GNN into a set of node representations $\{h_v \mid v \in V\}$. For each edge $(u,v) \in E$, $g_{\phi}$ contains an MLP layer plus a sigmoid that maps the concatenation $(h_u, h_v)$ to $p_{uv} \in [0,1]$. Then, for each forward pass of the training, we sample stochastic attention from Bernoulli distributions, $\alpha_{uv} \sim \mathrm{Bern}(p_{uv})$. To make sure the gradient w.r.t. $p_{uv}$ is computable, we apply the gumbel-softmax reparameterization trick (Jang et al., 2017). The extracted graph $G_{S}$ has an attention-selected adjacency matrix $A_{S} = \alpha \odot A$. Here $\alpha$ is the matrix with entries $\alpha_{uv}$ for $(u,v) \in E$ and zeros for the non-edge entries, $A$ is the adjacency matrix of $G$, and $\odot$ is the entry-wise product. The distribution of $G_{S}$ given $G$ through the above procedure characterizes $\mathbb{P}_{\phi}(G_S|G)$, so $\mathbb{P}_{\phi}(G_S|G) = \prod_{(u,v) \in E} \mathbb{P}(\alpha_{uv}|p_{uv})$, where $p_{uv}$ is a function of $G$. This essentially makes the attention $\alpha_{uv}$ conditionally independent across different edges given the input graph $G$.
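For concreteness, below is a minimal sketch of this sampling step using the binary-concrete (gumbel-softmax style) relaxation of $\mathrm{Bern}(p_{uv})$, so gradients flow back to the attention logits. The class and variable names are our own illustration, not the exact API of the released GSAT code.

```python
import torch
import torch.nn as nn

class StochasticEdgeAttention(nn.Module):
    """Maps (h_u, h_v) to p_uv and samples a relaxed alpha_uv ~ Bern(p_uv)."""
    def __init__(self, hidden_dim):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 * hidden_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, 1))

    def forward(self, h, edge_index, temp=1.0):
        src, dst = edge_index
        logits = self.mlp(torch.cat([h[src], h[dst]], dim=-1)).squeeze(-1)
        if self.training:
            # Relaxed Bernoulli sample: add Logistic(0, 1) noise, then squash;
            # the result is differentiable w.r.t. the logits (hence w.r.t. p_uv).
            u = torch.rand_like(logits).clamp(1e-10, 1 - 1e-10)
            logits = (logits + torch.log(u) - torch.log(1 - u)) / temp
        return torch.sigmoid(logits)  # a sample in training, p_uv at test time
```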
Prediction via $\mathbb{P}_{\theta}$. The predictor $f_{\theta}$ adopts the same GNN to encode the extracted graph $G_{S}$ into a graph representation, and finally passes that representation through an MLP layer plus a softmax to model the distribution of $Y$. This procedure gives the variational distribution $\mathbb{P}_{\theta}(Y|G_S)$.
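The sampled attention then enters the predictor simply by re-weighting messages during aggregation, which realizes $A_S = \alpha \odot A$; a sketch under the same illustrative naming as above:

```python
import torch

def weighted_message_passing(h, edge_index, alpha, W_self, W_neigh):
    # Messages along edge (u, v) are scaled by the sampled alpha_uv.
    src, dst = edge_index
    agg = torch.zeros_like(h)
    agg.index_add_(0, dst, alpha.unsqueeze(-1) * h[src])
    return torch.relu(h @ W_self + agg @ W_neigh)
```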
Marginal Distribution Control via $\mathbb{Q}$. The bound in Eq. (7) holds for any $\mathbb{Q}(G_S)$. We define $\mathbb{Q}(G_S)$ as follows. For every graph $G \sim \mathbb{P}_{\mathcal{G}}$ and every directed node pair $(u,v)$ in $G$, we sample $\alpha_{uv}^{\prime} \sim \mathrm{Bern}(r)$, where $r \in [0,1]$ is a hyperparameter. We remove all edges in $G$ and add each edge $(u,v)$ for which $\alpha_{uv}^{\prime} = 1$; call the obtained graph $G_S$. This procedure defines the distribution $\mathbb{Q}(G_S) = \sum_G \mathbb{P}(\alpha'|G)\mathbb{P}_{\mathcal{G}}(G)$. As $\alpha^{\prime}$ is independent of the graph $G$ given its size $n$, $\mathbb{Q}(G_S) = \sum_n \mathbb{P}(\alpha'|n)\mathbb{P}(n) = \mathbb{P}(n)\prod_{u,v=1}^n \mathbb{P}(\alpha_{uv}^{\prime})$. The probability of an $n$-sized graph, $\mathbb{P}(n)$, is a constant and thus does not affect the model. Note that our choice of $\mathbb{Q}(G_S)$ shares the spirit of using a standard Gaussian as the latent distribution in variational auto-encoders (Kingma & Welling, 2014).

Using the above $\mathbb{P}_{\theta}$, the first term in Eq. (8) reduces to a standard cross-entropy loss. Using $\mathbb{P}_{\phi}$ and $\mathbb{Q}$, the KL-divergence term becomes, for every $G \sim \mathbb{P}_{\mathcal{G}}$ with $n$ the size of $G$,

$$
\mathrm{KL}(\mathbb{P}_{\phi}(G_S \mid G) \,||\, \mathbb{Q}(G_S)) = \sum_{(u,v) \in E} p_{uv} \log \frac{p_{uv}}{r} + (1 - p_{uv}) \log \frac{1 - p_{uv}}{1 - r} + c(n, r), \tag{9}
$$

where $c(n,r)$ is a constant without any trainable parameters.
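Putting the pieces together, here is a minimal sketch of the resulting training loss: the cross-entropy term of Eq. (8) plus $\beta$ times the per-edge KL term of Eq. (9), with the constant $c(n, r)$ dropped and the sum replaced by a batch-friendly mean over edges. All names and default values are our own illustration.

```python
import torch
import torch.nn.functional as F

def gsat_loss(logits, y, p_uv, r=0.7, beta=1.0, eps=1e-6):
    ce = F.cross_entropy(logits, y)       # -E[log P_theta(Y | G_S)]
    p = p_uv.clamp(eps, 1 - eps)          # learned p_uv for every edge
    kl = (p * torch.log(p / r)
          + (1 - p) * torch.log((1 - p) / (1 - r))).mean()
    return ce + beta * kl
```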
# 4.3. The Interpretation Mechanism of GSAT

The interpretability of GSAT essentially comes from information control: GSAT decreases the information from the input graphs by injecting stochasticity via attention into $G_{S}$. During training, the regularization term (Eq. (9)) tries to assign large stochasticity to all edges, yet, driven by the classification loss $\min -I(G_S; Y)$ (equivalent to the cross-entropy loss), GSAT learns to reduce such stochasticity of the attention on the task-relevant subgraphs. So, it is not the entire $G_{S}$ but the part of $G_{S}$ with stochasticity-reduced attention, i.e., $p_{uv} \to 1$, that provides model interpretation. Therefore, when GSAT provides interpretation in practice, one can rank all edges according to $p_{uv}$ and use the top-ranked ones (given a certain budget if needed) as the detected subgraph for interpretation. The contribution of injecting stochasticity to the performance is significant as shown in our experiments (Table 5), as is the contribution of our regularization term (Eq. (9)) when compared with the sparsity-driven $\ell_{1}$-norm (Fig. 7).
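The ranking step is straightforward; a sketch (our own illustration):

```python
import torch

def interpretation_subgraph(p_uv, edge_index, budget=None):
    # Rank edges by the learned p_uv; optionally keep only the top `budget` edges.
    order = torch.argsort(p_uv, descending=True)
    if budget is not None:
        order = order[:budget]
    return edge_index[:, order]  # the stochasticity-reduced subgraph
```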
GSAT is substantially different from previous methods, as we do not use any sparsity constraints such as the $\ell_1$-norm (Ying et al., 2019; Luo et al., 2020), the $\ell_0$-norm (Schlichtkrull et al., 2021) or $\ell_2$-regression to $\{0,1\}$ (Yu et al., 2021) to select size-constrained (or connectivity-constrained) subgraphs. We actually observe that setting $r$ away from 0 in the marginal regularization (Eq. (9)), i.e., pushing $G_S$ away from being sparse, often provides more robust interpretation. This matches our intuition that GIB by definition makes no assumptions on the selected subgraphs but just constrains the information from the original graphs. Our experiments show that GSAT outperforms baselines significantly without leveraging those assumptions in the optimization, even when the label-relevant subgraphs satisfy them. If the label-relevant subgraphs are indeed disconnected or vary in size, the improvement of GSAT is expected to be even larger.

# 4.4. Further Comparison on Interpretation Mechanism

PGExplainer and GraphMask also have stochasticity in their models (Luo et al., 2020; Schlichtkrull et al., 2021). However, their main goal is to enable a gradient-based search over a discrete subgraph-selection space rather than to control information as GSAT does. Hence, they do not derive an information regularization like ours (Eq. (9)) in principle, but adopt sparsity constraints to extract a small subgraph $G_{S}$ directly used for interpretation.

IB-subgraph (Yu et al., 2021) considers using GIB as the objective but does not inject any stochasticity to generate $G_{S}$, so its selected subgraph $G_{S}$ is a deterministic function of $G$. Specifically, IB-subgraph samples batches of graphs $G$ to estimate $I(G_{S};G)$ and optimizes a deterministic function $G_{S} = g_{\phi}(G)$ to minimize this MI estimate. In this case $I(G_{S};G)\,(= H(G_{S}) - H(G_{S}|G))$ reduces to the entropy $H(G_{S})$, which tends to give a small-sized $G_{S}$, because the space of small graphs is small and has a lower upper bound on the entropy. By contrast, $G_{S} \sim g_{\phi}(G)$ is random in GSAT, and GSAT implements GIB mainly by increasing $H(G_{S}|G)$ via injecting stochasticity.
# 4.5. Guaranteed Spurious Correlation Removal

GSAT can remove spurious correlations in the training data and has guaranteed interpretability. We can prove that if there exists a correspondence between a subgraph pattern $G_{S}^{*}$ and the label $Y$, the pattern $G_{S}^{*}$ is the optimal solution of the GIB objective (Eq. (2)).

Theorem 4.1. Suppose each $G$ contains a subgraph $G_{S}^{*}$ such that $Y$ is determined by $G_{S}^{*}$ in the sense that $Y = f(G_{S}^{*}) + \epsilon$ for some deterministic invertible function $f$ with randomness $\epsilon$ that is independent of $G$. Then, for any $\beta \in [0,1]$, $G_{S} = G_{S}^{*}$ maximizes the GIB objective $I(G_{S};Y) - \beta I(G_{S};G)$, where $G_{S} \in \mathbb{G}_{\mathrm{sub}}(G)$.

Figure 6. $G_S^*$ determines $Y$. However, the environment features in $G \backslash G_S^*$ may contain spurious (backdoor) correlation with $Y$.

Proof. Consider the following derivation:

$$
\begin{aligned}
& I(G_S; Y) - \beta I(G_S; G) \\
&= I(Y; G, G_S) - I(G; Y \mid G_S) - \beta I(G_S; G) \\
&= I(Y; G, G_S) - (1 - \beta) I(G; Y \mid G_S) - \beta I(G; G_S, Y) \\
&= I(Y; G) - (1 - \beta) I(G; Y \mid G_S) - \beta I(G; G_S, Y) \\
&= (1 - \beta) I(Y; G) - (1 - \beta) I(G; Y \mid G_S) - \beta I(G; G_S \mid Y),
\end{aligned}
$$

where the third equality holds because $G_{S} \in \mathbb{G}_{\mathrm{sub}}(G)$, so $(G_{S}, G)$ holds no more information than $G$.

If $\beta \in [0,1]$, a $G_{S}$ that maximizes $I(G_{S};Y) - \beta I(G_{S};G)$ also minimizes $(1 - \beta)I(G;Y|G_{S}) + \beta I(G;G_{S}|Y)$. As $I(G;Y|G_{S}) \geq 0$ and $I(G;G_{S}|Y) \geq 0$, the lower bound of $(1 - \beta)I(G;Y|G_{S}) + \beta I(G;G_{S}|Y)$ is 0.

$G_{S}^{*}$ is the subgraph that makes $(1 - \beta)I(G;Y|G_{S}^{*}) + \beta I(G;G_{S}^{*}|Y) = 0$. This is because (a) $Y = f(G_{S}^{*}) + \epsilon$ where $\epsilon$ is independent of $G$, so $I(G;Y|G_{S}^{*}) = 0$, and (b) $G_{S}^{*} = f^{-1}(Y - \epsilon)$ where $\epsilon$ is independent of $G$, so $I(G;G_{S}^{*}|Y) = 0$. Therefore, $G_{S} = G_{S}^{*}$ maximizes the GIB objective $I(G_{S};Y) - \beta I(G_{S};G)$, where $G_{S} \in \mathbb{G}_{\mathrm{sub}}(G)$.

Although $G_{S}^{*}$ determines $Y$, in the training dataset the data $G$ and $Y$ may have some spurious correlation caused by the environment (Pearl et al., 2016; Arjovsky et al., 2019; Chang et al., 2020; Krueger et al., 2021). That is, $G \backslash G_{S}^{*}$ may have some correlation with the label, but this correlation is spurious and is not the true reason that determines the label (illustrated in Fig. 6). A model trained over $G$ to predict $Y$ via pure MI maximization may capture such spurious correlation. If such correlation changes during the test phase, the model suffers from performance decay.

However, Theorem 4.1 indicates that GSAT, by optimizing the GIB objective, has the capability to address the above issue by extracting only $G_{S}^{*}$, which removes the spurious correlation and also provides guaranteed interpretability.
# 4.6. Fine-tuning and Interpreting a Pre-trained Model

GSAT can also fine-tune and interpret a pre-trained GNN. Given a GNN $f_{\tilde{\theta}}$ pre-trained via $\max_{\theta} I(f_{\theta}(G);Y)$, GSAT can fine-tune it via $\max_{\theta, \phi} I(f_{\theta}(G_S);Y) - \beta I(G_S;G)$, $G_S \sim g_{\phi}(G)$, by initializing the GNN used in $g_{\phi}$ and $f_{\theta}$ as the one in the pre-trained model $f_{\tilde{\theta}}$.

We observe that this framework almost never hurts the original prediction performance (and sometimes even boosts it). Moreover, it often achieves better interpretation results than training the GNN from scratch.
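Operationally, this warm start amounts to copying the pre-trained encoder weights into the shared GNN before joint GIB fine-tuning; a one-line sketch with illustrative names (not the released GSAT API):

```python
def warm_start(gsat_model, pretrained_encoder_state_dict):
    # The GNN encoder shared by g_phi and f_theta starts from f_tilde_theta.
    gsat_model.encoder.load_state_dict(pretrained_encoder_state_dict)
```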
# 5. Other Related Works

Besides the models (Ying et al., 2019; Luo et al., 2020; Schlichtkrull et al., 2021; Yu et al., 2021) that we have compared with in detail in Sec. 3.2 and Sec. 4.4, we review some other interpretation methods here.

Most previous works on GNN interpretation are post-hoc (Ribeiro et al., 2016). Some works strongly rely on the connectivity assumption and only search over the space of connected subgraphs for interpretation; they adopt either reinforcement learning (Yuan et al., 2020a) or Monte Carlo tree search (Yuan et al., 2021). Other methods, including PGM-Explainer (Vu & Thai, 2020) leveraging graphical models, Gem (Lin et al., 2021) checking Granger causality, and Graphlime (Huang et al., 2020) using HSIC Lasso, only apply to node-level task interpretation. Some works check the gradients w.r.t. the input features to find important features (Pope et al., 2019; Baldassarre & Azizpour, 2019).

Far fewer works have considered intrinsic interpretation. Recently, Wu et al. (2022) proposed DIR to make the model avoid overfitting spurious correlations and capture only invariant rationales to provide interpretability. However, DIR needs to iteratively break graphs into subgraphs and assemble subgraphs into graphs during model training, which is far more complicated than GSAT.
# 6. Experiments

We evaluate our method in terms of both interpretability and prediction performance. We compare our method with both state-of-the-art (SOTA) post-hoc interpretation methods and inherently interpretable models. We also compare with several invariant learning methods to demonstrate the ability of GSAT to remove spurious correlations. We briefly introduce the datasets, baselines and experiment settings here; more details can be found in Appendix C.

# 6.1. Datasets
Mutag (Debnath et al., 1991) is a molecular property prediction dataset. Following (Luo et al., 2020), $-\mathrm{NO}_2$ and $-\mathrm{NH}_2$ groups in mutagen graphs are labeled as ground-truth explanations.

BA-2Motifs (Luo et al., 2020) is a synthetic dataset with binary graph labels. House motifs and cycle motifs give the class labels and thus are regarded as ground-truth explanations for the two classes, respectively.

Spurious-Motif (Wu et al., 2022) is a synthetic dataset with three graph classes. Each class contains a particular motif that can be regarded as the ground-truth explanation. Some spurious correlation between the remaining graph components (other than the motifs) and the labels also exists in the training data. The degree of such correlation is controlled by $b$, and we include datasets with $b = 0.5, 0.7$ and $0.9$.

MNIST-75sp (Knyazev et al., 2019) is an image classification dataset, where each image in MNIST is converted to a superpixel graph. Nodes with nonzero pixel values provide ground-truth explanations. Note that the subgraphs providing explanations are of different sizes in this dataset.

Graph-SST2 (Socher et al., 2013; Yuan et al., 2020b) is a sentiment analysis dataset, where each text sequence in SST2 is converted to a graph. Following the splits in (Wu et al., 2022), this dataset contains degree shifts and no ground-truth explanation labels, so we only evaluate prediction performance and provide interpretation visualizations.

OGBG-Molhiv (Wu et al., 2018; Hu et al., 2020) is a molecular property prediction dataset. We also evaluate GSAT on the molbace, molbbbp, molclintox, moltox21 and molsider datasets from OGBG. As there are no ground-truth explanation labels for these datasets, we only evaluate the prediction performance of GSAT.
# 6.2. Baselines and Setup
|
| 256 |
+
|
| 257 |
+
Interpretability Baselines. We compare interpretability with post-hoc methods GNNExplainer (Ying et al., 2019), PGExplainer (Luo et al., 2020), GraphMask (Schlichtkrull et al., 2021), and inherently interpretable models DIR (Wu et al., 2022) and IB-subgraph (Yu et al., 2021).
|
| 258 |
+
|
| 259 |
+
Prediction Baselines. We compare prediction performance with the backbone models GIN (Xu et al., 2019) and PNA (Corso et al., 2020), and inherently interpretable models DIR (Wu et al., 2022) and IB-subgraph (Yu et al., 2021).
|
| 260 |
+
|
| 261 |
+
Invariant Learning Baselines. We compare the ability to remove spurious correlations with invariant learning methods IRM (Arjovsky et al., 2019), V-REx (Krueger et al., 2021) and DIR (Wu et al., 2022). Baseline results yielded by empirical risk minimization (ERM) are also included.
Metrics. For interpretation evaluation, we report explanation ROC AUC following (Ying et al., 2019; Luo et al., 2020). For prediction performance, we report classification ROC AUC for all OGBG datasets and accuracy for all other datasets. All results are averaged over 10 tests with different random seeds. For the post-hoc methods, we do not cherry-pick a pre-trained model; instead, in each test, we interpret an independently pre-trained model that achieves the best validation performance.
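As a hedged illustration (not the authors' evaluation code), the explanation ROC AUC can be computed by treating the per-edge attention weights as scores against binary ground-truth explanation labels; the function name and inputs below are hypothetical.

```python
from sklearn.metrics import roc_auc_score

def explanation_auc(edge_attn, edge_gt):
    """edge_attn: learned attention weights p_uv for all test edges;
    edge_gt: 1 if an edge belongs to the ground-truth motif, else 0."""
    return roc_auc_score(edge_gt, edge_attn)
```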
Setup. Since we focus on graph classification tasks, GIN (Xu et al., 2019) is used as the backbone model for both the baselines and GSAT. We also apply PNA (Corso et al., 2020) to further test the wide applicability of GSAT; we adopt its no-scalars version, since the scalars used in PNA are essentially a type of attention, which may conflict with our method. GIN+GSAT denotes using GIN as the base GNN encoder of GSAT, and PNA+GSAT means replacing the GNN encoder with PNA. In addition, we apply GSAT to fine-tune and interpret pre-trained models as described in Sec. 4.6, which is highlighted as $\mathrm{GSAT^{*}}$. In all experiments, we use $r = 0.7$ in Eq. (9) unless otherwise specified. Our studies show that GSAT is generally robust for $r \in [0.5, 0.9]$ (see Fig. 7 later). A minimal sketch of the information regularizer in Eq. (9) follows.
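The sketch below assumes the regularizer in Eq. (9) takes the form of a KL divergence between each edge's attention distribution $\mathrm{Bern}(p_{uv})$ and a fixed prior $\mathrm{Bern}(r)$, consistent with the upper bound in Appendix B when $\mathbb{Q}(G_S)$ factorizes as $\mathrm{Bern}(r)$ per edge; it is not the authors' implementation, and the names are ours.

```python
import torch

def info_regularizer(p, r, eps=1e-6):
    """KL(Bern(p) || Bern(r)) averaged over edges.

    p : tensor of edge attention probabilities p_uv in (0, 1)
    r : scalar prior, e.g. the default r = 0.7
    """
    p = p.clamp(eps, 1 - eps)
    kl = p * torch.log(p / r) + (1 - p) * torch.log((1 - p) / (1 - r))
    return kl.mean()
```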
Table 1. Interpretation performance (AUC). The underlined results highlight the best baselines. Bold and bold† highlight cases where GSAT outperforms the mean of the best baseline, judged by the mean of GSAT and by the mean minus 2×std of GSAT, respectively.
<table><tr><td rowspan="2"></td><td rowspan="2">BA-2MOTIFS</td><td rowspan="2">MUTAG</td><td rowspan="2">MNIST-75SP</td><td colspan="3">SPURIOUS-MOTIF</td></tr><tr><td>b=0.5</td><td>b=0.7</td><td>b=0.9</td></tr><tr><td>GNNEXPLAINER</td><td>67.35±3.29</td><td>61.98±5.45</td><td>59.01±2.04</td><td>62.62±1.35</td><td>62.25±3.61</td><td>58.86±1.93</td></tr><tr><td>PGEXPLAINER</td><td>84.59±9.09</td><td>60.91±17.10</td><td>69.34±4.32</td><td>69.54±5.64</td><td>72.33±9.18</td><td>72.34±2.91</td></tr><tr><td>GRAPHMASK</td><td>92.54±8.07</td><td>62.23±9.01</td><td>73.10±6.41</td><td>72.06±5.58</td><td>73.06±4.91</td><td>66.68±6.96</td></tr><tr><td>IB-SUBGRAPH</td><td>86.06±28.37</td><td>91.04±6.59</td><td>51.20±5.12</td><td>57.29±14.35</td><td>62.89±15.59</td><td>47.29±13.39</td></tr><tr><td>DIR</td><td>82.78±10.97</td><td>64.44±28.81</td><td>32.35±9.39</td><td>78.15±1.32</td><td>77.68±1.22</td><td>49.08±3.66</td></tr><tr><td>GIN+GSAT</td><td>98.74†±0.55</td><td>99.60†±0.51</td><td>83.36†±1.02</td><td>78.45±3.12</td><td>74.07±5.28</td><td>71.97±4.41</td></tr><tr><td>GIN+GSAT*</td><td>97.43†±1.77</td><td>97.75†±0.92</td><td>83.70†±1.46</td><td>85.55†±2.57</td><td>85.56†±1.93</td><td>83.59†±2.56</td></tr><tr><td>PNA+GSAT</td><td>93.77±3.90</td><td>99.07†±0.50</td><td>84.68†±1.06</td><td>83.34†±2.17</td><td>86.94†±4.05</td><td>88.66†±2.44</td></tr><tr><td>PNA+GSAT*</td><td>89.04±4.92</td><td>96.22†±2.08</td><td>88.54†±0.72</td><td>90.55†±1.48</td><td>89.79†±1.91</td><td>89.54†±1.78</td></tr></table>
Table 2. Prediction performance (Acc.). Bold font highlights inherently interpretable methods that significantly outperform the corresponding backbone model (GIN or PNA), i.e., when the mean minus 1×std of a method exceeds the mean of its backbone.
<table><tr><td rowspan="2"></td><td rowspan="2">MOLHIV (AUC)</td><td rowspan="2">GRAPH-SST2</td><td rowspan="2">MNIST-75SP</td><td colspan="3">SPURIOUS-MOTIF</td></tr><tr><td>b=0.5</td><td>b=0.7</td><td>b=0.9</td></tr><tr><td>GIN</td><td>76.69 ± 1.25</td><td>82.73 ± 0.77</td><td>95.74 ± 0.36</td><td>39.87 ± 1.30</td><td>39.04 ± 1.62</td><td>38.57 ± 2.31</td></tr><tr><td>IB-SUBGRAPH</td><td>76.43 ± 2.65</td><td>82.99 ± 0.67</td><td>93.10 ± 1.32</td><td>54.36 ± 7.09</td><td>48.51 ± 5.76</td><td>46.19 ± 5.63</td></tr><tr><td>DIR</td><td>76.34 ± 1.01</td><td>82.32 ± 0.85</td><td>88.51 ± 2.57</td><td>45.49 ± 3.81</td><td>41.13 ± 2.62</td><td>37.61 ± 2.02</td></tr><tr><td>GIN+GSAT</td><td>76.47 ± 1.53</td><td>82.95 ± 0.58</td><td>96.24 ± 0.17</td><td>52.74 ± 4.08</td><td>49.12 ± 3.29</td><td>44.22 ± 5.57</td></tr><tr><td>GIN+GSAT*</td><td>76.16 ± 1.39</td><td>82.57 ± 0.71</td><td>96.21 ± 0.14</td><td>46.62 ± 2.95</td><td>41.26 ± 3.01</td><td>39.74 ± 2.20</td></tr><tr><td>PNA (NO SCALAR)</td><td>78.91 ± 1.04</td><td>79.87 ± 1.02</td><td>87.20 ± 5.61</td><td>68.15 ± 2.39</td><td>66.35 ± 3.34</td><td>61.40 ± 3.56</td></tr><tr><td>PNA+GSAT</td><td>80.24 ± 0.73</td><td>80.92 ± 0.66</td><td>93.96 ± 0.92</td><td>68.74 ± 2.24</td><td>64.38 ± 3.20</td><td>57.01 ± 2.95</td></tr><tr><td>PNA+GSAT*</td><td>80.67 ± 0.95</td><td>82.81 ± 0.56</td><td>92.38 ± 1.44</td><td>69.72 ± 1.93</td><td>67.31 ± 1.86</td><td>61.49 ± 3.46</td></tr></table>
# 6.3. Result Comparison and Analysis
Interpretability Results. As shown in Table 1, our methods significantly outperform the baselines, by $9\%\uparrow$ on average and by up to $20\%\uparrow$. If we compare only among inherently interpretable models, the boost is even more significant. Moreover, GSAT also provides much more stable interpretation than the baselines, as indicated by its much smaller variance. $\mathrm{GSAT}^*$, via fine-tuning a pre-trained model, can often further boost the interpretation performance. Also, when the more expressive model PNA is used as the backbone, we find that the post-hoc methods are likely to suffer from the overfitting issue explained in Sec. 3.2, whereas GSAT does not and can yield even better interpretation results. Over Ba-2Motifs and Mutag, GNNExplainer and PGExplainer work worse than reported in (Luo et al., 2020) because we do not cherry-pick the pre-trained model; however, GSAT still significantly outperforms their reported performance, as shown in Appendix C.4. We also provide visualizations of the subgraphs discovered by GSAT in Appendix D.
Prediction Results. As explained in Sec. 4.5, being trained via the GIB principle, GSAT is more generalizable and thus may achieve even better prediction performance. As shown in Table 2, GIN+GSAT significantly outperforms the backbone GIN on the Spurious-Motif datasets, where spurious correlations exist in the training data. For the other datasets, GIN+GSAT achieves comparable results, which matches our claim that GSAT provides interpretation without hurting prediction. IB-subgraph, also trained via the GIB principle, achieves good prediction performance, though its interpretability is poor (Table 1). When PNA is used, GSAT improves it by about $1-5\%$ on the datasets in the first three columns. Notably, $\mathrm{GSAT}^*$ achieves SOTA performance on molhiv among all models that do not incorporate expert knowledge, according to the leaderboard. Unexpectedly, PNA achieves very good performance on Spurious-Motif, and $\mathrm{GSAT}^*$ only slightly improves it. Our results on the other 5 molecular datasets from OGBG are shown in Table 3, where GSAT and $\mathrm{GSAT}^*$ mostly outperform PNA.
Invariant Learning Results. We note that DIR achieves slightly lower prediction performance in Table 2 than reported in (Wu et al., 2022), even after we extensively tune its parameters, which is probably due to the different backbone models used. Hence, we also compare with DIR using their backbone model, and we include several invariant learning baselines reported in DIR to further demonstrate the ability of GSAT to remove spurious correlations. Results are shown in Table 4. GSAT significantly outperforms all invariant learning methods on spurious correlation removal, even without utilizing causality analysis, which further validates our claims in Sec. 4.5. A comparison of the interpretability of these models is shown in Table 7 in the appendix.
Table 3. Generalization ROC AUC on other OGBG-Mol datasets. The bold font highlights when GSAT outperforms PNA.
<table><tr><td></td><td>MOLBACE</td><td>MOLBBBP</td><td>MOLCLINTOX</td><td>MOLTOX21</td><td>MOLSIDER</td></tr><tr><td>PNA</td><td>73.52 ± 3.02</td><td>67.21 ± 1.34</td><td>86.72 ± 2.33</td><td>75.08 ± 0.64</td><td>56.51 ± 1.90</td></tr><tr><td>GSAT</td><td>77.41 ± 2.42</td><td>69.17 ± 1.12</td><td>87.80 ± 2.36</td><td>74.96 ± 0.66</td><td>57.58 ± 1.23</td></tr><tr><td>GSAT*</td><td>73.61 ± 1.59</td><td>66.30 ± 0.79</td><td>89.26 ± 1.66</td><td>75.71 ± 0.48</td><td>59.19 ± 1.03</td></tr></table>
Table 4. Direct comparison (Acc.) with invariant learning methods on the ability to remove spurious correlations, by applying the backbone model used in (Wu et al., 2022).
<table><tr><td>SPURIOUS-MOTIF</td><td>b=0.5</td><td>b=0.7</td><td>b=0.9</td></tr><tr><td>ERM</td><td>39.69±1.73</td><td>38.93±1.74</td><td>33.61±1.02</td></tr><tr><td>V-REx</td><td>39.43±2.69</td><td>39.08±1.56</td><td>34.81±2.04</td></tr><tr><td>IRM</td><td>41.30±1.28</td><td>40.16±1.74</td><td>35.12±2.71</td></tr><tr><td>DIR</td><td>45.50±2.15</td><td>43.36±1.64</td><td>39.87±0.56</td></tr><tr><td>GSAT</td><td>53.27†±5.12</td><td>56.50†±3.96</td><td>53.11†±4.64</td></tr><tr><td>GSAT*</td><td>43.27±4.58</td><td>42.51±5.32</td><td>45.76†±5.32</td></tr></table>
Ablation Studies. We conduct ablation studies on three aspects: first, the importance of stochasticity in GSAT, where we replace the Bernoulli sampling procedure by setting the attention $\alpha_{uv} = p_{uv}$ without stochasticity; second, the importance of the information regularization term (Eq. (9)), where we set its coefficient $\beta = 0$ in Eq. (8); and third, the superiority of the information regularization term over the sparsity-driven $\ell_1$-norm. A sketch of the two attention variants is given below.
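The following hedged sketch contrasts the two attention variants compared here; the binary-concrete (Gumbel-softmax) relaxation of the Bernoulli sampling with temperature 1 follows the setup noted in Appendix C.2.2, and the function names are ours rather than the paper's code.

```python
import torch

def stochastic_attention(logits, temp=1.0, training=True):
    """GSAT: relaxed sample alpha_uv ~ Bern(p_uv), with p_uv = sigmoid(logits)."""
    if training:
        u = torch.rand_like(logits).clamp(1e-6, 1 - 1e-6)
        noise = torch.log(u) - torch.log(1 - u)   # Logistic(0, 1) sample
        return torch.sigmoid((logits + noise) / temp)
    return torch.sigmoid(logits)                  # use p_uv at test time

def deterministic_attention(logits):
    """GSAT-NoStoch: alpha_uv = p_uv, no sampling."""
    return torch.sigmoid(logits)
```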
As shown in Table 5, the performance drops significantly when there is either no stochasticity or $\beta = 0$. Specifically, GSAT-NoStoch applies deterministic attention $\in [0,1]$, which causes the largest performance drop. GSAT-NoStoch-$\beta=0$ corresponds to using deterministic attention without the regularization term in Eq. (9), which causes the second largest drop. GSAT-$\beta=0$ denotes applying stochastic attention with no regularization, which performs better than the baselines but worse than the original GSAT, and suffers from large variance. Overall, removing stochasticity yields the biggest drop, which matches our theory well. This also implies that directly using deterministic attention mechanisms such as GAT (Velicković et al., 2018) or GGNN (Li et al., 2016) may not yield good interpretability.
Fig. 7 shows that our information regularization term achieves consistently better performance than the sparsity-driven $\ell_1$-norm regularization, even when grid search is used to tune the hyperparameters. We also observe that when $r$ is close to 0, the results often degrade or have higher variance. The best performance is often achieved when $r \in [0.5, 0.9]$, which matches our theory. More results on other datasets can be found in Fig. 8 in the appendix.
Table 5. Ablation study on $\beta$ and stochasticity in GSAT (GIN as the backbone model) on Spurious-Motif. We report both interpretation ROC AUC (top) and prediction accuracy (bottom).
<table><tr><td>SPURIOUS-MOTIF</td><td>b=0.5</td><td>b=0.7</td><td>b=0.9</td></tr><tr><td>GSAT</td><td>79.81 ± 3.98</td><td>74.07 ± 5.28</td><td>71.97 ± 4.41</td></tr><tr><td>GSAT-β=0</td><td>66.00 ± 11.04</td><td>65.92 ± 3.28</td><td>66.31 ± 6.82</td></tr><tr><td>GSAT-NOSTOCH</td><td>59.64 ± 5.33</td><td>55.78 ± 2.84</td><td>55.27 ± 7.49</td></tr><tr><td>GSAT-NOSTOCH-β=0</td><td>63.37 ± 12.33</td><td>60.61 ± 10.08</td><td>66.19 ± 7.76</td></tr><tr><td>GIN</td><td>39.87 ± 1.30</td><td>39.04 ± 1.62</td><td>38.57 ± 2.31</td></tr><tr><td>GSAT</td><td>51.86 ± 5.51</td><td>49.12 ± 3.29</td><td>44.22 ± 5.57</td></tr><tr><td>GSAT-β=0</td><td>45.97 ± 8.37</td><td>49.67 ± 7.01</td><td>49.84 ± 5.45</td></tr><tr><td>GSAT-NOSTOCH</td><td>40.34 ± 2.77</td><td>41.90 ± 3.70</td><td>37.98 ± 2.64</td></tr><tr><td>GSAT-NOSTOCH-β=0</td><td>43.41 ± 8.05</td><td>45.88 ± 9.54</td><td>42.25 ± 9.77</td></tr></table>

Figure 7. Comparison between (a) using the information constraint in Eq. (9) and (b) replacing it with $\ell_1$ -norm. Results are shown for Spurious-Motif $b = 0.5$ , where $r$ is tuned from 0.9 to 0.1 and the coefficient of the $\ell_1$ -norm $\lambda_1$ is tuned from 1e-5 to 1.

# 7. Conclusion
Graph Stochastic Attention (GSAT) is a novel attention mechanism for building interpretable graph learning models. GSAT injects stochasticity to block label-irrelevant information and leverages the reduction of stochasticity to select label-relevant subgraphs. This rationale is grounded in the information bottleneck principle. GSAT has several appealing characteristics. For example, it removes the sparsity, continuity, or other potentially biased assumptions made in graph learning interpretation, without performance decay. It can also remove spurious correlations to improve model generalization. As a by-product, we also reveal a potentially severe issue behind post-hoc interpretation methods from the optimization perspective of the information bottleneck.
# ACKNOWLEDGMENTS
We greatly appreciate the actionable suggestions given by the reviewers. S. Miao and M. Liu are supported by the National Science Foundation (NSF) award HDR-2117997. P. Li is supported by the JPMorgan Faculty Award.
# References
Alemi, A. A., Fischer, I., Dillon, J. V., and Murphy, K. Deep variational information bottleneck. In International Conference on Learning Representations, 2016.

Arjovsky, M., Bottou, L., Gulrajani, I., and Lopez-Paz, D. Invariant risk minimization. arXiv preprint arXiv:1907.02893, 2019.

Bahdanau, D., Cho, K., and Bengio, Y. Neural machine translation by jointly learning to align and translate. In International Conference on Learning Representations, 2015.

Baldassarre, F. and Azizpour, H. Explainability techniques for graph convolutional networks. In International Conference on Machine Learning Workshops, 2019 Workshop on Learning and Reasoning with Graph-Structured Representations, 2019.

Bapst, V., Keck, T., Grabska-Barwińska, A., Donner, C., Cubuk, E. D., Schoenholz, S. S., Obika, A., Nelson, A. W., Back, T., Hassabis, D., et al. Unveiling the predictive power of static structure in glassy systems. Nature Physics, 16(4):448-454, 2020.

Bengio, Y., Louradour, J., Collobert, R., and Weston, J. Curriculum learning. In International Conference on Machine Learning, pp. 41-48, 2009.

Chang, S., Zhang, Y., Yu, M., and Jaakkola, T. Invariant rationalization. In International Conference on Machine Learning, pp. 1448-1458. PMLR, 2020.

Chen, J., Song, L., Wainwright, M., and Jordan, M. Learning to explain: An information-theoretic perspective on model interpretation. In International Conference on Machine Learning, pp. 883-892. PMLR, 2018.

Corso, G., Cavalleri, L., Beaini, D., Lio, P., and Velicković, P. Principal neighbourhood aggregation for graph nets. In Advances in Neural Information Processing Systems, pp. 13260-13271, 2020.

Cover, T. M. Elements of Information Theory. John Wiley & Sons, 1999.

Cranmer, M., Sanchez Gonzalez, A., Battaglia, P., Xu, R., Cranmer, K., Spergel, D., and Ho, S. Discovering symbolic models from deep learning with inductive biases. In Advances in Neural Information Processing Systems, pp. 17429-17442, 2020.

Debnath, A. K., Lopez de Compadre, R. L., Debnath, G., Shusterman, A. J., and Hansch, C. Structure-activity relationship of mutagenic aromatic and heteroaromatic nitro compounds. Correlation with molecular orbital energies and hydrophobicity. Journal of Medicinal Chemistry, 34(2):786-797, 1991.

Du, M., Liu, N., and Hu, X. Techniques for interpretable machine learning. Communications of the ACM, 63(1):68-77, 2019.

Gilmer, J., Schoenholz, S. S., Riley, P. F., Vinyals, O., and Dahl, G. E. Neural message passing for quantum chemistry. In International Conference on Machine Learning, pp. 1263-1272. PMLR, 2017.

Henderson, R., Clevert, D.-A., and Montanari, F. Improving molecular graph neural network explainability with orthonormalization and induced sparsity. In International Conference on Machine Learning, pp. 4203-4213. PMLR, 2021.

Hu, W., Fey, M., Zitnik, M., Dong, Y., Ren, H., Liu, B., Catasta, M., and Leskovec, J. Open graph benchmark: Datasets for machine learning on graphs. In Advances in Neural Information Processing Systems, pp. 22118-22133, 2020.

Huang, Q., Yamada, M., Tian, Y., Singh, D., Yin, D., and Chang, Y. Graphlime: Local interpretable model explanations for graph neural networks. arXiv preprint arXiv:2001.06216, 2020.

Jain, S. and Wallace, B. C. Attention is not explanation. In North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pp. 3543-3556, 2019.

Jang, E., Gu, S., and Poole, B. Categorical reparameterization with gumbel-softmax. In International Conference on Learning Representations, 2017.

Jumper, J., Evans, R., Pritzel, A., Green, T., Figurnov, M., Ronneberger, O., Tunyasuvunakool, K., Bates, R., Žídek, A., Potapenko, A., et al. Highly accurate protein structure prediction with AlphaFold. Nature, 596(7873):583-589, 2021.

Kingma, D. P. and Welling, M. Auto-encoding variational bayes. In International Conference on Learning Representations, 2014.

Kipf, T. N. and Welling, M. Semi-supervised classification with graph convolutional networks. In International Conference on Learning Representations, 2017.

Knyazev, B., Taylor, G. W., and Amer, M. Understanding attention and generalization in graph neural networks. In Advances in Neural Information Processing Systems, pp. 4204-4214, 2019.

Krueger, D., Caballero, E., Jacobsen, J.-H., Zhang, A., Binas, J., Zhang, D., Priol, R. L., and Courville, A. Out-of-distribution generalization via risk extrapolation (REx). In International Conference on Machine Learning, pp. 5815-5826. PMLR, 2021.

Li, Y., Zemel, R., Brockschmidt, M., and Tarlow, D. Gated graph sequence neural networks. In International Conference on Learning Representations, 2016.

Lin, W., Lan, H., and Li, B. Generative causal explanations for graph neural networks. In International Conference on Machine Learning, pp. 6666-6679. PMLR, 2021.

Lipton, Z. C. The mythos of model interpretability: In machine learning, the concept of interpretability is both important and slippery. Queue, 16(3):31-57, 2018.

Luo, D., Cheng, W., Xu, D., Yu, W., Zong, B., Chen, H., and Zhang, X. Parameterized explainer for graph neural network. In Advances in Neural Information Processing Systems, pp. 19620-19631, 2020.

Mohankumar, A. K., Nema, P., Narasimhan, S., Khapra, M. M., Srinivasan, B. V., and Ravindran, B. Towards transparent and explainable attention models. In Association for Computational Linguistics, pp. 4206-4216, 2020.

Pearl, J., Glymour, M., and Jewell, N. P. Causal Inference in Statistics: A Primer. John Wiley & Sons, 2016.

Poole, B., Ozair, S., Van Den Oord, A., Alemi, A., and Tucker, G. On variational bounds of mutual information. In International Conference on Machine Learning, pp. 5171-5180. PMLR, 2019.

Pope, P. E., Kolouri, S., Rostami, M., Martin, C. E., and Hoffmann, H. Explainability methods for graph convolutional neural networks. In IEEE Conference on Computer Vision and Pattern Recognition, pp. 10772-10781, 2019.

Ribeiro, M. T., Singh, S., and Guestrin, C. Model-agnostic interpretability of machine learning. In International Conference on Machine Learning Workshops, 2016 Workshop on Human Interpretability in Machine Learning, 2016.

Schlichtkrull, M. S., Cao, N. D., and Titov, I. Interpreting graph neural networks for NLP with differentiable edge masking. In International Conference on Learning Representations, 2021.

Serrano, S. and Smith, N. A. Is attention interpretable? In Association for Computational Linguistics, pp. 2931-2951, 2019.

Shannon, C. E. A mathematical theory of communication. The Bell System Technical Journal, 27(3):379-423, 1948.

Socher, R., Perelygin, A., Wu, J., Chuang, J., Manning, C. D., Ng, A. Y., and Potts, C. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pp. 1631-1642, 2013.

Suresh, S., Li, P., Hao, C., and Neville, J. Adversarial graph augmentation to improve graph contrastive learning. In Advances in Neural Information Processing Systems, pp. 15920-15933, 2021.

Tishby, N. and Zaslavsky, N. Deep learning and the information bottleneck principle. In IEEE Information Theory Workshop, pp. 1-5. IEEE, 2015.

Tishby, N., Pereira, F. C., and Bialek, W. The information bottleneck method. arXiv preprint physics/0004057, 2000.

Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, L., and Polosukhin, I. Attention is all you need. In Advances in Neural Information Processing Systems, pp. 5998-6008, 2017.

Velicković, P., Cucurull, G., Casanova, A., Romero, A., Lio, P., and Bengio, Y. Graph attention networks. In International Conference on Learning Representations, 2018.

Vu, M. and Thai, M. T. Pgm-explainer: Probabilistic graphical model explanations for graph neural networks. In Advances in Neural Information Processing Systems, pp. 12225-12235, 2020.

Wencel-Delord, J. and Glorius, F. C-H bond activation enables the rapid construction and late-stage diversification of functional molecules. Nature Chemistry, 5(5):369-375, 2013.

Wu, T., Ren, H., Li, P., and Leskovec, J. Graph information bottleneck. In Advances in Neural Information Processing Systems, pp. 20437-20448, 2020.

Wu, Y., Wang, X., Zhang, A., He, X., and Chua, T.-S. Discovering invariant rationales for graph neural networks. In International Conference on Learning Representations, 2022.

Wu, Z., Ramsundar, B., Feinberg, E. N., Gomes, J., Geniesse, C., Pappu, A. S., Leswing, K., and Pande, V. MoleculeNet: a benchmark for molecular machine learning. Chemical Science, 9(2):513-530, 2018.

Xu, K., Ba, J., Kiros, R., Cho, K., Courville, A., Salakhutdinov, R., Zemel, R., and Bengio, Y. Show, attend and tell: Neural image caption generation with visual attention. In International Conference on Machine Learning, pp. 2048-2057. PMLR, 2015.

Xu, K., Hu, W., Leskovec, J., and Jegelka, S. How powerful are graph neural networks? In International Conference on Learning Representations, 2019.

Ying, Z., Bourgeois, D., You, J., Zitnik, M., and Leskovec, J. GNNExplainer: Generating explanations for graph neural networks. In Advances in Neural Information Processing Systems, pp. 9240-9251, 2019.

Yu, J., Xu, T., Rong, Y., Bian, Y., Huang, J., and He, R. Graph information bottleneck for subgraph recognition. In International Conference on Learning Representations, 2021.

Yuan, H., Tang, J., Hu, X., and Ji, S. XGNN: Towards model-level explanations of graph neural networks. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pp. 430-438, 2020a.

Yuan, H., Yu, H., Gui, S., and Ji, S. Explainability in graph neural networks: A taxonomic survey. arXiv preprint arXiv:2012.15445, 2020b.

Yuan, H., Yu, H., Wang, J., Li, K., and Ji, S. On explainability of graph neural networks via subgraph explorations. In International Conference on Machine Learning, pp. 12241-12252. PMLR, 2021.
# A. Supplementary Notations for Information Theory and Graph Neural Networks
Entropy. Given a discrete random variable $a$ , its entropy is defined as $H(a) \triangleq -\sum_{a} \mathbb{P}(a) \log \mathbb{P}(a)$ . If $a$ is a continuous random variable, its differential entropy is defined as $H(a) \triangleq -\int_{a} \mathbb{P}(a) \log \mathbb{P}(a) da$ .
KL-Divergence. Given two distributions $\mathbb{P}(x)$ and $\mathbb{Q}(x)$ , KL-Divergence is used to measure the difference between $\mathbb{P}$ and $\mathbb{Q}$ , and it is defined as $\mathrm{KL}(\mathbb{P}(x)||\mathbb{Q}(x)) \triangleq \sum_{x} \mathbb{P}(x) \log \frac{\mathbb{P}(x)}{\mathbb{Q}(x)}$ .
Mutual Information. Given two random variables $a$ and $b$, the mutual information (MI) $I(a; b)$ is a measure of the mutual dependence between them. MI quantifies the amount of information obtained about one random variable by observing the other. Formally, $I(a; b) \triangleq \sum_{a, b} \mathbb{P}(a, b) \log \frac{\mathbb{P}(a, b)}{\mathbb{P}(a) \mathbb{P}(b)}$, where $\mathbb{P}(a, b)$ is the joint distribution and $\mathbb{P}(a), \mathbb{P}(b)$ are the marginal distributions. By definition, $I(a; b) = \mathrm{KL}(\mathbb{P}(a, b) \,\|\, \mathbb{P}(a) \mathbb{P}(b)) = \sum_{a, b} \mathbb{P}(a, b) \log \mathbb{P}(a \mid b) - \sum_{a} \mathbb{P}(a) \log \mathbb{P}(a) = H(a) - H(a \mid b)$.
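A quick numeric sanity check of the identity above, using an arbitrary illustrative $2 \times 2$ joint distribution (the values are made up):

```python
import numpy as np

P = np.array([[0.3, 0.2],   # P(a, b): rows index a, columns index b
              [0.1, 0.4]])
Pa, Pb = P.sum(axis=1), P.sum(axis=0)

# I(a; b) from the definition
I = sum(P[i, j] * np.log(P[i, j] / (Pa[i] * Pb[j]))
        for i in range(2) for j in range(2))

H_a = -(Pa * np.log(Pa)).sum()
H_a_given_b = -sum(P[i, j] * np.log(P[i, j] / Pb[j])
                   for i in range(2) for j in range(2))
print(np.isclose(I, H_a - H_a_given_b))  # True
```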
Graph Neural Networks (GNNs). Given an $L$-layer GNN, let $h_v^{(l)}$ denote the node representation of node $v$ in the $l^{th}$ layer and $\mathcal{N}(v)$ denote the set of nodes adjacent to node $v$. Let $h_v^{(0)}$ be the node feature $X_v$. Most GNNs follow a message passing scheme, with two main steps in each layer: (1) neighbourhood aggregation, $m_v^{(l)} = \mathrm{AGG}(\{h_u^{(l-1)} \mid u \in \mathcal{N}(v)\})$; (2) node representation update, $h_v^{(l)} = \mathrm{UPDATE}(m_v^{(l)}, h_v^{(l-1)})$. For graph classification tasks, after obtaining $h_v^{(L)}$ for each node, the graph representation is given by $h_G = \mathrm{POOL}(\{h_v^{(L)} \mid v \in V\})$, and $h_G$ is used to make predictions. AGG, UPDATE, and POOL are three functions: AGG and POOL are typically implemented via SUM, MEAN, or MAX, while UPDATE is a fully connected (typically shallow) neural network. In some cases, edge representations are needed, and they are often given by $h_{u,v}^{(l)} = \mathrm{CONCAT}(h_u^{(l)}, h_v^{(l)})$. A generic sketch of this scheme follows.
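The code below is a minimal, generic sketch of one message-passing layer with SUM aggregation and a SUM readout, following the AGG/UPDATE/POOL scheme above; it is illustrative only, not the paper's GIN or PNA implementation.

```python
import torch
import torch.nn as nn

class MessagePassingLayer(nn.Module):
    def __init__(self, dim):
        super().__init__()
        # UPDATE: a shallow MLP over [aggregated message, previous state]
        self.update = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU())

    def forward(self, h, edge_index):
        src, dst = edge_index            # edge_index: tensor of shape [2, |E|]
        # (1) neighbourhood aggregation: AGG = SUM over incoming messages
        m = torch.zeros_like(h).index_add_(0, dst, h[src])
        # (2) node representation update
        return self.update(torch.cat([m, h], dim=-1))

def readout(h):
    # POOL = SUM over all node representations to obtain h_G
    return h.sum(dim=0)
```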
# B. Variational Bounds for the GIB Objective — Eq. (6) and Eq. (7)
From Eq. (5), the IB objective is:
$$
\min_{\phi}\; -I(G_S; Y) + \beta I(G_S; G), \quad \text{s.t. } G_S \sim g_{\phi}(G). \tag{10}
$$
To optimize it, we introduce two variational bounds on the two terms, respectively.
For the first term $I(G_S;Y)$ , by definition:
$$
I(G_S; Y) = \mathbb{E}_{G_S, Y}\left[\log \frac{\mathbb{P}(Y \mid G_S)}{\mathbb{P}(Y)}\right]. \tag{11}
$$
Since $\mathbb{P}(Y \mid G_S)$ is intractable, we introduce a variational approximation $\mathbb{P}_{\theta}(Y \mid G_S)$ for it. Then, we obtain the lower bound stated in Eq. (6):
$$
\begin{aligned}
I(G_S; Y) &= \mathbb{E}_{G_S, Y}\left[\log \frac{\mathbb{P}_{\theta}(Y \mid G_S)}{\mathbb{P}(Y)}\right] + \mathbb{E}_{G_S}\left[\mathrm{KL}\left(\mathbb{P}(Y \mid G_S) \,\|\, \mathbb{P}_{\theta}(Y \mid G_S)\right)\right] \\
&\geq \mathbb{E}_{G_S, Y}\left[\log \frac{\mathbb{P}_{\theta}(Y \mid G_S)}{\mathbb{P}(Y)}\right] \\
&= \mathbb{E}_{G_S, Y}\left[\log \mathbb{P}_{\theta}(Y \mid G_S)\right] + H(Y).
\end{aligned} \tag{12}
$$
For the second term $I\left(G;G_S\right)$ , by definition:
$$
I(G; G_S) = \mathbb{E}_{G_S, G}\left[\log \frac{\mathbb{P}(G_S \mid G)}{\mathbb{P}(G_S)}\right]. \tag{13}
$$
Since $\mathbb{P}(G_S)$ is intractable, we introduce a variational approximation $\mathbb{Q}(G_S)$ for the marginal distribution $\mathbb{P}(G_S) = \sum_G \mathbb{P}_{\phi}(G_S \mid G)\,\mathbb{P}_{\mathcal{G}}(G)$. Then, we obtain the upper bound stated in Eq. (7):
$$
\begin{aligned}
I(G; G_S) &= \mathbb{E}_{G_S, G}\left[\log \frac{\mathbb{P}_{\phi}(G_S \mid G)}{\mathbb{Q}(G_S)}\right] - \mathrm{KL}\left(\mathbb{P}(G_S) \,\|\, \mathbb{Q}(G_S)\right) \\
&\leq \mathbb{E}_{G}\left[\mathrm{KL}\left(\mathbb{P}_{\phi}(G_S \mid G) \,\|\, \mathbb{Q}(G_S)\right)\right].
\end{aligned} \tag{14}
$$
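Plugging the bound (12) into the first term of (10) and the bound (14) into the second, and dropping the constant $H(Y)$, yields the tractable surrogate that is minimized in practice (this presumably corresponds to Eq. (8) in the main text):

$$
\min_{\theta, \phi}\; -\mathbb{E}_{G_S, Y}\left[\log \mathbb{P}_{\theta}(Y \mid G_S)\right] + \beta\, \mathbb{E}_{G}\left[\mathrm{KL}\left(\mathbb{P}_{\phi}(G_S \mid G) \,\|\, \mathbb{Q}(G_S)\right)\right].
$$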
Table 6. Direct comparison with the interpretation ROC AUC of GNNExplainer and PGExplainer reported in (Luo et al., 2020), which were obtained with a selected pre-trained model.
<table><tr><td></td><td>BA-2MOTIFS</td><td>MUTAG</td></tr><tr><td>GNNEXPLAINER</td><td>74.2</td><td>72.7</td></tr><tr><td>PGEXPLAINER</td><td>92.6</td><td>87.3</td></tr><tr><td>GSAT</td><td>98.74† ± 0.55</td><td>99.60† ± 0.51</td></tr><tr><td>GSAT*</td><td>97.43† ± 1.77</td><td>97.75† ± 0.92</td></tr></table>
Table 7. Direct comparison with the interpretation precision@5 of DIR reported in (Wu et al., 2022) based on the backbone model in (Wu et al., 2022).
<table><tr><td></td><td colspan="3">SPURIOUS-MOTIF</td></tr><tr><td></td><td>b=0.5</td><td>b=0.7</td><td>b=0.9</td></tr><tr><td>GNNEXPLAINER</td><td>0.203±0.019</td><td>0.167±0.039</td><td>0.066±0.007</td></tr><tr><td>DIR</td><td>0.255±0.016</td><td>0.247±0.012</td><td>0.192±0.044</td></tr><tr><td>GSAT</td><td>0.519†±0.022</td><td>0.503†±0.034</td><td>0.416†±0.081</td></tr><tr><td>GSAT*</td><td>0.532†±0.019</td><td>0.512†±0.011</td><td>0.520†±0.022</td></tr></table>
# C. Supplementary Experiments
# C.1. Details of the Datasets
Mutag (Debnath et al., 1991) is a molecular property prediction dataset, where nodes are atoms and edges are chemical bonds. Each graph is associated with a binary label based on its mutagenic effect. Following (Luo et al., 2020), $-\mathrm{NO}_2$ and $-\mathrm{NH}_2$ in mutagen graphs are labeled as ground-truth explanations.
BA-2Motifs (Luo et al., 2020) is a synthetic dataset, where each base graph is generated by the Barabási-Albert (BA) model. Each base graph is attached with either a house-like motif or a five-node cycle motif. House motifs and cycle motifs give the class labels and are thus regarded as ground-truth explanations for the two classes, respectively.
Spurious-Motif (Wu et al., 2022) is a synthetic dataset with three graph classes. Following the notations in (Wu et al., 2022), each graph consists of a base graph (tree/ladder/wheel denoted by $\bar{G}_S = 0,1,2$ respectively, with some abuse of notations) and a motif (cycle/house/crane denoted by $G_{S} = 0,1,2$ , respectively, with some abuse of notations). The label is determined only by $G_{S}$ , while there also exists spurious correlation between the label and $\bar{G}_S$ . Specifically, to construct a graph in the training set, $G_{S}$ will be sampled uniformly, while $\bar{G}_{S}$ will be sampled with probability $\mathbb{P}(\bar{G}_S)$ , where $\mathbb{P}(\bar{G}_S) = b$ if $\bar{G}_S = G_S$ ; otherwise $\mathbb{P}(\bar{G}_S) = (1 - b) / 2$ . So, $b$ is a parameter used to control the degree of such spurious correlation. When $b = 1 / 3$ , there is no spurious correlation. We include datasets with $b = 0.5$ , $b = 0.7$ and $b = 0.9$ . Note that for testing data, the motifs and bases are randomly attached to each other, which can test if the model overfits the spurious correlation.
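For concreteness, the following minimal sketch (function and variable names are ours) shows how the base-graph type for one training graph could be sampled under the description above:

```python
import random

def sample_base(motif, b):
    """motif in {0, 1, 2}; returns the base-graph type, matching the
    motif with probability b and each other type with prob. (1 - b) / 2."""
    if random.random() < b:
        return motif
    return random.choice([k for k in (0, 1, 2) if k != motif])
```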
MNIST-75sp (Knyazev et al., 2019) is an image classification dataset, where each image in MNIST is converted to a superpixel graph. Each node in the graph represents a superpixel, and edges are formed based on the spatial distance between superpixel centers. Node features are the coordinates of the superpixels' centers of mass. Nodes with nonzero pixel values provide ground-truth explanations. Note that the subgraphs that provide explanations are of different sizes in this dataset.
Graph-SST2 (Socher et al., 2013; Yuan et al., 2020b) is a sentiment analysis dataset, where each text sequence in SST2 is converted to a graph. Each node in the graph represents a word, and edges are formed based on relationships between words. We follow the dataset splits in (Wu et al., 2022) to create degree shifts in the training set, which better test the generalizability of models: graphs with higher average node degree are used to train and validate models, while graphs with fewer nodes are used to test models. This dataset contains no ground-truth explanation labels, so we only evaluate prediction performance here and provide interpretation visualizations in Appendix D.
OGBG-Molhiv (Wu et al., 2018; Hu et al., 2020) is a molecular property prediction dataset, where nodes are atoms and edges are chemical bonds. A binary label is assigned to each graph according to whether the molecule inhibits HIV replication. We also evaluate GSAT on the molbace, molbbbp, molclintox, moltox21 and molsider datasets from OGBG. As there are no ground-truth explanation labels for these datasets, we only evaluate the prediction performance of GSAT.
Table 8. Ablation study on $\beta$ and stochasticity in GSAT (PNA as the backbone model) on Spurious-Motif. We report both interpretation ROC AUC (top) and prediction accuracy (bottom).
<table><tr><td>SPURIOUS-MOTIF</td><td>b=0.5</td><td>b=0.7</td><td>b=0.9</td></tr><tr><td>PNA+GSAT</td><td>83.34 ± 2.17</td><td>86.94 ± 4.05</td><td>88.66 ± 2.44</td></tr><tr><td>PNA+GSAT-β=0</td><td>82.01 ± 6.43</td><td>78.88 ± 6.74</td><td>80.53 ± 5.03</td></tr><tr><td>PNA+GSAT-NOSTOCH</td><td>79.72 ± 3.86</td><td>76.36 ± 2.57</td><td>80.21 ± 3.76</td></tr><tr><td>PNA+GSAT-NOSTOCH-β=0</td><td>78.69 ± 10.77</td><td>78.97 ± 13.95</td><td>79.91 ± 13.11</td></tr><tr><td>PNA</td><td>68.15 ± 2.39</td><td>66.35 ± 3.34</td><td>61.40 ± 3.56</td></tr><tr><td>PNA+GSAT</td><td>68.74 ± 2.24</td><td>64.38 ± 3.20</td><td>57.01 ± 2.95</td></tr><tr><td>PNA+GSAT-β=0</td><td>59.68 ± 7.28</td><td>58.03 ± 11.84</td><td>53.94 ± 8.11</td></tr><tr><td>PNA+GSAT-NOSTOCH</td><td>51.92 ± 11.17</td><td>41.22 ± 7.72</td><td>39.56 ± 2.74</td></tr><tr><td>PNA+GSAT-NOSTOCH-β=0</td><td>56.54 ± 6.88</td><td>48.93 ± 10.33</td><td>45.82 ± 9.60</td></tr></table>

Figure 8. Ablation study on (a) using the info. constraint in Eq. (9) and (b) replacing it with $\ell_1$ -norm, where $r$ is tuned from 0.9 to 0.1 and the coefficient of the $\ell_1$ -norm $\lambda_1$ is tuned from 1e-5 to 1.


(b) Spurious-Motif, $b = 0.9$

# C.2. Details on Hyperparameter Tuning
# C.2.1. BACKBONE MODELS
Backbone Architecture. We use a two-layer GIN (Xu et al., 2019) with 64 hidden dimensions and a 0.3 dropout ratio. We use the setting from (Corso et al., 2020) for PNA, which has 4 layers with 80 hidden dimensions, a 0.3 dropout ratio, and no scalars. For the OGBG-Mol datasets, we directly follow (Corso et al., 2020) in using the (mean, min, max, std) aggregators for PNA; however, we find that PNA has convergence issues on the other datasets when the sum aggregator is not used. Hence, PNA uses the (mean, min, max, std, sum) aggregators for all other datasets.
Dataset Splits. For Ba-2Motifs, we split it randomly into three sets (80%/10%/10%). For Mutag, we split it randomly into 80%/20% to train and validate models, and following (Luo et al., 2020) we use mutagen molecules with -NO $_2$ or -NH $_2$ as test data (because only these samples have explanation labels). For MNIST-75sp, we use the default splits given by (Knyazev et al., 2019); due to its large size in the graph setting, we also reduce the number of training samples following (Wu et al., 2022) to speed up training. For Graph-SST2, Spurious-Motifs and OGBG-Mol, we use the default splits given by (Yuan et al., 2020b) and (Wu et al., 2022). Following (Corso et al., 2020), edge features are not used for all OGBG-Mol datasets.
Epoch. We tune the number of epochs to ensure the convergence of all models. When GIN is used as the backbone model, MNIST-75sp and OGBG-Molhiv are trained for 200 epochs, and all other datasets are trained for 100 epochs. When PNA is used, Mutag and Ba-2Motifs are trained for 50 epochs and all other datasets are trained for 200 epochs. We report the performance of the epoch that achieves the best validation prediction performance and use the models achieving this best validation performance as the pre-trained models. When multiple epochs achieve the same best performance, we report the one with the lowest validation prediction loss.
Batch Size. All datasets use a batch size of 128, except MNIST-75sp, for which we use a batch size of 256 to speed up training given its large graphs.
Learning Rate. GIN uses 0.003 learning rate for Spurious-Motifs and 0.001 for all other datasets. PNA uses 0.01 learning rate with scheduler following (Corso et al., 2020), 0.003 learning rate for Graph-SST2 and Spurious-Motifs, and 0.001 learning rate for all other datasets.
# C.2.2. GSAT
Basic Setting. If not specified, GSAT uses the same settings mentioned for the backbone models. All Spurious-Motif datasets share the same hyperparameters, which are tuned based on $b = 0.5$ .
Learning Rate. When PNA is used, GSAT uses 0.001 learning rate for all OGBG-Mol datasets; otherwise it uses the same learning rate as mentioned above.
$r$ in Equation (9). Ba-2Motifs and Mutag use $r = 0.5$, and all other datasets use $r = 0.7$. We find that $r = 0.7$ generally provides good performance on all datasets. Inspired by curriculum learning (Bengio et al., 2009), $r$ is initially set to 0.9 and gradually decays to the tuned value. We adopt a step decay, where $r$ decays by 0.1 every 10 epochs, as sketched below.
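A minimal sketch of this step-decay schedule (the function name is ours):

```python
def r_schedule(epoch, r_final=0.7, r_init=0.9, step=10, decay=0.1):
    """r starts at 0.9 and drops by 0.1 every 10 epochs until r_final."""
    return max(r_final, r_init - decay * (epoch // step))

# e.g. r_schedule(0) == 0.9, r_schedule(10) == 0.8, r_schedule(25) == 0.7
```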
$\beta$ in Equation (8). $\beta$ is not tuned and is set to $\frac{1}{|E|}$ for all datasets.
Temperature. The temperature used in the Gumbel-softmax trick (Jang et al., 2017) is not tuned; we use 1 for all datasets.
# C.2.3. BASELINE INTERPRETABLE METHODS/MODELS
Basic Setting. If not specified, baselines use the same settings mentioned for the backbone models. All Spurious-Motif datasets share the same hyperparameters, which are tuned based on $b = 0.5$ .
GNNExplainer. We tune the learning rate from $(1, 0.1, 0.01, 0.001)$ and the coefficient of the $\ell_1$-norm from $(0.1, 0.01, 0.001)$, based on validation interpretation ROC AUC. The coefficient of the entropy regularization term is set to the recommended value 1. Again, in a real-world setting, post-hoc methods have no clear metric for tuning hyperparameters.
PGExplainer. We use the tuned recommended settings from (Luo et al., 2020), including the temperature, the coefficient of $\ell_1$ -norm regularization and the coefficient of entropy regularization.
GraphMask. We use the recommended settings from (Schlichtkrull et al., 2021), including the temperature, gamma, zeta and the coefficient of $\ell_0$ -norm regularization.
DIR. The causal ratio is tuned for Ba-2Motifs and Mutag. Since the other datasets we use are the same, we use the recommended settings from (Wu et al., 2022). However, even though the datasets are the same, we find that the $\alpha$ specified in their source code does not work well in our setting. Hence, we tune $\alpha$ over (10, 1, 0.1, 0.01, 0.001, 0.0001, 0.00001, 0.000001).
IB-subgraph. Due to the extreme inefficiency of IB-subgraph, we are only able to tune its mi-weight around the recommended value, over (2, 0.2, 0.02), and we use the default inner-loop iterations and con-weight specified in their source code. IB-subgraph needs $\sim 40$ hours to train 100 epochs for one seed on Spurious-Motif and $\sim 150$ hours on OGBG-Molhiv, on a Quadro RTX 6000. By contrast, GSAT only needs $\sim 15$ minutes to train 100 epochs on OGBG-Molhiv.
Random Seed. All methods are trained with 10 different random seeds, except for IB-subgraph, which we train with 5 different random seeds due to its inefficiency. For post-hoc methods, the pre-trained models are also trained with 10 different random seeds, instead of using a fixed pre-trained model as in (Luo et al., 2020). For the inherently interpretable models GSAT, IB-subgraph and DIR, we average the best epoch's performance according to their validation prediction performance. For post-hoc baselines, we average the last epoch's performance. For IB-subgraph, we stop training when there is no improvement for 20 epochs to make training feasible on large datasets.
# C.3. Node/Edge Attention
We also explore node-level attention, and we find it especially useful for molecular datasets and datasets with large graphs. Hence, we use node-level attention on Mutag, MNIST-75sp and the OGBG-Mol datasets, and edge-level attention for all other datasets. Specifically, when node attention is used, the MLP layers in $\mathbb{P}_{\phi}$ take the node embeddings as input and output $p_v$ for each $v \in V$. Then, a stochastic node attention is sampled for each node, $\alpha_v \sim \mathrm{Bern}(p_v)$, and the edge attention is obtained as $\alpha_{uv} = \alpha_u \alpha_v$, as sketched below.
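A hedged sketch of this node-to-edge attention; the names are ours, and the relaxation comment reflects our reading of the training setup rather than the paper's code.

```python
import torch

def node_to_edge_attention(p_v, edge_index):
    """p_v: per-node probabilities from the MLP; edge_index: [2, |E|]."""
    # alpha_v ~ Bern(p_v); during training a differentiable relaxation
    # (e.g. Gumbel-softmax) would replace this hard sampling step.
    alpha_v = torch.bernoulli(p_v)
    u, v = edge_index
    return alpha_v[u] * alpha_v[v]   # alpha_uv = alpha_u * alpha_v
```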
# C.4. Further Supplementary Experiments
Fig. 3 shows an experiment with disconnected critical subgraphs, where the dataset is generated in a way similar to Ba-2Motifs. Specifically, each base graph is generated using the BA model and is randomly attached with either two or three house motifs; the number of house motifs determines the graph class. Both GSAT and GraphMask are trained with the same settings used on Ba-2Motifs.

Figure 9. Visualizing label-relevant subgraphs discovered by GSAT for Ba-2Motifs. Nodes colored pink are ground-truth explanations, and each row represents a graph class.

Figure 10. Visualizing label-relevant subgraphs discovered by GSAT for Mutag. $-\mathrm{NO}_2$ and $-\mathrm{NH}_2$ are ground-truth explanations. We only present mutagen graphs as only these graphs are with ground-truth explanation labels.
Table 6 shows a direct comparison between the interpretation ROC AUC of PGExplainer and GNNExplainer reported in (Luo et al., 2020) and the performance of GSAT; GSAT still outperforms their methods significantly.
Table 4 and Table 7 show direct comparisons with DIR, where we apply GSAT with the backbone model used in DIR; GSAT still greatly outperforms their method.
Table 8 shows the ablation study on $\beta$ and stochasticity in GSAT, where PNA is the backbone model. Figure 8 shows the ablation study of the information constraint introduced in Eq. (9) on Spurious-Motif $b = 0.7$ and $b = 0.9$ . We observe the same trends from these ablation studies as discussed in Sec. 6.3.
# D. Interpretation Visualization
We provide visualizations of the label-relevant subgraphs discovered by GSAT on eight datasets, as shown in Fig. 9 to Fig. 16. The transparency of the edges in the figures represents the normalized attention weights learned by GSAT, obtained by rescaling the learned weights $\{p_{uv} \mid (u,v) \in E\}$ to $[0, 1]$: for each graph, denote $p_{\mathrm{min}} = \min \{p_{uv} \mid (u,v) \in E\}$ and $p_{\mathrm{max}} = \max \{p_{uv} \mid (u,v) \in E\}$; we rescale the weights according to
$$
\hat{p}_{uv} = \frac{p_{uv} - p_{\min}}{p_{\max} - p_{\min}}. \tag{15}
$$
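Equivalently, as a short sketch (assuming a graph's weights are not all equal, so the denominator is nonzero):

```python
import numpy as np

def normalize_weights(p):
    p = np.asarray(p, dtype=float)
    return (p - p.min()) / (p.max() - p.min())  # Eq. (15)
```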

Figure 11. Visualizing label-relevant subgraphs discovered by GSAT for Spurious-Motif $b = 0.5$ . Nodes colored pink are ground-truth explanations, and each row represents a graph class.

Figure 12. Visualizing label-relevant subgraphs discovered by GSAT for Spurious-Motif $b = 0.7$ . Nodes colored pink are ground-truth explanations, and each row represents a graph class.

Figure 13. Visualizing label-relevant subgraphs discovered by GSAT for Spurious-Motif $b = 0.9$ . Nodes colored pink are ground-truth explanations, and each row represents a graph class.

Figure 14. Visualizing label-relevant subgraphs discovered by GSAT for OGBG-Molhiv. Each row represents a graph class.

Figure 15. Visualizing label-relevant subgraphs discovered by GSAT for Graph-SST2. The top two rows show sentences with negative sentiment, and the bottom two rows show sentences with positive sentiment.

Figure 16. Visualizing label-relevant subgraphs discovered by GSAT for MNIST-75sp. The first row shows the raw images and the second row shows the normalized attention weights learned by GSAT.
2201.12xxx/2201.12987/images.zip
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c3611a1478ea7d119569a5d71b44b0766e139260c1743eb058e847a4ff223cc1
size 1383876
2201.12xxx/2201.12987/layout.json
ADDED
2201.13xxx/2201.13078/10fd0fd5-66a6-4d87-8655-e3c0fe766d3f_content_list.json
ADDED
2201.13xxx/2201.13078/10fd0fd5-66a6-4d87-8655-e3c0fe766d3f_model.json
ADDED
2201.13xxx/2201.13078/10fd0fd5-66a6-4d87-8655-e3c0fe766d3f_origin.pdf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:cd4c31e4867231b5e6ff0dcae17c2957d67b32ce74aa44fb52609fcb6b70b189
size 2949661
2201.13xxx/2201.13078/full.md
ADDED
@@ -0,0 +1,610 @@
# Lymphoma segmentation from 3D PET-CT images using a deep evidential network
Ling Huang $^{a,b}$ , Su Ruan $^{b}$ , Pierre Decazes $^{c}$ , Thierry Denoeux $^{a,d}$
<sup>a</sup>Heudiasyc, CNRS, Université de technologie de Compiègne, Compiègne, France
<sup>b</sup>Quantif, LITIS, University of Rouen Normandy, Rouen, France
<sup>c</sup>Department of Nuclear Medicine, Henri Becquerel Cancer Center, Rouen, France
<sup>d</sup>Institut universitaire de France, Paris, France
# Abstract
An automatic evidential segmentation method based on Dempster-Shafer theory and deep learning is proposed to segment lymphomas from three-dimensional Positron Emission Tomography (PET) and Computed Tomography (CT) images. The architecture is composed of a deep feature-extraction module and an evidential layer. The feature extraction module uses an encoder-decoder framework to extract semantic feature vectors from 3D inputs. The evidential layer then uses prototypes in the feature space to compute a belief function at each voxel quantifying the uncertainty about the presence or absence of a lymphoma at this location. Two evidential layers are compared, based on different ways of using distances to prototypes for computing mass functions. The whole model is trained end-to-end by minimizing the Dice loss function. The proposed combination of deep feature extraction and evidential segmentation is shown to outperform the baseline UNet model as well as three other state-of-the-art models on a dataset of 173 patients.
Keywords: medical image analysis, Dempster-Shafer theory, evidence theory, belief functions, uncertainty quantification, deep learning
# 1. Introduction
Positron Emission Tomography - Computed Tomography (PET-CT) scanning is an effective imaging tool for lymphoma segmentation, with applications to clinical diagnosis and radiotherapy planning. The standardized uptake value (SUV), defined as the measured activity normalized by injected dose per unit body weight to remove variability in image intensity between patients, is widely used to locate and segment lymphomas thanks to its high sensitivity and specificity to the metabolic activity of tumors [1]. However, PET images have a low resolution and suffer from the partial volume effect, which blurs the contours of objects [2]. For that reason, CT images are usually used jointly with PET images because of their anatomical feature-representation capability and high resolution. Figure 1 shows 3D PET-CT views of a lymphoma patient. The lymphomas appear in black, as do the brain and the bladder. As can be seen from this figure, lymphomas vary in intensity distribution, shape, type, and number.
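As a worked illustration of the SUV definition above (this is the standard formula, not code from this paper), under the usual assumption of a tissue density of 1 g/mL:

```python
def suv(conc_kbq_per_ml, dose_mbq, weight_kg):
    """SUV = tissue activity concentration / (injected dose / body weight).

    conc_kbq_per_ml : measured activity concentration in kBq/mL
    dose_mbq        : injected dose in MBq
    weight_kg       : patient body weight in kg
    """
    # kBq/mL divided by (MBq / kg) = kBq/mL * kg/MBq; the factors of 1000
    # from MBq -> kBq and kg -> g cancel under the 1 g/mL density assumption.
    return conc_kbq_per_ml * weight_kg / dose_mbq
```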

View 1 (0°)

View 2 (90°)

View 3 (180°)
Figure 1: Example of a patient with lymphomas in 3D PET-CT views. The lymphomas are the dark areas circled in red.

View 4 (270°)
Approaches to lymphoma segmentation. Techniques for lymphoma segmentation can be divided into three classes: SUV-based, region-growing-based, and deep-learning-based methods. For PET images, it is common to segment lymphomas with a set of fixed SUV thresholds. These so-called SUV-based methods [3][4] are fast but lack flexibility in boundary delineation and require domain knowledge to locate the region of interest. Region-growing-based methods [5][6] have been proposed to optimize boundary delineation by taking texture and shape information into account: given a growing function and a stopping condition, the tumor region grows step by step until the stopping condition is reached. However, these methods still need clinicians to locate the seeds for region growing [5], and they are time-consuming, especially when applied to 3D images. Lymphoma segmentation with deep learning has become a popular research topic thanks to its high feature-representation ability [7][8].
Deep-learning-based methods. Long et al. [9] were the first to show that a fully convolutional network (FCN) could be trained end-to-end for semantic segmentation, exceeding the state of the art when the paper was published. UNet [10], a successful modification and extension of FCN, has become the most popular model for medical image segmentation in recent years. Driven by different tasks and datasets, several extended and optimized variants of UNet have been proposed for medical image segmentation, including VNet [11], SegResNet [12], and nnUNet [13]. VNet is a variant of UNet that introduces short residual connections at each stage. Compared with UNet, SegResNet contains an additional variational autoencoder branch. Finally, nnUNet is more flexible than UNet in three aspects: (1) residual connections in convolution blocks, (2) anisotropic kernel sizes and strides in each layer, and (3) deep supervision heads. Deep learning has been applied to lymphoma segmentation, yielding promising results. In [7], Li et al. proposed a DenseX-Net-based lymphoma segmentation model with a two-flow architecture for 3D PET-CT images: a segmentation flow (DenseU-Net) for lymphoma segmentation and a reconstruction flow (encoder-decoder) for learning semantic representations of different lymphomas. In [8], Hu et al. introduced a multi-source fusion model for lymphoma segmentation with PET images: first, three 2D segmentation models and one 3D segmentation model were trained with three orthogonal views and the 3D image, respectively; the four segmentation maps were then fused by a convolutional layer to get the final result. In [14], Blanc-Durand et al. proposed a nnUNet-based lymphoma segmentation network with additional validation of total metabolic tumor volume for 3D PET-CT images. In [15], Huang et al. proposed to fuse the outputs of two UNets trained on CT and PET data using Dempster's rule of combination [16], a combination operator of Dempster-Shafer theory (DST) (see Section 2 below). However, the outputs of the UNets were probabilities, and this approach did not harness the full power of DST.
Uncertainty. In spite of the excellent performance of deep learning methods, the issue of quantifying prediction uncertainty remains [17]. This uncertainty can be classified into three types: distribution, model, and data uncertainty. Distribution uncertainty is caused by training-test distribution mismatch (dataset shift) [18]. Model uncertainty arises from limited training set size and model misspecification [19][20][21]. Finally, sources of data uncertainty include class overlap, label noise, and homo or hetero-Seedastic noise [22]. Because of the limitations of medical imaging and labeling technology, as well as the need to use a large nonlinear parametric segmentation model, PET-CT image segmentation results are particularly tainted with uncertainty, which limits the reliability of the segmentation. Figure 2 shows examples of PET and CT image slices for one patient with lymphomas. As can be seen, lymphomas in PET images usually correspond to the brightest pixels, but organs such as the brain and bladder are also located in bright pixel areas, which may result in segmentation errors. Moreover, lymphoma boundaries are blurred, which makes it hard to delineate lymphomas precisely.
|
| 40 |
+
|
| 41 |
+
Approaches to uncertainty modeling. Early approaches to uncertainty quantification in machine learning were based on Bayesian theory [23][24]. The popularity of deep learning models has revived research of model uncertainty estimation and has given rise to specific methods such as variational dropout [25][26]. In this paper, we explore a different approach based on DST [27][16] [28], a theoretical framework for reasoning with imperfect (uncertain, imprecise, partial) information. DST was first introduced by Dempster [27] and Shafer [16] and was further popularized and developed by Smets [29]. Applications in machine learning were first introduced by Denoeux [30, 31, 32]. DST is based on the representation of elementary items of evidence by belief functions, and their combination by a specific operator called Dempster's rule of combination. In recent years, DST has generated considerable interest and has had great success in various fields, including information fusion [33][34][35], classification [36][37][38], clustering [28][39][40], and image segmentation [41][42][43].
|
| 42 |
+
|
| 43 |
+
In this paper<sup>1</sup>, we propose a 3D PET-CT diffuse large B-cell lymphoma segmentation model based on DST and deep learning, which focuses not only on lymphoma segmentation accuracy but also on uncertainty quantification using belief functions. The proposed segmentation model is composed of a UNet module for feature extraction and an evidential segmentation module for uncertainty quantification and decision-making. End-to-end learning is performed by minimizing the Dice loss function.








Figure 2: Example of a patient with lymphomas. The first and second rows show, respectively, PET and CT slices for one patient in axial, sagittal and coronal views.

The rest of the paper is organized as follows. The main concepts of DST are first recalled in Section 2, and two approaches for computing belief functions in classification tasks are described in Section 3. The proposed model is then introduced in Section 4, and experimental results are reported in Section 5. Finally, Section 6 concludes the paper.

# 2. Dempster-Shafer theory

In this section, we first recall some necessary notations and definitions regarding DST. Let $\Omega = \{\omega_1,\omega_2,\dots ,\omega_K\}$ be a finite set of all possible answers to some question, called the frame of discernment. Evidence about the question of interest can be represented by a mass function $m$, defined as a mapping from the power set $2^{\Omega}$ to $[0,1]$ such that

$$
\sum_{A \subseteq \Omega} m(A) = 1 \tag{1}
$$

and $m(\emptyset) = 0$, where $\emptyset$ denotes the empty set. Subsets $A \subseteq \Omega$ such that $m(A) > 0$ are called the focal sets of $m$. Each mass $m(A)$ represents a share of a unit mass of belief allocated to focal set $A$, and which cannot be allocated to any strict subset of $A$. The mass $m(\Omega)$ allocated to the whole frame can be seen as a degree of ignorance. Full ignorance is represented by the vacuous mass function $m_?$ verifying $m_?(\Omega) = 1$. A mass function is said to be Bayesian if its focal sets are singletons, and logical if it has only one focal set.

Discounting. Let $m$ be a mass function on $\Omega$ and $s$ a coefficient in $[0,1]$. The discounting operation [16] with discount rate $1 - s$ transforms $m$ into a weaker, less informative mass function defined as follows:

$$
{}^{s}m = s\, m + (1 - s)\, m_?. \tag{2}
$$

As shown in [45], coefficient $s$ can be interpreted as a degree of belief that the source of information providing mass function $m$ is reliable.

Simple mass functions. A mass function $m$ is said to be simple if it can be obtained by discounting a logical mass function; it thus has the following form:

$$
m(A) = s, \quad m(\Omega) = 1 - s, \tag{3}
$$

for some $A \subset \Omega$ such that $A \neq \emptyset$ and some $s \in [0,1]$, called the degree of support in $A$. The quantity $w = -\ln(1 - s)$ is called the weight of evidence associated with $m$ [16, page 77]. In the following, a simple mass function with focal set $A$ and weight of evidence $w$ will be denoted as $A^w$.

Belief and plausibility. Given a mass function $m$, belief and plausibility functions are defined, respectively, as follows:

$$
Bel(A) = \sum_{B \subseteq A} m(B) \tag{4}
$$

and

$$
Pl(A) = \sum_{B \cap A \neq \emptyset} m(B) = 1 - Bel\left(A^{c}\right), \tag{5}
$$

for all $A \subseteq \Omega$, where $A^c$ denotes the complement of $A$. The quantity $Bel(A)$ can be interpreted as a degree of support for $A$, while $Pl(A)$ can be interpreted as a measure of lack of support for the complement of $A$.

Dempster's rule. Two mass functions $m_{1}$ and $m_{2}$ derived from two independent items of evidence can be combined by considering each pair of a focal set $B$ of $m_{1}$ and a focal set $C$ of $m_{2}$, and assigning the product $m_{1}(B)m_{2}(C)$ to the intersection $B \cap C$. A normalization step is then necessary to ensure that the mass of the empty set is equal to zero. This operation, called Dempster's rule of combination [16] and denoted as $\oplus$, is formally defined by $(m_{1} \oplus m_{2})(\emptyset) = 0$ and

$$
(m_{1} \oplus m_{2})(A) = \frac{1}{1 - \kappa} \sum_{B \cap C = A} m_{1}(B) m_{2}(C), \tag{6}
$$

for all $A \subseteq \Omega$, $A \neq \emptyset$, where $\kappa$ represents the degree of conflict between $m_1$ and $m_2$, defined as

$$
\kappa = \sum_{B \cap C = \emptyset} m_{1}(B) m_{2}(C). \tag{7}
$$

The combined mass $m_{1} \oplus m_{2}$ is called the orthogonal sum of $m_{1}$ and $m_{2}$. It can easily be checked that the orthogonal sum of two simple mass functions $A^{w_{1}}$ and $A^{w_{2}}$ with the same focal set $A$ is the simple mass function $A^{w_{1} + w_{2}}$: Dempster's rule thus adds up weights of evidence.

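To make these definitions concrete, the following minimal Python sketch implements Dempster's rule (6)-(7) for mass functions over a small frame, with focal sets represented as frozensets; the frame and the weights of evidence used in the example are illustrative only.

```python
import math
from itertools import product

OMEGA = frozenset({"w1", "w2"})

def dempster(m1, m2):
    """Dempster's rule (6): intersect focal sets, multiply masses,
    then renormalize by 1 - kappa, where kappa is the conflict (7)."""
    raw = {}
    for (B, mB), (C, mC) in product(m1.items(), m2.items()):
        raw[B & C] = raw.get(B & C, 0.0) + mB * mC
    kappa = raw.pop(frozenset(), 0.0)        # mass sent to the empty set
    return {A: v / (1.0 - kappa) for A, v in raw.items()}

def simple(A, w):
    """Simple mass function A^w with weight of evidence w, Eq. (3)."""
    s = 1.0 - math.exp(-w)                   # degree of support
    return {frozenset(A): s, OMEGA: 1.0 - s}

# Weights of evidence add up: this equals simple({"w1"}, 0.5 + 1.0).
m = dempster(simple({"w1"}, 0.5), simple({"w1"}, 1.0))
```
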
Decision-making. After aggregating all the available evidence in the form of a mass function, it is often necessary to make a final decision. Decision-making based on belief functions for classification tasks has been studied in [46] and, more recently, by Ma and Denoeux in [47]. The reader is referred to Ref. [48] for a recent review of decision methods based on belief functions. Here, we briefly introduce the approach used in this paper. Consider a classification task with $K$ classes in the set $\Omega = \{\omega_1, \dots, \omega_K\}$. Assume that the utility of selecting the correct class is 1, and the utility of an error is 0. As shown in [46], the lower and upper expected utilities of selecting class $\omega_k$ are then, respectively, $Bel(\{\omega_k\})$ and $Pl(\{\omega_k\})$. A pessimistic decision-maker (DM) maximizing the lower expected utility will then select the class with the highest degree of belief, while an optimistic DM maximizing the upper expected utility will select the most plausible class. Alternatively, the Hurwicz criterion consists in maximizing a weighted sum of the lower and upper expected utilities: we then select the class $\omega_k$ such that $(1 - \xi)Bel(\{\omega_k\}) + \xi Pl(\{\omega_k\})$ is maximum, where $\xi$ is an optimism index. Another approach, advocated by Smets in the Transferable Belief Model [45], is to base decisions on the pignistic probability distribution, defined as

$$
p_{m}(\omega) = \sum_{\{A \subseteq \Omega : \omega \in A\}} \frac{m(A)}{|A|} \tag{8}
$$

for all $\omega \in \Omega$.

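These decision quantities are easy to compute with the same frozenset representation as above; the following sketch (with a hypothetical mass function) evaluates $Bel$, $Pl$, the pignistic transform (8) and a Hurwicz decision.

```python
def bel(m, A):   # Eq. (4): sum of masses of subsets of A
    return sum(v for B, v in m.items() if B <= A)

def pl(m, A):    # Eq. (5): sum of masses of sets intersecting A
    return sum(v for B, v in m.items() if B & A)

def pignistic(m, omega):   # Eq. (8): share each mass equally among elements
    return {w: sum(v / len(A) for A, v in m.items() if w in A) for w in omega}

def hurwicz(m, omega, xi=0.5):
    """Select the class maximizing (1 - xi) Bel({w}) + xi Pl({w})."""
    return max(omega, key=lambda w: (1 - xi) * bel(m, frozenset({w}))
                                    + xi * pl(m, frozenset({w})))

m = {frozenset({"w1"}): 0.5, frozenset({"w1", "w2"}): 0.5}
print(pignistic(m, {"w1", "w2"}))   # {'w1': 0.75, 'w2': 0.25}
print(hurwicz(m, {"w1", "w2"}))     # 'w1'
```
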
# 3. Evidential classifiers

In this section, we review two methods for designing classifiers that output mass functions, referred to as evidential classifiers. The evidential neural network (ENN) classifier introduced in [31] is first recalled in Section 3.1. A new model based on the interpretation of a radial basis function (RBF) network as a combination of simple mass functions by Dempster's rule, inspired by [49], is then described in Section 3.2. The two models are compared experimentally in Section 3.3.

# 3.1. Evidential neural network

In [31], Denoeux proposed the ENN classifier, in which mass functions are computed based on distances to prototypes. The basic idea is to consider each prototype as a piece of evidence, which is discounted based on its distance to the input vector. The evidence from different prototypes is then pooled by Dempster's rule (6). We provide a brief introduction to the ENN model in this section.

The ENN classifier is composed of an input layer of $H$ neurons (where $H$ is the dimension of the input space), two hidden layers and an output layer (Figure 3). The first hidden layer is composed of $I$ units, whose weight vectors are prototypes $\pi_1,\ldots ,\pi_I$ in input space. The activation of unit $i$ in the prototype layer is

$$
s_{i} = \alpha_{i} \exp(-\gamma_{i} d_{i}^{2}), \tag{9}
$$

where $d_{i} = \| \pmb{x} - \pmb{\pi}_{i} \|$ is the Euclidean distance between input vector $\pmb{x}$ and prototype $\pmb{\pi}_{i}$, $\gamma_{i} > 0$ is a scale parameter, and $\alpha_{i} \in [0,1]$ is an additional parameter.


Figure 3: Evidential neural network.

The second hidden layer computes mass functions $m_{i}$ representing the evidence of each prototype $\pmb{\pi}_{i}$, using the following equations:

$$
m_{i}(\{\omega_{k}\}) = u_{ik} s_{i}, \quad k = 1, \dots, K, \tag{10a}
$$

$$
m_{i}(\Omega) = 1 - s_{i}, \tag{10b}
$$

where $u_{ik}$ is the membership degree of prototype $i$ to class $\omega_k$, and $\sum_{k=1}^{K} u_{ik} = 1$. Mass function $m_i$ can thus be seen as a discounted Bayesian mass function, with discount rate $1 - s_i$; its focal sets are singletons and $\Omega$. The mass assigned to $\Omega$ increases with the distance between $\pmb{x}$ and $\pmb{\pi}_i$. Finally, the third layer combines the $I$ mass functions $m_1, \ldots, m_I$ using Dempster's rule (6). The output mass function $m = \bigoplus_{i=1}^{I} m_i$ is a discounted Bayesian mass function that summarizes the evidence of the $I$ prototypes. Because the focal sets of $m$ are singletons and $\Omega$, the class with the highest degree of belief also has the highest plausibility and pignistic probability: consequently, the decision rules recalled in Section 2 are equivalent in this case.

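The ENN layer vectorizes naturally. Below is a minimal PyTorch sketch of Eqs. (9)-(10) followed by the iterative Dempster combination; the tensor shapes and the batch-wise normalization are our own choices, not the authors' code.

```python
import torch

def enn_layer(x, proto, gamma, alpha, u):
    """x: (B, H) features; proto: (I, H) prototypes; gamma, alpha: (I,);
    u: (I, K) membership degrees with rows summing to 1.
    Returns masses of shape (B, K+1); the last column is m(Omega)."""
    d2 = torch.cdist(x, proto) ** 2              # squared distances (B, I)
    s = alpha * torch.exp(-gamma * d2)           # Eq. (9)
    m_k = s.unsqueeze(-1) * u                    # singleton masses (B, I, K)
    m_omega = 1.0 - s                            # mass on the frame (B, I)
    # Unnormalized Dempster combination over the I prototypes: conflicting
    # products (different singletons) are dropped, normalization comes last.
    Mk, Mo = m_k[:, 0], m_omega[:, 0]
    for i in range(1, proto.shape[0]):
        Mk = (Mk * m_k[:, i] + Mk * m_omega[:, i].unsqueeze(-1)
              + Mo.unsqueeze(-1) * m_k[:, i])
        Mo = Mo * m_omega[:, i]
    total = Mk.sum(-1) + Mo                      # equals 1 - kappa
    return torch.cat([Mk, Mo.unsqueeze(-1)], dim=-1) / total.unsqueeze(-1)

# Example with the initial values used later in Section 5.1 (gamma = 0.01,
# alpha = 0.5, random normalized memberships); data are placeholders.
masses = enn_layer(torch.randn(4, 2), torch.randn(10, 2),
                   torch.full((10,), 0.01), torch.full((10,), 0.5),
                   torch.softmax(torch.randn(10, 2), dim=-1))
```
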
Let $\pmb{\theta}$ denote the vector of all network parameters, composed of the $I$ prototypes $\pi_{i}$, their parameters $\gamma_{i}$ and $\alpha_{i}$, and their membership degrees $u_{ik}$, $k = 1,\ldots ,K$. In [31], it was proposed to learn these parameters by minimizing the regularized sum-of-squares loss function

$$
L_{SS}(\boldsymbol{\theta}) = \sum_{n=1}^{N} \sum_{k=1}^{K} \left(p_{nk} - y_{nk}\right)^{2} + \lambda \sum_{i=1}^{I} \alpha_{i}, \tag{11}
$$

where $p_{nk}$ is the pignistic probability of class $\omega_k$ for instance $n$, $N$ is the number of training instances, and $y_{nk} = 1$ if the true class of instance $n$ is $\omega_k$, and $y_{nk} = 0$ otherwise. The second term on the right-hand side of (11) is a regularization term, and $\lambda$ is a hyperparameter that can be tuned by cross-validation.

The idea of applying the above model to features extracted by a convolutional neural network (CNN) was first proposed by Tong et al. in [50]. In this approach, the ENN module becomes an "evidential layer", which is plugged into the output of a CNN instead of the usual softmax layer. The feature extraction and evidential modules are trained simultaneously. A similar approach was applied in [43] to semantic segmentation. In the next section, we present an alternative approach based on a radial basis function (RBF) network and weights of evidence.

# 3.2. Radial basis function network

As shown in [49], the calculations performed in the softmax layer of a feedforward neural network can be interpreted in terms of combination of evidence by Dempster's rule. The output class probabilities can be seen as normalized plausibilities according to an underlying belief function. Applying these ideas to a radial basis function (RBF) network, it is possible to derive an alternative evidential classifier with properties similar to those of the ENN model recalled in Section 3.1.

Consider an RBF network with $I$ prototype (hidden) units. The activation of hidden unit $i$ is

$$
s_{i} = \exp\left(-\gamma_{i} d_{i}^{2}\right), \tag{12}
$$

where, as before, $d_{i} = \| \pmb{x} - \pmb{\pi}_{i} \|$ is the Euclidean distance between input vector $\pmb{x}$ and prototype $\pmb{\pi}_{i}$, and $\gamma_{i} > 0$ is a scale parameter. For the application considered in this paper, we only need to consider the case of binary classification with $K = 2$ and $\Omega = \{\omega_{1}, \omega_{2}\}$. (The case where $K > 2$ is also analyzed in [49].) Let $v_{i}$ be the weight of the connection between hidden unit $i$ and the output unit, and let $w_{i} = s_{i} v_{i}$ be the product of the output of unit $i$ and weight $v_{i}$. The quantities $w_{i}$ can be interpreted as weights of evidence for class $\omega_{1}$ or $\omega_{2}$, depending on the sign of $v_{i}$:

- If $v_{i} \geq 0$, $w_{i}$ is a weight of evidence for class $\omega_{1}$;
- If $v_{i} < 0$, $-w_{i}$ is a weight of evidence for class $\omega_{2}$.

To each prototype $i$ can, thus, be associated the following simple mass function:

$$
m_{i} = \{\omega_{1}\}^{w_{i}^{+}} \oplus \{\omega_{2}\}^{w_{i}^{-}},
$$

where $w_{i}^{+} = \max(0,w_{i})$ and $w_{i}^{-} = -\min(0,w_{i})$ denote, respectively, the positive and negative parts of $w_{i}$. Combining the evidence of all prototypes in favor of $\omega_{1}$ or $\omega_{2}$ by Dempster's rule, we get the mass function

$$
m = \bigoplus_{i=1}^{I} m_{i} = \left\{\omega_{1}\right\}^{w^{+}} \oplus \left\{\omega_{2}\right\}^{w^{-}}, \tag{13}
$$

with $w^{+} = \sum_{i=1}^{I} w_{i}^{+}$ and $w^{-} = \sum_{i=1}^{I} w_{i}^{-}$. In [49], the normalized plausibility of $\omega_{1}$ corresponding to mass function $m$ was shown to have the following expression:

$$
p\left(\omega_{1}\right) = \frac{Pl\left(\left\{\omega_{1}\right\}\right)}{Pl\left(\left\{\omega_{1}\right\}\right) + Pl\left(\left\{\omega_{2}\right\}\right)} = \frac{1}{1 + \exp\left(-\sum_{i=1}^{I} v_{i} s_{i}\right)}, \tag{14}
$$

i.e., it is the output of a unit with a logistic activation function. When training an RBF network with a logistic output unit, we thus actually combine evidence from each of the prototypes, but the combined mass function remains latent. In [49], mass function $m$ defined by (13) was shown to have the following expression:

$$
m\left(\left\{\omega_{1}\right\}\right) = \frac{\left[1 - \exp\left(-w^{+}\right)\right] \exp\left(-w^{-}\right)}{1 - \kappa} \tag{15a}
$$

$$
m\left(\left\{\omega_{2}\right\}\right) = \frac{\left[1 - \exp\left(-w^{-}\right)\right] \exp\left(-w^{+}\right)}{1 - \kappa} \tag{15b}
$$

$$
m(\Omega) = \frac{\exp(-w^{+} - w^{-})}{1 - \kappa} = \frac{\exp(-\sum_{i=1}^{I} |w_{i}|)}{1 - \kappa}, \tag{15c}
$$

where

$$
\kappa = [1 - \exp(-w^{+})][1 - \exp(-w^{-})] \tag{15d}
$$

is the degree of conflict between mass functions $\{\omega_1\}^{w^+}$ and $\{\omega_2\}^{w^-}$.

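In the binary case, these masses have the closed form above and can be computed in a few lines; here is a small PyTorch sketch of Eqs. (12)-(15), with tensor shapes assumed by us rather than taken from the authors' implementation.

```python
import torch

def rbf_masses(x, proto, gamma, v):
    """x: (B, H) features; proto: (I, H); gamma: (I,) scales; v: (I,)
    output weights. Returns (m({w1}), m({w2}), m(Omega)), each (B,)."""
    s = torch.exp(-gamma * torch.cdist(x, proto) ** 2)   # Eq. (12)
    w = s * v                                            # signed evidence
    w_pos = w.clamp(min=0).sum(-1)                       # w^+
    w_neg = (-w).clamp(min=0).sum(-1)                    # w^-
    kappa = (1 - torch.exp(-w_pos)) * (1 - torch.exp(-w_neg))  # Eq. (15d)
    m1 = (1 - torch.exp(-w_pos)) * torch.exp(-w_neg) / (1 - kappa)
    m2 = (1 - torch.exp(-w_neg)) * torch.exp(-w_pos) / (1 - kappa)
    m_omega = torch.exp(-w_pos - w_neg) / (1 - kappa)
    return m1, m2, m_omega
```

Note that when $\pmb{x}$ is far from all prototypes, all $w_i$ vanish and `m_omega` tends to 1, which is the behavior discussed in Section 3.3.
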
In this approach, we thus simply need to train a standard RBF network with $I$ prototype units and one output unit with a logistic activation function, by minimizing a loss function such as, e.g., the regularized cross-entropy loss

$$
L_{CE}(\boldsymbol{\theta}) = -\sum_{n=1}^{N} \left(y_{n} \log p_{n} + \left(1 - y_{n}\right) \log\left(1 - p_{n}\right)\right) + \lambda \sum_{i=1}^{I} w_{i}^{2}, \tag{16}
$$

where $p_n$ is the normalized plausibility of class $\omega_1$ computed from (14) for instance $n$, $y_n$ is the class label of instance $n$ ($y_n = 1$ if the true class of instance $n$ is $\omega_1$, and $y_n = 0$ otherwise), and $\lambda$ is a hyperparameter. We note that increasing $\lambda$ has the effect of decreasing the weights of evidence and, thus, of obtaining less informative mass functions.

# 3.3. Comparison between the two models

To compare the RBF model described in Section 3.2 with the ENN model recalled in Section 3.1, we consider the two-class dataset shown in Figure 4. The two classes are randomly distributed around half circles with Gaussian noise and are separated by a nonlinear boundary. A learning set of size $N = 300$ and a test set of size 1000 were generated from the same distribution.

An ENN and an RBF network were initialized with $I = 6$ prototypes generated by the $k$-means algorithm and were trained on the learning data. Figures 5a and 5b show, respectively, the test error rate and the mean uncertainty (defined as the average mass assigned to the frame $\Omega$), as functions of hyperparameter $\lambda$ in (11) and (16), for 10 different runs of both algorithms with different initializations. As expected, uncertainty increases with $\lambda$ for both models, but the ENN model appears to be less sensitive to $\lambda$ as compared to the RBF model. Both models achieve similar minimum error rates for $\lambda$ around $10^{-3}$, and have similar mean uncertainties for $\lambda = 10^{-4}$.


Figure 4: Simulated data.

As shown in [31], the robustness of the ENN model arises from the fact that, when the input $\pmb{x}$ is far from all prototypes, the output mass function $m$ is close to the vacuous mass function. This property, in particular, makes the network capable of detecting observations generated from a distribution that is not represented in the learning set. From (15c), we can expect the RBF network model to have a similar property: if $\pmb{x}$ is far from all prototypes, all weights of evidence $w_{i}$ will be small and the mass $m(\Omega)$ will be close to unity. To compare the mass functions computed by the two models, not only in regions of high density where training data are present, but also in regions of low density, we introduced a third class in the test set, as shown in Figure 6. Figure 7 shows scatter plots of masses on each of the focal sets computed for the two models trained with $\lambda = 10^{-3}$ and applied to an extended dataset composed of the learning data and the third class. We can see that the mass functions are quite similar. Contour plots shown in Figure 6 confirm this similarity.

# 4. Proposed model

The main idea of this work is to hybridize a deep medical image segmentation model with one of the evidential classifiers introduced in Section 3. Figure 8 shows the global lymphoma segmentation architecture, composed of an encoder-decoder feature extraction module (UNet) and an evidential layer based on one of the two models described in Section 3.


|
| 250 |
+
(a)
|
| 251 |
+
|
| 252 |
+

|
| 253 |
+
(b)
|
| 254 |
+
Figure 5: Test error rates (a) and mean uncertainty (b) for the ENN and RBF models, as functions of regularization parameter $\lambda$ .
|
| 255 |
+
|
| 256 |
+
The input is the concatenated PET-CT image volume provided as a tensor of size $2 \times 256 \times 256 \times 128$, where 2 corresponds to the number of modality channels, and $256 \times 256 \times 128$ is the size of each input volume. The PET-CT image volumes are first fed into the feature extraction module, which outputs high-level features in the form of a tensor of size $256 \times 256 \times 128 \times H$, where $H$ is the number of features computed at each voxel. This tensor is then fed into the evidential layer, which outputs mass functions representing evidence about the class of each voxel, resulting in a tensor of size $256 \times 256 \times 128 \times (K + 1)$, where $K + 1$ is the number of masses (one for each class and one for the frame of discernment $\Omega$). The whole network is trained end-to-end by minimizing a regularized Dice loss. The different components of this model are described in greater detail below.

Feature extraction module. The feature extraction module is based on a UNet [10] with residual encoder and decoder layers [51], as shown in Figure 9. Each down-sampling layer (marked in blue) is composed of convolution, normalization, dropout and activation blocks. Each up-sampling layer (marked in green) is composed of transpose convolution, normalization, dropout and activation blocks. The last layer (marked in yellow) is the bottom connection, which does not down- or up-sample the data. In the experiments reported in Section 5, the channels (numbers of filters) were set to $(8,16,32,64,128)$ with kernel size equal to 5 and convolutional strides equal to $(2,2,2,2)$. The spatial dimension, input channel and output channel of the module were set, respectively, to 3, 2, and the number $H$ of extracted features. (Experiments with several values of $H$ are reported in Section 5.2.) The dropout rate was set to 0 and no padding operation was applied. Instance normalization [52] was used to perform intensity normalization across the width, height and depth of a single fea-


|
| 261 |
+
$\mathsf{m}(\{\omega_1\})$
|
| 262 |
+
|
| 263 |
+

|
| 264 |
+
$\mathsf{m}(\{\omega_1\})$
|
| 265 |
+
(b)
|
| 266 |
+
|
| 267 |
+

|
| 268 |
+
(a)
|
| 269 |
+
$\mathsf{m}(\{\omega_2\})$
|
| 270 |
+
(c)
|
| 271 |
+
|
| 272 |
+

|
| 273 |
+
$\mathsf{m}(\{\omega_2\})$
|
| 274 |
+
(d)
|
| 275 |
+
|
| 276 |
+

|
| 277 |
+
m(Ω)
|
| 278 |
+
(e)
|
| 279 |
+
Figure 6: Contours of the mass assigned to $\{\omega_1\}$ , $\{\omega_2\}$ and $\Omega$ by the RBF (left column) and ENN (right column) models. The training data are displayed in blue and red, and the third class (absent from the training data) is shown in green. Training was done with $\lambda = 0.001$ for the two models.
|
| 280 |
+
|
| 281 |
+

|
| 282 |
+
m(Ω)
|
| 283 |
+
(f)
|
| 284 |
+
|
| 285 |
+

|
| 286 |
+
(a)
|
| 287 |
+
|
| 288 |
+

|
| 289 |
+
(b)
|
| 290 |
+
|
| 291 |
+

|
| 292 |
+
(c)
|
| 293 |
+
Figure 7: Masses computed by the RBF network (horizontal axis) versus the ENN model (vertical axis) for the extended dataset.
|
| 294 |
+
|
| 295 |
+

|
| 296 |
+
Figure 8: Global lymphoma segmentation model.
|
| 297 |
+
|
| 298 |
+
ture map of a single example. The Parametric Rectified Linear Unit (PReLU) function [53], which generalizes the traditional rectified unit with a slope for negative values, was used as the activation function. For each input voxel, the feature extraction module outputs a $1 \times H$ feature vector, which is fed into the evidential layer.

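As a rough illustration, the configuration described above maps onto MONAI's residual UNet as sketched below; the number of residual units is our assumption, since it is not specified here.

```python
from monai.networks.nets import UNet

H = 2  # number of extracted features (see Section 5.2)
feature_extractor = UNet(
    spatial_dims=3,
    in_channels=2,                  # concatenated PET and CT channels
    out_channels=H,
    channels=(8, 16, 32, 64, 128),  # numbers of filters per level
    strides=(2, 2, 2, 2),
    kernel_size=5,
    num_res_units=2,                # residual units: assumed value
    norm="instance",                # instance normalization [52]
    act="prelu",                    # PReLU activation [53]
    dropout=0.0,
)
```
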
Evidential layer. A probabilistic network with a softmax output layer may assign voxels a high probability of belonging to one class while the segmentation uncertainty is actually high because, e.g., the feature vector describing that voxel is far away from feature vectors presented during training. Here, we propose to plug in one of the evidential classifiers described in Section 3 at the output of the feature extraction module. The ENN or RBF classifier then takes as inputs the high-level feature vectors computed by the UNet and computes, for each voxel $n$, a mass function $m_{n}$ on the frame $\Omega = \{\omega_{1},\omega_{2}\}$, where $\omega_{1}$ and $\omega_{2}$ denote, respectively, the background and the lymphoma class. We will use the names "ENN-UNet" and "RBF-UNet" to designate the two variants of the architecture.

Loss function. The whole network is trained end-to-end by minimizing a regularized Dice loss. We use the Dice loss instead of the original cross-entropy loss in UNet because the quality of the segmentation is finally assessed by the Dice coefficient. The Dice loss is defined as

$$
\operatorname{loss}_{D} = 1 - \frac{2 \sum_{n=1}^{N} S_{n} G_{n}}{\sum_{n=1}^{N} S_{n} + \sum_{n=1}^{N} G_{n}}, \tag{17}
$$

where $N$ is the number of voxels in the image volume, $S_{n}$ is the output pignistic probability of the tumor class (i.e., $m_{n}(\{\omega_{2}\}) + m_{n}(\Omega)/2$) for voxel $n$, and $G_{n}$ is the ground truth for voxel $n$, defined as $G_{n} = 1$ if voxel $n$ corresponds to a tumor, and $G_{n} = 0$ otherwise. The regularized loss function is

$$
\operatorname{loss} = \operatorname{loss}_{D} + \lambda R, \tag{18}
$$

where $\lambda$ is the regularization coefficient and $R$ is a regularizer defined either as $R = \sum_{i}\alpha_{i}$ if the ENN classifier is used in the evidential layer, or as $R = \sum_{i}v_{i}^{2}$ if the RBF classifier is used.


|
| 317 |
+
Figure 9: Feature extraction module.
|
| 318 |
+
|
| 319 |
+
The regularization term allows us to decrease the influence of unimportant prototypes and avoid overfitting.

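A compact PyTorch sketch of the Dice loss (17) and the regularized loss (18) follows; the smoothing constant `eps` is our addition to avoid division by zero and is not part of Eq. (17).

```python
import torch

def dice_loss(S, G, eps=1e-6):
    """Soft Dice loss (17). S: pignistic tumor probabilities
    m({w2}) + m(Omega)/2, flattened over voxels; G: {0, 1} ground truth."""
    return 1.0 - 2.0 * (S * G).sum() / (S.sum() + G.sum() + eps)

def total_loss(S, G, layer_params, lam, variant="ENN"):
    """Regularized loss (18): R = sum(alpha_i) for the ENN layer,
    R = sum(v_i^2) for the RBF layer."""
    R = layer_params.sum() if variant == "ENN" else layer_params.pow(2).sum()
    return dice_loss(S, G) + lam * R
```
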
# 5. Experiments

The model introduced in Section 4 was applied to a set of PET-CT data recorded on patients with lymphomas $^2$. The experimental settings are described in Section 5.1. A sensitivity analysis with respect to the main hyperparameters is reported in Section 5.2. We then compare the segmentation accuracy and calibration of our models with those of state-of-the-art models in Sections 5.3 and 5.4, respectively.

# 5.1. Experimental settings

Dataset. The dataset considered in this paper contains 3D images from 173 patients who were diagnosed with large B-cell lymphomas and underwent PET-CT examination. (The study was approved as a retrospective study by the Henri Becquerel Center Institutional Review Board). The lymphomas in mask images were delineated manually by experts and considered as ground truth. All PET/CT data were stored in the DICOM (Digital Imaging and Communication in Medicine) format. The size and spatial resolution of PET and CT images and the corresponding mask images vary across patients because different imaging machines and acquisition procedures were used. For CT images, the size varies from $267 \times 512 \times 512$ to $478 \times 512 \times 512$. For PET images, the size varies from $276 \times 144 \times 144$ to $407 \times 256 \times 256$.

Pre-processing. Several pre-processing steps were applied to the PET/CT data. First, the data in DICOM format were converted to the NIFTI (Neuroimaging Informatics Technology Initiative) format for further processing. Second, the PET, CT and mask images were normalized: (1) for PET images, we applied a random intensity shift and scale of each channel with a shift value of 0 and a scale value of 0.1; (2) for CT images, the shift and scale values were set to 1000 and 1/2000; (3) for mask images, the intensity value was normalized into the $[0,1]$ interval by replacing the outside value by 1. Third, PET and CT images were resized to $256 \times 256 \times 128$ by linear interpolation, and mask images were resized to $256 \times 256 \times 128$ by nearest-neighbor interpolation. Lastly, the registration of CT and PET images was performed by B-spline interpolation. All the pre-processing methods can be found in the SimpleITK [54][55] toolkit. During training, PET and CT images were concatenated as a two-channel input. We randomly selected $80\%$ of the data for training, $10\%$ for validation and $10\%$ for testing. This partition was fixed and used in all the experiments reported below.

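The resizing step, for instance, can be sketched with SimpleITK as follows; the file names are placeholders, and linear interpolation is used for PET/CT volumes versus nearest-neighbor interpolation for masks, as described above.

```python
import SimpleITK as sitk

def resize(img, size=(256, 256, 128), interp=sitk.sitkLinear):
    """Resample an image to a fixed voxel grid, adjusting the spacing so
    that the physical extent of the volume is preserved."""
    spacing = [osp * osz / nsz for osp, osz, nsz
               in zip(img.GetSpacing(), img.GetSize(), size)]
    return sitk.Resample(img, list(size), sitk.Transform(), interp,
                         img.GetOrigin(), spacing, img.GetDirection(),
                         0.0, img.GetPixelID())

ct = resize(sitk.ReadImage("ct.nii.gz"))                    # placeholder path
mask = resize(sitk.ReadImage("mask.nii.gz"),
              interp=sitk.sitkNearestNeighbor)
```
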
Parameter initialization. For the evidential layer module, we considered two variants based on the ENN classifier recalled in Section 3.1 on the one hand, and on an RBF network as described in Section 3.2 on the other hand. Both approaches are based on prototypes in the space of features extracted by the UNet module. When using ENN or RBF classifiers as stand-alone classifiers, prototypes are usually initialized by a clustering algorithm such as the $k$-means. Here, this approach is not so easy, because the whole network is trained in an end-to-end way, and the features are constructed during the training process. However, $k$-means initialization can still be performed by a four-step process:

1. A standard UNet architecture (with a softmax output layer) is trained end-to-end;
2. The $k$-means algorithm is run in the space of features extracted by the trained UNet (a sketch of this step is given after this list);
3. The evidential layer is trained alone, starting from the initial prototypes computed by the $k$-means;
4. The whole model (feature extraction module and evidential layer) is fine-tuned by end-to-end learning with a small learning rate.

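Step 2 takes only a few lines with scikit-learn; here `features` is assumed to be the (n_voxels, $H$) array produced by the pre-trained UNet (a random placeholder below).

```python
import numpy as np
from sklearn.cluster import KMeans

# Placeholder for the (n_voxels, H) feature array extracted by the UNet.
features = np.random.randn(10000, 2).astype(np.float32)

# I = 10 initial prototypes in feature space.
prototypes = KMeans(n_clusters=10, n_init=10).fit(features).cluster_centers_
```
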
As an alternative method, we also considered training the feature extraction module and the evidential layer simultaneously, in which case the prototypes were initialized randomly from a normal distribution with zero mean and identity covariance matrix. For the ENN module, the initial values of parameters $\alpha_{i}$ and $\gamma_{i}$ were set, respectively, to 0.5 and 0.01, and membership degrees $u_{ik}$ were initialized randomly by drawing uniform random numbers and normalizing. For the RBF module, the initial value of the scale parameter $\gamma_{i}$ was set to 0.01, and the weights $v_{i}$ were drawn randomly from a standard normal distribution.

Learning algorithm. Each model was trained on the learning set for 100 epochs using the Adam optimization algorithm. The initial learning rate was set to $10^{-3}$. An adjusted learning-rate schedule was applied by reducing the learning rate when the training loss did not decrease for 10 epochs. The model with the best performance on the validation set was saved as the final model for testing. All methods were implemented in Python with the PyTorch-based medical image framework MONAI, and were trained and tested on a desktop with a 2.20GHz Intel(R) Xeon(R) CPU E5-2698 v4 and a Tesla V100-SXM2 graphics card with 32 GB GPU memory.

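This optimization set-up translates directly into PyTorch, as sketched below; `model` and the per-epoch training routine are assumed to be defined elsewhere.

```python
import torch

def configure_optimization(model):
    """Adam with initial LR 1e-3 and plateau-based LR reduction (the
    schedule described above: reduce when the loss stalls for 10 epochs)."""
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
        optimizer, mode="min", patience=10)
    return optimizer, scheduler

# Per epoch: call scheduler.step(train_loss) after computing the epoch loss.
```
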
Evaluation criteria. The evaluation criteria most commonly used to assess the quality of medical image segmentation algorithms are the Dice score, sensitivity and precision. These criteria are defined as follows:

$$
\mathsf{Dice}(P, T) = \frac{2 \times TP}{FP + 2 \times TP + FN},
$$

$$
\mathsf{Sensitivity}(P, T) = \frac{TP}{TP + FN},
$$

$$
\mathsf{Precision}(P, T) = \frac{TP}{TP + FP},
$$

where $TP$, $FP$, and $FN$ denote, respectively, the numbers of true positive, false positive, and false negative voxels (see Figure 10). The results reported in the following sections were obtained by calculating these three criteria for each test 3D image and then averaging over the patients. The Dice score is a global measure of segmentation performance. It is equal to twice the volume of the intersection between the predicted and actual tumor regions, divided by the sum of the volumes of these regions. Sensitivity is the proportion, among actual tumor voxels, of voxels correctly predicted as tumor. Precision is the proportion, among predicted tumor voxels, of voxels that actually belong to the tumor region; it is, thus, an estimate of the probability that the model is correct when it predicts that a voxel is in a lymphoma region. We note that neither sensitivity nor precision is a global performance criterion. We can increase sensitivity by predicting the tumor class more often (at the expense of misclassifying a lot of background voxels), and we can increase precision by being very cautious and predicting the tumor class only when it has a high probability (at the expense of missing a lot of tumor voxels). These two criteria, thus, have to be considered jointly. Finally, we can also remark that a fourth criterion can be defined: specificity, which is the proportion, among background voxels, of voxels correctly predicted as background (i.e., $TN / (TN + FP)$). However, as there are many more background voxels than tumor ones, this criterion is not informative in tumor segmentation applications (it is always very close to 1).

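For a single patient, the three criteria can be computed directly from binary masks, e.g.:

```python
import numpy as np

def segmentation_metrics(pred, truth):
    """Dice, sensitivity and precision from boolean 3D masks."""
    tp = np.logical_and(pred, truth).sum()
    fp = np.logical_and(pred, ~truth).sum()
    fn = np.logical_and(~pred, truth).sum()
    dice = 2 * tp / (fp + 2 * tp + fn)
    return dice, tp / (tp + fn), tp / (tp + fp)
```
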
In addition to the quality of the segmentation, we also wish to evaluate the calibration of output probabilities or belief functions (see Section 5.4). For that purpose, we will use an additional evaluation criterion, the Expected Calibration Error (ECE) [56]. The output pignistic probabilities from the evidential layer are first discretized into $R$ equally spaced bins $B_{r}$, $r = 1,\dots,R$ (we used $R = 10$). The accuracy of bin $B_{r}$ is defined as

$$
\operatorname{acc}\left(B_{r}\right) = \frac{1}{|B_{r}|} \sum_{i \in B_{r}} \mathbf{1}\left(P_{i} = G_{i}\right), \tag{19}
$$


|
| 367 |
+
Figure 10: Geometric interpretation of the numbers of true positive (TP), false positive (FP), true negative (TN) and false negative (TN) used for the definition of evaluation criteria.
|
| 368 |
+
|
| 369 |
+
where $P_{i}$ and $G_{i}$ are, respectively, the predicted and true class labels for sample $i$. The average confidence of bin $B_{r}$ is defined as

$$
\operatorname{conf}\left(B_{r}\right) = \frac{1}{|B_{r}|} \sum_{i \in B_{r}} S_{i}, \tag{20}
$$

where $S_{i}$ is the confidence for sample $i$. The ECE is the weighted average of the absolute differences between the accuracy and confidence of the bins:

$$
ECE = \sum_{r=1}^{R} \frac{\left|B_{r}\right|}{N} \left| \operatorname{acc}\left(B_{r}\right) - \operatorname{conf}\left(B_{r}\right) \right|, \tag{21}
$$

where $N$ is the total number of elements in all bins, and $|B_r|$ is the number of elements in bin $B_r$. A model is perfectly calibrated when $\mathsf{acc}(B_r) = \mathsf{conf}(B_r)$ for all $r \in \{1, \ldots, R\}$. Because of the bin-size weighting in the ECE metric, the highly confident and accurate background voxels strongly affect the results. As our dataset has imbalanced foreground and background proportions, we only considered voxels belonging to the tumor to calculate the ECE, as in [57][58]. For each patient in the test set, we defined a bounding box covering the lymphoma region and calculated the ECE in this bounding box. We are interested in the patient-level ECE and thus report the mean patient ECE rather than the voxel-level ECE (which would consider all voxels in the test set).

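The patient-level computation can be sketched as follows, where `conf` holds the predicted-class confidences $S_i$ and `correct` the indicators $\mathbf{1}(P_i = G_i)$ for the voxels of one bounding box.

```python
import numpy as np

def ece(conf, correct, R=10):
    """Expected Calibration Error (21) with R equal-width bins."""
    bins = np.minimum((conf * R).astype(int), R - 1)  # bin index per voxel
    total = 0.0
    for r in range(R):
        in_bin = bins == r
        if in_bin.any():
            gap = abs(correct[in_bin].mean() - conf[in_bin].mean())
            total += in_bin.mean() * gap              # |B_r| / N weighting
    return total
```
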
# 5.2. Sensitivity analysis

We analyzed the sensitivity of the results to the main design hyperparameters, which are the number $H$ of extracted features, the number $I$ of prototypes and the regularization coefficient $\lambda$. The influence of the initialization method was also studied. In all the experiments reported in this section, as well as in Section 5.3, learning in each of the configurations was repeated five times with different random initial conditions.

Influence of the number of features. Table 1 shows the means and standard deviations (over five runs) of the three performance indices for ENN-UNet and RBF-UNet with different numbers of features ($H\in \{2,5,8\}$). The number of prototypes and the regularization coefficient were set, respectively, to $I = 10$ and $\lambda = 0$, and the prototypes were initialized randomly. ENN-UNet achieves the highest Dice score and sensitivity with $H = 2$ features, but the highest precision with $H = 8$. However, the differences are small and concern only the third decimal place. Similarly, RBF-UNet has the best values of the Dice score and precision for $H = 5$ features, but again the differences are small. Overall, it seems that only two features are sufficient to discriminate between tumor and background voxels.

Table 1: Means and standard deviations (over five runs) of the performance measures for different input dimensions $H$, with $I = 10$ randomly initialized prototypes and $\lambda = 0$. The best values are shown in bold.

<table><tr><td rowspan="2">Model</td><td rowspan="2">H</td><td colspan="2">Dice score</td><td colspan="2">Sensitivity</td><td colspan="2">Precision</td></tr><tr><td>Mean</td><td>SD</td><td>Mean</td><td>SD</td><td>Mean</td><td>SD</td></tr><tr><td rowspan="3">ENN-UNet</td><td>2</td><td>0.833</td><td>0.009</td><td>0.819</td><td>0.019</td><td>0.872</td><td>0.018</td></tr><tr><td>5</td><td>0.831</td><td>0.012</td><td>0.817</td><td>0.016</td><td>0.870</td><td>0.011</td></tr><tr><td>8</td><td>0.829</td><td>0.006</td><td>0.816</td><td>0.010</td><td>0.877</td><td>0.019</td></tr><tr><td rowspan="3">RBF-UNet</td><td>2</td><td>0.824</td><td>0.009</td><td>0.832</td><td>0.008</td><td>0.845</td><td>0.016</td></tr><tr><td>5</td><td>0.825</td><td>0.006</td><td>0.817</td><td>0.016</td><td>0.862</td><td>0.010</td></tr><tr><td>8</td><td>0.821</td><td>0.011</td><td>0.813</td><td>0.010</td><td>0.862</td><td>0.022</td></tr></table>

Table 2: Means and standard deviations (over five runs) of the performance measures for different values of the regularization coefficient $\lambda$, with $I = 10$ randomly initialized prototypes and $H = 2$ features. The best values are shown in bold.

<table><tr><td rowspan="2">Model</td><td rowspan="2">λ</td><td colspan="2">Dice score</td><td colspan="2">Sensitivity</td><td colspan="2">Precision</td></tr><tr><td>Mean</td><td>SD</td><td>Mean</td><td>SD</td><td>Mean</td><td>SD</td></tr><tr><td rowspan="3">ENN-UNet</td><td>0</td><td>0.833</td><td>0.009</td><td>0.819</td><td>0.019</td><td>0.872</td><td>0.018</td></tr><tr><td>1e-4</td><td>0.822</td><td>0.007</td><td>0.818</td><td>0.026</td><td>0.839</td><td>0.035</td></tr><tr><td>1e-2</td><td>0.823</td><td>0.004</td><td>0.817</td><td>0.023</td><td>0.856</td><td>0.023</td></tr><tr><td rowspan="3">RBF-UNet</td><td>0</td><td>0.824</td><td>0.009</td><td>0.832</td><td>0.008</td><td>0.845</td><td>0.016</td></tr><tr><td>1e-4</td><td>0.825</td><td>0.011</td><td>0.811</td><td>0.022</td><td>0.869</td><td>0.020</td></tr><tr><td>1e-2</td><td>0.829</td><td>0.010</td><td>0.818</td><td>0.022</td><td>0.867</td><td>0.016</td></tr></table>

Influence of the regularization coefficient. In the previous experiment, the networks were trained without regularization. Tables 2 and 3 show the performances of ENN-UNet and RBF-UNet for different values of $\lambda$, with $I = 10$ randomly initialized prototypes and, respectively, $H = 2$ and $H = 8$ inputs. With both settings, ENN-UNet does not benefit from regularization (the best results are obtained with $\lambda = 0$). In contrast, RBF-UNet is more sensitive to regularization, and achieves the highest Dice score with $\lambda = 0.01$. This finding confirms the remark already made in Section 3.3, where it was observed that an ENN classifier seems to be less sensitive to regularization than an RBF classifier (see Figure 5a).

Table 3: Means and standard deviations (over five runs) of the performance measures for different values of the regularization coefficient $\lambda$, with $I = 10$ randomly initialized prototypes and $H = 8$ features. The best values are shown in bold.

<table><tr><td rowspan="2">Model</td><td rowspan="2">λ</td><td colspan="2">Dice score</td><td colspan="2">Sensitivity</td><td colspan="2">Precision</td></tr><tr><td>Mean</td><td>SD</td><td>Mean</td><td>SD</td><td>Mean</td><td>SD</td></tr><tr><td rowspan="3">ENN-UNet</td><td>0</td><td>0.829</td><td>0.006</td><td>0.811</td><td>0.010</td><td>0.877</td><td>0.019</td></tr><tr><td>1e-4</td><td>0.827</td><td>0.008</td><td>0.809</td><td>0.019</td><td>0.873</td><td>0.024</td></tr><tr><td>1e-2</td><td>0.822</td><td>0.009</td><td>0.807</td><td>0.021</td><td>0.867</td><td>0.011</td></tr><tr><td rowspan="3">RBF-UNet</td><td>0</td><td>0.821</td><td>0.010</td><td>0.813</td><td>0.010</td><td>0.862</td><td>0.022</td></tr><tr><td>1e-4</td><td>0.827</td><td>0.004</td><td>0.830</td><td>0.005</td><td>0.852</td><td>0.012</td></tr><tr><td>1e-2</td><td>0.832</td><td>0.006</td><td>0.825</td><td>0.022</td><td>0.867</td><td>0.020</td></tr></table>

Influence of the number of prototypes. The number $I$ of prototypes is another hyperparameter that may impact segmentation performance. Table 4 shows the performances of ENN-UNet and RBF-UNet with 10 and 20 randomly initialized prototypes, the other hyperparameters being fixed at $H = 2$ and $\lambda = 0$. Increasing the number of prototypes beyond 10 does not seem to improve the performance of ENN-UNet, while it does slightly improve the performance of RBF-UNet in terms of Dice score and precision, at the expense of an increased computing time.

Table 4: Means and standard deviations (over five runs) of the performance measures for different numbers $I$ of randomly initialized prototypes, with $H = 2$ features and $\lambda = 0$. The best values are shown in bold.

<table><tr><td rowspan="2">Model</td><td rowspan="2">I</td><td colspan="2">Dice score</td><td colspan="2">Sensitivity</td><td colspan="2">Precision</td></tr><tr><td>Mean</td><td>SD</td><td>Mean</td><td>SD</td><td>Mean</td><td>SD</td></tr><tr><td rowspan="2">ENN-UNet</td><td>10</td><td>0.833</td><td>0.009</td><td>0.819</td><td>0.019</td><td>0.872</td><td>0.018</td></tr><tr><td>20</td><td>0.823</td><td>0.007</td><td>0.804</td><td>0.006</td><td>0.864</td><td>0.012</td></tr><tr><td rowspan="2">RBF-UNet</td><td>10</td><td>0.824</td><td>0.009</td><td>0.832</td><td>0.008</td><td>0.845</td><td>0.016</td></tr><tr><td>20</td><td>0.830</td><td>0.007</td><td>0.810</td><td>0.012</td><td>0.867</td><td>0.010</td></tr></table>

Influence of the prototype initialization method. Finally, we compared the two initialization methods mentioned in Section 5.1. For $k$-means initialization, in the first step, a UNet model was trained with the following settings: kernel size = 5, channels = (8, 16, 32, 64, 128) and strides = (2, 2, 2, 2). The spatial dimension, input and output channel were set, respectively, to 3, 2, and 2. This pre-trained UNet was used to extract $H = 2$ features, and 10 prototypes were obtained by running the $k$-means algorithm in the space of extracted features. These prototypes were fed into ENN or RBF layers, which were trained separately, with fixed features. For this step, the learning rate was set to $10^{-2}$. Finally, the whole model was fine-tuned end-to-end, with a smaller learning rate equal to $10^{-4}$. Table 5 shows the performances of ENN-UNet and RBF-UNet with random and $k$-means initialization. Both ENN-UNet and RBF-UNet achieve a higher Dice score when using the $k$-means initialization method, and the variability of the results is also reduced with this method.

Table 5: Means and standard deviations (over five runs) of the performance measures for different initialization methods, with $I = 10$ prototypes, $H = 2$ features and $\lambda = 0$. The best values are shown in bold.

<table><tr><td rowspan="2">Model</td><td rowspan="2">Initialization</td><td colspan="2">Dice score</td><td colspan="2">Sensitivity</td><td colspan="2">Precision</td></tr><tr><td>Mean</td><td>SD</td><td>Mean</td><td>SD</td><td>Mean</td><td>SD</td></tr><tr><td rowspan="2">ENN-UNet</td><td>Random</td><td>0.833</td><td>0.009</td><td>0.819</td><td>0.019</td><td>0.872</td><td>0.018</td></tr><tr><td>k-means</td><td>0.846</td><td>0.002</td><td>0.830</td><td>0.004</td><td>0.879</td><td>0.008</td></tr><tr><td rowspan="2">RBF-UNet</td><td>Random</td><td>0.824</td><td>0.009</td><td>0.832</td><td>0.008</td><td>0.845</td><td>0.016</td></tr><tr><td>k-means</td><td>0.839</td><td>0.003</td><td>0.824</td><td>0.001</td><td>0.879</td><td>0.008</td></tr></table>

Not only does the $k$-means initialization method slightly improve the performances of ENN-UNet and RBF-UNet quantitatively, but it also tends to position the prototypes in regions of high data density. As a result, a high output mass $m(\Omega)$ signals that the input data is atypical. In that sense, the output mass function is more interpretable. This point is illustrated by Figures 11 and 12, which show the contours, in the two-dimensional feature space, of the masses assigned to the background, the tumor class and the frame of discernment when using $k$-means initialization (with $\lambda = 10^{-2}$ and $I = 10$) with, respectively, ENN-UNet and RBF-UNet. For both models, the prototypes are well distributed over the two classes, and the mass on $\Omega$ decreases with the distance to the data, as expected. In contrast, when using random initialization (as shown in Figure 13 for the ENN-UNet model; results are similar with the RBF-UNet model), the prototypes are located in the background region, and the mass $m(\Omega)$ does not have a clear meaning (although the decision boundary still ensures a good discrimination between the two classes).

From this sensitivity analysis, we can conclude that the performances of both ENN-UNet and RBF-UNet are quite robust to the values of the hyperparameters, and that the two models achieve comparable performances. The $k$-means initialization method seems to yield better results, both quantitatively and qualitatively. The next section is devoted to a comparison with alternative models.

# 5.3. Comparative analysis: segmentation accuracy

In this section, we compare the performances of the ENN-UNet and RBF-UNet models with those of the baseline model, UNet [10], as well as three state-of-the-art models reviewed in Section 1: VNet [11], SegResNet [12] and nnUNet [13]. For all compared methods, the same learning set and pre-processing steps were used. All the compared methods were trained with the Dice loss function (17). Details about the optimization algorithm were given in Section 5.1. All methods were implemented based on the MONAI framework<sup>3</sup> and can be called directly. For UNet, the kernel size was set to 5 and the channels were set to $(8,16,32,64,128)$ with strides $(2,2,2,2)$. For nnUNet, the kernel size was set to $(3,(1,1,3),3,3)$ and the upsample kernel size was set to $(2,2,1)$ with strides $((1,1,1),2,2,1)$. For SegResNet [12] and VNet [11], we used the pre-defined model without changing any parameter. The spatial dimension, input channel and output channel were set, respectively, to 3, 2, and 2 for the four compared models. For the other hyperparameters not mentioned here, we used the pre-defined values given in MONAI. As shown by the sensitivity analysis performed in Section 5.2, the best results for ENN-UNet and RBF-UNet are achieved with $\lambda = 0$, $I = 10$, $H = 2$ and $k$-means initialization.


|
| 430 |
+
(a)
|
| 431 |
+
|
| 432 |
+

|
| 433 |
+
(b)
|
| 434 |
+
|
| 435 |
+

|
| 436 |
+
(c)
|
| 437 |
+
Figure 11: Contours in feature space of the masses assigned to the background (a), the tumor class (b) and the frame of discernment (c) by the ENN-UNet model initialized by $k$ -means. Training was done with $\lambda = 10^{-2}$ , $H = 2$ and $I = 10$ . Sampled feature vectors from the tumor and background classes are marked in gray and red, respectively.
|
| 438 |
+
|
| 439 |
+

|
| 440 |
+
(a)
|
| 441 |
+
|
| 442 |
+

|
| 443 |
+
(b)
|
| 444 |
+
|
| 445 |
+

|
| 446 |
+
(c)
|
| 447 |
+
Figure 12: Contours in feature space of the masses assigned to the background (a), the tumor class (b) and the frame of discernment (c) by the RBF-UNet model initialized by $k$ -means. Training was done with $\lambda = 10^{-2}$ , $H = 2$ and $I = 10$ . Sampled feature vectors from the tumor and background classes are marked in gray and red, respectively.
|
| 448 |
+
|
| 449 |
+

|
| 450 |
+
(a)
|
| 451 |
+
|
| 452 |
+

|
| 453 |
+
(b)
|
| 454 |
+
|
| 455 |
+

|
| 456 |
+
(c)
|
| 457 |
+
Figure 13: Contours in feature space of the masses assigned to the background (a), the tumor class (b) and the frame of discernment (c) by the ENN-UNet model initialized randomly. Training was done with $\lambda = 10^{-2}$ , $H = 2$ and $I = 10$ . Sampled feature vectors from the tumor and background classes are marked in gray and red, respectively.
|
| 458 |
+
|
| 459 |
+
Table 6: Means and standard deviations (over five runs) of the performance measures for ENN-UNet, RBF-UNet and four reference methods. The best result is shown in bold, and the second best is underlined.

<table><tr><td rowspan="2">Model</td><td colspan="2">Dice score</td><td colspan="2">Sensitivity</td><td colspan="2">Precision</td></tr><tr><td>Mean</td><td>SD</td><td>Mean</td><td>SD</td><td>Mean</td><td>SD</td></tr><tr><td>UNet [51]</td><td>0.753</td><td>0.054</td><td>0.782</td><td>0.048</td><td>0.896</td><td>0.047</td></tr><tr><td>nnUNet [13]</td><td>0.817</td><td>0.008</td><td>0.838</td><td>0.028</td><td>0.879</td><td>0.032</td></tr><tr><td>VNet [11]</td><td>0.820</td><td>0.016</td><td>0.831</td><td>0.021</td><td>0.901</td><td>0.056</td></tr><tr><td>SegResNet [12]</td><td>0.825</td><td>0.015</td><td>0.832</td><td>0.042</td><td>0.876</td><td>0.051</td></tr><tr><td>ENN-UNet</td><td>0.846</td><td>0.002</td><td>0.830</td><td>0.004</td><td>0.879</td><td>0.008</td></tr><tr><td>RBF-UNet</td><td>0.839</td><td>0.003</td><td>0.824</td><td>0.001</td><td>0.879</td><td>0.008</td></tr></table>

Table 7: Conover-Iman test of multiple comparisons between the Dice scores obtained by the six models: t-test statistics and p-values. P-values less than 0.01 are printed in bold.

<table><tr><td></td><td>ENN-UNet</td><td>nnUnet</td><td>RBF-UNet</td><td>SegResNet</td><td>UNet</td></tr><tr><td>nnUnet</td><td>6.759</td><td></td><td></td><td></td><td></td></tr><tr><td></td><td>0.0000</td><td></td><td></td><td></td><td></td></tr><tr><td>RBF-UNet</td><td>2.156</td><td>-4.602</td><td></td><td></td><td></td></tr><tr><td></td><td>0.0857</td><td>0.0004</td><td></td><td></td><td></td></tr><tr><td>SegResNet</td><td>5.349</td><td>-1.410</td><td>3.193</td><td></td><td></td></tr><tr><td></td><td>0.0001</td><td>0.3282</td><td>0.0088</td><td></td><td></td></tr><tr><td>UNet</td><td>10.283</td><td>3.524</td><td>8.127</td><td>4.934</td><td></td></tr><tr><td></td><td>0.0000</td><td>0.0043</td><td>0.0000</td><td>0.0002</td><td></td></tr><tr><td>VNet</td><td>6.054</td><td>-0.705</td><td>3.898</td><td>0.705</td><td>-4.229</td></tr><tr><td></td><td>0.0000</td><td>0.8091</td><td>0.0019</td><td>0.8669</td><td>0.0009</td></tr></table>

The means and standard deviations of the Dice score, sensitivity and precision over five runs with random initialization for the six methods are shown in Table 6, and the raw values are plotted in Figure 14. We can see that ENN-UNet and RBF-UNet achieve, respectively, the highest and the second highest mean Dice score. A Kruskal-Wallis test performed on the whole data indicates a significant difference between the distributions of the Dice score for the six methods (p-value $= 0.0001743$), while the differences are not significant for sensitivity (p-value $= 0.2644$) and precision (p-value $= 0.9496$). Table 7 shows the results of the Conover-Iman test of multiple comparisons [59][60] with Benjamini-Yekutieli adjustment [61]. We can see that the differences between the Dice scores obtained by ENN-UNet and RBF-UNet on the one hand, and the four other methods on the other hand, are highly significant (p-values $< 10^{-2}$), while the difference between ENN-UNet and RBF-UNet is only weakly significant (p-value $= 0.0857$).

Figure 15 shows two examples of segmentation results obtained by ENN-UNet and UNet, corresponding to large and isolated lymphomas. We can see, in these two examples, that UNet is more conservative (it correctly detects only a subset of the tumor voxels), which may explain why it has a relatively high precision. However, the tumor regions predicted by ENN-UNet better overlap the ground-truth tumor region, which is also reflected by the higher Dice score.



(a)



(b)



(c)

Figure 14: Values of the Dice score (a), sensitivity (b) and precision (c) for five runs of the six methods.
# 5.4. Comparative analysis: calibration
Besides segmentation accuracy, another important issue concerns the quality of uncertainty quantification. Monte-Carlo dropout (MCD) [25] is a state-of-the-art technique for improving the uncertainty quantification capabilities of deep networks. In this section, we compare the ECE (21) achieved by UNet (the baseline), SegResNet (the best alternative method found in Section 5.3), and our two proposals, ENN-UNet and RBF-UNet, with and without MCD. For the four methods, the dropout rate was set to 0.5 and the sample number was set to 20; we averaged the 20 output probabilities (the pignistic probabilities for the two evidential models) at each voxel as the final output of the model.
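A minimal PyTorch sketch of this MCD procedure is given below: dropout layers are kept active at test time and 20 stochastic forward passes are averaged voxel-wise. `model` and `volume` are placeholders; for the evidential variants, the averaged quantity would be the pignistic probabilities rather than the softmax output.

```python
import torch

def mc_dropout_predict(model: torch.nn.Module, volume: torch.Tensor,
                       n_samples: int = 20) -> torch.Tensor:
    """Average the outputs of `n_samples` stochastic forward passes."""
    model.eval()
    # Re-activate dropout layers only; normalisation layers stay in eval mode.
    for module in model.modules():
        if isinstance(module, (torch.nn.Dropout, torch.nn.Dropout2d,
                               torch.nn.Dropout3d)):
            module.train()
    with torch.no_grad():
        probs = torch.stack([model(volume).softmax(dim=1)
                             for _ in range(n_samples)])
    return probs.mean(dim=0)  # voxel-wise averaged output probabilities
```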
The results are reported in Table 8. We can see that MCD enhances the segmentation performance (measured by the Dice index) of UNet and SegResNet, and improves the calibration of all methods except SegResNet. Overall, the smallest average ECE is achieved by RBF-UNet and ENN-UNet with MCD, but the standard deviations are quite large. A Kruskal-Wallis test indicates a significant difference between the distributions of ECE for the eight methods $(\mathrm{p - value} = 0.01)$ . The p-values of the Conover-Iman test of multiple comparisons with Benjamini-Yekutieli adjustment reported in Table 9 show significant differences between the ECE of RBF-UNet with MCD on the one hand, and those of RBF-UNet without MCD, SegResNet with MCD, and UNet without MCD on the other hand. We also tested the pairwise differences between the ECE values obtained by RBF-UNet and ENN-UNet with MCD on the one hand, and UNet and SegResNet with and without MCD on the other hand, using the Wilcoxon rank sum test. The corresponding p-values are shown in Table 10. We find significant differences between the ECE of RBF-UNet with MCD and those of the other methods, but only a weakly significant difference between ENN-UNet with MCD and UNet without MCD. In summary, there is some evidence that MCD improves calibration, even for evidential models, and that the best calibration is achieved by the RBF-UNet model, but this evidence is not fully conclusive due to the limited size of the dataset; our findings will have to be confirmed by further experiments with larger datasets.
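The Wilcoxon comparisons can be reproduced along the following lines (a sketch with hypothetical ECE values, using `scipy`). It is worth noting that, with five runs per method, the smallest achievable two-sided p-value for this test is $2 / \binom{10}{5} \approx 0.0079$, which is exactly the smallest value appearing in Table 10.

```python
from scipy.stats import ranksums

# Hypothetical per-run ECE values (five runs each), not the study's data.
ece_rbf_mcd = [1.47, 1.50, 1.52, 1.54, 1.57]
ece_unet = [1.95, 2.10, 2.25, 2.35, 2.45]
stat, p = ranksums(ece_rbf_mcd, ece_unet)
print(f"Wilcoxon rank-sum statistic: {stat:.3f}, p-value: {p:.4f}")
```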




PET image




ENN-UNet




UNet

Figure 15: Two examples of segmentation results by ENN-UNet and UNet. The first and the second rows are representative, respectively, of large and isolated small lymphomas. The three columns correspond, from left to right, to the PET images and the segmentation results obtained by ENN-UNet and UNet. The white and red regions represent, respectively, the ground truth and the segmentation result.

Table 8: Means and standard deviations (over five runs) of the Dice score and ECE for UNet, SegResNet, ENN-UNet and RBF-UNet, with and without MCD. The best results are shown in bold, the second best are underlined.

<table><tr><td rowspan="2">Model</td><td colspan="2">Dice score</td><td colspan="2">ECE (%)</td></tr><tr><td>Mean</td><td>SD</td><td>Mean</td><td>SD</td></tr><tr><td>UNet</td><td>0.754</td><td>0.054</td><td>2.22</td><td>0.205</td></tr><tr><td>SegResNet</td><td>0.825</td><td>0.015</td><td>1.97</td><td>0.488</td></tr><tr><td>ENN-UNet</td><td>0.846</td><td>0.002</td><td>1.99</td><td>0.110</td></tr><tr><td>RBF-UNet</td><td>0.839</td><td>0.003</td><td>2.12</td><td>0.028</td></tr><tr><td>UNet with MCD</td><td>0.828</td><td>0.005</td><td>1.93</td><td>0.337</td></tr><tr><td>SegResNet with MCD</td><td>0.844</td><td>0.009</td><td>2.53</td><td>0.973</td></tr><tr><td>ENN-UNet with MCD</td><td>0.841</td><td>0.003</td><td>1.53</td><td>0.075</td></tr><tr><td>RBF-UNet with MCD</td><td>0.840</td><td>0.003</td><td>1.52</td><td>0.041</td></tr></table>

Table 9: Conover-Iman test of multiple comparisons between the ECE obtained by UNet, SegResNet, ENN and RBF, with and without MCD: t-test statistics and p-values. P-values less than 0.01 are printed in bold.

<table><tr><td></td><td>ENN</td><td>ENN-MC</td><td>RBF</td><td>RBF-MC</td><td>SegRes</td><td>SegRes-MC</td><td>UNet</td></tr><tr><td rowspan="2">ENN-MC</td><td>0.926</td><td></td><td></td><td></td><td></td><td></td><td></td></tr><tr><td>1.0000</td><td></td><td></td><td></td><td></td><td></td><td></td></tr><tr><td rowspan="2">RBF</td><td>-1.191</td><td>-2.118</td><td></td><td></td><td></td><td></td><td></td></tr><tr><td>0.7403</td><td>0.2892</td><td></td><td></td><td></td><td></td><td></td></tr><tr><td rowspan="2">RBF-MC</td><td>2.812</td><td>1.886</td><td>4.004</td><td></td><td></td><td></td><td></td></tr><tr><td>0.1145</td><td>0.3419</td><td>0.0095</td><td></td><td></td><td></td><td></td></tr><tr><td rowspan="2">SegRes</td><td>0.695</td><td>-0.232</td><td>1.886</td><td>-2.117</td><td></td><td></td><td></td></tr><tr><td>1.0000</td><td>1.0000</td><td>0.3761</td><td>0.3305</td><td></td><td></td><td></td></tr><tr><td rowspan="2">SegRes-MC</td><td>-0.860</td><td>-1.787</td><td>0.331</td><td>-3.673</td><td>-1.555</td><td></td><td></td></tr><tr><td>1.0000</td><td>0.3530</td><td>1.0000</td><td>0.0159</td><td>0.4756</td><td></td><td></td></tr><tr><td rowspan="2">UNet</td><td>-1.357</td><td>-2.283</td><td>-0.165</td><td>-4.169</td><td>-2.051</td><td>-0.496</td><td></td></tr><tr><td>0.6337</td><td>0.2677</td><td>1.0000</td><td>0.0119</td><td>0.2962</td><td>1.0000</td><td></td></tr><tr><td rowspan="2">UNet-MC</td><td>0.430</td><td>-0.496</td><td>1.621</td><td>-2.382</td><td>-0.265</td><td>1.290</td><td>1.787</td></tr><tr><td>1.0000</td><td>1.0000</td><td>0.4507</td><td>0.2564</td><td>1.0000</td><td>0.6667</td><td>0.3824</td></tr></table>

Table 10: P-values for the Wilcoxon rank sum test applied to the comparison of ECE obtained by ENN-UNet and RBF-UNet with MCD on the one hand, and the four other methods on the other hand (UNet and SegResNet with and without MCD).

<table><tr><td></td><td>UNet</td><td>UNet-MC</td><td>SegRes</td><td>SegRes-MC</td></tr><tr><td>ENN-MC</td><td>0.095</td><td>0.67</td><td>0.69</td><td>0.31</td></tr><tr><td>RBF-MC</td><td>0.0079</td><td>0.012</td><td>0.055</td><td>0.0079</td></tr></table>

# 6. Conclusion

An evidential framework for segmenting lymphomas from 3D PET-CT images with uncertainty quantification has been proposed in this paper. Our architecture is based on the concatenation of a UNet, which extracts high-level features from the input images, and an evidential segmentation module, which computes output mass functions for each voxel. Two versions of this evidential module, both involving prototypes, have been studied: one is based on the ENN model initially proposed as a stand-alone classifier in [31], while the other relies on an RBF layer and the addition of weights of evidence. The whole model is trained end-to-end by minimizing the Dice loss. The initialization of prototypes has been shown to be a crucial step in this approach. The best method found has been to pre-train a UNet with a softmax output layer, initialize the prototypes with the $k$ -means algorithm in the space of extracted features, train the evidential layer separately, and fine-tune the whole network. Our model has been shown to outperform the baseline UNet model as well as other state-of-the-art segmentation methods on a dataset of 173 patients with lymphomas. Preliminary results also suggest that the outputs of the evidential models (in particular, the one with an RBF layer) are better calibrated, and that the calibration error can be further decreased by Monte-Carlo dropout. These results, however, will have to be confirmed by further experiments with larger datasets.
This work can be extended in many directions. One of them is to further evaluate the approach by applying it to other medical image segmentation problems. One potential problem that may arise is related to the dimensionality of the feature space. In the application considered in this paper, good results were obtained with only two extracted features. If other learning tasks require a much larger number of features, we may need a much higher number of prototypes and learning may be slow. This issue could be addressed by adapting the loss function as proposed, e.g., in [62]. We also plan to further study the calibration properties of the belief functions computed by our approach (using calibration measures specially designed for belief functions), as well as the novelty detection capability of our model.
# Acknowledgements
This work was supported by the China Scholarship Council (No. 201808331005). It was carried out in the framework of the Labex MS2T, which was funded by the French Government, through the program "Investments for the future" managed by the National Agency for Research (Reference ANR-11-IDEX-0004-02).
# References
[1] Y. S. Jhanwar, D. J. Straus, The role of PET in lymphoma, Journal of Nuclear Medicine 47 (8) (2006) 1326-1334.

[2] H. Zaidi, I. El Naqa, PET-guided delineation of radiation therapy treatment volumes: a survey of image segmentation techniques, European Journal of Nuclear Medicine and Molecular Imaging 37 (11) (2010) 2165-2187.

[3] H. Ilyas, N. G. Mikhaeel, J. T. Dunn, F. Rahman, H. Møller, D. Smith, S. F. Barrington, Defining the optimal method for measuring baseline metabolic tumour volume in diffuse large B cell lymphoma, European Journal of Nuclear Medicine and Molecular Imaging 45 (7) (2018) 1142-1154.

[4] F. Eude, M. N. Toledano, P. Vera, H. Tilly, S.-D. Mihailescu, S. Becker, Reproducibility of baseline tumour metabolic volume measurements in diffuse large B-cell lymphoma: Is there a superior method?, Metabolites 11 (2) (2021) 72.

[5] D. Onoma, S. Ruan, S. Thureau, et al., Segmentation of heterogeneous or small FDG PET positive tissue based on a 3D-locally adaptive random walk algorithm, Computerized Medical Imaging and Graphics 38 (8) (2014) 753-763.

[6] H. Hu, P. Decazes, P. Vera, H. Li, S. Ruan, Detection and segmentation of lymphomas in 3D PET images via clustering with entropy-based optimization strategy, International Journal of Computer Assisted Radiology and Surgery 14 (10) (2019) 1715-1724.

[7] H. Li, H. Jiang, S. Li, et al., DenseX-Net: an end-to-end model for lymphoma segmentation in whole-body PET/CT images, IEEE Access 8 (2019) 8004-8018.

[8] H. Hu, L. Shen, T. Zhou, et al., Lymphoma segmentation in PET images based on multi-view and Conv3D fusion strategy, in: 2020 IEEE 17th International Symposium on Biomedical Imaging (ISBI), IEEE, 2020, pp. 1197-1200.

[9] J. Long, E. Shelhamer, T. Darrell, Fully convolutional networks for semantic segmentation, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, USA, 2015, pp. 3431-3440.

[10] O. Ronneberger, P. Fischer, T. Brox, U-Net: convolutional networks for biomedical image segmentation, in: N. Navab, J. Hornegger, W. M. Wells, A. F. Frangi (Eds.), Medical Image Computing and Computer-Assisted Intervention - MICCAI 2015, Springer International Publishing, Cham, 2015, pp. 234-241.

[11] F. Milletari, N. Navab, S.-A. Ahmadi, V-Net: fully convolutional neural networks for volumetric medical image segmentation, in: 2016 Fourth International Conference on 3D Vision, IEEE, 2016, pp. 565-571.

[12] A. Myronenko, 3D MRI brain tumor segmentation using autoencoder regularization, in: International MICCAI Brainlesion Workshop, Springer, 2018, pp. 311-320.

[13] F. Isensee, J. Petersen, A. Klein, D. Zimmerer, et al., nnU-Net: self-adapting framework for U-Net-based medical image segmentation, arXiv preprint arXiv:1809.10486.

[14] P. Blanc-Durand, S. Jégou, S. Kanoun, et al., Fully automatic segmentation of diffuse large B cell lymphoma lesions on 3D FDG-PET/CT for total metabolic tumour volume prediction using a convolutional neural network, European Journal of Nuclear Medicine and Molecular Imaging (2020) 1-9.

[15] L. Huang, T. Denœux, D. Tonnelet, P. Decazes, S. Ruan, Deep PET/CT fusion with Dempster-Shafer theory for lymphoma segmentation, in: C. Lian, X. Cao, I. Rekik, X. Xu, P. Yan (Eds.), Machine Learning in Medical Imaging, Springer International Publishing, Cham, 2021, pp. 30-39.

[16] G. Shafer, A Mathematical Theory of Evidence, Vol. 42, Princeton University Press, 1976.

[17] E. Hüllermeier, W. Waegeman, Aleatoric and epistemic uncertainty in machine learning: An introduction to concepts and methods, Machine Learning 110 (3) (2021) 457-506.

[18] J. Quinonero-Candela, M. Sugiyama, N. D. Lawrence, A. Schwaighofer, Dataset Shift in Machine Learning, MIT Press, 2009.

[19] R. Mehta, T. Christinck, T. Nair, P. Lemaitre, D. Arnold, T. Arbel, Propagating uncertainty across cascaded medical imaging tasks for improved deep learning inference, in: Uncertainty for Safe Utilization of Machine Learning in Medical Imaging and Clinical Image-Based Procedures, Springer, 2019, pp. 23-32.

[20] W. J. Maddox, P. Izmailov, T. Garipov, D. P. Vetrov, A. G. Wilson, A simple baseline for Bayesian uncertainty in deep learning, Advances in Neural Information Processing Systems 32 (2019) 13153-13164.

[21] L. Yu, S. Wang, X. Li, C.-W. Fu, P.-A. Heng, Uncertainty-aware self-ensembling model for semi-supervised 3D left atrium segmentation, in: International Conference on Medical Image Computing and Computer-Assisted Intervention, Springer, 2019, pp. 605-613.

[22] F. C. Ghesu, B. Georgescu, A. Mansoor, Y. Yoo, E. Gibson, R. Vishwanath, A. Balachandran, J. M. Balter, Y. Cao, R. Singh, et al., Quantifying and leveraging predictive uncertainty for medical image assessment, Medical Image Analysis 68 (2021) 101855.

[23] G. E. Hinton, D. van Camp, Keeping the neural networks simple by minimizing the description length of the weights, in: Proceedings of the Sixth Annual Conference on Computational Learning Theory, COLT '93, Association for Computing Machinery, New York, NY, USA, 1993, pp. 5-13.

[24] D. J. MacKay, A practical Bayesian framework for backpropagation networks, Neural Computation 4 (3) (1992) 448-472.

[25] Y. Gal, Z. Ghahramani, Dropout as a Bayesian approximation: Representing model uncertainty in deep learning, in: International Conference on Machine Learning, PMLR, 2016, pp. 1050-1059.

[26] D. Tran, M. W. Dusenberry, M. van der Wilk, D. Hafner, Bayesian layers: A module for neural network uncertainty, arXiv preprint arXiv:1812.03973.

[27] A. P. Dempster, Upper and lower probability inferences based on a sample from a finite univariate population, Biometrika 54 (3-4) (1967) 515-528.

[28] T. Denœux, D. Dubois, H. Prade, Representations of uncertainty in artificial intelligence: Beyond probability and possibility, in: P. Marquis, O. Papini, H. Prade (Eds.), A Guided Tour of Artificial Intelligence Research, Vol. 1, Springer Verlag, 2020, Ch. 4, pp. 119-150.

[29] P. Smets, The combination of evidence in the transferable belief model, IEEE Transactions on Pattern Analysis and Machine Intelligence 12 (5) (1990) 447-458.

[30] T. Denœux, A k-nearest neighbor classification rule based on Dempster-Shafer theory, IEEE Transactions on Systems, Man, and Cybernetics 25 (5) (1995) 804-813. doi:10.1109/21.376493.

[31] T. Denœux, A neural network classifier based on Dempster-Shafer theory, IEEE Transactions on Systems, Man, and Cybernetics-Part A: Systems and Humans 30 (2) (2000) 131-150.

[32] T. Denœux, M.-H. Masson, EVCLUS: evidential clustering of proximity data, IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics) 34 (1) (2004) 95-109.

[33] F. Pichon, D. Mercier, E. Lefevre, F. Delmotte, Proposition and learning of some belief function contextual correction mechanisms, International Journal of Approximate Reasoning 72 (2016) 4-42.

[34] F. Pichon, D. Dubois, T. Denœux, Quality of information sources in information fusion, in: E. Bossé, G. L. Rogova (Eds.), Information Quality in Information Fusion and Decision Making, Springer International Publishing, Cham, 2019, pp. 31-49.

[35] H. Chen, S. Le Hégarat-Mascle, E. Aldea, Belief functions clustering for epipole localization, International Journal of Approximate Reasoning 137 (2021) 146-165.

[36] T. Denœux, O. Kanjanatarakul, S. Sriboonchitta, A new evidential k-nearest neighbor rule based on contextual discounting with partially supervised learning, International Journal of Approximate Reasoning 113 (2019) 287-302.

[37] C. Gong, Z.-g. Su, P.-h. Wang, Q. Wang, Y. You, Evidential instance selection for k-nearest neighbor classification of big data, International Journal of Approximate Reasoning 138 (2021) 123-144.

[38] A. Imoussaten, L. Jacquin, Cautious classification based on belief functions theory and imprecise relabelling, International Journal of Approximate Reasoning 142 (2022) 130-146.

[39] T. Denœux, NN-EVCLUS: Neural network-based evidential clustering, Information Sciences 572 (2021) 297-330.

[40] V. Antoine, J. A. Guerrero, J. Xie, Fast semi-supervised evidential clustering, International Journal of Approximate Reasoning 133 (2021) 116-132.

[41] C. Lian, S. Ruan, T. Denœux, H. Li, P. Vera, Joint tumor segmentation in PET-CT images using co-clustering and fusion based on belief functions, IEEE Transactions on Image Processing 28 (2) (2018) 755-766.

[42] L. Huang, S. Ruan, T. Denœux, Belief function-based semi-supervised learning for brain tumor segmentation, arXiv preprint arXiv:2102.00097.

[43] Z. Tong, P. Xu, T. Denœux, Evidential fully convolutional network for semantic segmentation, Applied Intelligence 51 (2021) 6376-6399.

[44] L. Huang, S. Ruan, P. Decazes, T. Denœux, Evidential segmentation of 3D PET/CT images, in: T. Denœux, E. Lefevre, Z. Liu, F. Pichon (Eds.), Belief Functions: Theory and Applications, Springer International Publishing, Cham, 2021, pp. 159-167.

[45] P. Smets, R. Kennes, The Transferable Belief Model, Artificial Intelligence 66 (1994) 191-243.

[46] T. Denœux, Analysis of evidence-theoretic decision rules for pattern classification, Pattern Recognition 30 (7) (1997) 1095-1107.

[47] L. Ma, T. Denœux, Partial classification in the belief function framework, Knowledge-Based Systems 214 (2021) 106742. URL http://www.sciencedirect.com/science/article/pii/S0950705121000058

[48] T. Denœux, Decision-making with belief functions: A review, International Journal of Approximate Reasoning 109 (2019) 87-110.

[49] T. Denœux, Logistic regression, neural networks and Dempster-Shafer theory: A new perspective, Knowledge-Based Systems 176 (2019) 54-67.

[50] Z. Tong, P. Xu, T. Denœux, An evidential classifier based on Dempster-Shafer theory and deep learning, Neurocomputing 450 (2021) 275-293.

[51] E. Kerfoot, J. Clough, I. Oksuz, J. Lee, A. P. King, J. A. Schnabel, Left-ventricle quantification using residual U-Net, in: International Workshop on Statistical Atlases and Computational Models of the Heart, Springer, 2018, pp. 371-380.

[52] D. Ulyanov, A. Vedaldi, V. Lempitsky, Improved texture networks: Maximizing quality and diversity in feed-forward stylization and texture synthesis, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 6924-6932.

[53] K. He, X. Zhang, S. Ren, J. Sun, Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification, in: Proceedings of the IEEE International Conference on Computer Vision, 2015, pp. 1026-1034.

[54] B. Lowekamp, D. Chen, L. Ibanez, D. Blezek, The design of SimpleITK, Frontiers in Neuroinformatics 7 (2013) 45. doi:10.3389/fninf.2013.00045. URL https://www.frontiersin.org/article/10.3389/fninf.2013.00045

[55] Z. Yaniv, B. C. Lowekamp, H. J. Johnson, R. Beare, SimpleITK image-analysis notebooks: a collaborative environment for education and reproducible research, Journal of Digital Imaging 31 (3) (2018) 290-303. doi:10.1007/s10278-017-0037-8. URL https://doi.org/10.1007/s10278-017-0037-8

[56] C. Guo, G. Pleiss, Y. Sun, K. Q. Weinberger, On calibration of modern neural networks, in: International Conference on Machine Learning, PMLR, 2017, pp. 1321-1330.

[57] A. Jungo, F. Balsiger, M. Reyes, Analyzing the quality and challenges of uncertainty estimations for brain tumor segmentation, Frontiers in Neuroscience 14 (2020) 282.

[58] A.-J. Rousseau, T. Becker, J. Bertels, M. B. Blaschko, D. Valkenborg, Post training uncertainty calibration of deep networks for medical image segmentation, in: 2021 IEEE 18th International Symposium on Biomedical Imaging (ISBI), IEEE, 2021, pp. 1052-1056.

[59] W. J. Conover, R. L. Iman, On multiple-comparisons procedures, Tech. Rep. LA-7677-MS, Los Alamos Scientific Laboratory (1979).

[60] A. Dinno, conover.test: Conover-Iman Test of Multiple Comparisons Using Rank Sums, R package version 1.1.5 (2017). URL https://CRAN.R-project.org/package=conover.test

[61] Y. Benjamini, D. Yekutieli, The control of the false discovery rate in multiple testing under dependency, The Annals of Statistics 29 (4) (2001) 1165-1188.

[62] A. Hryniowski, A. Wong, DeepLABNet: End-to-end learning of deep radial basis networks, Journal of Computational Vision and Imaging Systems 5 (1) (2020) 1. URL https://openjournals.uwaterloo.ca/index.php/vsl/article/view/1663
|
2201.13xxx/2201.13078/images.zip
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:dffa239d5149d60d0a6e187bc8705265b08046e25d8e58ed233eadee0fd97715
size 1528270
2201.13xxx/2201.13078/layout.json
ADDED
The diff for this file is too large to render.

2201.13xxx/2201.13100/e7d5763b-dffe-40a2-928b-5f73672ed49e_content_list.json
ADDED
The diff for this file is too large to render.

2201.13xxx/2201.13100/e7d5763b-dffe-40a2-928b-5f73672ed49e_model.json
ADDED
The diff for this file is too large to render.
2201.13xxx/2201.13100/e7d5763b-dffe-40a2-928b-5f73672ed49e_origin.pdf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a1c6dd0d01a36ec93e90ee362e25ebf59c42eddbfcc013415c8d8b875da6ffd9
size 2717432
2201.13xxx/2201.13100/full.md
ADDED
@@ -0,0 +1,491 @@
# Yuge Shi $^{1}$ N. Siddharth $^{2}$ Philip H.S. Torr $^{1}$ Adam R. Kosiorek $^{3}$
# Abstract

We propose ADIOS, a masked image modeling (MIM) framework for self-supervised learning, which simultaneously learns a masking function and an image encoder using an adversarial objective. The image encoder is trained to minimise the distance between representations of the original and that of a masked image. The masking function, conversely, aims at maximising this distance. ADIOS consistently improves on state-of-the-art self-supervised learning (SSL) methods on a variety of tasks and datasets—including classification on ImageNet100 and STL10, transfer learning on CIFAR10/100, Flowers102 and iNaturalist, as well as robustness evaluated on the backgrounds challenge (Xiao et al., 2021)—while generating semantically meaningful masks. Unlike modern MIM models such as MAE, BEiT and iBOT, ADIOS does not rely on the image-patch tokenisation construction of Vision Transformers, and can be implemented with convolutional backbones. We further demonstrate that the masks learned by ADIOS are more effective in improving representation learning of SSL methods than masking schemes used in popular MIM models. Code is available at https://github.com/YugeTen/adios.

# 1. Introduction

The goal of masked image modeling (MIM) is to learn image representations, in a self-supervised fashion, by occluding parts of the input images. MIM is inspired by significant advances in natural language modelling such as BERT (Devlin et al., 2019), where the model is trained to fill in words randomly removed from a sentence (Fig. 1, top row). Recent work, including MAE (He et al., 2021) and BEiT (Bao et al., 2021), shows that these gains are at least partially transferable to vision. The task of a MIM model is therefore similar to BERT's: e.g., given an image of a bird in Fig. 1, it needs to reason about what the bird might be sitting on, or what colour the bird's belly is, given the visible context (bottom row). However, while missing words describe whole semantic entities (e.g. "head"), the masks used for the context encoder (Pathak et al. (2016), which pioneered MIM), BEiT and MAE typically have no such constraint (Fig. 1 bottom, left to right). Imputation under such schemes is conceptually simpler, as random masking only partially obscures meaningful visual entities, which allows easier inference of missing values by leveraging strong correlations at the local-pixel level<sup>1</sup>.

<sup>1</sup>University of Oxford <sup>2</sup>The University of Edinburgh & The Alan Turing Institute <sup>3</sup>DeepMind. Correspondence to: Yuge Shi <yshi@robots.ox.ac.uk>, Adam Kosiorek <adamrk@deepmind.com>.


Masked Language Model


Masked Image Models

Figure 1. Self-supervised language, and vision, models learn representations by imputing data removed by masking. BERT: random word masks; Context encoder: random, fixed-shape mask; BEiT: random 'blockwise' masking; MAE: randomly mask out $75\%$ of the image; ADIOS: multiple masks $(N = 3)$ generated by an adversarially trained masking model, post-processed with fully connected conditional random fields (Krahenbuhl & Koltun, 2011).

To narrow the gap between pixel masking and word masking, we posit that one needs to occlude whole entities in the image. This encourages the model to perform imputation by complex semantic reasoning using the unmasked context (e.g. given a bird with a yellow body, it is likely to have a yellow head) rather than leveraging simple local correlations, which can benefit representation learning. Interestingly, He et al. (2021) are motivated by a similar hypothesis and propose to occlude a large fraction (up to $75\%$) of the image, removing complete entities with higher probability, which they find is essential for good performance. Here, we suggest that it is actually what is masked, not so much how much is masked, that is crucial for effective self-supervised representation learning.
To this end, we investigate learning to mask with an adversarial objective, where an occlusion model is asked to make reasoning about missing parts of the scene more difficult. This novel representation-learning algorithm, called Adversarial Inference-Occlusion Self-supervision (ADIOS), can identify and mask out regions of correlated pixels within an image (Fig. 1, bottom right), which brings it closer to the word-masking regime in natural language. And as we shall see in Section 3, it consistently improves the performance of state-of-the-art self-supervised learning (SSL) algorithms.

Some MIM methods employ a generative component for representation learning, by learning to reconstruct the masked image. However, it has been shown (Bao et al., 2021; Ramesh et al., 2021) that pixel-level reconstruction tasks waste modelling capacity on high-frequency details over low-frequency structure, leading to subpar performance. We hence frame ADIOS as an encoder-only framework in which the encoder minimises the distance between the representations of the original image and the masked image. The occlusion model, which is trained adversarially to the encoder, tries to maximise this same distance. We further discuss in Section 2.1 that, compared to the generative setup, the encoder-only setup optimises a functionally superior objective for representation learning. Note that the encoder objective is compatible with many recent augmentation-based Siamese self-supervised learning (SSL; Chen et al. (2020); Chen & He (2021)) methods. We show that ADIOS consistently improves the performance of these SSL objectives, showcasing the generality of our approach.

Our main contributions are as follows:

1. A novel adversarial Siamese-style MIM framework that, unlike other MIM methods, is not limited to using ViT as backbone—advantageous given recent discoveries of modernised-convnet superiority over ViTs (Liu et al., 2022; Touvron et al., 2021);
2. Qualitative and quantitative analyses showing that masks generated by ADIOS are semantically meaningful;
3. Analysis of how different masking schemes affect the representation-learning performance of SSL models. We find that models trained with ADIOS and ground-truth object masks significantly outperform other masking schemes and the no-mask baseline, demonstrating the efficacy of semantically meaningful masks for representation learning.
# 2. Methodology

Set up ADIOS consists of two components, an inference model $\mathcal{I}$ and an occlusion model $\mathcal{M}$ (see Fig. 2). Given an RGB image $\pmb{x}$, the occlusion model produces an image-sized mask $\pmb{m} = \mathcal{M}(\pmb{x})$ with values in $[0, 1]$. The inference model $\mathcal{I}$ takes the original image $\pmb{x}$ and an occluded image $\pmb{x}^m = \pmb{x} \odot \pmb{m}$ ($\odot$ is the Hadamard product) as inputs, generating representations for both, which we denote as $\pmb{z}$ and $\pmb{z}^m$. The two models are learnt by solving for

$$
\mathcal{I}^{\star}, \mathcal{M}^{\star} = \arg\min_{\mathcal{I}} \max_{\mathcal{M}} \mathcal{L}(\pmb{x}; \mathcal{I}, \mathcal{M}). \tag{1}
$$

We will now discuss different choices for $\mathcal{I}$ and $\mathcal{M}$.
# 2.1. Inference model $\mathcal{I}$

As discussed in Section 1, the inference model should minimise some distance between the original and masked images. Here, we discuss potential forms of this objective, arriving at our final framework using augmentation-based SSL.


Figure 2. ADIOS Architecture.


Figure 3. Inpainting.

Distance in pixel space One option would be to inpaint the masked image with the inference model, and train $\mathcal{I}$ by minimising the distance between the inpainted image and the original image in pixel space. More specifically, we can define $\mathcal{I}$ as an auto-encoder consisting of an encoder and a decoder, which takes the masked image $\pmb{x}^m$ as input and produces an inpainted image $\hat{\pmb{x}}$ (see Fig. 3). The model can be trained using the following reconstruction loss:
$$
\mathcal{L}_{\mathrm{AE}}(\pmb{x}; \mathcal{I}, \mathcal{M}) = \mathcal{D}(\pmb{x}, \hat{\pmb{x}}) = \mathcal{D}(\pmb{x}, \mathcal{I}(\pmb{x} \odot \mathcal{M}(\pmb{x}))), \tag{2}
$$

where $\mathcal{D}$ denotes some distance metric defined in pixel space. Minimising (2) encourages the auto-encoder to impute the missing part of the image as accurately as possible. $\mathcal{L}_{\mathrm{AE}}$ can then be used in (1) to train the inference-occlusion model.

Distance in representation space An interesting question for an auto-encoding $\mathcal{I}$ is: where does the imputation happen? Multiple hypotheses exist: ① Encoder only: The encoder $q$ completely recovers the missing information in the masked image, and in the ideal case, $q(\mathcal{M}(\pmb{x})) = q(\pmb{x})$; the decoder faithfully reconstructs these representations. ② Decoder only: The encoder faithfully extracts all information from the masked image, and the decoder reasons to inpaint the missing pixels from the representations. ③ Both: The encoder and decoder both inpaint parts of the image.

Given these scenarios, ① is clearly best suited for representation learning, as it requires the encoder to reason about the missing parts based on observed context, beyond just extracting image features. With representation learning, rather than inpainting, being our end goal, the key challenge lies in designing an objective targeting scenario ①, such that we learn the most expressive version of the encoder $q$.

A key feature of ① is that when the encoder recovers all information of the original image, $q(\pmb{x}^m) = q(\pmb{x})$: the features extracted from the partially observed image $\pmb{x}^m$ should in principle be the same as those extracted from the un-masked image $\pmb{x}$. We thus propose an inference model $\mathcal{I}$ consisting of only the encoder, which extracts a representation $\pmb{z} = \mathcal{I}(\pmb{x})$, $\pmb{z} \in \mathbb{R}^d$. Our objective can thus be written as

$$
\mathcal{L}_{\mathrm{ENC}}(\pmb{x}; \mathcal{I}, \mathcal{M}) = \mathcal{D}(\pmb{z}, \pmb{z}^{m}) = \mathcal{D}(\mathcal{I}(\pmb{x}), \mathcal{I}(\pmb{x} \odot \mathcal{M}(\pmb{x}))), \tag{3}
$$

where $\mathcal{D}$ is some distance metric defined in $\mathbb{R}^d$. Not only does $\mathcal{L}_{\mathrm{ENC}}$ encourage the learning of a more expressive encoder that can infer missing information, optimising this objective also does not involve a generative component $p$, which is redundant for representation learning.
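As an illustration, (3) with negative cosine similarity as $\mathcal{D}$ amounts to the following PyTorch sketch, where `encoder` and `mask_model` are placeholders standing in for $\mathcal{I}$ and $\mathcal{M}$ (a minimal sketch, not the released implementation):

```python
import torch
import torch.nn.functional as F

def enc_loss(encoder, mask_model, x):
    """Encoder-only objective (3) with D = negative cosine similarity."""
    m = mask_model(x)       # mask in [0, 1], same shape as x
    z = encoder(x)          # representation of the clean image
    z_m = encoder(x * m)    # representation of the masked image
    return -F.cosine_similarity(z, z_m, dim=-1).mean()
```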
# SSL framework

(3) can be realised by simply optimising a Siamese network (Bromley et al., 1993). However, the objective can be trivially minimised when the representations for all inputs "collapse" to a constant. This phenomenon, known as latent collapse, has been addressed in many ways in augmentation-based SSL.


Figure 4. Left: SimCLR. Right: SimCLR + ADIOS.

Let us take SimCLR (Chen et al., 2020) as an example (see Fig. 4, left); given a minibatch of $M$ input images $\pmb{x} = \{\pmb{x}_i\}_{i=1}^M$, two sets of random augmentations $A$ and $B$ are applied to each image in $\pmb{x}$, yielding $\pmb{x}^A$ and $\pmb{x}^B$. The same encoding function $\mathcal{I}$ is used to extract representations from both sets of augmented views, yielding $\pmb{z}^A = \mathcal{I}(\pmb{x}^A)$ and $\pmb{z}^B = \mathcal{I}(\pmb{x}^B)$. The objective of SimCLR is defined as

$$
\mathcal{L}_{\mathrm{SimCLR}}(\pmb{x}; \mathcal{I}) = \log \frac{\exp\left(\mathcal{D}\left(\pmb{z}_i^A, \pmb{z}_i^B\right)\right)}{\sum_{i \neq j} \exp\left(\mathcal{D}\left(\pmb{z}_i^A, \pmb{z}_j^B\right)\right)}, \tag{4}
$$

where $\mathcal{D}$ denotes the negative cosine similarity$^2$. Intuitively, the objective minimises the distance between the representations of two augmented views of the same image (i.e. $\pmb{z}_i^A, \pmb{z}_i^B$), while repulsing the representations of different images (i.e. $\pmb{z}_i^A, \pmb{z}_j^B$). This effectively prevents the representations of different images from collapsing to the same constant, while optimising an objective similar to (3).

We can use the SimCLR objective for our model by masking one of the augmented images and then following the exact same pipeline (see Fig. 4, right). More specifically, we replace $\pmb{z}^A$ by $\pmb{z}^{A,m} = \mathcal{I}(\pmb{x}^A \odot \pmb{m}^A)$, where $\pmb{m}^A = \mathcal{M}(\pmb{x}^A)$ is a mask generated by the occlusion model given $\pmb{x}^A$. Following (4), we can write the SimCLR-ADIOS objective as

$$
\begin{aligned}
\mathcal{L}_{\mathrm{SimCLR}}^{\mathrm{ADIOS}}(\pmb{x}; \mathcal{I}, \mathcal{M}) &= \log \frac{\exp(\mathcal{D}(\pmb{z}_i^{A,\pmb{m}}, \pmb{z}_i^B))}{\sum_{i \neq j} \exp(\mathcal{D}(\pmb{z}_i^{A,\pmb{m}}, \pmb{z}_j^B))} \\
&= \log \frac{\exp\left(\mathcal{D}\left(\mathcal{I}\left(\pmb{x}_i^A \odot \mathcal{M}\left(\pmb{x}_i^A\right)\right), \mathcal{I}\left(\pmb{x}_i^B\right)\right)\right)}{\sum_{i \neq j} \exp\left(\mathcal{D}\left(\mathcal{I}\left(\pmb{x}_i^A \odot \mathcal{M}\left(\pmb{x}_i^A\right)\right), \mathcal{I}\left(\pmb{x}_j^B\right)\right)\right)}. \tag{5}
\end{aligned}
$$

Again, we can use (5) in (1) to train the inference-occlusion model. Crucially, any SSL method that compares two augmented image views includes a term like (3), and can be plugged into our framework. We conduct experiments using the SimCLR, BYOL (Grill et al., 2020), and SimSiam (Chen & He, 2021) objectives and show significant improvements in downstream task performance with each method. Refer to Appendix A for the ADIOS objective used for BYOL and SimSiam, as well as more details on the SimCLR objective.
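One way to realise the min-max problem in (1) is the following sketch of a single adversarial update, in which the inference model descends the loss while the occlusion model ascends it via sign-flipped gradients. `adios_loss` is a placeholder for an objective such as (5); this is a simplified illustration, not the released training code.

```python
import torch

def adversarial_step(encoder, mask_model, x, opt_i, opt_m):
    loss = adios_loss(encoder, mask_model, x)  # e.g. (5), averaged over masks
    opt_i.zero_grad()
    opt_m.zero_grad()
    loss.backward()
    opt_i.step()                               # minimise w.r.t. I
    for p in mask_model.parameters():          # ascend for M: flip gradients
        if p.grad is not None:
            p.grad.neg_()
    opt_m.step()
    return loss.item()
```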
# 2.2. Occlusion model $\mathcal{M}$

For simplicity, we only considered the single-mask case in the discussion above. In practice, since an image typically contains multiple components, we generate $N > 1$ masks to challenge the model to reason about relations between different components; empirical performance confirms the benefits of doing so.

There are many parametric forms $\mathcal{M}$ could employ. For instance, one could consider generating multiple masks sequentially in an auto-regressive manner, as seen in Engelcke et al. (2021). However, we find that the simplest setup suffices, where $\mathcal{M}: \mathbb{R}^{c \times w \times h} \mapsto \mathbb{R}^{N \times w \times h}$ consists of a learnable neural network and a pixelwise softmax layer $\sigma$ applied across the $N$ masks to ensure that the values of a given pixel across all the masks sum to 1. We use U-Net as the backbone of our occlusion model—see Appendix B for more details. Note that we experimented with binarising the masks during training, but found that this did not yield improvements, and hence used real-valued masks directly.
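The output head can be sketched as follows, assuming a backbone network `unet` (a placeholder) that already produces $N$ output channels; the pixelwise softmax across the mask dimension is the only ADIOS-specific ingredient.

```python
import torch

class OcclusionModel(torch.nn.Module):
    """U-Net style mask generator with a pixelwise softmax over N masks."""

    def __init__(self, unet: torch.nn.Module, n_masks: int):
        super().__init__()
        self.unet = unet          # assumed to output n_masks channels
        self.n_masks = n_masks

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        logits = self.unet(x)     # (B, N, H, W): one logit map per mask
        # Softmax across the mask dimension: each pixel's N values sum to 1.
        return logits.softmax(dim=1)
```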
# 2.3. Putting it together

Here we present ADIOS in its complete form. This includes the $N$-mask occlusion model, which generates masks $\{\pmb{m}^{(n)}\}_{n=1}^N$ from the RGB image $\pmb{x}$. The inference model computes a loss $\mathcal{L}^{(n)}(\pmb{x}; \mathcal{I}, \mathcal{M})$ for each $\pmb{m}^{(n)}$, and the final loss is computed by averaging across the $N$ masks:

$$
\mathcal{I}^{\star}, \mathcal{M}^{\star} = \arg\min_{\mathcal{I}} \max_{\mathcal{M}} \frac{1}{N} \sum_{n=1}^{N} \mathcal{L}^{(n)}(\pmb{x}; \mathcal{I}, \mathcal{M}). \tag{6}
$$


Figure 5. ADIOS, $N > 1$.

Sparsity penalty A trivial solution exists for the objective (6), where some mask $\pmb{m}^{(n)}$ among $\{\pmb{m}^{(1)}, \dots, \pmb{m}^{(N)}\}$ occludes everything, with the other $N - 1$ masks not occluding anything. To avoid such degenerate solutions, we introduce a sparsity penalty $p_n$ in the form of $1/\sin(\cdot)$ that discourages the occlusion model from generating all-one or all-zero masks; specifically,

$$
p_n = \sin\left(\frac{\pi}{hw} \sum_{i=1}^{h} \sum_{j=1}^{w} \pmb{m}_{ij}^{(n)}\right)^{-1}. \tag{7}
$$


Figure 6. Penalty.

Note that $p_n$ goes to infinity as $\pmb{m}^{(n)}$ approaches all-one or all-zero (see Fig. 6). Minimising $p_n$ with respect to $\mathcal{M}$ encourages the occlusion model to generate semantically meaningful masks, while avoiding degenerate solutions.
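In code, (7) is essentially a one-liner; the sketch below assumes a single real-valued mask tensor with values strictly inside $(0, 1)$ on average:

```python
import math
import torch

def sparsity_penalty(mask: torch.Tensor) -> torch.Tensor:
    """Penalty (7): diverges as the mean mask value approaches 0 or 1."""
    return 1.0 / torch.sin(math.pi * mask.mean())
```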
Final objective Let $\lambda$ be the scaling of the penalty term. Our complete objective reads

$$
\mathcal{I}^{\star}, \mathcal{M}^{\star} = \arg\min_{\mathcal{I}} \max_{\mathcal{M}} \frac{1}{N} \sum_{n=1}^{N} \left(\mathcal{L}^{(n)}(\pmb{x}; \mathcal{I}, \mathcal{M}) - \lambda p_n\right). \tag{8}
$$

Lightweight ADIOS Despite its strong empirical performance, we note that the training objective in (8) requires $N$ forward passes, which can be computationally expensive as we increase $N$ for more complex data. We therefore develop a lightweight version of ADIOS, where we randomly sample one of the $N$ generated masks to be applied to the input image. Doing so disassociates the computational cost of the model from the number of generated masks, and the only cost increase comes from applying the mask generation model once, which is inexpensive (the occlusion model is $10\%$ the size of a ResNet18). We name this single-forward-pass version of our model ADIOS-s, and write the objective as

$$
\mathcal{I}^{\star}, \mathcal{M}^{\star} = \arg\min_{\mathcal{I}} \max_{\mathcal{M}} \left(\mathcal{L}^{(k)}(\pmb{x}; \mathcal{I}, \mathcal{M}) - \lambda \frac{1}{N} \sum_{n=1}^{N} p_n\right), \quad \text{where } k \sim \mathrm{Uniform}(\{1, 2, \dots, N\}). \tag{9}
$$
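A sketch of (9), reusing the `sparsity_penalty` helper above; `ssl_loss` is a placeholder for the chosen SSL objective applied to an image and its masked view (a minimal illustration, not the released code):

```python
import random
import torch

def adios_s_loss(encoder, mask_model, x, ssl_loss, lam: float):
    masks = mask_model(x)                               # (B, N, H, W)
    n = masks.shape[1]
    k = random.randrange(n)                             # k ~ Uniform({1..N})
    loss = ssl_loss(encoder, x, x * masks[:, k:k + 1])  # one forward pass
    # Sparsity penalty still averaged over all N masks, as in (9).
    penalty = torch.stack([sparsity_penalty(masks[:, i])
                           for i in range(n)]).mean()
    return loss - lam * penalty
```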
# 3. Evaluation of Representations
Set up We evaluate ADIOS with three different SSL objectives: SimCLR (Chen et al., 2020), BYOL (Grill et al., 2020), and SimSiam (Chen & He, 2021). Each set of quantitative results is reported as an average over three random trials. We summarise our training setup in Appendix C.

Table 1. Top-1 classification accuracy ($k$-NN and linear probing) on ImageNet100-S and STL10. Improvements of ADIOS that are more than $3\%$ are marked in bold.

<table><tr><td rowspan="2">Method</td><td colspan="2">ImageNet100-S</td><td colspan="2">STL10</td></tr><tr><td>k-NN</td><td>Linear</td><td>k-NN</td><td>Linear</td></tr><tr><td colspan="5">Backbone: ViT-Tiny</td></tr><tr><td>SimCLR</td><td>40.0 (±0.28)</td><td>40.2 (±0.47)</td><td>72.9 (±0.27)</td><td>76.0 (±0.33)</td></tr><tr><td>+ADIOS</td><td>42.0 (±1.32)</td><td>43.1 (±0.71)</td><td>73.4 (±0.28)</td><td>79.7 (±0.88)</td></tr><tr><td>SimSiam</td><td>35.2 (±1.12)</td><td>36.8 (±1.82)</td><td>66.7 (±0.10)</td><td>67.5 (±0.02)</td></tr><tr><td>+ADIOS</td><td>38.8 (±2.73)</td><td>40.1 (±0.59)</td><td>67.9 (±0.75)</td><td>68.8 (±0.25)</td></tr><tr><td>BYOL</td><td>38.1 (±0.61)</td><td>39.7 (±0.50)</td><td>71.9 (±0.12)</td><td>72.1 (±0.32)</td></tr><tr><td>+ADIOS</td><td>47.1 (±0.35)</td><td>49.2 (±0.94)</td><td>74.5 (±0.58)</td><td>75.9 (±0.63)</td></tr><tr><td colspan="5">Backbone: ResNet-18</td></tr><tr><td>SimCLR</td><td>54.1 (±0.09)</td><td>55.1 (±0.15)</td><td>83.7 (±0.48)</td><td>85.1 (±0.12)</td></tr><tr><td>+ADIOS</td><td>55.1 (±0.43)</td><td>55.9 (±0.21)</td><td>85.8 (±0.10)</td><td>86.1 (±0.36)</td></tr><tr><td>SimSiam</td><td>58.6 (±0.31)</td><td>59.5 (±0.31)</td><td>84.3 (±0.81)</td><td>84.8 (±0.72)</td></tr><tr><td>+ADIOS</td><td>61.0 (±0.29)</td><td>60.4 (±0.19)</td><td>84.6 (±0.35)</td><td>86.4 (±0.24)</td></tr><tr><td>BYOL</td><td>56.2 (±0.79)</td><td>56.3 (±0.10)</td><td>83.6 (±0.09)</td><td>84.3 (±0.13)</td></tr><tr><td>+ADIOS</td><td>60.2 (±0.82)</td><td>61.4 (±0.14)</td><td>84.8 (±0.19)</td><td>85.6 (±0.24)</td></tr></table>
# 3.1. Classification

We evaluate the performance of ADIOS on STL10, as well as a downsized version of ImageNet100 (Tian et al., 2020), reduced from resolution $224 \times 224$ to $96 \times 96$. We refer to our version of the dataset as ImageNet100-S. We also evaluate the performance of ADIOS-s on the original ImageNet100 dataset. Both ImageNet100 and STL10 are derived from ImageNet-1k (Russakovsky et al., 2015): ImageNet100 contains data from 100 ImageNet classes, and STL10 is derived from 10 object classes of ImageNet, with 5,000 labelled images and 100,000 unlabelled images. Due to computational constraints we were unable to evaluate on ImageNet-1k; we leave this for future work.

For ADIOS, we provide results using ResNet18 (He et al., 2016) and ViT-Tiny (Dosovitskiy et al., 2021) backbones on three classification tasks: linear evaluation, $k$-NN and clustering. Through hyperparameter search, we use $N = 4$ masking slots for ImageNet100-S and $N = 6$ for STL10. For ADIOS-s we provide results using ResNet18 as backbone on linear evaluation.

Linear evaluation and $k$-NN We study the utility of the learned representations by classifying the features using both a linear classifier and a $k$-nearest neighbour ($k$-NN) classifier. Following the protocol in Zhou et al. (2021), we sweep over different numbers of nearest neighbours for $k$-NN and different learning rates for the linear classifier.

Results are presented in Tab. 1, where each +ADIOS entry represents the ADIOS framework applied to the SSL objective in the row above. For instance, the top coloured block shows results of SimCLR and SimCLR+ADIOS. Models using ADIOS consistently outperform their respective SSL baselines beyond the margin of error, in some cases achieving significant improvements of $3-9\%$ (in bold).
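This evaluation protocol can be sketched as follows with scikit-learn on frozen features; the feature arrays below are random placeholders standing in for encoder outputs, not the actual experiment:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical frozen-encoder features; in practice these come from I.
rng = np.random.default_rng(0)
train_feats, train_labels = rng.normal(size=(500, 512)), rng.integers(0, 10, 500)
test_feats, test_labels = rng.normal(size=(100, 512)), rng.integers(0, 10, 100)

knn = KNeighborsClassifier(n_neighbors=20).fit(train_feats, train_labels)
linear = LogisticRegression(max_iter=1000).fit(train_feats, train_labels)
print("k-NN accuracy:", knn.score(test_feats, test_labels))
print("linear accuracy:", linear.score(test_feats, test_labels))
```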
Table 2. Top-1 classification accuracy of linear probing on ImageNet100. Improvements of more than $3\%$ are marked in bold.

<table><tr><td>SimCLR</td><td>+ADIOS-s</td><td>SimSiam</td><td>+ADIOS-s</td><td>BYOL</td><td>+ADIOS-s</td></tr><tr><td>77.5 (±0.10)</td><td>76.1 (±0.50)</td><td>76.4 (±0.07)</td><td>77.2 (±0.09)</td><td>74.3 (±0.16)</td><td>80.8 (±0.60)</td></tr></table>
Notably, the ViT-Tiny models perform significantly worse than ResNet-18 models, which is unsurprising given that ViT-Tiny uses half the number of parameters of ResNet-18.
The best Top-1 accuracy on ImageNet100-S is $61.4\%$, achieved by BYOL+ADIOS under linear evaluation, surpassing its baseline BYOL by $5.1\%$. For STL10, the best performing model is SimSiam+ADIOS under linear evaluation, with an accuracy of $86.4\%$, while SimSiam evaluates at $84.8\%$. Significantly, ADIOS improves all metrics on both backbones and both datasets. Interestingly, the degree of improvement varies by method, and can change the ranking of the respective SSL methods.
We also run experiments on the original ImageNet100 dataset with the single-forward pass version of our model, ADIOS-s. The results in Tab. 2 show that this much cheaper model also achieves impressive performance, especially when applied to BYOL with a performance boost of more than $6\%$ . This result further demonstrates the efficiency of our approach, and the reduced computational cost allows for the potential of scaling to larger datasets.
Clustering Following Bao et al. (2021); Zhou et al. (2021), we also evaluate the trained models using standard clustering metrics, including the adjusted Rand index (ARI) and the Fowlkes-Mallows index (FMI), both of which compute the similarity between clusterings, as well as normalised mutual information (NMI). We assign pseudo-labels to the representation of each image using $k$-means, and evaluate the three metrics on the clusters formed by the pseudo-labels vs. the true labels. Results in Tab. 3 are consistent with our previous findings: ADIOS improves the performance of the baseline SSL methods on all three metrics for both datasets.
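A scikit-learn sketch of this clustering evaluation is given below; `feats` and `labels` are random placeholders standing in for the frozen representations and the true labels:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import (adjusted_rand_score, fowlkes_mallows_score,
                             normalized_mutual_info_score)

# Hypothetical frozen representations and true labels.
rng = np.random.default_rng(0)
feats = rng.normal(size=(1000, 512))
labels = rng.integers(0, 100, 1000)

pseudo = KMeans(n_clusters=100, n_init=10).fit_predict(feats)  # pseudo-labels
print("ARI:", adjusted_rand_score(labels, pseudo))
print("FMI:", fowlkes_mallows_score(labels, pseudo))
print("NMI:", normalized_mutual_info_score(labels, pseudo))
```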
Findings ADIOS significantly and consistently improves the quality of representation learned under a range of setups, across two datasets, two backbone architectures, three SSL methods and on five different metrics, highlighting the effectiveness and versatility of our approach.
# 3.2. Transfer learning
We study the downstream performance of models trained on ImageNet100-S on four different datasets: CIFAR10, CIFAR100 (Krizhevsky et al., 2009), Flowers102 (Nilsback & Zisserman, 2008), and iNaturalist (Horn et al., 2018). CIFAR10 and CIFAR100 images have resolution $32 \times 32$, while those of Flowers102 and iNaturalist are $96 \times 96$. We only use the ResNet-18 models here, as they clearly outperform the ViT-Tiny models in our previous experiments. Detailed setup and hyperparameters are given in Appendix D.

Table 3. Clustering performance on ImageNet100-S and STL10.

<table><tr><td rowspan="2">Method</td><td rowspan="2">Backbone</td><td colspan="3">Metrics</td></tr><tr><td>FMI ↑</td><td>ARI ↑</td><td>NMI ↑</td></tr><tr><td colspan="5">Dataset: ImageNet100-S</td></tr><tr><td>SimCLR</td><td>ViT-Tiny</td><td>0.105 (±1e-3)</td><td>0.095 (±1e-3)</td><td>0.432 (±3e-3)</td></tr><tr><td>+ADIOS</td><td>ViT-Tiny</td><td>0.120 (±1e-3)</td><td>0.110 (±1e-3)</td><td>0.442 (±4e-3)</td></tr><tr><td>SimSiam</td><td>ViT-Tiny</td><td>0.077 (±9e-4)</td><td>0.067 (±2e-3)</td><td>0.389 (±3e-3)</td></tr><tr><td>+ADIOS</td><td>ViT-Tiny</td><td>0.098 (±1e-2)</td><td>0.087 (±9e-4)</td><td>0.425 (±3e-3)</td></tr><tr><td>BYOL</td><td>ViT-Tiny</td><td>0.098 (±8e-3)</td><td>0.088 (±8e-3)</td><td>0.418 (±4e-3)</td></tr><tr><td>+ADIOS</td><td>ViT-Tiny</td><td>0.132 (±3e-3)</td><td>0.123 (±1e-3)</td><td>0.458 (±4e-3)</td></tr><tr><td>SimCLR</td><td>ResNet18</td><td>0.151 (±3e-3)</td><td>0.135 (±4e-3)</td><td>0.515 (±6e-3)</td></tr><tr><td>+ADIOS</td><td>ResNet18</td><td>0.175 (±1e-3)</td><td>0.161 (±4e-3)</td><td>0.539 (±3e-3)</td></tr><tr><td>SimSiam</td><td>ResNet18</td><td>0.167 (±2e-3)</td><td>0.136 (±6e-3)</td><td>0.553 (±8e-3)</td></tr><tr><td>+ADIOS</td><td>ResNet18</td><td>0.179 (±1e-3)</td><td>0.161 (±1e-3)</td><td>0.553 (±1e-3)</td></tr><tr><td>BYOL</td><td>ResNet18</td><td>0.170 (±1e-3)</td><td>0.158 (±3e-3)</td><td>0.530 (±4e-3)</td></tr><tr><td>+ADIOS</td><td>ResNet18</td><td>0.179 (±6e-4)</td><td>0.156 (±2e-3)</td><td>0.561 (±2e-3)</td></tr><tr><td colspan="5">Dataset: STL10</td></tr><tr><td>SimCLR</td><td>ViT-Tiny</td><td>0.349 (±5e-3)</td><td>0.269 (±6e-3)</td><td>0.410 (±2e-3)</td></tr><tr><td>+ADIOS</td><td>ViT-Tiny</td><td>0.351 (±4e-3)</td><td>0.271 (±8e-3)</td><td>0.417 (±6e-3)</td></tr><tr><td>SimSiam</td><td>ViT-Tiny</td><td>0.296 (±3e-3)</td><td>0.177 (±1e-3)</td><td>0.341 (±4e-3)</td></tr><tr><td>+ADIOS</td><td>ViT-Tiny</td><td>0.320 (±3e-3)</td><td>0.235 (±5e-3)</td><td>0.349 (±0e-0)</td></tr><tr><td>BYOL</td><td>ViT-Tiny</td><td>0.349 (±5e-3)</td><td>0.269 (±5e-3)</td><td>0.410 (±5e-3)</td></tr><tr><td>+ADIOS</td><td>ViT-Tiny</td><td>0.355 (±4e-2)</td><td>0.276 (±3e-3)</td><td>0.422 (±4e-3)</td></tr><tr><td>SimCLR</td><td>ResNet18</td><td>0.338 (±2e-3)</td><td>0.166 (±9e-4)</td><td>0.512 (±5e-3)</td></tr><tr><td>+ADIOS</td><td>ResNet18</td><td>0.437 (±6e-3)</td><td>0.309 (±9e-3)</td><td>0.585 (±8e-3)</td></tr><tr><td>SimSiam</td><td>ResNet18</td><td>0.392 (±2e-3)</td><td>0.242 (±7e-3)</td><td>0.552 (±3e-3)</td></tr><tr><td>+ADIOS</td><td>ResNet18</td><td>0.412 (±8e-3)</td><td>0.249 (±7e-3)</td><td>0.558 (±2e-4)</td></tr><tr><td>BYOL</td><td>ResNet18</td><td>0.429 (±5e-3)</td><td>0.328 (±9e-3)</td><td>0.525 (±8e-3)</td></tr><tr><td>+ADIOS</td><td>ResNet18</td><td>0.508 (±6e-3)</td><td>0.422 (±1e-2)</td><td>0.588 (±9e-3)</td></tr></table>

Tab. 4 reports classification accuracy under two different transfer-learning setups: F.T., fine-tuning the entire model, and Lin., freezing the encoder weights and re-training the linear classifier only. As a comparison, we also show the results of training from scratch on each dataset.
|
| 202 |
+
|
| 203 |
+
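To make the two setups concrete, below is a minimal PyTorch sketch of the distinction (our own illustration, not the paper's code; `encoder` stands for a pretrained ResNet-18 backbone with 512-dimensional output features):

```python
import torch

def build_transfer_model(encoder: torch.nn.Module, num_classes: int, linear_only: bool):
    """Lin.: freeze the encoder, train only the classifier; F.T.: train everything."""
    head = torch.nn.Linear(512, num_classes)      # 512 = ResNet-18 feature dimension
    if linear_only:
        for p in encoder.parameters():
            p.requires_grad = False               # encoder weights stay fixed
    model = torch.nn.Sequential(encoder, head)
    trainable = [p for p in model.parameters() if p.requires_grad]
    optimizer = torch.optim.SGD(trainable, lr=0.01, momentum=0.9)
    return model, optimizer
```
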
Results show that ADIOS improves transfer learning performance on all four datasets, under both linear evaluation and fine-tuning. Larger improvements of $>3\%$ (marked in bold) occur mostly under linear evaluation, indicating that, compared to the baseline SSL models, the ADIOS-pretrained representations are much easier to separate linearly. Notably, the fine-tuning performance of all six models exceeds training from scratch, demonstrating the benefits of pretraining.

One might also notice that, unlike on the other datasets, there is a large discrepancy between the linear evaluation and fine-tuning performance on CIFAR10 and CIFAR100. This is because He et al. (2016) suggest a slightly different architecture for CIFAR, with a smaller kernel size in the first layer to suit its small image size (see details in Appendix F). We use this CIFAR-ResNet for fine-tuning; for linear evaluation, however, we have to use the original architecture in order to reuse the pretrained weights, which leads to poor performance.

Table 4. Classification accuracy of transfer learning by re-training the linear classifier only (Lin.) and by fine-tuning (F.T.). Improvements of more than $3\%$ by ADIOS are marked in bold.

<table><tr><td rowspan="2">Method</td><td colspan="2">CIFAR10</td><td colspan="2">CIFAR100</td><td colspan="2">Flowers102</td><td colspan="2">iNaturalist</td></tr><tr><td>Lin.</td><td>F.T.</td><td>Lin.</td><td>F.T.</td><td>Lin.</td><td>F.T.</td><td>Lin.</td><td>F.T.</td></tr><tr><td>SimCLR</td><td>30.1</td><td>91.3</td><td>10.2</td><td>70.0</td><td>42.5</td><td>45.6</td><td>69.4</td><td>82.1</td></tr><tr><td>+ADIOS</td><td>34.6</td><td>93.4</td><td>11.0</td><td>71.8</td><td>50.2</td><td>50.6</td><td>72.5</td><td>84.3</td></tr><tr><td>SimSiam</td><td>35.3</td><td>92.4</td><td>13.2</td><td>65.0</td><td>38.7</td><td>55.0</td><td>72.3</td><td>85.0</td></tr><tr><td>+ADIOS</td><td>39.3</td><td>94.3</td><td>13.3</td><td>71.0</td><td>44.9</td><td>59.0</td><td>75.9</td><td>86.2</td></tr><tr><td>BYOL</td><td>29.9</td><td>88.0</td><td>13.3</td><td>52.3</td><td>49.1</td><td>58.6</td><td>72.7</td><td>85.1</td></tr><tr><td>+ADIOS</td><td>39.2</td><td>90.4</td><td>14.0</td><td>62.0</td><td>51.7</td><td>60.1</td><td>73.1</td><td>85.7</td></tr><tr><td>Scratch</td><td>-</td><td>85.5</td><td>-</td><td>49.8</td><td>-</td><td>30.6</td><td>-</td><td>73.8</td></tr></table>

# 3.3. Robustness

It is likely that the adversarial masks learned by ADIOS target informative image features, including spurious correlations if present in a dataset. It is therefore interesting to ask whether ADIOS representations are robust to changes in such spurious correlations. To answer this, we evaluate models pretrained on ImageNet100-S on the backgrounds challenge (Xiao et al., 2021), where 7 different types of variation on a subset of ImageNet data are used to measure the impact of foreground and background on model decision-making. Examples of such variations can be seen in Fig. 7, where the original image's (Orig.) background is replaced by the background from another image in the same class (M.S.), from a random image in any class (M.R.), or from an image in the next class (M.N.). We perform linear evaluation of the pretrained models on these variations, and report the classification accuracy in Tab. 5.

Our results show that all three SSL-ADIOS models outperform their respective baselines on all variations, demonstrating that ADIOS-learned representations are more robust to changes in both foreground and background.
It is useful to examine how ADIOS behaves under different testing conditions regardless of the SSL objective used. To this end, the bottom row of Tab. 5 shows the performance gain of ADIOS averaged over all three SSL models. We witness the biggest gains in the M.R. and M.N. conditions (bottom row, Fig. 7), i.e., when any deterministic relation between labels and backgrounds is severed. The improvement is lowest for the original images and the M.S. condition (top row, Fig. 7), both of which preserve the relation between labels and background. This demonstrates that ADIOS depends less on background information that is spuriously correlated with object labels when making predictions. This is not surprising: as we describe in Section 4.1, the masks generated by ADIOS tend to occlude backgrounds, encouraging the model to focus on the foreground objects.

Table 5. Accuracy on different variations of the backgrounds challenge, evaluating model robustness. Example variations are shown in Fig. 7.

<table><tr><td rowspan="2">Method</td><td colspan="7">Variations</td><td rowspan="2">Orig.</td></tr><tr><td>O.BB.</td><td>O.BT.</td><td>N.F.</td><td>O.F.</td><td>M.S.</td><td>M.R.</td><td>M.N.</td></tr><tr><td>SimCLR</td><td>20.1</td><td>34.8</td><td>44.3</td><td>41.6</td><td>67.1</td><td>45.9</td><td>41.0</td><td>78.8</td></tr><tr><td>+ADIOS</td><td>20.7</td><td>36.7</td><td>45.5</td><td>43.5</td><td>68.0</td><td>47.9</td><td>43.7</td><td>79.1</td></tr><tr><td>SimSiam</td><td>29.5</td><td>39.1</td><td>43.8</td><td>52.1</td><td>69.9</td><td>43.9</td><td>40.8</td><td>78.4</td></tr><tr><td>+ADIOS</td><td>33.1</td><td>41.0</td><td>46.2</td><td>54.7</td><td>71.5</td><td>47.2</td><td>43.5</td><td>80.3</td></tr><tr><td>BYOL</td><td>25.9</td><td>38.4</td><td>46.0</td><td>51.6</td><td>71.3</td><td>45.6</td><td>42.7</td><td>79.8</td></tr><tr><td>+ADIOS</td><td>27.7</td><td>39.0</td><td>48.5</td><td>51.7</td><td>72.1</td><td>47.8</td><td>44.1</td><td>80.6</td></tr><tr><td>Avg. gain</td><td>+2.0</td><td>+1.4</td><td>+2.0</td><td>+1.5</td><td>+1.1</td><td>+2.5</td><td>+2.3</td><td>+1.0</td></tr></table>
# 4. Analysis on Learned Masks
Here, we look at the masks generated by ADIOS' occlusion model when trained on ImageNet100-S, STL10, and CLEVR (Johnson et al., 2017)—a dataset of rendered 3D objects such as cubes and spheres. We use $N = 4$ masking slots for CLEVR and include training details in Appendix E. The top row of each image block in Fig. 8 shows the original image. The bottom row displays the generated masks, with each colour representing one masking slot. See Section 4.1 for a detailed analysis. We also quantitatively analyse ADIOS' masks in Section 4.2.
# 4.1. Mask Generation
For realistic, single-object datasets such as STL10 and ImageNet100-S, ADIOS manages to mask out different compositions of the image. In the STL10 dataset, each image clearly shows a 'foreground' object and a 'background' (Fig. 8a). In this setting, ADIOS learns to mask specific object parts like the wings or the tail of a bird ($4^{\text{th}}$ column) and the mouth or the face of a horse ($5^{\text{th}}$ column). In the case of the ImageNet100-S dataset, however, it is often not obvious whether the image features any particular entity. Hence, ADIOS tends to occlude complete entities, like the animal and the ant in the $7^{\text{th}}$ and $8^{\text{th}}$ columns of Fig. 8c.

In CLEVR (simple rendered objects), ADIOS is usually able to put the background into a separate slot; the remaining slots split all present objects into 2-3 groups, with a tendency to apply a single mask to objects of the same colour (see Fig. 8b). While not a focus of our work, and in contrast to prior art (Greff et al., 2019; Engelcke et al., 2020), ADIOS does not produce perfect segmentations. This is, however, an interesting research direction, given that better segmentations could further improve representation learning performance, as we show in Section 4.2.

Summary Qualitative results show that ADIOS can generate semantically meaningful masks. Crucially, the generated masks focus on different levels of detail depending on the dataset. This may explain some of ADIOS' performance gains in the robustness experiments of Section 3.3, as semantic perturbations are baked into the training process.


(a) STL10, $N = 6$

(b) CLEVR, $N = 4$

(c) ImageNet100-S, $N = 4$
Figure 8. Masks generated by ADIOS during training on each dataset. $N$ denotes the number of masks. Top row: original images; bottom row: generated masks, where each colour represents one mask.
# 4.2. Comparing Masking Schemes
The premise of our work is that, for masked image modeling, what is masked matters. In this section, we investigate this further by comparing the representation learning performance of SSL models trained under an ADIOS-like MIM framework, but with non-parametric masks, including ground-truth semantic masks and random masks.

In Fig. 9 we outline the different masking schemes investigated in this section on the CLEVR dataset: a) ground-truth object segmentation masks (provided with the dataset); b) foreground-background masks, where the foreground is the union of all objects; c) ground-truth, box-shaped masks; d) shuffled ground-truth object segmentation masks (i.e. one image uses the ground-truth mask of another); e) random masks occluding $75\%$ of the image, as in MAE (He et al., 2021); f) blockwise masks occluding $30\%$ of the image, as in BEiT (Bao et al., 2021). Note that of these masking schemes, a-c include semantic information, while d-f do not. In addition, we perform similar experiments on ImageNet100-S and STL10; however, since we do not have ground-truth object segmentations for these datasets, we only compare ADIOS against the random masking schemes of MAE and BEiT (i.e. e and f).

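As a reference point, scheme e) can be generated in a few lines. The sketch below (our own illustration, assuming $16 \times 16$ patches) produces a MAE-style pixel-level mask; it reimplements only the masking scheme, not the rest of MAE:

```python
import torch

def random_patch_mask(img_size: int = 224, patch: int = 16, mask_ratio: float = 0.75):
    """Return a pixel-level {0, 1} mask occluding `mask_ratio` of the patches."""
    side = img_size // patch
    n_patches = side * side
    n_masked = int(n_patches * mask_ratio)
    mask = torch.zeros(n_patches)
    mask[torch.randperm(n_patches)[:n_masked]] = 1.0   # 1 = occluded patch
    # broadcast each patch entry to its patch x patch pixel block
    return mask.reshape(side, side).repeat_interleave(patch, 0).repeat_interleave(patch, 1)
```
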
The representation learning performance of the different masking schemes on ImageNet100-S and STL10 is evaluated by top-1 classification accuracy under linear probing. To evaluate CLEVR, we set up a challenging multi-label classification task: we predict 24 binary labels, each indicating the presence of a particular colour and shape combination ($8 \times 3$) in the image. We report the F1 score (the harmonic mean of precision and recall) under different weighted averages over subpopulations (defined by labels): 'micro' evaluates F1 across the entire dataset, 'macro' evaluates an unweighted average of per-label F1 scores, and 'weighted' scales the per-label F1 scores by the number of examples when averaging.

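These three averages correspond directly to the `average` options of scikit-learn's `f1_score`; a toy illustration with random labels, purely to show the metric calls:

```python
import numpy as np
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=(100, 24))   # 24 binary (colour, shape) labels
y_pred = rng.integers(0, 2, size=(100, 24))

for avg in ("micro", "macro", "weighted"):
    print(avg, f1_score(y_true, y_pred, average=avg, zero_division=0))
```
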
Tabs. 6 and 7 contain results averaged across the three SSL methods, each with three random trials (i.e. each entry in Tabs. 6 and 7 is averaged over nine runs). This marginalises out the particular SSL method, and therefore gives a clearer picture of the effect of each masking scheme.


(a) Ground-truth object masks

(b) Foreground-background masks

(c) Ground-truth box-shaped masks

(d) Shuffled ground-truth masks

(e) MAE (He et al., 2021) masks

(f) BEiT (Bao et al., 2021) masks
Figure 9. Masking schemes used to compare against ADIOS.
The results clearly show the advantage of using semantic masks over non-semantic masks for representation learning. In Tab. 6, we show that ADIOS significantly outperforms the random masking schemes used in MAE and BEiT; additionally, the ground-truth object masks (G.T.) in Tab. 7 achieve the best performance on all three metrics, closely followed by ADIOS with comparable F1-macro and F1-weighted scores and a slightly lower F1-micro score. We compare this to the randomly shuffled ground-truth masks (Shuffle), which cover on average the same image fraction as the ground truth but have far less semantic consistency with the image content. The shuffled masks perform much worse on all three metrics, supporting our hypothesis from Section 1 that what is masked is more important than how much is masked.

The remaining masking schemes behave as expected, with the semantically-informed ones outperforming the non-semantic ones. Perhaps surprisingly, random masks, including the ones used in MAE and BEiT, can hurt representation learning under this ADIOS-like MIM framework: in both Tabs. 6 and 7, the performance of the random masking schemes is much lower even than the baseline where no mask is applied. We do note that MAE and BEiT both contain many components beyond their masking schemes that are critical to their successful learning, as reported in the respective works; these include image decoders, a discrete-VAE tokeniser and the ViT encoder. Our evaluation focuses exclusively on the masking schemes, and suggests that semantically meaningful masks lead to better representations, while random masks do not.

Table 6. Top-1 classification accuracy on ImageNet100-S and STL10 under different masking schemes, averaged over three runs each of SimCLR, SimSiam and BYOL. Best result for each dataset in bold.

<table><tr><td rowspan="2">Mask type</td><td rowspan="2">Condition</td><td colspan="2">Dataset</td></tr><tr><td>ImageNet100-S</td><td>STL10</td></tr><tr><td rowspan="2">Random</td><td>e) MAE</td><td>43.7 (±0.43)</td><td>78.4 (±0.91)</td></tr><tr><td>f) BEiT</td><td>46.4 (±0.67)</td><td>80.7 (±1.00)</td></tr><tr><td>Learned</td><td>ADIOS</td><td>59.2 (±2.92)</td><td>86.0 (±0.40)</td></tr><tr><td>None</td><td>-</td><td>57.0 (±2.27)</td><td>84.7 (±0.40)</td></tr></table>

Table 7. Multi-label classification on CLEVR under different masking schemes, averaged over three runs each of SimCLR, SimSiam and BYOL. Best result for each metric in bold.

<table><tr><td rowspan="2">Mask type</td><td rowspan="2">Condition</td><td colspan="3">Metric</td></tr><tr><td>F1-macro ↑</td><td>F1-micro ↑</td><td>F1-weighted ↑</td></tr><tr><td rowspan="3">Semantic</td><td>a) G.T.</td><td>0.373 (±7e-3)</td><td>0.401 (±2e-4)</td><td>0.460 (±1e-2)</td></tr><tr><td>b) FG./BG.</td><td>0.346 (±7e-3)</td><td>0.365 (±2e-4)</td><td>0.402 (±1e-3)</td></tr><tr><td>c) Box</td><td>0.347 (±2e-4)</td><td>0.391 (±3e-5)</td><td>0.457 (±5e-2)</td></tr><tr><td rowspan="3">Random</td><td>d) Shuffle</td><td>0.332 (±6e-3)</td><td>0.360 (±8e-4)</td><td>0.418 (±1e-3)</td></tr><tr><td>e) MAE</td><td>0.309 (±8e-4)</td><td>0.336 (±3e-4)</td><td>0.391 (±9e-4)</td></tr><tr><td>f) BEiT</td><td>0.274 (±1e-3)</td><td>0.307 (±2e-4)</td><td>0.395 (±7e-3)</td></tr><tr><td>Learned</td><td>ADIOS</td><td>0.377 (±2e-3)</td><td>0.385 (±9e-4)</td><td>0.451 (±1e-3)</td></tr><tr><td>None</td><td>-</td><td>0.352 (±9e-3)</td><td>0.359 (±2e-4)</td><td>0.373 (±2e-5)</td></tr></table>
Summary Our experiments here show two things. Firstly, semantically meaningful masks can be used as an effective form of augmentation for SSL models, but the same cannot be said for random masks. Secondly, the representations learned using the masks generated by ADIOS are comparable in quality to those learned using ground-truth object masks.

# 5. Related Work
Augmentation based SSL Recent work has seen rapid development in SSL utilising image augmentation, with the core idea being that the representations of two augmented views of the same image should be similar. This includes a range of work adopting a contrastive framework, where positive sample pairs (i.e. two views of the same image) are attracted and negative pairs (i.e. views of two different images) are repulsed (Chen et al., 2020; Gansbeke et al., 2020; He et al., 2020; Chen et al., 2020; Caron et al., 2020), as well as non-contrastive approaches (Grill et al., 2020; Ermolov et al., 2021; Zbontar et al., 2021; Chen & He, 2021; Bardes et al., 2021) that are able to prevent latent collapse without negative pairs, which are considered computationally expensive.

Learning augmentations Several works have proposed to learn augmentation policies from supervision signals (Cubuk et al., 2019; Hataya et al., 2020) that are more favourable to the task at hand. Tamkin et al. (2021) apply these ideas to SSL and propose to learn perturbations of the input image with a viewmaker model that is trained adversarially against the main encoder network. Different from our work, where masks are generated to occlude different components of the image, their learned perturbation is $l_{p}$-bounded and provides a more "colour-jitter" style augmentation of the input. Koyama et al. (2021) also propose to learn mask-like augmentations by maximising a lower bound on the mutual information between image and representation while regularising the entropy of the augmentation. However, their experiments are limited to an edited MNIST dataset, and they only consider learning augmentations for one SSL algorithm, SimCLR.

Masked image models More recently, models such as MAE (He et al., 2021), BEiT (Bao et al., 2021) and iBOT (Zhou et al., 2021) have been motivated by masked language models like BERT, as is our work, and have achieved highly competitive results in SSL. All three methods make use of vision transformers and propose to "inpaint" images occluded by random masks in one way or another: MAE employs an autoencoder to inpaint heavily masked images, with the decoder discarded after pretraining. BEiT and iBOT, on the other hand, both utilise tokenisers to first transform image patches into visual tokens, with BEiT using an off-the-shelf pretrained tokeniser and iBOT training the tokeniser online. Similar to our work, rather than performing reconstruction in pixel space, they minimise the distance between the visual tokens of the complete image and those of the masked image. Recent work has also applied random masks to modalities beyond vision, including speech and text, achieving strong performance in all of these domains (Baevski et al.). Our work differs significantly from all of the above, since we employ semantically meaningful masks from the occlusion model, which is jointly learned with the encoding model. Moreover, our model does not rely on additional components such as tokenisers or image decoders, and since it does not require splitting the image into patches, it is not limited to vision transformers as the backbone architecture.

# 6. Conclusion
We propose a novel MIM framework named ADIOS, which learns a masking function alongside an image encoder in an adversarial manner. We show, in extensive experiments, that our model consistently outperforms SSL baselines on representation learning tasks, while producing semantically meaningful masks. We also provide a detailed analysis of using different forms of occlusion as augmentation for SSL in general. We find that the best representation learning performance results from using semantically meaningful masks, especially ground-truth ones, and that the masks generated by ADIOS' occlusion model are almost as good.

One caveat of our model is that the memory and computation cost increases linearly with the number of masking slots $N$, since the model requires $N$ forward passes before each gradient update. This could likely be addressed by randomly sampling one mask per forward pass, which we leave to future work. Additionally, we would like to investigate ADIOS' performance on larger datasets such as ImageNet-1K or -22K, and with larger backbones like ViT-L and ViT-H. We believe, however, that ADIOS' strong performance on a variety of tasks under versatile conditions already provides valuable insights for the design of masked image models. Future work on masked image modeling should consider not only objectives and model architectures, but also mask design: this work shows that semantic masks are significantly more helpful than random ones in aiding representation learning.

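To make the cost concrete, the sketch below shows one alternating min-max update with $N$ masking slots (our own simplified illustration: `ssl_loss`, the mask-application convention, and the `sparsity_penalty` placeholder are stand-ins for the exact objective in (6), not the paper's implementation):

```python
import torch

def sparsity_penalty(masks):
    # Placeholder regulariser: push each slot towards covering ~1/N of the image.
    # The paper's actual penalty term may differ.
    n = masks.shape[1]
    return ((masks.mean(dim=(2, 3)) - 1.0 / n) ** 2).mean()

def adios_step(x, encoder, occluder, ssl_loss, opt_enc, opt_occ, lam):
    # Encoder step: minimise the SSL objective summed over the N masked views.
    masks = occluder(x).detach()                  # (B, N, H, W); frozen for this step
    loss_enc = sum(ssl_loss(encoder, x, x * (1.0 - masks[:, i:i + 1]))
                   for i in range(masks.shape[1]))
    opt_enc.zero_grad(); loss_enc.backward(); opt_enc.step()

    # Occluder step: maximise the same objective (note the N separate forward
    # passes, the linear cost discussed above), plus the mask regulariser.
    masks = occluder(x)                           # gradients now flow into the occluder
    loss_occ = -sum(ssl_loss(encoder, x, x * (1.0 - masks[:, i:i + 1]))
                    for i in range(masks.shape[1]))
    loss_occ = loss_occ + lam * sparsity_penalty(masks)
    opt_occ.zero_grad(); loss_occ.backward(); opt_occ.step()
```
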
# 7. Acknowledgements
We would like to thank Sjoerd van Steenkiste, Klaus Greff, Thomas Kipf, Matt Botvinick, Adam Golinski, Hyunjik Kim and Geoffrey E. Hinton for helpful discussions in the early stages of this project. We would also like to thank Benjamin A. Stanley for discussions on the sparsity penalty term.
YS and PHST were supported by the UKRI grant: Turing AI Fellowship EP/W002981/1 and EPSRC/MURI grant: EP/N019474/1. We would also like to thank the Royal Academy of Engineering and FiveAI. YS was additionally supported by Remarkdip through their PhD Scholarship Programme.
# References
Baevski, A., Hsu, W.-N., Xu, Q., Babu, A., Gu, J., and Auli, M. Data2vec: A general framework for self-supervised learning in speech, vision and language. URL https://ai.facebook.com/research/data2vec-a-general-framework-for-self-supervised-learning-in-speech-vision-and-language/. Accessed: 2022-01-27.
Bao, H., Dong, L., and Wei, F. Beit: Bert pre-training of image transformers. ArXiv preprint, abs/2106.08254, 2021. URL https://arxiv.org/abs/2106.08254.
Bardes, A., Ponce, J., and LeCun, Y. Vicreg: Variance-invariance-covariance regularization for self-supervised learning. ArXiv preprint, abs/2105.04906, 2021. URL https://arxiv.org/abs/2105.04906.
Bromley, J., Bentz, J. W., Bottou, L., Guyon, I., Lecun, Y., Moore, C., Säckinger, E., and Shah, R. Signature verification using a "siamese" time delay neural network. International Journal of Pattern Recognition and Artificial Intelligence, 7(4):669-688, 1993.
Caron, M., Misra, I., Mairal, J., Goyal, P., Bojanowski, P., and Joulin, A. Unsupervised learning of visual features by contrasting cluster assignments. In Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M., and Lin, H. (eds.), Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual, 2020. URL https://proceedings.neurips.cc/paper/2020/hash/70feb62b69f16e0238f741fab228fec2-Abstract.html.
Chen, T., Kornblith, S., Norouzi, M., and Hinton, G. E. A simple framework for contrastive learning of visual representations. In Proceedings of the 37th International Conference on Machine Learning, ICML 2020, 13-18 July 2020, Virtual Event, volume 119 of Proceedings of Machine Learning Research, pp. 1597-1607. PMLR, 2020. URL http://proceedings.mlr.press/v119/chen20j.html.
Chen, X. and He, K. Exploring simple siamese representation learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 15750-15758, 2021.
Chen, X., Fan, H., Girshick, R. B., and He, K. Improved baselines with momentum contrastive learning. ArXiv preprint, abs/2003.04297, 2020. URL https://arxiv.org/abs/2003.04297.
Cubuk, E. D., Zoph, B., Mané, D., Vasudevan, V., and Le, Q. V. Autoaugment: Learning augmentation strategies from data. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2019, Long Beach, CA, USA, June 16-20, 2019, pp. 113-123. Computer Vision Foundation / IEEE, 2019. doi: 10.1109/CVPR.2019.00020. URL http://openaccess.thecvf.com/content_CVPR_2019/html/Cubuk_AutoAugment_Learning_Augmentation_Strategies_From_Data_CVPR_2019_paper.html.
da Costa, V. G. T., Fini, E., Nabi, M., Sebe, N., and Ricci, E. Solo-learn: A library of self-supervised methods for visual representation learning, 2021. URL https://github.com/vturrisi/solo-learn.
Devlin, J., Chang, M.-W., Lee, K., and Toutanova, K. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171-4186, Minneapolis, Minnesota, 2019. Association for Computational Linguistics. doi: 10.18653/v1/N19-1423. URL https://aclanthology.org/N19-1423.
Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., and Houlsby, N. An image is worth 16x16 words: Transformers for image recognition at scale. In 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net, 2021. URL https://openreview.net/forum?id=YicbFdNTTy.
Engelcke, M., Kosiorek, A. R., Jones, O. P., and Posner, I. GENESIS: generative scene inference and sampling with object-centric latent representations. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net, 2020. URL https://openreview.net/forum?id=BkxfaTVFwH.
Engelcke, M., Jones, O. P., and Posner, I. Genesis-v2: Inferring unordered object representations without iterative refinement. ArXiv preprint, abs/2104.09958, 2021. URL https://arxiv.org/abs/2104.09958.
Ermolov, A., Siarohin, A., Sangineto, E., and Sebe, N. Whitening for self-supervised representation learning. In Meila, M. and Zhang, T. (eds.), Proceedings of the 38th International Conference on Machine Learning, ICML 2021, 18-24 July 2021, Virtual Event, volume 139 of Proceedings of Machine Learning Research, pp. 3015-3024. PMLR, 2021. URL http://proceedings.mlr.press/v139/ermolov21a.html.
Gansbeke, W. V., Vandenhende, S., Georgoulis, S., Proesmans, M., and Gool, L. V. Scan: Learning to classify images without labels. In ECCV (10), pp. 268-285, 2020.
Greff, K., Kaufman, R. L., Kabra, R., Watters, N., Burgess, C., Zoran, D., Matthey, L., Botvinick, M., and Lerchner, A. Multi-object representation learning with iterative variational inference. In Chaudhuri, K. and Salakhutdinov, R. (eds.), Proceedings of the 36th International Conference on Machine Learning, ICML 2019, 9-15 June 2019, Long Beach, California, USA, volume 97 of Proceedings of Machine Learning Research, pp. 2424-2433. PMLR, 2019. URL http://proceedings.mlr.press/v97/greff19a.html.
Grill, J., Strub, F., Altché, F., Tallec, C., Richemond, P. H., Buchatskaya, E., Doersch, C., Pires, B. Á., Guo, Z., Azar, M. G., Piot, B., Kavukcuoglu, K., Munos, R., and Valko, M. Bootstrap your own latent - A new approach to self-supervised learning. In Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M., and Lin, H. (eds.), Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual, 2020. URL https://proceedings.neurips.cc/paper/2020/hash/f3ada80d5c4ee70142b17b8192b2958e-Abstract.html.
Hataya, R., Zdenek, J., Yoshizoe, K., and Nakayama, H. Faster autoaugment: Learning augmentation strategies using backpropagation. In 16th European Conference on Computer Vision, ECCV 2020, pp. 1-16, 2020.
He, K., Zhang, X., Ren, S., and Sun, J. Deep residual learning for image recognition. In 2016 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016, Las Vegas, NV, USA, June 27-30, 2016, pp. 770-778. IEEE Computer Society, 2016. doi: 10.1109/CVPR.2016.90. URL https://doi.org/10.1109/CVPR.2016.90.
He, K., Fan, H., Wu, Y., Xie, S., and Girshick, R. B. Momentum contrast for unsupervised visual representation learning. In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2020, Seattle, WA, USA, June 13-19, 2020, pp. 9726-9735. IEEE, 2020. doi: 10.1109/CVPR42600.2020.00975. URL https://doi.org/10.1109/CVPR42600.2020.00975.
He, K., Chen, X., Xie, S., Li, Y., Dollár, P., and Girshick, R. Masked autoencoders are scalable vision learners. ArXiv preprint, abs/2111.06377, 2021.
Horn, G. V., Aodha, O. M., Song, Y., Cui, Y., Sun, C., Shepard, A., Adam, H., Perona, P., and Belongie, S. J. The inaturalist species classification and detection dataset. In 2018 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2018, Salt Lake City, UT, USA, June 18-22, 2018, pp. 8769-8778. IEEE Computer Society, 2018. doi: 10.1109/CVPR.2018.00914. URL http://openaccess.thecvf.com/content_cvpr_2018/html/Van_Horn_The_INaturalist_Species_CVPR_2018_paper.html.
Johnson, J., Hariharan, B., van der Maaten, L., Fei-Fei, L., Zitnick, C. L., and Girshick, R. B. CLEVR: A diagnostic dataset for compositional language and elementary visual reasoning. In 2017 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, Honolulu, HI, USA, July 21-26, 2017, pp. 1988-1997. IEEE Computer Society, 2017. doi: 10.1109/CVPR.2017.215. URL https://doi.org/10.1109/CVPR.2017.215.
Koyama, M., Minami, K., Miyato, T., and Gal, Y. Contrastive representation learning with trainable augmentation channel. ArXiv preprint, abs/2111.07679, 2021. URL https://arxiv.org/abs/2111.07679.
Krahenbuhl, P. and Koltun, V. Efficient inference in fully connected crfs with gaussian edge potentials. In Shawe-Taylor, J., Zemel, R. S., Bartlett, P. L., Pereira, F. C. N., and Weinberger, K. Q. (eds.), Advances in Neural Information Processing Systems 24: 25th Annual Conference on Neural Information Processing Systems 2011. Proceedings of a meeting held 12-14 December 2011, Granada, Spain, pp. 109-117, 2011. URL https://proceedings.neurips.cc/paper/2011/hash/beda24c1e1b46055cff2c39c98fd6fc1-Abstract.html.
Krizhevsky, A., Hinton, G., et al. Learning multiple layers of features from tiny images. 2009.
Liu, Z., Mao, H., Wu, C.-Y., Feichtenhofer, C., Darrell, T., and Xie, S. A convnet for the 2020s. ArXiv preprint, abs/2201.03545, 2022.
Nilsback, M.-E. and Zisserman, A. Automated flower classification over a large number of classes. In 2008 Sixth Indian Conference on Computer Vision, Graphics & Image Processing, pp. 722-729. IEEE, 2008.
Pathak, D., Krahenbuhl, P., Donahue, J., Darrell, T., and Efros, A. A. Context encoders: Feature learning by inpainting. In 2016 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016, Las Vegas, NV, USA, June 27-30, 2016, pp. 2536-2544. IEEE Computer Society, 2016. doi: 10.1109/CVPR.2016.278. URL https://doi.org/10.1109/CVPR.2016.278.
Ramesh, A., Pavlov, M., Goh, G., Gray, S., Voss, C., Radford, A., Chen, M., and Sutskever, I. Zero-shot text-to-image generation. In Meila, M. and Zhang, T. (eds.), Proceedings of the 38th International Conference on Machine Learning, ICML 2021, 18-24 July 2021, Virtual Event, volume 139 of Proceedings of Machine Learning Research, pp. 8821-8831. PMLR, 2021. URL http://proceedings.mlr.press/v139/ramesh21a.html.
Ronneberger, O., Fischer, P., and Brox, T. U-net: Convolutional networks for biomedical image segmentation. In International Conference on Medical image computing and computer-assisted intervention, pp. 234-241. Springer, 2015.
Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., Berg, A. C., and Fei-Fei, L. Imagenet large scale visual recognition challenge. International Journal of Computer Vision, 115(3):211-252, 2015.
Tamkin, A., Wu, M., and Goodman, N. D. Viewmaker networks: Learning views for unsupervised representation learning. In 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net, 2021. URL https://openreview.net/forum?id=enoVQWLsfyL.
Tian, Y., Krishnan, D., and Isola, P. Contrastive multiview coding. In Computer Vision-ECCV 2020: 16th European Conference, Glasgow, UK, August 23-28, 2020, Proceedings, Part XI 16, pp. 776-794. Springer, 2020.
Touvron, H., Cord, M., El-Nouby, A., Bojanowski, P., Joulin, A., Synnaeve, G., and Jégou, H. Augmenting convolutional networks with attention-based aggregation. ArXiv preprint, abs/2112.13692, 2021. URL https://arxiv.org/abs/2112.13692.
Xiao, K. Y., Engstrom, L., Ilyas, A., and Madry, A. Noise or signal: The role of image backgrounds in object recognition. In 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net, 2021. URL https://openreview.net/forum?id=gl3D-xY7wLq.
Zbontar, J., Jing, L., Misra, I., LeCun, Y., and Deny, S. Barlow twins: Self-supervised learning via redundancy reduction. In Meila, M. and Zhang, T. (eds.), Proceedings of the 38th International Conference on Machine Learning, ICML 2021, 18-24 July 2021, Virtual Event, volume 139 of Proceedings of Machine Learning Research, pp. 12310-12320. PMLR, 2021. URL http://proceedings.mlr.press/v139/zbontar21a.html.
Zhou, J., Wei, C., Wang, H., Shen, W., Xie, C., Yuille, A., and Kong, T. ibot: Image bert pre-training with online tokenizer. ArXiv preprint, abs/2111.07832, 2021. URL https://arxiv.org/abs/2111.07832.
# A. ADIOS Objectives

In this section we detail the objectives used to optimise ADIOS+SimCLR, ADIOS+SimSiam and ADIOS+BYOL.

# A.1. SimCLR
We show a simplified version of the SimCLR architecture in Fig. 4. In reality, the encoder $\mathcal{I}$ of SimCLR further factorises into two networks: a base encoder $f(\cdot)$, which extracts the representation $\pmb{h}$ used for downstream tasks, followed by a projection head $g(\cdot)$, which maps $\pmb{h}$ to the final embedding used to compute the objective in (4). We visualise this in Fig. 10. It is helpful to establish SimCLR's architecture, as it lays the foundation for both SimSiam and BYOL.

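In code, this factorisation is simply a composition of two modules; the following sketch uses hypothetical layer sizes (the head widths here are illustrative, not the values from the experimental setup):

```python
import torch.nn as nn

class SimCLREncoder(nn.Module):
    """I(x) = g(f(x)): base encoder f gives h (kept for downstream use), projector g gives z."""
    def __init__(self, backbone: nn.Module, feat_dim: int = 512, proj_dim: int = 128):
        super().__init__()
        self.f = backbone
        self.g = nn.Sequential(nn.Linear(feat_dim, feat_dim), nn.ReLU(),
                               nn.Linear(feat_dim, proj_dim))

    def forward(self, x):
        h = self.f(x)          # representation used for downstream tasks
        z = self.g(h)          # embedding used in the contrastive objective
        return h, z
```
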
# A.2. SimSiam
Chen & He (2021) propose SimSiam, an SSL method that can learn meaningful representations without negative examples. The forward pass of SimSiam also contains a base encoder and a projection head; however, different from SimCLR, for one of the augmentation streams the projection head is removed and a stop-gradient operation is applied to the base encoder (see Fig. 11). The authors find empirically that these two alterations are essential for preventing latent collapse in the absence of negative examples. The final objective of SimSiam is written as

$$
\mathcal{L}_{\text{SimSiam}}(\boldsymbol{x}; \mathcal{I}) = \frac{1}{2}\left(\mathcal{D}\left(\boldsymbol{z}^{A}, \boldsymbol{h}^{B}\right) + \mathcal{D}\left(\boldsymbol{z}^{B}, \boldsymbol{h}^{A}\right)\right), \tag{10}
$$

where $\mathcal{D}$ denotes the negative cosine similarity, $\mathcal{I} = g\circ f$ and $z = g(f(x))$ while $h = f(x)$ . Note that the loss is the average of two distances due to the asymmetrical model.
Following the same intuition as in Section 2, to adapt the objective to the masks learned by the occlusion model under the ADIOS framework, we apply the occlusion model to one of the views.


Figure 11. Left: SimSiam. Right: SimSiam + ADIOS.
We can therefore arrive at our final objective,

$$
\mathcal{L}_{\text{SimSiam}}^{\text{ADIOS}}(\boldsymbol{x}; \mathcal{I}, \mathcal{M}) = \frac{1}{2}\left(\mathcal{D}\left(\boldsymbol{z}^{A,m}, \boldsymbol{h}^{B}\right) + \mathcal{D}\left(\boldsymbol{z}^{B,m}, \boldsymbol{h}^{A}\right)\right), \tag{11}
$$

where $z^{*,m} = g(f(\pmb{x}^{*,m}))$ .
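A direct transcription of (11) as a sketch (variable names are ours; the stop-gradient is placed on the unmasked branch, following the description above):

```python
import torch.nn.functional as F

def neg_cosine(z, h):
    """D(z, h): negative cosine similarity, with the stop-gradient on the h branch."""
    return -F.cosine_similarity(z, h.detach(), dim=-1).mean()

def simsiam_adios_loss(f, g, xA_m, xB_m, xA, xB):
    # Masked views pass through the full encoder g(f(.)),
    # while the opposite (unmasked) views pass through the base encoder f only.
    zA_m, zB_m = g(f(xA_m)), g(f(xB_m))
    hA, hB = f(xA), f(xB)
    return 0.5 * (neg_cosine(zA_m, hB) + neg_cosine(zB_m, hA))
```
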

# A.3. BYOL

BYOL (Grill et al., 2020) is another SSL method that avoids the need for negative examples by performing an iterative online update. Similar to SimSiam, BYOL also adopts an asymmetrical forward pass; however, different from other approaches, the networks for the two different augmentations do not share weights. See Fig. 12 for a visualisation.




Figure 12. Left: BYOL. Right: BYOL + ADIOS.

For the sake of clarity, we denote the parametrisations of the two networks as $\theta$ and $\phi$. The $\theta$ network is appended with an additional "predictor" $q_{\theta}$, and is updated via gradient descent using the following objective, which is evaluated using the output of the $\theta$ network, $\boldsymbol{y}_{\theta}$, and the output of the $\phi$ network, $\boldsymbol{z}_{\phi}$:

$$
\mathcal{L}_{\mathrm{BYOL}}(\boldsymbol{x}; \theta) = \frac{1}{2}\left(\mathcal{D}\left(\boldsymbol{y}_{\theta}^{A}, \boldsymbol{z}_{\phi}^{B}\right) + \mathcal{D}\left(\boldsymbol{y}_{\theta}^{B}, \boldsymbol{z}_{\phi}^{A}\right)\right), \tag{12}
$$

where $\mathcal{D}$ denotes the mean squared error. Again, the objective is the average of the two terms due to the asymmetrical architecture.

On the other hand, $\phi$ is optimised using the following update rule

$$
\phi \leftarrow \tau \phi + (1 - \tau) \theta, \tag{13}
$$

where $\tau \in [0,1)$ controls the smoothness of the update.
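In code, (13) is the usual exponential moving average over parameters; a minimal sketch (the value $\tau = 0.996$ is BYOL's default and an assumption here):

```python
import torch

@torch.no_grad()
def ema_update(phi_net, theta_net, tau=0.996):
    """Eq. (13): the phi network tracks the theta network via an exponential moving average."""
    for p_phi, p_theta in zip(phi_net.parameters(), theta_net.parameters()):
        p_phi.mul_(tau).add_((1.0 - tau) * p_theta)
```
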
To develop the ADIOS objective for BYOL, let us denote $\mathcal{I}$ as the composition of the two networks $\{\mathcal{I}_{\theta},\mathcal{I}_{\phi}\}$ . We can then write

$$
\mathcal{L}_{\text{BYOL}}^{\text{ADIOS}}(\boldsymbol{x}; \mathcal{I}_{\theta}, \mathcal{M}) = \frac{1}{2}\left(\mathcal{D}\left(\boldsymbol{y}_{\theta}^{A,m}, \boldsymbol{z}_{\phi}^{B}\right) + \mathcal{D}\left(\boldsymbol{y}_{\theta}^{B,m}, \boldsymbol{z}_{\phi}^{A}\right)\right), \tag{14}
$$

where $\pmb{y}_{\theta}^{*,m} = q_{\theta}(g_{\theta}(f_{\theta}(x^{*,m})))$ . Both $\mathcal{I}_{\theta}$ and $\mathcal{M}$ are optimised through the min-max objective in (6), whereas $\mathcal{I}_{\phi}$ is updated by (13).
<table><tr><td>Down</td></tr><tr><td>3x3 conv. 8 stride 1 pad 1 & GroupNorm & ReLU</td></tr><tr><td>3x3 conv. 8 stride 1 pad 1 & GroupNorm & ReLU</td></tr><tr><td>3x3 conv. 16 stride 1 pad 1 & GroupNorm & ReLU</td></tr><tr><td>3x3 conv. 16 stride 1 pad 1 & GroupNorm & ReLU</td></tr><tr><td>3x3 conv. 16 stride 1 pad 1 & GroupNorm & ReLU</td></tr><tr><td>MLP</td></tr><tr><td>F.C. 128 & ReLU</td></tr><tr><td>F.C. 128 & ReLU</td></tr><tr><td>F.C. 256 & ReLU</td></tr><tr><td>Up</td></tr><tr><td>3x3 conv. 16 stride 1 pad 1 & GroupNorm & ReLU</td></tr><tr><td>3x3 conv. 16 stride 1 pad 1 & GroupNorm & ReLU</td></tr><tr><td>3x3 conv. 8 stride 1 pad 1 & GroupNorm & ReLU</td></tr><tr><td>3x3 conv. 8 stride 1 pad 1 & GroupNorm & ReLU</td></tr><tr><td>3x3 conv. 8 stride 1 pad 1 & GroupNorm & ReLU</td></tr><tr><td>Occlusion Head</td></tr><tr><td>1x1 conv. N stride 1 pad 1 & SoftMax</td></tr></table>

Table 8. U-Net Architecture.

Table 9. Hyperparameters for pretraining, used for all models.

<table><tr><td>Name</td><td>Value</td></tr><tr><td>Optimiser</td><td>SGD</td></tr><tr><td>Momentum</td><td>0.9</td></tr><tr><td>Scheduler</td><td>warmup cosine</td></tr><tr><td>Epochs</td><td>500</td></tr><tr><td>Batch size</td><td>128</td></tr></table>
# B. Backbone of Occlusion Model

We use U-Net (Ronneberger et al., 2015), which is commonly used for semantic segmentation, as the backbone of the occlusion model. The model consists of a downsampling network, an MLP, and an upsampling network. We further apply an occlusion head layer with a $1 \times 1$ kernel to map the output of the U-Net to $N$ masks. Refer to Tab. 8 for the architecture used in our experiments.

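The occlusion head itself is tiny; the following sketch (our naming) shows the final layer of Tab. 8, where the softmax over the slot dimension makes the $N$ masks sum to one at every pixel:

```python
import torch.nn as nn

def occlusion_head(in_channels: int, n_masks: int) -> nn.Module:
    # A 1x1 convolution maps U-Net features to N mask logits; softmax normalises
    # across the N slots so each pixel is softly assigned to the masks.
    return nn.Sequential(nn.Conv2d(in_channels, n_masks, kernel_size=1),
                         nn.Softmax(dim=1))
```
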
# C. Setups of Classification Tasks
We develop our model using solo-learn (da Costa et al., 2021), a library of state-of-the-art self-supervised learning methods. As backbones we use ResNet-18 (He et al., 2016) and ViT-Tiny (Dosovitskiy et al., 2021) with a 16x16 patch size. Hyperparameters including the optimiser, momentum, scheduler, number of epochs and batch size are shared across all models, as shown in Tab. 9. We perform a hyperparameter search over the encoder learning rate for all models, and list the optimal values used to generate the reported results in Tab. 10; for the ADIOS models we also search over the learning rate of the occlusion model, the penalty scaling $\lambda$ and the number of masks $N$. Refer to Tab. 11 for the values of these parameters.

Table 10. Learning rates for SimCLR, SimSiam and BYOL.
<table><tr><td>Architecture</td><td>Dataset</td><td>SimCLR</td><td>SimSiam</td><td>BYOL</td></tr><tr><td>ResNet18</td><td>ImageNet100-S</td><td>0.15</td><td>0.25</td><td>0.25</td></tr><tr><td>ResNet18</td><td>STL10</td><td>0.15</td><td>0.23</td><td>0.31</td></tr><tr><td>ViT-Tiny</td><td>ImageNet100-S</td><td>0.15</td><td>0.25</td><td>0.25</td></tr><tr><td>ViT-Tiny</td><td>STL10</td><td>0.15</td><td>0.11</td><td>0.23</td></tr></table>
Table 11. ADIOS hyperparameters for classification tasks.
<table><tr><td></td><td>SimCLR+ADIOS</td><td>SimSiam+ADIOS</td><td>BYOL+ADIOS</td></tr><tr><td colspan="4">Dataset: ImageNet100-S, backbone: ResNet18</td></tr><tr><td>Enc. lr</td><td>0.13</td><td>0.85</td><td>0.24</td></tr><tr><td>Occ. lr</td><td>0.02</td><td>0.08</td><td>0.07</td></tr><tr><td>λ</td><td>0.57</td><td>0.29</td><td>0.40</td></tr><tr><td>N</td><td>4</td><td>4</td><td>4</td></tr><tr><td colspan="4">Dataset: ImageNet100-S, backbone: ViT-Tiny</td></tr><tr><td>Enc. lr</td><td>0.12</td><td>0.50</td><td>0.21</td></tr><tr><td>Occ. lr</td><td>0.03</td><td>0.07</td><td>0.33</td></tr><tr><td>λ</td><td>0.89</td><td>0.72</td><td>0.95</td></tr><tr><td>N</td><td>4</td><td>4</td><td>4</td></tr><tr><td colspan="4">Dataset: STL10, backbone: ResNet18</td></tr><tr><td>Enc. lr</td><td>0.21</td><td>0.52</td><td>0.49</td></tr><tr><td>Occ. lr</td><td>0.33</td><td>0.29</td><td>0.06</td></tr><tr><td>λ</td><td>0.29</td><td>0.79</td><td>0.72</td></tr><tr><td>N</td><td>6</td><td>6</td><td>6</td></tr><tr><td colspan="4">Dataset: STL10, backbone: ViT-Tiny</td></tr><tr><td>Enc. lr</td><td>0.14</td><td>0.56</td><td>0.29</td></tr><tr><td>Occ. lr</td><td>0.09</td><td>0.09</td><td>0.60</td></tr><tr><td>λ</td><td>0.50</td><td>0.12</td><td>0.18</td></tr><tr><td>N</td><td>6</td><td>6</td><td>6</td></tr></table>
# D. Setups of Transfer Learning

We fine-tune all models using SGD with a momentum of 0.9 and cosine learning rate decay. Following the protocol in Dosovitskiy et al. (2021), we use a batch size of 512 and no weight decay. We also run a small grid search over the learning rate with values in $\{0.001, 0.003, 0.01, 0.03\}$.

# E. Setups of CLEVR

CLEVR (Johnson et al., 2017) is a dataset of rendered 3D objects. The dataset contains detailed attributes for each object, including shape, colour, position, rotation, texture as well as masks, and is commonly used in visual question answering and multi-object representation learning. Utilising the rich annotations of CLEVR, we construct a challenging multi-label classification task, which we use to evaluate the quality of representations learned under the different masking schemes.

For hyperparameters including the optimiser, momentum, scheduler, epochs and batch size, we follow the same setup as in Tab. 9. We perform a hyperparameter search over the learning rates of the encoder and the occlusion model, as well as over the penalty scaling $\lambda$ and the number of masks $N$; the chosen values are listed in Tab. 12.

Table 12. Hyperparameters used for CLEVR.
<table><tr><td></td><td>SimCLR</td><td>+ADIOS</td><td>SimSiam</td><td>+ADIOS</td><td>BYOL</td><td>+ADIOS</td></tr><tr><td>Enc. lr</td><td>0.2</td><td>0.3</td><td>0.7</td><td>0.5</td><td>0.5</td><td>0.4</td></tr><tr><td>Occ. lr</td><td>-</td><td>0.1</td><td>-</td><td>0.1</td><td>-</td><td>0.3</td></tr><tr><td>λ</td><td>-</td><td>0.2</td><td>-</td><td>0.8</td><td>-</td><td>0.9</td></tr><tr><td>N</td><td>-</td><td>4</td><td>-</td><td>4</td><td>-</td><td>4</td></tr></table>

# F. CIFAR ResNet

As mentioned above, the authors of ResNet (He et al., 2016) propose a slightly different architecture for CIFAR due to its small image size. The difference between the CIFAR-ResNet and the standard ResNet lies in the first convolutional block (before layer1), which we compare side by side in Tab. 13 and Tab. 14.

<table><tr><td>Table 13. Standard ResNet.</td></tr><tr><td>7x7 conv. 64 stride 2 pad 3</td></tr><tr><td>BatchNorm</td></tr><tr><td>ReLU</td></tr><tr><td>3x3 MaxPool stride 2 pad 1</td></tr></table>
<table><tr><td>Table 14. CIFAR ResNet.</td></tr><tr><td>3x3 conv. 64 stride 1 pad 1</td></tr><tr><td>BatchNorm</td></tr><tr><td>ReLU</td></tr><tr><td>-</td></tr></table>

As we can see, the kernel size of the first convolutional layer is smaller and the MaxPool operation is removed, to suit the small image size. We adopt this CIFAR-optimised ResNet for fine-tuning, but stick to the original architecture for linear evaluation in order to reuse the weights learned during pre-training, which results in the considerable underperformance on this metric for all six models.
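For reference, this stem swap is a small modification to a standard implementation; a sketch using torchvision (which the paper does not necessarily use):

```python
import torch.nn as nn
import torchvision

def cifar_resnet18(num_classes: int = 10) -> nn.Module:
    """Apply the Tab. 14 stem change to a standard torchvision ResNet-18."""
    model = torchvision.models.resnet18(num_classes=num_classes)
    model.conv1 = nn.Conv2d(3, 64, kernel_size=3, stride=1, padding=1, bias=False)
    model.maxpool = nn.Identity()  # drop the 3x3 max-pool for 32x32 inputs
    return model
```
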
|
2201.13xxx/2201.13100/images.zip
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:90bac4cead4b840726f955b6777afe9ccd30f593c9a359dbea57dbba4ad42af9
|
| 3 |
+
size 981661
|
2201.13xxx/2201.13100/layout.json
ADDED
The diff for this file is too large to render.
See raw diff
2201.13xxx/2201.13117/ef37afad-eb01-4970-bb57-94e62038b1d4_content_list.json
ADDED
The diff for this file is too large to render.
See raw diff
2201.13xxx/2201.13117/ef37afad-eb01-4970-bb57-94e62038b1d4_model.json
ADDED
The diff for this file is too large to render.
See raw diff
2201.13xxx/2201.13117/ef37afad-eb01-4970-bb57-94e62038b1d4_origin.pdf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:b2a2cc2d4130c2a320bfa8a2de3ef6aa4739bbe2f693ac8db5327dd420670afd
size 775820
2201.13xxx/2201.13117/full.md
ADDED
@@ -0,0 +1,814 @@

Alexander G. D. G. Matthews<sup>1</sup> Michael Arbel<sup>2</sup> Danilo J. Rezende<sup>1</sup> Arnaud Doucet<sup>1</sup>
# Abstract

We propose Continual Repeated Annealed Flow Transport Monte Carlo (CRAFT), a method that combines a sequential Monte Carlo (SMC) sampler (itself a generalization of Annealed Importance Sampling) with variational inference using normalizing flows. The normalizing flows are directly trained to transport between annealing temperatures using a KL divergence for each transition. This optimization objective is itself estimated using the normalizing flow/SMC approximation. We show conceptually and using multiple empirical examples that CRAFT improves on Annealed Flow Transport Monte Carlo (Arbel et al., 2021), on which it builds, and also on Markov chain Monte Carlo (MCMC) based Stochastic Normalizing Flows (Wu et al., 2020). By incorporating CRAFT within particle MCMC, we show that such learnt samplers can achieve impressively accurate results on a challenging lattice field theory example.

# 1. Introduction

There are few algorithmic problems richer or more fundamental than the task of drawing samples from an unnormalized distribution and approximating its normalizing constant. In this paper we combine two important methods for this task, specifically sequential Monte Carlo (SMC) (Del Moral et al., 2006) and variational inference with normalizing flows (NFs) (Rezende and Mohamed, 2015).

The first of these components, SMC samplers, is an importance sampling based algorithm. SMC samplers are a principled way to combine annealing, resampling and Markov chain Monte Carlo (MCMC), and as such enjoy continuing popularity (Dai et al., 2020). One potential disadvantage of SMC samplers is that each successive temperature step involves importance sampling, albeit between adjacent distributions. For high-dimensional target distributions this can either lead to estimators with high error or require many temperatures. Another potential disadvantage is that, for finite particle numbers, SMC estimates of expectations w.r.t. the target distribution are biased. This can be mitigated by using the SMC sampler inside a particle MCMC outer loop (Andrieu et al., 2010), at the cost of repeated SMC sampler calls. In this particle MCMC context it is particularly desirable for the base SMC sampler to be fast and to output low-variance estimates of the normalizing constant.

The other component of our method is variational inference with NFs. Flow methods learn a differentiable invertible transformation (a diffeomorphism), typically with a tractable Jacobian (Papamakarios et al., 2021). The learnt mappings are both flexible and fast to evaluate, but there are still some challenges around using flows. The tractability of training with the reverse Kullback-Leibler divergence comes at the cost of its well-known mode-seeking behaviour. Further, placing the full burden of modelling on a parameterized flow can lead to high memory usage, in contrast to non-parametric sampling algorithms. Finally, as diffeomorphisms, flows preserve topological properties of their input. This property can represent a challenge whenever there is a topological mismatch between the typically simple base distribution and the complex target (Cornish et al., 2020).

Methods combining annealed samplers with normalizing flows have the potential to fix the limitations of each component part. The use of flows can reduce the variance from importance sampling. The MCMC steps can reduce the topological and representational burden on the flows. Used correctly, annealing can reduce the mode-seeking behaviour of the variational objective. Resampling can be used to focus computation on promising samples. The idea of combining fixed transformations with annealing goes back at least as far as Vaikuntanathan and Jarzynski (2011). It is the training of the flows that still presents a challenge, and the reason that such combined approaches have not met their full potential. Two recent papers target this problem (Wu et al., 2020; Arbel et al., 2021), but as we shall relate, neither is entirely satisfactory or as well suited to the task as the algorithm we propose.

CRAFT is a method for estimating an SMC sampler augmented with interleaved normalizing flows. Unlike many other approaches in this area that incorporate sampling ideas, it is able to cope with full MCMC steps with Metropolis accept/reject corrections and with SMC resampling steps. The CRAFT objective is not a standard variational bound; rather, it uses a KL divergence for each transition between temperatures. We find that, as well as outperforming other methods for estimating the flows in practice (Sections 4.1 and 4.2), our learnt sampler is useful as a fast inner loop for a particle MCMC sampler (Section 4.3).

# 2. Method

In this section we describe and motivate both the sampling method and how to learn the required flows. Section 2.1 describes the method for fixed flows, which is equivalent to an SMC sampler with added fixed normalizing flows. In Section 2.2, we discuss conventional evidence lower bound (ELBO) methods for learning the flows and their limitations. Finally, we present the CRAFT training method in Section 2.3, and also explain how we improve over the AFT method on which CRAFT builds by solving what we call the sample replenishment problem.

# 2.1. Sampler for fixed flows

We start by describing the sampler for fixed normalizing flows. In this case, it corresponds to a generalization of the standard SMC sampler which adds deterministic normalizing flows to the usual steps in SMC. This generalization reduces to a standard SMC sampler when the normalizing flows are the identity. Arbel et al. (2021) give a detailed history. The algorithm is described in Algorithm 3, which sequentially calls Algorithm 1.

We now describe the steps and derivation in more detail. We consider a sequence of distributions $(\pi_k(x))_{k=0}^K$ on $\mathbb{R}^M$ where $\pi_k(x) = \frac{\gamma_k(x)}{Z_k}$ and $\gamma_k(x)$ can be evaluated pointwise. The final distribution $\pi_K(x)$ is the distribution with unknown normalizing constant $Z_K$ that we wish to approximate. For the initial distribution $\pi_0(x)$ we assume that we can draw exact samples and that the normalizing constant is $Z_0 = 1$ without loss of generality. The intermediate distributions enable us to transition smoothly (or in physics parlance 'anneal') from the tractable $\pi_0(x)$ to our goal $\pi_K(x)$. While they will sometimes be of interest themselves, they will often only be used to help us construct approximations. The method returns $N$ weighted particles $(X_K^i, W_K^i)_{i=1}^N$ which can be used to provide an unbiased estimate of the normalizing constant $Z_K$ and consistent estimates of expectations under the target $\pi_K$.

To build up intuition, consider a forward sampling process $\bar{\eta}$ producing a sequence $(X_k)_{k=0}^K$ defined on $\mathbb{R}^{M\times(K+1)}$ such that the final sample $X_K$ approximates $\pi_K$. The process starts with an initial sample $X_0$ drawn from some distribution $\pi_0$ that is successively transformed using an interleaved sequence of $K$ NFs $(T_k)_{k=1}^K$ and Markov transition kernels $(\mathcal{K}_k)_{k=1}^K$ with invariant distributions $(\pi_k)_{k=1}^K$:

$$
X_0 \sim \pi_0 \;\longrightarrow\; \boxed{Y_1 = T_1(X_0)} \;\longrightarrow\; X_1 \sim \mathcal{K}_1(Y_1, \cdot) \;\longrightarrow\; \dots \;\longrightarrow\; X_{K-1} \;\longrightarrow\; \boxed{Y_K = T_K(X_{K-1})} \;\longrightarrow\; X_K \sim \mathcal{K}_K(Y_K, \cdot)
$$

Intuitively, the purpose of each flow $T_k$ is to 'transport' samples from a density $\pi_{k-1}$ corresponding to an annealing temperature $k-1$ towards the next density $\pi_k$ at temperature $k$. Successive densities are usually similar, making the flow easier to learn compared to a flow directly transporting $\pi_0$ towards the target $\pi_K$. Since it is generally hard to find a flow $T_k$ that perfectly transports between successive densities, the sampler employs two correction mechanisms: (1) the Markov transition kernel $\mathcal{K}_k$ with invariant density $\pi_k$ further diffuses the samples towards $\pi_k$ and (2) importance sampling re-weights the samples to correct for any mismatch between $\pi_k$ and the density $T_k^{\#}\pi_{k-1}$ obtained by transporting $\pi_{k-1}$ using the flow $T_k$.

Importance Sampling. Ultimately, we are interested in re-weighting the final sample $X_K$ to approximate the target $\pi_K$. This could in theory be achieved by evaluating the target to marginal ratio $\pi_K / \eta_K$, where $\eta_K$ is obtained by integrating the proposal distribution $\bar{\eta}$ over all previous variables $(X_k)_{k=0}^{K-1}$. To avoid computing such intractable integrals, we follow the standard approach in SMC and Annealed Importance Sampling (AIS) (Neal, 2001) of computing importance weights between the whole forward process $\bar{\eta}$ and an augmented target $\bar{\pi}$ admitting $\pi_K$ as marginal at time $K$. While there are multiple choices for the augmented target $\bar{\pi}$, as discussed in (Del Moral et al., 2006), our choice of $\bar{\pi}$ is made for tractability of the importance weights and corresponds to a formal backward process starting by sampling exactly from $\pi_K$ and then sequentially generating samples backwards in time using the reversal transformations of each flow $T_k$ and MCMC kernel $\mathcal{K}_k$. The reversal of each flow transformation is its inverse, while the reversal of each forward Markov kernel is the Markov kernel $\tilde{\mathcal{K}}_k$ satisfying $\pi_k(x)\mathcal{K}_k(x,x') = \pi_k(x')\tilde{\mathcal{K}}_k(x',x)$. When choosing a reversible Markov kernel for the forward kernel $\mathcal{K}_k$, the reversal kernel is equal to the forward kernel, i.e. $\tilde{\mathcal{K}}_k = \mathcal{K}_k$. For our choice of augmented target, the unnormalized importance weights take the form:

$$
w_K(x_{0:K-1}) = \prod_{k=1}^K G_k(x_{k-1}), \tag{1a}
$$

$$
G_k(x_{k-1}) := \frac{\gamma_k(T_k(x_{k-1}))}{\gamma_{k-1}(x_{k-1})} \left|\nabla T_k(x_{k-1})\right|. \tag{1b}
$$

When the flow is the identity, equation (1b) reduces to the ratio of successive densities, which is the standard annealed importance sampling expression. For non-identity flows the $T_k$ dependent terms correct for the use of the flow. See Appendix A for a precise mathematical description of $\bar{\eta}$ and $\bar{\pi}$ and the derivation of the weights (1a)-(1b).

The algorithm is then implemented sequentially to maintain a set of $N$ particles $(X_k^i, W_k^i)_{i=1}^N$ consisting of pairs of samples and corresponding normalized IS weights at each time $k$, where the samples are obtained using the forward generating process $\bar{\eta}$. The unnormalized IS weights $w_k^i$ are computed recursively using $w_k^i = W_{k-1}^i G_k(X_{k-1}^i)$, allowing us to compute the normalized IS weights $W_k^i$ by normalizing over the set of particles: $W_k^i = w_k^i / \sum_{j=1}^N w_k^j$. A consistent approximate expectation of a function $f$ under the target $\pi_k$ is then given by $\sum_{i=1}^N W_k^i f(X_k^i)$. Moreover, an unbiased estimate $Z_k^N$ of the normalizing constant $Z_k$ is obtained sequentially from the previous one $Z_{k-1}^N$ using the update $Z_k^N = Z_{k-1}^N (\sum_{i=1}^N w_k^i)$. If the NFs transport perfectly between the temperatures, this algorithm enjoys the property that the normalizing constant estimate $Z_K^N$ has zero variance and the corresponding approximate expectations are unbiased, with a variance corresponding to that under the true target.

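To make the recursion concrete, the following is a minimal sketch of the reweighting and normalizing constant update in JAX-flavoured Python, working in log space for numerical stability. The helper names (`flow_and_logdet`, `log_gamma_prev`, `log_gamma_next`) are illustrative stand-ins, not the interface of the released code:

```python
from jax.scipy.special import logsumexp

def reweight(log_w_prev, x_prev, flow_and_logdet, log_gamma_prev, log_gamma_next):
    """One reweighting step of the sampler, in log space.

    log_w_prev: (N,) normalized log-weights log W_{k-1}^i.
    x_prev: (N, M) particles X_{k-1}^i.
    flow_and_logdet: maps x to (T_k(x), log|det nabla T_k(x)|), batched over N.
    log_gamma_prev, log_gamma_next: batched log gamma_{k-1} and log gamma_k.
    """
    y, log_det = flow_and_logdet(x_prev)                         # transport step
    log_g = log_gamma_next(y) - log_gamma_prev(x_prev) + log_det  # log G_k, eqn (1b)
    log_w_unnorm = log_w_prev + log_g                            # log w_k^i
    log_z_increment = logsumexp(log_w_unnorm)                    # log sum_i w_k^i
    log_w_norm = log_w_unnorm - log_z_increment                  # log W_k^i
    return y, log_w_norm, log_z_increment                        # increment for log Z_k^N
```
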
Resampling. Up to this point, we have described AIS with additional fixed normalizing flows. We now go a step further to incorporate resampling, a key ingredient of SMC methods that has proved to be very beneficial (Arbel et al., 2021; Chopin, 2002; Del Moral et al., 2006; Hukushima and Iba, 2003). Resampling consists of randomly selecting a particle $X_k^i$ with probability $W_k^i$ and repeating the operation $N$ times to construct a set of $N$ particles approximately distributed according to $\pi_k$. This operation is equivalent to assigning a number of offspring $N_k^i$ to each particle $X_k^i$ and associating a uniform weight of $1/N$ to each offspring. The vector of all offspring $(N_k^i)_{i=1}^N$ is then drawn from a multinomial distribution with weights $(W_k^i)_{i=1}^N$ under the constraint that $\sum_{i=1}^N N_k^i = N$. The expectation under $\pi_k$ of a function $f$ is then approximated using $\sum_{i=1}^N \frac{N_k^i}{N} f(X_k^i)$, which is also an unbiased estimate of the weighted sum $\sum_{i=1}^N W_k^i f(X_k^i)$. Hence, resampling refocuses computational effort on promising particles (the ones falling in high density regions of $\pi_k$) whilst preserving the key analytic properties of the algorithm. However, to avoid the additional variance coming from the multinomial distribution, we only use it when the effective sample size $\mathrm{ESS}_k^N := (\sum_{i=1}^N (W_k^i)^2)^{-1}$ of the particles falls below some predefined proportion $A \in [1/N, 1)$ (we use $A = 0.3$ in all our experiments) of the total number of samples $N$ (Liu and Chen, 1995). Since resampling is followed by the Markov kernel step in each iteration of Algorithm 1, any degeneracy introduced during resampling can be reduced.

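A corresponding sketch of the adaptive resampling test, again with illustrative names; under `jit` the plain Python conditional would be replaced with `jax.lax.cond`, which we omit for clarity:

```python
import jax
import jax.numpy as jnp

def maybe_resample(key, log_w_norm, particles, threshold_frac=0.3):
    """Multinomial resampling triggered by the effective sample size.

    Resamples N particles with probabilities W_k^i when
    ESS = 1 / sum_i (W_k^i)^2 falls below threshold_frac * N, then resets
    the weights to uniform; otherwise the inputs are returned unchanged.
    """
    n = log_w_norm.shape[0]
    w = jnp.exp(log_w_norm)
    ess = 1.0 / jnp.sum(w ** 2)
    if ess <= threshold_frac * n:
        idx = jax.random.choice(key, n, shape=(n,), p=w)  # draw offspring indices
        particles = particles[idx]
        log_w_norm = jnp.full((n,), -jnp.log(n))          # uniform 1/N weights
    return log_w_norm, particles
```
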
# Algorithm 1 SMC-NF-step

1: Input: Approximations $(\pi_{k-1}^N, Z_{k-1}^N)$ to $(\pi_{k-1}, Z_{k-1})$, normalizing flow $T_k$, unnormalized annealed targets $\gamma_{k-1}$ and $\gamma_k$ and resampling threshold $A \in [1/N, 1)$.
2: Output: Particles at iteration $k$: $\pi_k^N = (X_k^i, W_k^i)_{i=1}^N$, approximation $Z_k^N$ to $Z_k$.
3: Transport particles: $Y_k^i = T_k(X_{k-1}^i)$.
4: Compute IS weights: $w_k^i \gets W_{k-1}^i G_k(X_{k-1}^i)$ // unnormalized; $W_k^i \gets w_k^i / \sum_{j=1}^N w_k^j$ // normalized
5: Estimate normalizing constant $Z_k$: $Z_k^N \gets Z_{k-1}^N (\sum_{i=1}^N w_k^i)$.
6: Compute effective sample size $\mathrm{ESS}_k^N$.
7: if $\mathrm{ESS}_k^N \leq NA$ then
8: Resample $N$ particles, with a slight abuse of notation also denoted $Y_k^i$, according to the weights $W_k^i$, then set $W_k^i = \frac{1}{N}$.
9: end if
10: Sample $X_k^i \sim \mathcal{K}_k(Y_k^i, \cdot)$ // MCMC
11: Return $(\pi_k^N, Z_k^N)$

# Algorithm 2 CRAFT-training

1: Input: Initial NFs $\{T_k\}_{k=1}^K$, number of particles $N$, unnormalized annealed targets $\{\gamma_k\}_{k=0}^K$ with $\gamma_0 = \pi_0$ and $\gamma_K = \gamma$, resampling threshold $A \in [1/N, 1)$.
2: Output: Learned flows $T_k$ and length $J$ sequence of approximations $(\pi_K^N, Z_K^N)$ to $(\pi_K, Z_K)$.
3: for $j = 1, \dots, J$ do
4: Sample $X_0^i \sim \pi_0$ and set $W_0^i = \frac{1}{N}$ and $Z_0^N = 1$.
5: for $k = 1, \dots, K$ do
6: $\tilde{h} \gets$ flow-grad$(T_k, \pi_{k-1}^N)$ using eqn (8).
7: $(\pi_k^N, Z_k^N) \gets$ SMC-NF-step$(\pi_{k-1}^N, Z_{k-1}^N, T_k)$
8: Update the flow $T_k$ using gradient $\tilde{h}$.
9: end for
10: Yield $(\pi_K^N, Z_K^N)$ and continue for loop.
11: end for
12: Return learned flows $\{T_k\}_{k=1}^K$.

# Algorithm 3 CRAFT-deployment

1: Input: Fixed/trained NFs $\{T_k\}_{k=1}^K$, number of particles $N$, unnormalized annealed targets $\{\gamma_k\}_{k=0}^K$ with $\gamma_0 = \pi_0$, resampling threshold $A \in [1/N, 1)$.
2: Output: Approximations $(\pi_K^N, Z_K^N)$ to $(\pi_K, Z_K)$
3: Sample $X_0^i \sim \pi_0$ and set $W_0^i = \frac{1}{N}$ and $Z_0^N = 1$.
4: for $k = 1, \dots, K$ do
5: $(\pi_k^N, Z_k^N) \gets$ SMC-NF-step$(\pi_{k-1}^N, Z_{k-1}^N, T_k)$
6: end for
7: Return $(\pi_K^N, Z_K^N)$

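For orientation, a compact sketch of the deployment loop of Algorithm 3, composed from the two illustrative helpers above; `sample_initial`, `flows`, `log_gammas` and `mcmc_kernels` are assumed callables rather than the released interface:

```python
import jax
import jax.numpy as jnp

def craft_deployment(key, sample_initial, flows, log_gammas, mcmc_kernels, n):
    """Run the fixed-flow sampler: transport, reweight, resample, MCMC.

    flows[k] and mcmc_kernels[k] act between gamma_k and gamma_{k+1};
    returns final particles, normalized log-weights and log Z_K^N.
    """
    key, subkey = jax.random.split(key)
    x = sample_initial(subkey, n)                    # X_0^i ~ pi_0
    log_w = jnp.full((n,), -jnp.log(n))              # W_0^i = 1/N
    log_z = 0.0                                      # Z_0^N = 1
    for k in range(len(flows)):
        y, log_w, log_z_inc = reweight(
            log_w, x, flows[k], log_gammas[k], log_gammas[k + 1])
        log_z = log_z + log_z_inc
        key, k1, k2 = jax.random.split(key, 3)
        log_w, y = maybe_resample(k1, log_w, y)      # only fires if ESS is low
        x = mcmc_kernels[k](k2, y)                   # pi_{k+1}-invariant MCMC
    return x, log_w, log_z
```
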
# 2.2. Limitations of ELBO based training

The idea of learning parameters of a forward generating Markov process $\bar{\eta}$ to approximate a target distribution $\pi_K$ and its normalizing constant $Z_K$ has already been explored in the literature, in particular in the context of VAE training (Salimans et al., 2015; Wu et al., 2020; Geffner and Domke, 2021; Thin et al., 2021; Zhang et al., 2021). These references adopt the standard approach of minimizing the KL divergence between the distribution $\bar{\eta}$ of the forward generating process and a suitable augmented target $\bar{\pi}$ admitting $\pi_K$ as marginal at time $K$:

$$
\mathrm{KL}\left[\bar{\eta} \,||\, \bar{\pi}\right] = \log Z_K - \mathbb{E}_{\bar{\eta}}\left[\log w_K\right]. \tag{2}
$$

However, minimizing this Kullback-Leibler divergence, equivalently maximizing the ELBO objective $\mathbb{E}_{\bar{\eta}}[\log w_K]$, requires differentiating through the forward sampler, which can be challenging when MCMC kernels are included, as we now discuss in more detail.

Discontinuity of the forward sampler. The most widely used MCMC transition kernels, such as Hamiltonian Monte Carlo (HMC) or the Metropolis-adjusted Langevin algorithm (MALA), rely on a Metropolis accept/reject mechanism to ensure invariance. However, these mechanisms are discontinuous functions of the input random variables of the forward sampler. To avoid this issue and enable use of the reparametrization trick, a popular solution is to use approximate MCMC kernels without Metropolis correction, which have continuous densities, and then use an approximate reverse distribution (Salimans et al., 2015; Wu et al., 2020; Geffner and Domke, 2021; Thin et al., 2021; Zhang et al., 2021). The downside of this is the bias that accrues from not having the correction and the difficulty of approximating the reversal. This can mean many slow mixing proposals are required.

It is desirable for the forward generating process to combine standard Metropolis-corrected MCMC samplers (e.g. HMC, MALA) and NFs, as done by CRAFT, Stochastic Normalizing Flows (SNFs) (Wu et al., 2020) and AFT (Arbel et al., 2021). In this case, the proposition below, proven in Appendix B and extending the results of (Thin et al., 2021), shows that the gradient of the ELBO in the flow parameters is the sum of two terms, one of which is a high variance score term.

Proposition 1. Let $U$ be the set of continuous r.v.s (used to sample proposals), whose distribution is independent of any parameter, and $A$ the set of discrete r.v.s (used for accept/reject steps) during the forward generating process. Let $p$ denote the joint distribution of $A, U$. The gradient of the ELBO $\mathbb{E}_{\bar{\eta}}[\log w_K]$ in the flow parameters is:

$$
\mathbb{E}_p\left[\nabla \log(w_K \circ \Phi) + \log(w_K \circ \Phi) \nabla \log p\right], \tag{3}
$$

where $\Phi$ is a differentiable re-parameterization of the trajectory $X_{0:K}$ in terms of $U$ for any $A$, i.e. $X_{0:K} = \Phi(U, A)$. In particular, when the learned flows $(T_k)_{k=1}^K$ are optimal, i.e. $T_k^{\#}\pi_{k-1} = \pi_k$, it holds that:

$$
\mathbb{E}_p\left[\nabla \log(w_K \circ \Phi)\right] = \mathbb{E}_p\left[\log(w_K \circ \Phi) \nabla \log p\right] = 0.
$$

The first term on the r.h.s. of the expression (3) corresponds to a reparameterization trick term. The second term, however, requires computing a high variance score $\nabla \log p$. As pointed out by Thin et al. (2021), algorithms such as SNFs with Metropolis-adjusted MCMC kernels omit the second score term when optimizing the ELBO. While this term happens to vanish in the particular case where the flows are optimal, its contribution can in general be important, especially when initializing the algorithm with suboptimal flows.

Discontinuity of the resampling steps. The literature has mostly focused on scenarios where the forward generating process $\bar{\eta}$ is a simple Markov process. SMC forward generating processes as used in CRAFT could be exploited to obtain a tighter ELBO and have been used in the context of state-space models (Maddison et al., 2017; Le et al., 2018; Naesseth et al., 2018). However, the resampling steps of SMC correspond to sampling discrete distributions and lead to high variance gradient estimates. Omitting them can introduce significant bias (Corenflos et al., 2021). These difficulties are further exacerbated in our context since, as discussed earlier, the MCMC steps are not differentiable either.

Emergence of annealing requires mixing. In (Wu et al., 2020) the ELBO corresponding to equation (2) has the form of the standard VI with NFs objective, with additional weight terms entering into $w_K$ due to the MCMC steps and annealing schedule. The additional log-weight corrections are

$$
\Delta \log w_k^{\mathrm{SNF\,MC}} = \log \gamma_k(x_k^{\mathrm{SNF}}) - \log \gamma_k(\hat{x}_k^{\mathrm{SNF}}), \tag{4}
$$

where $\hat{x}_k^{\mathrm{SNF}}$ is the value of the sample after the MCMC at temperature $k$ has been applied and $x_k^{\mathrm{SNF}}$ is the value of the sample before it (see Appendix A). This is the only way the annealed densities enter into the Stochastic Normalizing Flow ELBO. Since this additional term is zero in the limit where the Markov kernel does not mix, it follows that the value of the SNF ELBO reduces to the VI with normalizing flows objective in this case. Consequently, for the training objective to be significantly different from the one without MCMC and annealing, this method requires the MCMC kernels to mix sufficiently, which may not be the case in challenging examples.

# 2.3. CRAFT training

Here we describe a method that neatly sidesteps the issues described with the standard ELBO. We take the learning objective to have the following form:

$$
H = \sum_{k=1}^K \mathrm{KL}\left[T_k^{\#}\pi_{k-1} \,||\, \pi_k\right] \tag{5}
$$

where the sum runs over each transition between temperatures. The minimum of the objective is zero and this is attained when each flow transports perfectly between successive temperatures. In general, breaking the annealing task down term-by-term means that we can reduce the mode seeking effect of the reverse KL divergence. Each KL term in the sum can be written as follows:

$$
\mathrm{KL}\left[T_k^{\#}\pi_{k-1} \,||\, \pi_k\right] = \mathbb{E}_{\pi_{k-1}}\left[D_k\right] + \log\left(\frac{Z_k}{Z_{k-1}}\right), \tag{6a}
$$

$$
D_k(x) := \log \frac{\gamma_{k-1}(x)}{\gamma_k(T_k(x))} - \log\left|\nabla_x T_k(x)\right|. \tag{6b}
$$

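The quantity $D_k$ needs only the flow forward pass, its log Jacobian determinant and pointwise evaluations of the two unnormalized log densities. A minimal sketch, instantiated for an assumed diagonal affine flow $T(x) = e^{s} \odot x + b$ (the family we use for the LGCP experiments) so that the log determinant is available in closed form:

```python
import jax.numpy as jnp

def affine_flow(params, x):
    """Diagonal affine flow T(x) = exp(s) * x + b with closed-form log-det."""
    s, b = params
    y = jnp.exp(s) * x + b
    log_det = jnp.sum(s)            # log|det nabla T| = sum_d s_d
    return y, log_det

def d_k(params, x, log_gamma_prev, log_gamma_next):
    """Per-sample D_k(x) from eqn (6b)."""
    y, log_det = affine_flow(params, x)
    return log_gamma_prev(x) - log_gamma_next(y) - log_det
```
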
The expectation over $\pi_{k-1}$ on the RHS of the equation is approximated using a streaming SMC approximation of $\pi_{k-1}$, which will always be based on the corresponding learnt sampler for the current optimization step:

$$
\mathrm{KL}\left[T_k^{\#}\pi_{k-1} \,||\, \pi_k\right] - \log\left(\frac{Z_k}{Z_{k-1}}\right) \approx \sum_i W_{k-1}^i D_k\left(X_{k-1}^i\right). \tag{7}
$$

The terms inside the summation require that we can evaluate the output of the normalizing flow $T_k(x)$ and the log determinant of the Jacobian $\log|\nabla_x T_k(x)|$. Similarly, the gradients with respect to the parameters $\theta_k$ of flow $T_k$ are approximated:

$$
\frac{\partial H}{\partial \theta_k} \approx \sum_i W_{k-1}^i \frac{\partial D_k\left(X_{k-1}^i\right)}{\partial \theta_k}. \tag{8}
$$

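Since $D_k$ is differentiable in the flow parameters, the estimate (8) amounts to a single `jax.grad` call on the particle-weighted loss; a sketch building on the illustrative `d_k` above:

```python
import jax
import jax.numpy as jnp

def flow_grad(params, x_particles, w_norm, log_gamma_prev, log_gamma_next):
    """Particle estimate of dH/dtheta_k from eqn (8)."""
    def weighted_loss(p):
        # D_k(X_{k-1}^i) evaluated for every particle.
        d_vals = jax.vmap(lambda x: d_k(p, x, log_gamma_prev, log_gamma_next))(
            x_particles)
        return jnp.sum(w_norm * d_vals)  # sum_i W_{k-1}^i D_k(X_{k-1}^i)
    return jax.grad(weighted_loss)(params)
```
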
When the flows transport perfectly, the gradient $\frac{\partial H}{\partial \theta_k}$ will have a true value of zero and there will be no bias in this particle approximation to it. More generally, the gradient estimate will be biased since it is based on a normalized importance sampling estimator. We will analyze this aspect further in Section 3.2, but for now we note that the usual guarantees from the SMC literature imply the gradient estimate is consistent in the number of particles, with variance and bias of order $O(1/N)$ (Del Moral, 2013).

The proposed algorithm is described in Algorithm 2. Each training loop iteration of CRAFT looks like an iteration of the test time sampler in Algorithm 3 with gradient estimation and flow parameter updates added. Note that the parameter update is applied after the flow transport step for that temperature, so the first effect it has will be on the particles that come through in the next step. This allows us to view CRAFT as an optimization of the objective (5) and will allow the stability analysis that we detail later in the text. It also means that the normalizing constant estimators produced after each step are unbiased. Since all the constants produced in this way depend on the optimization sequence, they are not independent. At test time, when the parameters are fixed, the independence is restored.

Relationship to AFT. We now describe the relationship between CRAFT and the AFT method of (Arbel et al., 2021). Relating them is made more challenging by the fact that there are two variants of the AFT algorithm described in (Arbel et al., 2021). For clarity we include both in our supplement, as Algorithms 5 and 7. We call Algorithm 5 simple AFT: it is easier to describe and analyse but is not actually used in practice by (Arbel et al., 2021), for reasons we will describe. Instead they use what we call practical AFT (Algorithm 7), a more complicated version that performs better in practice. Relative to CRAFT, both variants of AFT may be viewed as a one pass greedy optimization of the objective (5). The closest point of algorithmic comparison to CRAFT is simple AFT; we show the line difference to CRAFT in Algorithm 6. At each temperature step, simple AFT optimizes the particle approximated loss to its numerical minimum before updating the particles using the new flow.

We now describe the sample replenishment problem in AFT. The learning of the flows is hampered by the fact that the sample complexity of estimation can be higher than the finite number of available particles. In simple AFT this manifests as over-fitting of the flows to the available particles and large biases in expectations. Regenerating new samples at each temperature separately would require computation quadratic in the number of temperatures, which is too slow. To mitigate the over-fitting in a manner consistent with the one pass paradigm of the paper, the authors added validation samples which were used for early stopping. Test samples were also added to remove finite sample residual bias. This significantly increased the complexity of the AFT method, changing simple AFT (Algorithm 5) to practical AFT (Algorithm 7). This gave substantial empirical improvements, but the fundamental limitation of the sample replenishment problem remains, arising from the desire to use one temperature pass. Even when the particles are a perfect approximation to the desired distribution at each temperature, if the sample complexity of estimating the training gradients of the flows is high relative to the number of particles then the practical AFT method still struggles to train.

By having multiple independent passes instead of one, CRAFT is able to fully address the sample replenishment problem in a different way. Since in CRAFT the parameter update is applied after flow transport at each temperature, the analysis of the algorithm is simpler than that required for AFT, where sophisticated arguments must be invoked to ensure consistency of the particles. Later in Section 3.2 we will see that we can analyse CRAFT from the perspective of unbiased gradients. This interpretation does not apply to either variant of AFT.

From a practical perspective, CRAFT performs better than AFT (Section 4.1). This is significant because (Arbel et al., 2021) already showed they could train samplers that performed well relative to strong baselines like SMC. CRAFT is easier to tune and more robust to hyperparameter misspecification than AFT. Conceptually it is much simpler, to the extent that it is easier to describe CRAFT as a standalone algorithm than in terms of the AFT algorithm on which it builds. CRAFT is closer to a traditional machine learning training paradigm, with gradients applied to a single objective. This makes it easier to adopt and to scale through parallelism.

Recent work from (Zimmermann et al., 2021) investigates a framework that reduces to the objective (5) for deterministic forward and backward transitions. Inspired by (Arbel et al., 2021), they investigate some toy examples with normalizing flows. There are good reasons to concentrate on the normalizing flow case as we have done. The reverse distribution is analytic and optimal for a flow but can be difficult to approximate otherwise (Thin et al., 2021). Further, Zimmermann et al. (2021) do not use Markov transition kernels in practice, which we found to be essential for good performance.

# 2.4. Using CRAFT within Particle MCMC

Whilst the SMC-NF estimator of the normalizing constant is unbiased, in general estimates of the expectations w.r.t. the target are asymptotically consistent but exhibit an $O(1/N)$ bias. In situations requiring high accuracy expectations, it is desirable to have a method returning asymptotically unbiased estimates. However, increasing $N$ until the bias is negligible will often not be feasible for challenging applications because of memory requirements. MCMC algorithms have the advantage of providing consistent estimates of these expectations as compute time increases, without having to store an increasing number of samples. Particle MCMC methods (Andrieu et al., 2010) provide a way to bring such benefits to SMC samplers. In particular, the so-called Particle Independent Metropolis-Hastings is an MCMC sampler that uses an SMC sampler with $N$ particles and provides consistent estimates of expectations for any $N$ as the number of iterations increases; see Appendix D for details. Since a single pass of CRAFT with fixed parameters can be thought of as an SMC sampler with additional deterministic transformations, it can be used here. Computational trade-offs for Particle MCMC are analyzed by Pitt et al. (2012).

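To sketch the mechanism, assuming the standard Particle Independent Metropolis-Hastings construction (see Appendix D) rather than any particular library API: each outer iteration reruns the learnt sampler independently and accepts the proposed run with probability $\min(1, \hat{Z}'/\hat{Z})$, where $\hat{Z}$ is the unbiased normalizing constant estimate of the current run.

```python
import jax
import jax.numpy as jnp

def pimh(key, run_sampler, num_iters):
    """Particle Independent Metropolis-Hastings outer loop.

    run_sampler(key) performs one independent SMC(-NF) pass and returns
    (weighted_particles, log_z_estimate). A proposed pass is accepted with
    probability min(1, Z_prop / Z_curr), yielding asymptotically unbiased
    expectations as num_iters grows, for any particle number N.
    """
    key, subkey = jax.random.split(key)
    state, log_z = run_sampler(subkey)
    chain = []
    for _ in range(num_iters):
        key, k_prop, k_acc = jax.random.split(key, 3)
        state_prop, log_z_prop = run_sampler(k_prop)
        log_u = jnp.log(jax.random.uniform(k_acc))
        if log_u < log_z_prop - log_z:       # accept/reject on log Z estimates
            state, log_z = state_prop, log_z_prop
        chain.append(state)                  # estimate expectations from chain
    return chain
```
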
# 3. Analysis of the training objective

# 3.1. Reformulating the CRAFT objective

The CRAFT objective (5) can be rewritten as a single KL divergence between certain product distributions:

$$
H = \mathrm{KL}\left[\prod_{k=1}^K T_k^{\#}\pi_{k-1} \,\Big|\Big|\, \prod_{k=1}^K \pi_k\right]. \tag{9}
$$

We have adopted the notation that the product symbol $\prod$ can be used to denote multi-variable product measures. The identity follows by applying the multi-variable generalization of the result for the KL divergence between two product measures, $\mathrm{KL}[U \times V \,||\, F \times G] = \mathrm{KL}[U \,||\, F] + \mathrm{KL}[V \,||\, G]$. Despite the non-standard form of the KL divergence, it is also possible to obtain a bound on the normalizing constant

$$
\log Z_K \geq -\sum_{k=1}^K \mathbb{E}_{X \sim \pi_{k-1}}\left[D_k(X)\right]. \tag{10}
$$

This bound can be obtained starting from $H \geq 0$, expanding each term using (6a) and noting that there is a telescoping cancellation between the intermediate normalizing constants: $\sum_{k=1}^K \log(Z_k / Z_{k-1}) = \log Z_K - \log Z_0 = \log Z_K$. The bound is distinct from the one obtained by taking the logarithm of the unbiased estimate used in SMC samplers. In practice the bound is estimated using the particle estimate for each temperature and is effectively the training objective of CRAFT.

# 3.2. Re-thinking the KL criterion at the particle level

We discuss here a re-interpretation of the CRAFT training procedure in terms of KL divergences between particle approximations and targets. An advantage of this interpretation is that it shows that we compute an unbiased gradient of a modified objective that is well-motivated even when the number of particles is moderate.

CRAFT training requires minimizing the objective (5). As we do not have access to $\pi_{k-1}$, CRAFT approximates the intractable gradient of this objective using equation (8). We will compactly refer to the particle approximation as a weighted sum of delta masses $\pi_{k-1}^N(\mathrm{d}x) = \sum_{i=1}^N W_{k-1}^i \delta_{X_{k-1}^i}(\mathrm{d}x)$. While the use of such an approach could be of concern when $N$ is moderate or at initialization, the following proposition shows that such concerns are ill-founded and that this approximate SMC based gradient is also the unbiased gradient of a well-defined and intuitive objective.

Proposition 2. Let $\hat{\pi}_{k-1}^N(\mathrm{d}x) = \mathbb{E}[\pi_{k-1}^N(\mathrm{d}x)]$ denote the expectation of the random SMC approximation $\pi_{k-1}^N$ of $\pi_{k-1}$ w.r.t. the law of the SMC-NF algorithm. An unbiased gradient of the objective $\mathrm{KL}[T_k^{\#}\hat{\pi}_{k-1}^N \,||\, \pi_k]$ is given by

$$
\mathbb{E}_{X \sim \pi_{k-1}^N}\left[\nabla_{\theta_k} D_k(X)\right]. \tag{11}
$$

The l.h.s. of this KL is the pushforward of the "average" SMC approximation to $\pi_{k-1}$ through the flow $T_k$.

To help understand this result and how it is derived, consider a simpler scenario where the random approximation $\pi_{k-1}^N$ of $\pi_{k-1}$ has been obtained by a batch parallel importance sampling method. That is, we sample $X_{k-1}^i \overset{\mathrm{i.i.d.}}{\sim} q_{k-1}$ for $i = 1, \dots, N$ and compute $w(X_{k-1}^i) = \pi_{k-1}(X_{k-1}^i) / q_{k-1}(X_{k-1}^i)$. We then define $W_{k-1}^i \propto w(X_{k-1}^i)$ such that $\sum_{i=1}^N W_{k-1}^i = 1$ and finally return $\pi_{k-1}^N(\mathrm{d}x)$. If we average over the random particle locations $X_{k-1}^{1:N}$, we obtain

$$
\hat{\pi}_{k-1}^N(\mathrm{d}x) = \mathbb{E}_{X_{k-1}^i \overset{\mathrm{i.i.d.}}{\sim} q_{k-1}}\left[\pi_{k-1}^N(\mathrm{d}x)\right]. \tag{12}
$$

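This simpler construction is easy to state in code; a self-contained sketch of one batch importance sampling step, with a standard normal target and wider Gaussian proposal chosen purely for illustration:

```python
import jax
import jax.numpy as jnp

def batch_importance_sample(key, n):
    """Draw X^i ~ q i.i.d., then self-normalize w = pi/q to get (X^i, W^i).

    Target pi = N(0, 1) and proposal q = N(0, 2^2) are illustrative choices;
    the returned weighted particles form the random measure pi^N(dx).
    """
    sigma_q = 2.0
    x = sigma_q * jax.random.normal(key, (n,))            # X^i ~ q
    log_pi = -0.5 * x ** 2                                # up to constants
    log_q = -0.5 * (x / sigma_q) ** 2 - jnp.log(sigma_q)
    log_w = log_pi - log_q
    w_norm = jax.nn.softmax(log_w)                        # W^i, sums to 1
    return x, w_norm
```
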
Contrary to $\pi_{k-1}^N(\mathrm{d}x)$, which is a discrete measure, the distribution $\hat{\pi}_{k-1}^N(\mathrm{d}x)$ admits a density, and its Kullback-Leibler divergence with $\mu_k = (T_k^{-1})^{\#}\pi_k$ is well-defined. It then follows directly from (12), diffeomorphism invariance, and iterated expectations that

$$
\mathrm{KL}\left[T_k^{\#}\hat{\pi}_{k-1}^N \,||\, \pi_k\right] = \mathbb{E}_{X \sim \hat{\pi}_{k-1}^N}\left[\log \frac{\hat{\pi}_{k-1}^N(X)}{\mu_k(X)}\right] = \mathbb{E}_{X_{k-1}^i \overset{\mathrm{i.i.d.}}{\sim} q_{k-1}}\left[\mathbb{E}_{X \sim \pi_{k-1}^N}\left[\log \frac{\hat{\pi}_{k-1}^N(X)}{\mu_k(X)}\right]\right]. \tag{13}
$$

A direct consequence of this identity is that an unbiased gradient of $\mathrm{KL}[T_k^{\#}\hat{\pi}_{k-1}^N \,||\, \pi_k]$ is indeed given by (11). To prove Proposition 2, we extend this argument to the scenario where $\hat{\pi}_{k-1}^N(\mathrm{d}x)$ has been obtained by using an SMC sampler, where the particles are not independent because of resampling; see Appendix C.

Taking a step back and returning to the original objective, we note that once we start considering the marginal particle distribution $\hat{\pi}_{k-1}^N$ instead of the target distribution $\pi_{k-1}$, the flow parameters $\theta$ affect the KL divergences that follow them, since later particle distributions depend on them. This additional effect is not included in the CRAFT updates. Each flow takes whatever particle distribution it receives from the preceding steps and is trained only to reduce the KL of the push-forward to the next target distribution $\pi_k$, without reference to what follows.

# 4. Experiments

In this section we empirically investigate the performance of CRAFT. First, in Section 4.1 we give a case study demonstrating the empirical benefit of CRAFT relative to AFT, then in Section 4.2 we show that CRAFT outperforms Stochastic Normalizing Flows in two challenging examples. We then show a compelling example use case for CRAFT as a learnt proposal for a particle MCMC sampler applied to lattice field theory. Code for the algorithms and examples can be found at https://github.com/deepmind/annealed_flow_transport. Further experiments and details are included in the Appendix.

# 4.1. Empirically comparing CRAFT and AFT

Our first point of comparison for CRAFT is with AFT (Arbel et al., 2021). We use what we call the practical AFT method in Section 2.3. To illustrate the benefit of solving the AFT sample replenishment problem, we show the effect of varying the batch size in both algorithms. Both methods are given the same total budget of particles at train and test time. To highlight that the sample replenishment problem cannot be mitigated simply by using more MCMC, we gave AFT 100 times more MCMC updates than CRAFT at train and test time. We use the 1024 dimensional log Gaussian Cox process (LGCP) example, which is the most challenging from (Arbel et al., 2021). As in that paper, we used a diagonal affine flow.

Figure 1 shows the results. We see that CRAFT outperforms AFT for all numbers of particles considered. This can be understood in terms of our better solution to the sample replenishment problem, as described in Section 2.3. Since many of the most challenging modern sampling problems also have high memory usage, this is of considerable practical significance.

# 4.2. Comparing CRAFT and SNFs

We now compare CRAFT training with that of the corresponding SNFs. For the Markov transition kernel both methods use full Metropolis corrected HMC. We use the same normalizing flow family for both methods. As such, this constitutes a direct comparison of the two approaches. We ran timed training experiments for both methods, computing the corresponding unbiased estimates of the normalizing constant as training progressed. Note that at test time these estimates could be obtained faster by running the learnt sampler. We consider two different target densities following (Arbel et al., 2021), namely the 30 dimensional variational autoencoder (VAE) latent space and the LGCP example. For the VAE example we used an affine inverse autoregressive flow (Kingma et al., 2016). For the LGCP we again used a diagonal affine flow.

Figure 2 shows the results. In both cases we find that CRAFT converges to a better value than SNFs. We attribute the worse final value for SNFs to the issues with the training discussed in Section 2.2.


|
| 257 |
+
Figure 1. Comparison of normalizing constants between CRAFT and AFT for different numbers of particles. AFT is given 100 times more train/test MCMC steps than CRAFT. A gold standard value is shown in magenta.
|
| 258 |
+
|
| 259 |
+

|
| 260 |
+
|
| 261 |
+

|
| 262 |
+
Figure 2. Timed comparison of normalizing constant estimates produced by CRAFT and SNF during training (higher is better). Three independent repeats are shown. A gold standard value is shown in magenta.
|
| 263 |
+
|
| 264 |
+
# 4.3. CRAFT based Particle MCMC for lattice $\phi^4$ theory

A classic proving ground for sampling algorithms is the area of lattice field theory. Indeed, it was this application that motivated Hamiltonian/Hybrid Monte Carlo (Duane et al., 1987). State of the art quantum chromodynamics (QCD) calculations (Borsanyi et al., 2021) use lattice sampling run on multiple supercomputers. There has been a recent surge of interest in using normalizing flows to improve sampling in such models, starting with (Albergo et al., 2019), so it is natural to investigate the performance of CRAFT in this area.

Of the available lattice field theories we use the Euclidean $\phi^4$ theory in two dimensions, following (Albergo et al., 2019; 2021). We start from the continuum action:

$$
S_{\mathrm{cont}}[\phi] = \int \left\{ \left\|\nabla\phi(x)\right\|_2^2 + m^2 \phi(x)^2 + \lambda \phi(x)^4 \right\} \mathrm{d}x,
$$

which we discretize on a $14 \times 14$ lattice, using lattice units, and periodic boundary conditions to obtain:

$$
S_{\mathrm{latt}}(\phi) = \sum_{\hat{x}} \left\{ \phi(\hat{x})\zeta(\hat{x}) + m^2 \phi(\hat{x})^2 + \lambda \phi(\hat{x})^4 \right\}, \tag{14a}
$$

$$
\zeta(\hat{x}) := \sum_{\mu} \left[ 2\phi(\hat{x}) - \phi(\hat{x} + \hat{e}_\mu) - \phi(\hat{x} - \hat{e}_\mu) \right]. \tag{14b}
$$

Here the sum over $\hat{x}$ runs over all lattice sites. The summation index $\mu \in \{x, y\}$ runs over the dimensions of the lattice and $\hat{e}_\mu$ defines a single lattice step in the direction $\mu$. $\lambda \geq 0$ defines the coupling parameter and $m^2$ is the mass squared which, counter-intuitively, may be positive or negative. A probability distribution over the values of the field at the lattice sites is then defined using $p(\phi) = \frac{1}{Z}\exp\left\{-S_{\mathrm{latt}}(\phi)\right\}$. For positive $\lambda$ and large $\phi$ the quartic term will dominate, giving highly non-Gaussian tails. For negative $m^2$, which will be our focus, the marginal distributions are bimodal.

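The lattice action (14a)-(14b) is straightforward to evaluate with periodic shifts; a minimal sketch in which the mass and coupling values are placeholders and `jnp.roll` implements the periodic boundary conditions:

```python
import jax.numpy as jnp

def lattice_action(phi, m_sq=-4.0, lam=5.0):
    """Euclidean phi^4 lattice action, eqns (14a)-(14b).

    phi: (L, L) field values on the periodic lattice; m_sq may be negative.
    jnp.roll implements the periodic shifts phi(x +/- e_mu).
    """
    zeta = sum(2.0 * phi - jnp.roll(phi, 1, axis=ax) - jnp.roll(phi, -1, axis=ax)
               for ax in (0, 1))                  # eqn (14b), mu in {x, y}
    return jnp.sum(phi * zeta + m_sq * phi ** 2 + lam * phi ** 4)
```

The unnormalized target is then $\gamma(\phi) = \exp\{-S_{\mathrm{latt}}(\phi)\}$, so in the notation of the earlier sketches one would pass `lambda phi: -lattice_action(phi)` as the log density.
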
As is typical of physical sciences applications, symmetry is crucial in this example. The target is invariant under any translation on the lattice. In Appendix H we show that the corresponding requirement on flows between successive temperatures is translation equivariance. We incorporate this into the flow models under consideration using convolutional affine coupling layers (Dinh et al., 2017) with periodic boundary conditions.

Since high accuracy is required, we are interested in methods that are asymptotically unbiased for general expectations. We take physically relevant expectations from (Albergo et al., 2019), namely the 'two point susceptibility' in the main text and the 'Ising energy density' in the supplement; both show similar results. We use Particle MCMC with three different independent proposal types: SMC, VI with normalizing flows (and no annealing), and CRAFT.


|
| 291 |
+
Figure 3. Timed comparison of MCMC methods for the $\phi^4$ example, based on fifteen repeats. CRAFT, SMC and VI serve as proposal mechanisms for Particle MCMC. HMC is applied directly to the target. Error is in estimating two point susceptibility, an example, physically relevant expectation. Note HMC never reaches the detailed level of error in the top row.
|
| 292 |
+
|
| 293 |
+
We also compare against HMC applied directly to the target density. The results are shown in Figure 3. We see that HMC struggles with this target density because it is unable to mix between modes. Despite producing samples quickly, the VI proposal fails to cover all of the modes because of the use of the reverse KL divergence. The SMC proposal does much better. As an annealing method it is less susceptible to multi-modality. Finally, CRAFT converges the fastest, improving on SMC without flows and avoiding the mode seeking behaviour of pure VI. Note that, by scaling arguments, if we accelerate an MCMC method by a factor of $R$ we expect that errors in expectations will be reduced by a factor of $\sqrt{R}$ on such error plots.

# 5. Conclusion

In this paper we have proposed and analyzed the CRAFT algorithm. At test time it is an SMC sampler with added flow transport steps. The training algorithm uses particle estimation to optimize a training objective that explicitly promotes transport between annealing temperatures. We have given a thorough analysis of the training, including an insightful particle interpretation. We have shown that CRAFT compares well empirically and conceptually with existing methods for estimating flows in this context.

In terms of where such efforts fit into the bigger picture, there are many applications where high accuracy expectations are required and algorithms like Particle MCMC become necessary due to the challenging nature of the target. As such, we believe the speed up we have achieved in the quantum field theory application is exciting both for that field and others.

# 6. Acknowledgements

The authors wish to thank Sébastien Racanière, Michalis Titsias, Phiala Shanahan, Jonas Köhler, Andy Ballard, Laurence Midgley and the anonymous reviewers.

# References

Michael S. Albergo, Gurtej Kanwar, and Phiala E. Shanahan. Flow-based generative models for Markov chain Monte Carlo in lattice field theory. Physical Review D, 100:034515, 2019.
Michael S Albergo, Denis Boyda, Daniel C Hackett, Gurtej Kanwar, Kyle Cranmer, Sébastien Racanière, Danilo Jimenez Rezende, and Phiala E Shanahan. Introduction to normalizing flows for lattice field theory. arXiv preprint arXiv:2101.08176, 2021.
Christophe Andrieu, Arnaud Doucet, and Roman Holenstein. Particle Markov chain Monte Carlo methods. Journal of the Royal Statistical Society: Series B, 72(3):269-342, 2010.
Michael Arbel, Alex Matthews, and Arnaud Doucet. Annealed flow transport Monte Carlo. In International Conference on Machine Learning, 2021.
Sz. Borsanyi, Z. Fodor, J. N. Guenther, C. Hoelbling, S. D. Katz, L. Lellouch, T. Lippert, K. Miura, L. Parato, K. K. Szabo, F. Stokes, B. C. Toth, Cs. Torok, and L. Varnhorst. Leading hadronic contribution to the muon magnetic moment from lattice QCD. Nature, 593(7857):51-55, 2021.
James Bradbury, Roy Frostig, Peter Hawkins, Matthew James Johnson, Chris Leary, Dougal Maclaurin, George Necula, Adam Paszke, Jake VanderPlas, Skye Wanderman-Milne, and Qiao Zhang. JAX: composable transformations of Python+NumPy programs, 2018.
Nicolas Chopin. A sequential particle filter method for static models. Biometrika, 89(3):539-552, 2002.
Adrien Corenflos, James Thornton, George Deligiannidis, and Arnaud Doucet. Differentiable particle filtering via entropy-regularized optimal transport. In International Conference on Machine Learning, 2021.
Rob Cornish, Anthony Caterini, George Deligiannidis, and Arnaud Doucet. Relaxing bijectivity constraints with continuously indexed normalising flows. In International Conference on Machine Learning, 2020.
Chenguang Dai, Jeremy Heng, Pierre E Jacob, and Nick Whiteley. An invitation to sequential Monte Carlo samplers. arXiv preprint arXiv:2007.11936, 2020.
Pim de Haan, Corrado Rainone, Miranda Cheng, and Roberto Bondesan. Scaling up machine learning for quantum field theory with equivariant continuous flows. arXiv preprint arXiv:2110.02673, 2021.
Pierre Del Moral. Mean Field Simulation for Monte Carlo Integration. CRC Press, 2013.
Pierre Del Moral, Arnaud Doucet, and Ajay Jasra. Sequential Monte Carlo samplers. Journal of the Royal Statistical Society: Series B, 68(3):411-436, 2006.
Joshua V. Dillon, Ian Langmore, Dustin Tran, Eugene Brevdo, Srinivas Vasudevan, Dave Moore, Brian Patton, Alex Alemi, Matt Hoffman, and Rif A. Saurous. TensorFlow Distributions. arXiv preprint arXiv:1711.10604, 2017.
Laurent Dinh, Jascha Sohl-Dickstein, and Samy Bengio. Density estimation using real NVP. In International Conference on Learning Representations, 2017.
Simon Duane, Anthony D Kennedy, Brian J Pendleton, and Duncan Roweth. Hybrid Monte Carlo. Physics Letters B, 195(2):216-222, 1987.
Tomas Geffner and Justin Domke. MCMC variational inference via uncorrected Hamiltonian annealing. In Advances in Neural Information Processing Systems, 2021.
Mathieu Germain, Karol Gregor, Iain Murray, and Hugo Larochelle. MADE: Masked autoencoder for distribution estimation. In International Conference on Machine Learning, 2015.
Jeremy Heng, Arnaud Doucet, and Yvo Pokern. Gibbs flow for approximate transport with applications to Bayesian computation. Journal of the Royal Statistical Society: Series B, 83(1):156-187, 2021.
Tom Hennigan, Trevor Cai, Tamara Norman, and Igor Babuschkin. Haiku: Sonnet for JAX, 2020.
Matteo Hessel, David Budden, Fabio Viola, Mihaela Rosca, Eren Sezener, and Tom Hennigan. Optax: composable gradient transformation and optimisation, in JAX, 2020.
Koji Hukushima and Yukito Iba. Population annealing and its application to a spin glass. In AIP Conference Proceedings, volume 690, pages 200-206. American Institute of Physics, 2003.
Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In International Conference on Learning Representations, 2015.
Durk P Kingma, Tim Salimans, Rafal Jozefowicz, Xi Chen, Ilya Sutskever, and Max Welling. Improved variational inference with inverse autoregressive flow. In Advances in Neural Information Processing Systems, 2016.
Tuan Anh Le, Maximilian Igl, Tom Rainforth, Tom Jin, and Frank Wood. Auto-encoding sequential Monte Carlo. In International Conference on Learning Representations, 2018.
Jun S Liu and Rong Chen. Blind deconvolution via sequential imputations. Journal of the American Statistical Association, 90(430):567-576, 1995.
Chris J Maddison, John Lawson, George Tucker, Nicolas Heess, Mohammad Norouzi, Andriy Mnih, Arnaud Doucet, and Yee Whye Teh. Filtering variational objectives. In Advances in Neural Information Processing Systems, 2017.
Christian A Naesseth, Scott W Linderman, Rajesh Ranganath, and David M Blei. Variational sequential Monte Carlo. In Artificial Intelligence and Statistics, 2018.
Radford M Neal. Annealed importance sampling. Statistics and Computing, 11(2):125-139, 2001.
George Papamakarios, Eric T Nalisnick, Danilo Jimenez Rezende, Shakir Mohamed, and Balaji Lakshminarayanan. Normalizing flows for probabilistic modeling and inference. Journal of Machine Learning Research, 22(57):1-64, 2021.
Michael K Pitt, Ralph dos Santos Silva, Paolo Giordani, and Robert Kohn. On some properties of Markov chain Monte Carlo simulation methods based on the particle filter. Journal of Econometrics, 171(2):134-151, 2012.
Danilo Jimenez Rezende and Shakir Mohamed. Variational inference with normalizing flows. In International Conference on Machine Learning, 2015.
Tim Salimans, Diederik Kingma, and Max Welling. Markov chain Monte Carlo and variational inference: Bridging the gap. In International Conference on Machine Learning, 2015.
Saifuddin Syed, Alexandre Bouchard-Côté, George Deligiannidis, and Arnaud Doucet. Non-reversible parallel tempering: a scalable highly parallel MCMC scheme. Journal of the Royal Statistical Society: Series B, 84(2):321-350, 2022.
Achille Thin, Nikita Kotelevskii, Alain Durmus, Eric Moulines, Maxim Panov, and Arnaud Doucet. Monte Carlo variational auto-encoders. In International Conference on Machine Learning, 2021.
Suriyanarayanan Vaikuntanathan and Christopher Jarzynski. Escorted free energy simulations. The Journal of Chemical Physics, 134(5):054107, 2011.
Hao Wu, Jonas Köhler, and Frank Noé. Stochastic normalizing flows. In Advances in Neural Information Processing Systems, 2020.
Guodong Zhang, Kyle Hsu, Jianing Li, Chelsea Finn, and Roger Grosse. Differentiable annealed importance sampling and the perils of gradient noise. In Advances in Neural Information Processing Systems, 2021.
Heiko Zimmermann, Hao Wu, Babak Esmaeili, and Jan-Willem van de Meent. Nested variational inference. In Advances in Neural Information Processing Systems, 2021.

# Continual Repeated Annealed Flow Transport Monte Carlo: Supplementary Material

# A. Extended proposal and target distributions

Assume the transport maps $T_l$ are fixed. We detail the extended proposal and target distributions used by Stochastic Normalizing Flows (Wu et al., 2020). These were introduced early on in (Vaikuntanathan and Jarzynski, 2011). The same target and proposal are also used by CRAFT, Annealed Flow Transport (Arbel et al., 2021) and the Gibbs flow algorithm (Heng et al., 2021) when resampling is omitted. We include these distributions here for the sake of completeness.

We sample $X_0 \sim \pi_0(\cdot)$ at $k = 0$, then use $Y_k = T_k(X_{k-1})$ followed by $X_k \sim \mathcal{K}_k(Y_k, \cdot)$ at time $k \geq 1$. To simplify presentation, we do not use measure-theoretic notation. Hence, using the notation $M_l^{\mathrm{trans}}(x, x') = \delta_{T_l(x)}(x')$ and $M_l^{\mathrm{mut}}(x, x') = \mathcal{K}_l(x, x')$, we obtain the following proposal at time $k$ after the transport step

$$
\bar{\eta}_k(x_{0:k-1}, y_{1:k}) = \pi_0(x_0) \left(\prod_{l=1}^{k-1} M_l^{\mathrm{trans}}(x_{l-1}, y_l) M_l^{\mathrm{mut}}(y_l, x_l)\right) M_k^{\mathrm{trans}}(x_{k-1}, y_k), \tag{15}
$$

and the target is

$$
\bar{\pi}_k(x_{0:k-1}, y_{1:k}) = \pi_k(y_k) L_{k-1}^{\mathrm{trans}}(y_k, x_{k-1}) \left(\prod_{l=1}^{k-1} L_{l-1}^{\mathrm{mut}}(x_l, y_l) L_{l-1}^{\mathrm{trans}}(y_l, x_{l-1})\right), \tag{16}
$$

where $L_{l-1}^{\mathrm{trans}}(x, x') = \delta_{T_l^{-1}(x)}(x')$ and $L_{l-1}^{\mathrm{mut}}(x, x') = \pi_l(x') M_l^{\mathrm{mut}}(x', x) / \pi_l(x)$. After the mutation step at time $k$, the proposal is

$$
\bar{\eta}_k(x_{0:k}, y_{1:k}) = \bar{\eta}_k(x_{0:k-1}, y_{1:k}) M_k^{\mathrm{mut}}(y_k, x_k) = \pi_0(x_0) \left(\prod_{l=1}^{k} M_l^{\mathrm{trans}}(x_{l-1}, y_l) M_l^{\mathrm{mut}}(y_l, x_l)\right) \tag{17}
$$

and the target is

$$
\bar{\pi}_k(x_{0:k}, y_{1:k}) = \pi_k(x_k) \left(\prod_{l=0}^{k-1} L_l^{\mathrm{mut}}(x_{l+1}, y_{l+1}) L_l^{\mathrm{trans}}(y_{l+1}, x_l)\right). \tag{18}
$$

Hence the incremental weight after a transport step at time $k$ is of the form

$$
\frac{\bar{\pi}_k(x_{0:k-1}, y_{1:k})}{\bar{\eta}_k(x_{0:k-1}, y_{1:k})} = \frac{\bar{\pi}_{k-1}(x_{0:k-1}, y_{1:k-1})}{\bar{\eta}_{k-1}(x_{0:k-1}, y_{1:k-1})} \underbrace{\frac{\pi_k(y_k)\, L_{k-1}^{\mathrm{trans}}(y_k, x_{k-1})}{\pi_{k-1}(y_{k-1})\, M_k^{\mathrm{trans}}(x_{k-1}, y_k)}}_{\text{incremental weight} \,=\, \frac{Z_{k-1}}{Z_k} G_{k,T_k}(x_{k-1})}, \tag{19}
$$

while after the mutation step it is of the form

$$
\frac{\bar{\pi}_k(x_{0:k}, y_{1:k})}{\bar{\eta}_k(x_{0:k}, y_{1:k})} = \frac{\bar{\pi}_k(x_{0:k-1}, y_{1:k})}{\bar{\eta}_k(x_{0:k-1}, y_{1:k})} \underbrace{\frac{\pi_k(x_k)\, L_{k-1}^{\mathrm{mut}}(x_k, y_k)}{\pi_k(y_k)\, M_k^{\mathrm{mut}}(y_k, x_k)}}_{\text{incremental weight} \,=\, 1}. \tag{20}
$$

As shown in Appendix B.2 of (Arbel et al., 2021), we can also integrate out the random variables $Y_{1:k}$ in these expressions, as we can collapse the transport step and mutation step into one single Markov kernel

$$
M_l^{\mathrm{col}}(x_{l-1}, x_l) = \int M_l^{\mathrm{trans}}(x_{l-1}, y_l) M_l^{\mathrm{mut}}(y_l, x_l) \,\mathrm{d}y_l = \int \delta_{T_l(x_{l-1})}(y_l)\, \mathcal{K}_l(y_l, x_l) \,\mathrm{d}y_l = \mathcal{K}_l(T_l(x_{l-1}), x_l). \tag{21}
$$

Similarly we can collapse the backward kernels that have been used to define the extended target distributions $\bar{\pi}_k$:

$$
L_{l-1}^{\mathrm{col}}(x_l, x_{l-1}) = \int L_{l-1}^{\mathrm{mut}}(x_l, y_l) L_{l-1}^{\mathrm{trans}}(y_l, x_{l-1}) \,\mathrm{d}y_l = \frac{\pi_l(T_l(x_{l-1})) \left|\nabla T_l(x_{l-1})\right| \mathcal{K}_l(T_l(x_{l-1}), x_l)}{\pi_l(x_l)}, \tag{22}
$$

so the collapsed proposal and target can be written as

$$
\bar{\eta}_k(x_{0:k}) = \pi_0(x_0) \prod_{l=1}^k M_l^{\mathrm{col}}(x_{l-1}, x_l), \tag{23}
$$

$$
\bar{\pi}_k(x_{0:k}) = \pi_k(x_k) \prod_{l=0}^{k-1} L_l^{\mathrm{col}}(x_{l+1}, x_l). \tag{24}
$$

For $k = K$, we write $\bar{\eta}_K = \bar{\eta}$ and $\bar{\pi}_K = \bar{\pi}$.

It follows that

$$
\begin{aligned}
\frac{Z_K\, \bar{\pi}(x_{0:K})}{\bar{\eta}(x_{0:K})} =: w_K(x_{0:K-1}) &= \frac{\gamma_K(x_K)}{\gamma_0(x_0)} \prod_{k=1}^K \frac{L_{k-1}^{\mathrm{col}}(x_k, x_{k-1})}{M_k^{\mathrm{col}}(x_{k-1}, x_k)} \qquad (25) \\
&= \frac{\gamma_K(x_K)\,\gamma_{K-1}(x_{K-1})\cdots\gamma_1(x_1)}{\gamma_0(x_0)\,\gamma_1(x_1)\cdots\gamma_{K-1}(x_{K-1})} \prod_{k=1}^K \frac{L_{k-1}^{\mathrm{col}}(x_k, x_{k-1})}{M_k^{\mathrm{col}}(x_{k-1}, x_k)} \\
&= \prod_{k=1}^K \frac{\gamma_k(x_k)\, L_{k-1}^{\mathrm{col}}(x_k, x_{k-1})}{\gamma_{k-1}(x_{k-1})\, M_k^{\mathrm{col}}(x_{k-1}, x_k)} \\
&= \prod_{k=1}^K G_k(x_{k-1}), \qquad (26)
\end{aligned}
$$

where, from (21) and (22), we obtain the so-called incremental weights
|
| 427 |
+
|
| 428 |
+
$$
|
| 429 |
+
G _ {k} \left(x _ {k - 1}\right) = \frac {\gamma_ {k} \left(x _ {k}\right) L _ {k - 1} ^ {\mathrm {col}} \left(x _ {k} , x _ {k - 1}\right)}{\gamma_ {k - 1} \left(x _ {k - 1}\right) M _ {k} ^ {\mathrm {col}} \left(x _ {k - 1} , x _ {k}\right)} = \frac {\gamma_ {k} \left(T _ {k} \left(x _ {k - 1}\right)\right)}{\gamma_ {k - 1} \left(x _ {k - 1}\right)} \left| \nabla T _ {k} \left(x _ {k - 1}\right) \right| \tag {27}
|
| 430 |
+
$$
|
| 431 |
+
|
| 432 |
+
as given in (1b). In (Wu et al., 2020), the weight $w_{K}(x_{0:K - 1})$ is instead obtained by recursively computing the product appearing on the RHS of (25); thus they formally<sup>1</sup> consider, in equation (12) of their paper, a log-weight update at iteration $k$ of the form
|
| 433 |
+
|
| 434 |
+
$$
|
| 435 |
+
\begin{array}{l} \log \left(\frac {L _ {k - 1} ^ {\mathrm {col}} (x _ {k} , x _ {k - 1})}{M _ {k} ^ {\mathrm {col}} (x _ {k - 1} , x _ {k})}\right) = \log \left(\gamma_ {k} \left(T _ {k} (x _ {k - 1})\right) \left| \nabla T _ {k} (x _ {k - 1}) \right|\right) - \log \gamma_ {k} (x _ {k}) \\ = \log \gamma_ {k} \left(x _ {k} ^ {\mathrm {SNF}}\right) - \log \gamma_ {k} \left(\hat {x} _ {k} ^ {\mathrm {SNF}}\right) \tag {28} \\ \end{array}
|
| 436 |
+
$$
|
| 437 |
+
|
| 438 |
+
where $\hat{x}_k^{\mathrm{SNF}}$ is the value of the sample after the MCMC at temperature $k$ has been applied and $x_k^{\mathrm{SNF}}$ is the value of the sample before it as described in (4).
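For concreteness, the incremental weight (27) can be computed directly with automatic differentiation. Below is a minimal JAX sketch (JAX being the library used for our experiments, see Appendix I); `T`, `log_gamma_prev` and `log_gamma_next` are hypothetical stand-ins for a transport map $T_k$ and the unnormalized log-densities $\log \gamma_{k-1}$ and $\log \gamma_k$, and the usage at the bottom is a toy illustration only.

```python
import jax
import jax.numpy as jnp

def log_incremental_weight(T, log_gamma_prev, log_gamma_next, x):
    # log G_k(x) = log gamma_k(T_k(x)) - log gamma_{k-1}(x) + log|det grad T_k(x)|,
    # cf. (27); |grad T_k| denotes the absolute Jacobian determinant.
    jac = jax.jacfwd(T)(x)
    _, logdet = jnp.linalg.slogdet(jac)
    return log_gamma_next(T(x)) - log_gamma_prev(x) + logdet

# Toy usage on R^2 with an affine map and Gaussian endpoint densities.
T = lambda x: 2.0 * x + 1.0
log_g0 = lambda x: -0.5 * jnp.sum(x ** 2)
log_g1 = lambda x: -0.5 * jnp.sum((x - 1.0) ** 2)
print(log_incremental_weight(T, log_g0, log_g1, jnp.array([0.3, -0.7])))
```

In practice the flows used in the paper expose their log-Jacobians analytically, which avoids the cubic-cost determinant above; the automatic-differentiation route is shown only to make the formula concrete.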
|
| 439 |
+
|
| 440 |
+
# B. Gradient expression for ELBO based approaches
|
| 441 |
+
|
| 442 |
+
Assume we consider the ELBO objective as in (Wu et al., 2020); that is, we are interested in maximizing, w.r.t. the NF parameters and the stochastic kernel parameters,
|
| 443 |
+
|
| 444 |
+
$$
|
| 445 |
+
\mathcal {L} = \mathbb {E} _ {X _ {0: K} \sim \bar {\eta} _ {K}} \left[ \log w _ {K} \left(X _ {0: K - 1}\right) \right], \tag {29}
|
| 446 |
+
$$
|
| 447 |
+
|
| 448 |
+
where $\bar{\eta} (x_{0:K}) = \bar{\eta}_K(x_{0:K})$ is defined in (23) and, as shown in (26), we have
|
| 449 |
+
|
| 450 |
+
$$
|
| 451 |
+
w _ {K} \left(x _ {0: K - 1}\right) = \frac {Z _ {K} \bar {\pi} _ {K} \left(x _ {0 : K}\right)}{\bar {\eta} _ {K} \left(x _ {0 : K}\right)} = \prod_ {l = 1} ^ {K} G _ {l} \left(x _ {l - 1}\right) \tag {30}
|
| 452 |
+
$$
|
| 453 |
+
|
| 454 |
+
for $\bar{\pi}_K(x_{0:K})$ defined in (24). It is thus trivial to check that $w_{K}(X_{0:K - 1})$ is an unbiased estimate of $Z_{K}$ when $X_{0:K}\sim \bar{\eta}_K$ .
|
| 455 |
+
|
| 456 |
+
Thin et al. (2021) provide a gradient estimate of the ELBO for annealed importance sampling, which corresponds to the case where $T_{l}(x) = x$ for all $l$ , when the MCMC kernels $\mathcal{K}_l$ are of the Metropolis-Hastings type and we use a reparameterized proposal
|
| 457 |
+
|
| 458 |
+
$$
|
| 459 |
+
\mathcal {K} _ {l} (x, x ^ {\prime}) = \int Q _ {l} ((x, u), x ^ {\prime}) g (u) \mathrm {d} u \tag {31}
|
| 460 |
+
$$
|
| 461 |
+
|
| 462 |
+
for
|
| 463 |
+
|
| 464 |
+
$$
|
| 465 |
+
Q _ {l} \left(\left(x, u\right), x ^ {\prime}\right) = \alpha_ {l} (x, u) \delta_ {S _ {l} (x, u)} \left(x ^ {\prime}\right) + \left\{1 - \alpha_ {l} (x, u) \right\} \delta_ {x} \left(x ^ {\prime}\right). \tag {32}
|
| 466 |
+
$$
|
| 467 |
+
|
| 468 |
+
We show here how one can directly extend these results to our setting. We will use the conventions $\alpha_l^1 (x,u) = \alpha_l(x,u)$ , $S_{l}^{1}(x,u) = S_{l}(x,u)$ , $\alpha_l^0 (x,u) = 1 - \alpha_l(x,u)$ and $S_{l}^{0}(x,u) = x$ , so that we can rewrite
|
| 469 |
+
|
| 470 |
+
$$
|
| 471 |
+
Q _ {l} ((x, u), x ^ {\prime}) = \sum_ {a _ {l} = 0} ^ {1} \alpha_ {l} ^ {a _ {l}} (x, u) \delta_ {S _ {l} ^ {a _ {l}} (x, u)} \left(x ^ {\prime}\right). \tag {33}
|
| 472 |
+
$$
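To make the reparameterized kernel (31)-(33) concrete, here is a minimal JAX sketch of one Metropolis-Hastings step written as a deterministic function of $(x, u)$ together with a Bernoulli accept variable. The random-walk proposal $S(x, u) = x + \sigma u$ and the specific acceptance rule are illustrative assumptions and are not prescribed by the text above.

```python
import jax
import jax.numpy as jnp

def mh_step_reparam(key, x, log_gamma, step_size=0.5):
    # One MH step in the form (32): u ~ g is drawn first, the proposal S(x, u)
    # and the acceptance probability alpha(x, u) are deterministic in (x, u),
    # and a Bernoulli variable selects between S(x, u) and x, cf. (33).
    key_u, key_a = jax.random.split(key)
    u = jax.random.normal(key_u, x.shape)                     # u ~ g
    proposal = x + step_size * u                              # S(x, u)
    log_alpha = jnp.minimum(0.0, log_gamma(proposal) - log_gamma(x))
    accept = jax.random.bernoulli(key_a, jnp.exp(log_alpha))  # a ~ Bernoulli(alpha)
    return jnp.where(accept, proposal, x)

x_new = mh_step_reparam(jax.random.PRNGKey(0), jnp.zeros(2),
                        lambda x: -0.5 * jnp.sum(x ** 2))
```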
|
| 473 |
+
|
| 474 |
+
By combining (21), (23), (31) and (32), we can rewrite the distribution $\bar{\eta}_K(x_{0:K})$ as
|
| 475 |
+
|
| 476 |
+
$$
|
| 477 |
+
\begin{array}{l} \bar {\eta} _ {K} (x _ {0: K}) = \pi_ {0} (x _ {0}) \prod_ {l = 1} ^ {K} \mathcal {K} _ {l} (T _ {l} (x _ {l - 1}), x _ {l}) \\ = \pi_ {0} (x _ {0}) \prod_ {l = 1} ^ {K} \int Q _ {l} ((T _ {l} (x _ {l - 1}), u _ {l}), x _ {l}) g (u _ {l}) \mathrm {d} u _ {l} \\ = \pi_ {0} (x _ {0}) \int \cdots \int \sum_ {a _ {1: K}} \prod_ {l = 1} ^ {K} g (u _ {l}) \, \alpha_ {l} ^ {a _ {l}} \left(T _ {l} \left(\Phi_ {l - 1} \left(x _ {0}, u _ {1: l - 1}, a _ {1: l - 1}\right)\right), u _ {l}\right) \delta_ {S _ {l} ^ {a _ {l}} \left(T _ {l} \left(\Phi_ {l - 1} \left(x _ {0}, u _ {1: l - 1}, a _ {1: l - 1}\right)\right), u _ {l}\right)} (x _ {l}) \, \mathrm {d} u _ {1: K} \\ = \pi_ {0} (x _ {0}) \int \cdots \int \sum_ {a _ {1: K}} g \left(u _ {1: K}\right) \beta \left(a _ {1: K} \mid x _ {0}, u _ {1: K}\right) \prod_ {l = 1} ^ {K} \delta_ {S _ {l} ^ {a _ {l}} \left(T _ {l} \left(\Phi_ {l - 1} \left(x _ {0}, u _ {1: l - 1}, a _ {1: l - 1}\right)\right), u _ {l}\right)} (x _ {l}) \, \mathrm {d} u _ {1: K} \tag {34} \\ \end{array}
|
| 478 |
+
$$
|
| 479 |
+
|
| 480 |
+
where, for a given realization of $u_{1:K}$ , we draw sequentially the Bernoulli r.v. $A_l$ with
|
| 481 |
+
|
| 482 |
+
$$
|
| 483 |
+
\mathbb {P} \left(A _ {l} = a _ {l} \mid x _ {0}, u _ {1: l - 1}, a _ {1: l - 1}\right) = \alpha_ {l} ^ {a _ {l}} \left(T _ {l} \left(x _ {l - 1}\right), u _ {l}\right) \tag {35}
|
| 484 |
+
$$
|
| 485 |
+
|
| 486 |
+
and, given $x_0, u_{1:l}, a_{1:l}$ , the state $x_l$ is given by
|
| 487 |
+
|
| 488 |
+
$$
|
| 489 |
+
x _ {l} := \Phi_ {l} \left(x _ {0}, u _ {1: l}, a _ {1: l}\right) = S _ {l} ^ {a _ {l}} \left(T _ {l} \left(x _ {l - 1}\right), u _ {l}\right) = S _ {l} ^ {a _ {l}} \left(T _ {l} \left(\Phi_ {l - 1} \left(x _ {0}, u _ {1: l - 1}, a _ {1: l - 1}\right)\right), u _ {l}\right) \tag {36}
|
| 490 |
+
$$
|
| 491 |
+
|
| 492 |
+
with $\Phi_0(x_0, u_{1:0}, a_{1:0}) \coloneqq x_0$ . Finally we used the notation $g(u_{1:K}) = \prod_{l=1}^{K} g(u_l)$ and the joint distribution of $A_{1:K}$ is denoted
|
| 493 |
+
|
| 494 |
+
$$
|
| 495 |
+
\beta \left(a _ {1: K} \mid x _ {0}, u _ {1: K}\right) = \prod_ {l = 1} ^ {K} \alpha_ {l} ^ {a _ {l}} \left(T _ {l} \left(\Phi_ {l - 1} \left(x _ {0}, u _ {1: l - 1}, a _ {1: l - 1}\right)\right), u _ {l}\right). \tag {37}
|
| 496 |
+
$$
|
| 497 |
+
|
| 498 |
+
So we can rewrite the ELBO as
|
| 499 |
+
|
| 500 |
+
$$
|
| 501 |
+
\begin{array}{l} \mathcal {L} = \mathbb {E} _ {\bar {\eta} _ {K} (x _ {0: K})} \left[ \log w _ {K} (x _ {0: K - 1}) \right] \\ = \mathbb {E} _ {\pi_ {0} \left(x _ {0}\right) g \left(u _ {1: K}\right) \beta \left(a _ {1: K} \mid x _ {0}, u _ {1: K}\right)} \left[ \log w _ {K} \left(x _ {0: K - 1}\right) \right], \tag {38} \\ \end{array}
|
| 502 |
+
$$
|
| 503 |
+
|
| 504 |
+
where we recall that the states $x_{0:K}$ are deterministic functions of $x_0, u_{1:K}, a_{1:K}$ given by (36). When taking the gradient of the ELBO w.r.t. the parameters of the NFs and/or the stochastic kernels, we have
|
| 505 |
+
|
| 506 |
+
$$
|
| 507 |
+
\nabla \mathcal {L} = \mathbb {E} _ {\pi_ {0} (x _ {0}) g (u _ {1: K}) \beta (a _ {1: K} | x _ {0}, u _ {1: K})} \left[ \nabla \log w _ {K} (x _ {0: K - 1}) + \nabla \log \beta (a _ {1: K} | x _ {0}, u _ {1: K}) \cdot \log w _ {K} (x _ {0: K - 1}) \right]. \tag {39}
|
| 508 |
+
$$
|
| 509 |
+
|
| 510 |
+
Now set $u_0 = x_0$ , $u = u_{0:K}$ , $a = a_{1:K}$ . Hence, the random variables $(A,U)$ admit a joint distribution $p$ of the form $p(a,u)\coloneqq \pi_0(u_0)g(u_{1:K})\beta (a_{1:K}|x_0,u_{1:K})$ . Moreover, define $\Phi$ as the re-parametrization that maps $(A,U)$ to the samples $X_{0:K}$ , i.e. $[\Phi (A,U)]_l = \Phi_l(U_{0:l},A_{1:l}) = X_l$ . Hence, we can express the gradient in Equation (39) as follows:
|
| 511 |
+
|
| 512 |
+
$$
|
| 513 |
+
\mathbb {E} _ {p} \left[ \nabla \log \left(w _ {K} \circ \Phi\right) + \log \left(w _ {K} \circ \Phi\right) \nabla \log p \right]. \tag {40}
|
| 514 |
+
$$
|
| 515 |
+
|
| 516 |
+
The first term on the r.h.s. corresponds to the reparametrization trick while the second term is a score term, which in general can have high variance.
|
| 517 |
+
|
| 518 |
+
The gradient estimator of (Wu et al., 2020) effectively uses an unbiased estimator of the first reparameterization term but neglects the second term altogether.
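The structure of (39) can be mimicked with a standard surrogate-loss construction: add to $\log w_K$ the term $\log \beta \cdot \mathrm{stop\_gradient}(\log w_K)$, so that differentiating the surrogate recovers both the reparameterization term and the score term. The following self-contained JAX sketch does this for a toy single-temperature instance ($K = 1$, affine transport with parameter $\theta$); all densities and kernels here are our own illustrative assumptions. Note that in this toy case $\log w_K$ does not depend on the accept variable, so the score term has zero expectation; with $K > 1$ the later incremental weights depend on earlier accept decisions and the term is genuinely non-zero.

```python
import jax
import jax.numpy as jnp

def simulate(theta, key):
    # Toy single-temperature instance of (36)-(38), all choices illustrative:
    # x0 ~ pi_0 = N(0, 1); transport T(x) = x + theta (so |grad T| = 1); one
    # reparameterized MH step targeting gamma_1(x) = exp(-(x - 1)^2 / 2).
    k0, ku, ka = jax.random.split(key, 3)
    x0 = jax.random.normal(k0)
    log_g0 = lambda x: -0.5 * x ** 2
    log_g1 = lambda x: -0.5 * (x - 1.0) ** 2
    y = x0 + theta
    log_w = log_g1(y) - log_g0(x0)                 # incremental weight, cf. (27)
    u = jax.random.normal(ku)                      # u ~ g
    prop = y + 0.5 * u                             # S(y, u)
    alpha = jnp.exp(jnp.minimum(0.0, log_g1(prop) - log_g1(y)))
    a = jax.random.bernoulli(ka, jax.lax.stop_gradient(alpha))
    alpha_c = jnp.clip(alpha, 1e-6, 1.0 - 1e-6)    # guard the logs at alpha in {0, 1}
    log_beta = jnp.where(a, jnp.log(alpha_c), jnp.log1p(-alpha_c))
    return log_w, log_beta

def elbo_surrogate(theta, key):
    # Differentiating the first term gives the reparameterization part of (39);
    # the second contributes log w_K * grad log beta, the score term that the
    # estimator of (Wu et al., 2020) neglects.
    log_w, log_beta = simulate(theta, key)
    return log_w + log_beta * jax.lax.stop_gradient(log_w)

print(jax.grad(elbo_surrogate)(0.3, jax.random.PRNGKey(0)))
```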
|
| 519 |
+
|
| 520 |
+
In the particular case where the flows transport exactly between each temperature, i.e. $T_{l}^{\#}\pi_{l - 1} = \pi_{l}$ , the importance weight $w_{K}(x_{0:K - 1})$ has the constant value $Z_{K}$ . Hence, the second term in the expectation is a constant multiplied by a score function and consequently it vanishes:
|
| 521 |
+
|
| 522 |
+
$$
|
| 523 |
+
\mathbb {E} _ {p} \left[ \log \left(w _ {K} \circ \Phi\right) \nabla \log p \right] \propto \mathbb {E} _ {p} \left[ \nabla \log p \right] = \int \nabla p = 0. \tag {41}
|
| 524 |
+
$$
|
| 525 |
+
|
| 526 |
+
Moreover, since the ELBO is maximized when using optimal flows, the gradient $\nabla \mathcal{L}$ must vanish, which directly implies that the first term must also vanish. Therefore, despite the missing term in the gradient, the SNF learning rule has the correct fixed point in expectation.
|
| 527 |
+
|
| 528 |
+
# C. Particle Interpretation of CRAFT
|
| 529 |
+
|
| 530 |
+
CRAFT attempts to minimize w.r.t. $\theta$ the KL divergence $\mathcal{KL}(T_k^\# (\theta)\pi_k||\pi_{k + 1}) = \mathcal{KL}(\pi_k||(T_k^{-1}(\theta))^{\#}\pi_{k + 1})$ . We have shown in Section 3.2 that we can re-interpret the criterion minimized by CRAFT at time $k$ as the KL divergence between the expectation of the random measure approximating $\pi_{k}$ and the pullback of $\pi_{k + 1}$ by $T_{k}(\theta)$ . This was established in the case where the random measure is obtained using importance sampling. We show here that it is also valid for the case considered by CRAFT, where it arises from an SMC-NF algorithm.
|
| 531 |
+
|
| 532 |
+
For the sake of simplicity, we assume resampling is performed at every time step. In this case, the particles generated by the SMC-NF algorithm are distributed according to
|
| 533 |
+
|
| 534 |
+
$$
|
| 535 |
+
\bar {q} _ {k} \left(\mathrm {d} x _ {0: k} ^ {1: N}, \mathrm {d} y _ {1: k} ^ {1: N}\right) = \prod_ {i = 1} ^ {N} \pi_ {0} \left(\mathrm {d} x _ {0} ^ {i}\right) \prod_ {l = 1} ^ {k} \left[ \prod_ {i = 1} ^ {N} \delta_ {T _ {l} \left(x _ {l - 1} ^ {i}\right)} (\mathrm {d} y _ {l} ^ {i}) \prod_ {i = 1} ^ {N} W _ {l} ^ {a _ {l - 1} ^ {i}} \mathcal {K} _ {l} \left(y _ {l} ^ {a _ {l - 1} ^ {i}}, \mathrm {d} x _ {l} ^ {i}\right) \right], \tag {42}
|
| 536 |
+
$$
|
| 537 |
+
|
| 538 |
+
where $a_{l-1}^i \in \{1, \dots, N\}$ is the ancestral index of particle $x_l^i$ . Using random particles arising from the SMC-NF algorithm, the distribution $\pi_k$ is then approximated by the random empirical measure
|
| 539 |
+
|
| 540 |
+
$$
|
| 541 |
+
\pi_ {k} ^ {N} (\mathrm {d} x) = \frac {1}{N} \sum_ {i = 1} ^ {N} \delta_ {X _ {k} ^ {i}} (\mathrm {d} x) \tag {43}
|
| 542 |
+
$$
|
| 543 |
+
|
| 544 |
+
and its expectation w.r.t. (42) is denoted
|
| 545 |
+
|
| 546 |
+
$$
|
| 547 |
+
\hat {\pi} _ {k} ^ {N} (\mathrm {d} x) = \mathbb {E} \left[ \pi_ {k} ^ {N} (\mathrm {d} x) \right]. \tag {44}
|
| 548 |
+
$$
|
| 549 |
+
|
| 550 |
+
In this case, we have
|
| 551 |
+
|
| 552 |
+
$$
|
| 553 |
+
\begin{array}{l} \mathcal {KL} \left(\hat {\pi} _ {k} ^ {N} | | \left(T _ {k} ^ {- 1} (\theta)\right) ^ {\#} \pi_ {k + 1}\right) = \mathbb {E} _ {X \sim \hat {\pi} _ {k} ^ {N}} \left[ \log \left(\frac {\hat {\pi} _ {k} ^ {N} (X)}{\left(T _ {k} ^ {- 1} (\theta)\right) ^ {\#} \pi_ {k + 1} (X)}\right) \right] \\ = \mathbb {E} _ {(X _ {0: k} ^ {1: N}, Y _ {1: k} ^ {1: N}) \sim \bar {q} _ {k}, X \sim \pi_ {k} ^ {N}} \left[ \log \left(\frac {\hat {\pi} _ {k} ^ {N} (X)}{\left(T _ {k} ^ {- 1} (\theta)\right) ^ {\#} \pi_ {k + 1} (X)}\right) \right] \\ = \mathbb {E} _ {\left(X _ {0: k} ^ {1: N}, Y _ {1: k} ^ {1: N}\right) \sim \bar {q} _ {k}} \left[ \mathbb {E} _ {X \sim \pi_ {k} ^ {N}} \left[ \log \left(\frac {\hat {\pi} _ {k} ^ {N} (X)}{\left(T _ {k} ^ {- 1} (\theta)\right) ^ {\#} \pi_ {k + 1} (X)}\right) \right] \right]. \tag {45} \\ \end{array}
|
| 554 |
+
$$
|
| 555 |
+
|
| 556 |
+
So an unbiased gradient estimate w.r.t. $\theta$ of $\mathcal{KL}(\hat{\pi}_k^N||(T_k^{-1}(\theta))^{\#}\pi_{k + 1})$ is given by
|
| 557 |
+
|
| 558 |
+
$$
|
| 559 |
+
- \mathbb {E} _ {X \sim \pi_ {k} ^ {N}} \left[ \nabla_ {\theta} \log \left(\left(T _ {k} ^ {- 1} (\theta)\right) ^ {\#} \pi_ {k + 1} (X)\right) \right], \tag {46}
|
| 560 |
+
$$
|
| 561 |
+
|
| 562 |
+
which is the gradient used by CRAFT to update the parameters of $T_{k}(\theta)$ .
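As a sketch, the gradient (46) is what one obtains by differentiating a Monte Carlo estimate of the KL, using the change-of-variables identity $\log (T_k^{-1}(\theta))^{\#}\pi_{k+1}(x) = \log \pi_{k+1}(T_k(\theta)(x)) + \log |\nabla T_k(\theta)(x)|$. The diagonal affine map in the JAX snippet below is purely illustrative, and `log_gamma_next` is a hypothetical callable for $\log \gamma_{k+1}$; the normalizing constant $Z_{k+1}$ does not depend on $\theta$ and so drops out of the gradient.

```python
import jax
import jax.numpy as jnp

def craft_loss(theta, particles, log_gamma_next):
    # Monte Carlo estimate of KL(pi_k^N || (T_k^{-1}(theta))^# pi_{k+1}), up to a
    # theta-independent constant; jax.grad of this reproduces (46).
    def transport(x):
        shift, log_scale = theta
        return x * jnp.exp(log_scale) + shift   # illustrative diagonal affine flow

    def neg_log_pullback(x):
        logdet = jnp.sum(theta[1])              # log|det| of the diagonal affine map
        return -(log_gamma_next(transport(x)) + logdet)

    return jnp.mean(jax.vmap(neg_log_pullback)(particles))

craft_grad = jax.grad(craft_loss)  # the gradient used to update T_k(theta)
g = craft_grad((jnp.zeros(2), jnp.zeros(2)), jnp.ones((100, 2)),
               lambda y: -0.5 * jnp.sum(y ** 2))
```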
|
| 563 |
+
|
| 564 |
+
# D. Particle MCMC
|
| 565 |
+
|
| 566 |
+
We describe here the particle independent Metropolis-Hastings algorithm (PIMH) introduced in (Andrieu et al., 2010). This is an MCMC algorithm using as an independent proposal an importance sampling or SMC-type algorithm such as an SMC combined with NFs learned by CRAFT (see Algorithm 3). It generates a Markov chain $(\pi_K^N(j), Z_K^N(j))_{j \geq 1}$ where $\pi_K^N(j)(\mathrm{d}x) = \sum_{i=1}^{N} W_K^i(j) \delta_{X_K^i(j)}(\mathrm{d}x)$ such that for any test function $f$
|
| 567 |
+
|
| 568 |
+
$$
|
| 569 |
+
\frac {1}{J} \sum_ {j = 1} ^ {J} \mathbb {E} _ {X \sim \pi_ {K} ^ {N} (j)} [ f (X) ] \rightarrow \mathbb {E} _ {X \sim \pi_ {K}} [ f (X) ], \quad \text {where} \quad \mathbb {E} _ {X \sim \pi_ {K} ^ {N} (j)} [ f (X) ] := \sum_ {i = 1} ^ {N} W _ {K} ^ {i} (j) f \left(X _ {K} ^ {i} (j)\right), \tag {47}
|
| 570 |
+
$$
|
| 571 |
+
|
| 572 |
+
as the number $J$ of MCMC iterations goes to infinity. This is described in detail in Algorithm 4.
|
| 573 |
+
|
| 574 |
+
Algorithm 4 Particle Independent Metropolis-Hastings step $P((\pi_K^N (j),Z_K^N (j)),(\cdot ,\cdot))$
|
| 575 |
+
|
| 576 |
+
1: Input: Current approximations $(\pi_K^N (j),Z_K^N (j))$ to $(\pi_{K},Z_{K})$
|
| 577 |
+
2: Output: New approximations $(\pi_K^N (j + 1),Z_K^N (j + 1))$ to $(\pi_{K},Z_{K})$
|
| 578 |
+
3: Run an SMC algorithm to obtain $(\pi_K^{\star ,N},Z_K^{\star ,N})$ approximating $(\pi_K,Z_K)$ (such as Algorithm 3).
|
| 579 |
+
4: Set $(\pi_K^N (j + 1),Z_K^N (j + 1)) = (\pi_K^{\star ,N},Z_K^{\star ,N})$ with probability $\alpha (Z_K^N (j),Z_K^{\star ,N})\coloneqq \min \left\{1,\frac{Z_K^{\star,N}}{Z_K^N(j)}\right\}$
|
| 580 |
+
5: Otherwise set $(\pi_K^N (j + 1),Z_K^N (j + 1)) = (\pi_K^N (j),Z_K^N (j))$
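A minimal plain-Python sketch of this accept/reject step follows; `run_smc` is a hypothetical callable returning a fresh particle approximation together with its normalizing constant estimate (e.g. a CRAFT-learned SMC sampler), and $Z$ is stored in log space to avoid overflow.

```python
import math
import random

def pimh_step(current, run_smc):
    # One step of Algorithm 4: propose a fresh SMC run and accept it with
    # probability min{1, Z_star / Z}, here evaluated in log space.
    pi_N, log_Z = current
    pi_star, log_Z_star = run_smc()
    if math.log(random.random()) < min(0.0, log_Z_star - log_Z):
        return pi_star, log_Z_star    # accept the proposed approximation
    return pi_N, log_Z                # otherwise keep the current one
```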
|
| 581 |
+
|
| 582 |
+
For a fixed computational effort, one could either run many MCMC iterations with a small $N$ or few MCMC iterations with a large $N$ . Under regularity conditions, it was shown in (Pitt et al., 2012) that $N$ should be selected such that the variance of the estimate of the log-normalizing constant is close to 1 in order to minimize the asymptotic variance of the estimator (47).
|
| 583 |
+
|
| 584 |
+
# E. Learning model parameters using CRAFT
|
| 585 |
+
|
| 586 |
+
We do not consider in the paper examples where $\pi_K$ depends on some parameters $\phi$ that one wants to learn. We explain here how this could be addressed.
|
| 587 |
+
|
| 588 |
+
Consider the following target
|
| 589 |
+
|
| 590 |
+
$$
|
| 591 |
+
\pi_ {K} ^ {\phi} (x) = \frac {\gamma_ {K} ^ {\phi} (x)}{Z _ {K} ^ {\phi}}. \tag {48}
|
| 592 |
+
$$
|
| 593 |
+
|
| 594 |
+
To estimate $\phi$ , we propose to learn $\phi$ by maximizing the log-normalizing constant/evidence
|
| 595 |
+
|
| 596 |
+
$$
|
| 597 |
+
\underset {\phi} {\arg \max } \log Z _ {K} ^ {\phi}. \tag {49}
|
| 598 |
+
$$
|
| 599 |
+
|
| 600 |
+
The gradient of this objective w.r.t. $\phi$ can be approximated as follows
|
| 601 |
+
|
| 602 |
+
$$
|
| 603 |
+
\nabla \log Z _ {K} ^ {\phi} = \mathbb {E} _ {X \sim \pi_ {K} ^ {\phi}} [ \nabla \log \gamma_ {K} ^ {\phi} (X) ] \approx \mathbb {E} _ {X \sim \pi_ {K} ^ {N, \phi}} [ \nabla \log \gamma_ {K} ^ {\phi} (X) ], \tag {50}
|
| 604 |
+
$$
|
| 605 |
+
|
| 606 |
+
where $\pi_K^{N,\phi}$ is a particle approximation of $\pi_K^\phi$ . This approximation can be obtained using an SMC/normalizing flow sampler learned with CRAFT.
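A minimal JAX sketch of the estimator on the right-hand side of (50), assuming equally weighted particles after resampling and a hypothetical callable `log_gamma(phi, x)` for $\log \gamma_K^{\phi}(x)$; with weighted particles one would take the weighted average instead.

```python
import jax
import jax.numpy as jnp

def grad_log_Z(phi, particles, log_gamma):
    # (50): grad_phi log Z_K^phi is estimated by averaging grad_phi log gamma(phi, X)
    # over particles X drawn from the CRAFT approximation pi_K^{N, phi}.
    per_particle = jax.vmap(jax.grad(log_gamma), in_axes=(None, 0))(phi, particles)
    return jnp.mean(per_particle, axis=0)

# Toy usage: gamma^phi(x) = exp(-(x - phi)^2 / 2) in one dimension.
log_gamma = lambda phi, x: -0.5 * (x - phi) ** 2
print(grad_log_Z(0.0, jnp.array([0.1, -0.4, 0.8]), log_gamma))
```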
|
| 607 |
+
|
| 608 |
+
# F. Further comparison of CRAFT and AFT
|
| 609 |
+
|
| 610 |
+
In this section we investigate the effect of reducing the number of MCMC iterations in CRAFT and AFT. As a benchmark we use the log Gaussian Cox process with a large lattice discretization of $40 \times 40 = 1600$ points. To make the example harder we reduce the number of HMC iterations per temperature from 10 to 2. The rest of the experimental setup is kept the same. The goal here is to learn a fast flow-augmented sampler at test time with reasonable training time. Since the diagonal affine flow used constitutes a negligible overhead, the change corresponds to a 5-fold reduction in test compute time relative to the previous work.
|
| 611 |
+
|
| 612 |
+
Figure 4 shows the results. We see that for 10 and 30 transitions AFT struggles to learn a good flow. This is because the expectation of the objective required in the greedy training is not well estimated with fewer MCMC iterations. CRAFT has the same difficulty at the beginning of optimization, but due to the repeated nature of the optimization it is able to recover. Having seen the solution found by CRAFT, one might reasonably ask if we could use different numbers of HMC iterations at train and test time for AFT, though this is not suggested in the original work of (Arbel et al., 2021). After all, we know that with 10 iterations AFT performs well on the Cox process task, which suggests we could use that learnt flow with fewer HMC samples at test time. This is indeed the case. Training with extra MCMC iterations restores the behaviour of AFT to be similar to that of CRAFT in this case. However, this comes at the expense of introducing even more hyperparameters to AFT that require tuning. Relative to CRAFT, a user of AFT has to correctly tune the behaviour of three sets of particles (train/validation/test) and manage multiple separate optimization loops (one for each temperature). The extra modification of allowing different numbers of MCMC iterations is then a further complexity.
|
| 613 |
+
|
| 614 |
+
Note additionally that AFT is subject to the degradation in performance with batch size relative to CRAFT described in Section 4.1; we regard this case as the most direct demonstration of how CRAFT solves the sample replenishment problem in AFT.
|
| 615 |
+
|
| 616 |
+
# G. Distributed and parallel computation
|
| 617 |
+
|
| 618 |
+
In this section we discuss the particular challenges we envisage in scaling the algorithm to large-scale compute infrastructure, along with potential remedies.
|
| 619 |
+
|
| 620 |
+
We have described the algorithm as a sequential algorithm but there is considerable scope for running it asynchronously in parallel. Each temperature can be considered to correspond to a node in a chain graph. Along the edges of the graph messages are passed. In the most direct implementation this corresponds to the particles and the weights. There is no need to run for a full pass along the graph before starting the next one. In some compute clusters communication locality is also important so it is helpful that the message passing is sparse and local.
|
| 621 |
+
|
| 622 |
+
An alternative to passing the particles and weights is to cycle the temperature at each node and pass gradient updates instead. The temperatures at each node can be offset by one so that the gradients are always received at the right time. Such a prescription is inspired by the distributed Algorithm 5 of (Syed et al., 2022). For CRAFT, this method will reduce communication overhead between nodes in the case where the gradient updates require less memory than passing particles and weights. Although the gradient updates for neural networks can in general be large, incorporating symmetries as in Sections H and 4.3 or more generally adding domain knowledge can reduce network sizes substantially.
|
| 623 |
+
|
| 624 |
+
Another consideration is the case where only a few particles will fit onto a worker, so that it is desirable to parallelize a single iteration across multiple workers. The resampling step means that the algorithm is not batch parallel. Therefore resampling could introduce a significant communication overhead between the parallel workers. One possible remedy is only to resample within a worker.
|
| 625 |
+
|
| 626 |
+

|
| 627 |
+
Figure 4. Comparison of CRAFT, AFT, SMC and VI on the pines task from (Arbel et al., 2021) with a reduced number of HMC iterations.
|
| 628 |
+
|
| 629 |
+
# H. Incorporating symmetries
|
| 630 |
+
|
| 631 |
+
In applications to high-dimensional targets, the target density often exhibits symmetries. Incorporating such symmetries into normalizing flows can significantly improve their performance and efficiency.
|
| 632 |
+
|
| 633 |
+
Consider a case where we have a group $\mathcal{G}$ with elements $g$ and corresponding group representation members $D_g$ , and a target density $\pi_T$ that is symmetric under the action of the group:
|
| 634 |
+
|
| 635 |
+
$$
|
| 636 |
+
\pi_ {T} \left(D _ {g} x\right) = \pi_ {T} (x) \quad \forall g \in \mathcal {G} \tag {51}
|
| 637 |
+
$$
|
| 638 |
+
|
| 639 |
+
We also take a base density $\pi_B$ that is symmetric under the group, and we choose a normalizing flow that is equivariant under the action of the group, so that:
|
| 640 |
+
|
| 641 |
+
$$
|
| 642 |
+
T \left(D _ {g} x\right) = D _ {g} T (x) \quad \forall g \in \mathcal {G} \tag {52}
|
| 643 |
+
$$
|
| 644 |
+
|
| 645 |
+
When we push forward the base density, we end up with a distribution $T_{\#} \pi_B$ that is invariant under the action of the group:
|
| 646 |
+
|
| 647 |
+
$$
|
| 648 |
+
T _ {\#} \pi_ {B} (D _ {g} x) = T _ {\#} \pi_ {B} (x) \quad \forall g \in \mathcal {G} \tag {53}
|
| 649 |
+
$$
|
| 650 |
+
|
| 651 |
+
In standard normalizing flow modelling the symmetric pushforward distribution is then tuned to match the symmetric target distribution using reverse KL-divergence $\mathrm{KL}[T_{\#}\pi_B||\pi_T]$ .
|
| 652 |
+
|
| 653 |
+
It is important to extend such a treatment of symmetries to the case of CRAFT. To this end we analyze the symmetry properties of a sequence of densities with a geometric annealing schedule. We consider an initial normalized density $\pi_0$
|
| 654 |
+
|
| 655 |
+
and unnormalized final density $\gamma_{1}$ . The unnormalized density of the geometric annealing schedule takes the form:
|
| 656 |
+
|
| 657 |
+
$$
|
| 658 |
+
\log \gamma_ {\beta} (x) = (1 - \beta) \log \pi_ {0} (x) + \beta \log \gamma_ {1} (x) \tag {54}
|
| 659 |
+
$$
|
| 660 |
+
|
| 661 |
+
where the schedule time $\beta \in [0,1]$ . Clearly if both the initial density and final density are symmetric under the action of the group then so is $\gamma_{\beta}(x)$ and its normalized version $\pi_{\beta}$ .
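In code, the geometric path (54) is a one-liner; the sketch below assumes hypothetical callables for the two endpoint log-densities.

```python
def log_gamma_beta(beta, log_pi0, log_gamma1, x):
    # Geometric annealing path (54); symmetric whenever both endpoints are.
    return (1.0 - beta) * log_pi0(x) + beta * log_gamma1(x)
```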
|
| 662 |
+
|
| 663 |
+
In CRAFT we use a sum of KL divergences between the pushforward of an initial annealed density and the next target density, $\mathrm{KL}[T_{\#} \pi_{\beta}||\pi_{\beta '}]$ . In terms of symmetry, there is a direct analogy to the case of standard variational inference with normalizing flows with which we started the discussion. In particular, $\pi_{\beta}$ now takes the role of the symmetric base density and $\pi_{\beta '}$ takes the role of the target. The symmetry requirement on the normalizing flow $T$ is thus the same: it needs to be equivariant under the action of the group so that the pushforward $T_{\#} \pi_{\beta}$ is symmetric in a manner that matches its target.
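As a quick numerical illustration of the equivariance requirement (52), consider the toy radial map $T(x) = x\, s(\lVert x \rVert^2)$, which commutes with every orthogonal $D_g$ since $\lVert D_g x \rVert = \lVert x \rVert$; this construction is our own illustration, not a flow used in the paper. Combined with an invariant base density, (53) then follows from the change-of-variables formula because $|\nabla T|$ is itself $\mathcal{G}$-invariant.

```python
import jax.numpy as jnp

def T(x):
    # Toy equivariant map: x -> x * s(||x||^2) commutes with any orthogonal D_g.
    return x * (1.0 + jnp.tanh(jnp.sum(x ** 2)))

theta = 0.7
D_g = jnp.array([[jnp.cos(theta), -jnp.sin(theta)],
                 [jnp.sin(theta),  jnp.cos(theta)]])   # a rotation in O(2)
x = jnp.array([0.4, -1.2])
print(jnp.allclose(T(D_g @ x), D_g @ T(x), atol=1e-6))  # equivariance (52) holds
```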
|
| 664 |
+
|
| 665 |
+
# I. Additional experiment details
|
| 666 |
+
|
| 667 |
+
All experiments used a geometric (log-linear) annealing schedule. The initial distribution was always a standard multivariate normal. All experiments used HMC as the Markov kernel, which was tuned to get a reasonable acceptance rate based on preliminary runs of SMC. Normalizing flows were always initialized to the identity flow. Wherever a stochastic gradient optimizer was required we used the Adam optimizer (Kingma and Ba, 2015).
|
| 668 |
+
|
| 669 |
+
In terms of software dependencies for our code we used Python, JAX (Bradbury et al., 2018), Optax (Hessel et al., 2020), Haiku (Hennigan et al., 2020), and the TensorFlow probability JAX 'substrate' (Dillon et al., 2017).
|
| 670 |
+
|
| 671 |
+
All experiments were carried out using a single NVIDIA V100 GPU. We used sufficient CPUs and CPU RAM such that these were not the bottleneck.
|
| 672 |
+
|
| 673 |
+
# I.1. Comparing CRAFT and AFT batch size performance
|
| 674 |
+
|
| 675 |
+
Here we give more details of the experiment described in Section 4.1. We used the original software for AFT, which is available at https://github.com/deepmind/annealed_flow_transport. We used practical AFT (Algorithm 7), which is the version using early stopping on a set of validation particles and a 'hold out' test set.
|
| 676 |
+
|
| 677 |
+
We used the same MCMC method for both approaches, which was pre-tuned full Metropolis adjusted Hamiltonian Monte Carlo (HMC) with 10 leapfrog steps per iteration. For CRAFT we used 10 HMC steps per temperature and for AFT we used 1000 HMC steps per temperature, meaning that AFT had 100 times more HMC steps per temperature. The HMC step sizes were linearly interpolated, with 0 corresponding to the initial distribution and 1 corresponding to the final distribution; the interpolation points were $[0, 0.25, 0.5, 1]$ and the corresponding step sizes were $[0.3, 0.3, 0.2, 0.2]$ .
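The step-size schedule just described amounts to a one-dimensional linear interpolation; a minimal sketch:

```python
import jax.numpy as jnp

def hmc_step_size(beta):
    # Linearly interpolate the HMC step size over the annealing schedule,
    # using the interpolation points and step sizes quoted above.
    return jnp.interp(beta, jnp.array([0.0, 0.25, 0.5, 1.0]),
                      jnp.array([0.3, 0.3, 0.2, 0.2]))

print(hmc_step_size(0.75))  # -> 0.2
```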
|
| 678 |
+
|
| 679 |
+
At test time AFT and CRAFT used the same number of particles for each experiment. To make it fair at train time, the total CRAFT particle budget was divided into two halves for AFT: one half was used for the training particles and the other half was used for the validation particles. For each experiment the total number of train particles for either method was the same as the number used at test time, and this is the number shown in Figure 1.
|
| 680 |
+
|
| 681 |
+
In terms of optimization both methods were well converged. AFT used an optimization step size of 1e-2, and 500 optimization iterations per temperature. CRAFT used 200 total optimization iterations, and a step size of 5e-2 which was reduced down to 1e-2 after 100 iterations.
|
| 682 |
+
|
| 683 |
+
# I.2. Comparing CRAFT and Stochastic Normalizing Flows
|
| 684 |
+
|
| 685 |
+
To the best of our knowledge, the HMC-based Stochastic Normalizing Flow described in (Wu et al., 2020) was not implemented or experimented with in that work. We implemented it and found in preliminary experiments that it outperformed the random walk Metropolis-Hastings used in the original work. It was also the most commensurate with our own HMC usage. Therefore we used the HMC SNF for the comparisons in this work.
|
| 686 |
+
|
| 687 |
+
In the spirit of the original SNF paper, we performed preliminary experiments with a SNF ELBO that incorporated resampling in the forward computation. For the reverse computation we neglected the resampling contribution just as is done for the score term arising from the discrete Metropolis-Hastings correction. In other words we proceeded as if the forward computation was compatible with the reparameterization gradient although this does not in fact correspond to an unbiased
|
| 688 |
+
|
| 689 |
+
estimate of the gradient. These preliminary experiments indicated that this variant of SNF became unstable and challenging to train. The fact that CRAFT can cope with resampling is a benefit of the approach, and is expected from the form of the CRAFT training objective.
|
| 690 |
+
|
| 691 |
+
We observed that variational learning of MCMC step sizes is unstable in SNF, so we followed (Wu et al., 2020) who use a fixed step size during variational training for the majority of their experiments. Instead of a learnt step size, we tuned the step sizes separately in advance so that the corresponding MCMC would have a reasonable acceptance rate just as we do for CRAFT. Consequently we used the same Markov kernels for both CRAFT and SNFs.
|
| 692 |
+
|
| 693 |
+
For the SNF software, we implemented it in JAX using similar modules to those of the CRAFT implementation. We verified this JAX implementation against the original publicly available SNF implementation at https://github.com/noegroup/stochastic_normalizing_flows. A benefit of using the same basic modules as the CRAFT implementation is that the timed comparison is much less subject to unwanted differences arising from the different software libraries used. We confirmed using a Python execution profiler that both the CRAFT and SNF code were spending time on the expected core algorithmic computations and not on unexpected code paths. We observed some variability in the CRAFT and SNF step times.
|
| 694 |
+
|
| 695 |
+
The JAX SNF implementation exploits the connection between SNFs and Annealed Importance Sampling with normalizing flows. This equivalence can be easily seen from the form of the overall unnormalized weights in each case (Appendix A). The gradient dynamics are unchanged by using this representation of the forward computation.
|
| 696 |
+
|
| 697 |
+
Relative to CRAFT we found that SNFs used significantly more memory in our experiments. This is due to the backward pass of the SNFs where gradients are passed through the whole forward computation whereas the gradients are local in CRAFT. To the benefit of SNFs, we reduced the batch size of both CRAFT and SNFs in the relevant experiments so that the SNF training would still fit on GPU and that the batch sizes would be comparable for the two algorithms.
|
| 698 |
+
|
| 699 |
+
The normalizing flow used for the VAE experiment was an affine Inverse Autoregressive Flow. Similar to (Arbel et al., 2021), we use a masked network (Germain et al., 2015) to achieve the autoregressive dependency structure. The masked network had two hidden layers, each of which would have 150 hidden units if unmasked. The final weights and biases were initialized to give an identity flow. We used a leaky ReLU nonlinearity. Since the flows have high capacity in this case, we only needed to use 3 temperatures for both methods. Since the cost of these inverse autoregressive flows is quadratic in the dimension, they are prohibitively expensive for systems of higher dimensionality. We used 2 HMC steps per transition with 10 leapfrog steps in both cases. Both CRAFT and SNF had a batch size of 100.
|
| 700 |
+
|
| 701 |
+
For the LGCP SNF/CRAFT comparison we used 10 transitions in both cases. We used 1 HMC step per transition with 10 leapfrog steps in both cases. Both CRAFT and SNF had a batch size of 2000.
|
| 702 |
+
|
| 703 |
+
# I.3. CRAFT based Particle MCMC for lattice $\phi^4$ theory
|
| 704 |
+
|
| 705 |
+
Those readers who are less familiar with the background physics can gain intuition from the functional form of equation (14). The terms over neighbouring lattice sites promote spatial correlation; the other terms each involve only a single lattice site.
|
| 706 |
+
|
| 707 |
+
The expectations we use to evaluate the algorithms are physically motivated observables described by (Albergo et al., 2019). In the main text we use the two point susceptibility. In Figure 5 we show that similar results hold for the Ising energy density. The chains that were averaged to produce the expectations in Figures 3 and 5 are shown in Figures 6 and 7 respectively. The errors shown in the average value plots are computed by subtracting off the gold standard value.
|
| 708 |
+
|
| 709 |
+
We note that de Haan et al. (2021) incorporate additional symmetries of the target on top of the lattice translation symmetry we incorporate. This is at the expense of introducing numerical integration into the variational inference with normalizing flows. The convolutional affine coupling layers we use do not require such numerical integration, and are known to significantly outperform fully connected affine coupling layers in this context (Albergo et al., 2021).
|
| 710 |
+
|
| 711 |
+
The parameters we consider for the theory are based on those from (Albergo et al., 2019). In particular, we take the hardest parameter set E5 and make it more difficult by reducing $m^2$ from -4 to -4.75. As noted by (Albergo et al., 2019), the reason this makes the problem more challenging is it increases the gap between the positive and negative modes. The resulting parameters we use are therefore the following: Lattice size $14 \times 14$ ; $\lambda = 5.1$ and $m^2 = -4.75$ .
|
| 712 |
+
|
| 713 |
+
In these $\phi^4$ experiments, upon preliminary investigation of computation time using a Python execution profiler, we found
|
| 714 |
+
|
| 715 |
+

|
| 716 |
+
|
| 717 |
+

|
| 718 |
+
|
| 719 |
+

|
| 720 |
+
|
| 721 |
+

|
| 722 |
+
|
| 723 |
+

|
| 724 |
+
Figure 5. Timed comparison of MCMC methods for the $\phi^4$ example, based on fifteen repeats. CRAFT, SMC and VI serve as proposal mechanisms for particle MCMC. HMC is applied directly to the target. Note HMC never reaches the detailed level of error in the top row.
|
| 725 |
+
|
| 726 |
+

|
| 727 |
+
|
| 728 |
+

|
| 729 |
+
|
| 730 |
+

|
| 731 |
+
|
| 732 |
+

|
| 733 |
+
|
| 734 |
+

|
| 735 |
+
Figure 6. Five minutes of the raw chain used to compute the expectation averages for the SMC and CRAFT two point susceptibility. The estimated gold standard value is shown as the black dashed line. SMC and CRAFT are displayed because they are the two best algorithms. VI and HMC have much larger errors.
|
| 736 |
+
|
| 737 |
+
that time was sometimes being spent in parts of the code that might not be expected from algorithmic considerations alone. In particular, for fast samplers like VI, we found that storing results and computing expectations were an (unoptimized) bottleneck. To make timed comparisons fairer, we therefore performed separate timing estimation on runs that executed only the core sampling code, without computing expectations or storing results. These adjusted timing estimates are what is
|
| 738 |
+
|
| 739 |
+

|
| 740 |
+
|
| 741 |
+

|
| 742 |
+
Figure 7. Five minutes of the raw chain used to compute the expectation averages for the SMC and CRAFT Ising energy density. The estimated gold standard value is shown as the black dashed line. SMC and CRAFT are displayed because they are the two best algorithms. VI and HMC have much larger errors.
|
| 743 |
+
|
| 744 |
+
reported in our results.
|
| 745 |
+
|
| 746 |
+
The gold standard values used to compute the error for each reported expectation are obtained using SMC. We verified in cases of simple known expectations that SMC and CRAFT gave the correct answers whereas HMC and VI gave large errors, just as for the physically motivated observables reported.
|
| 747 |
+
|
| 748 |
+
The normalizing flow used a convolutional affine coupling layer (Dinh et al., 2017) with periodic boundary conditions. We used alternating checkerboard masking patterns. The convolution kernels were of size $3 \times 3$ . The conditioner of each flow had one hidden convolutional layer with 10 channels. The final weights and the biases of the convolution were initialized to give an identity flow. We used a ReLU nonlinearity.
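Below is a stripped-down JAX sketch of such a layer: a checkerboard mask, a periodic $3 \times 3$ convolution implemented with `jnp.roll`, and the masked affine update with its log-Jacobian. It uses a single convolution rather than the one-hidden-layer conditioner described above, so it illustrates the structure rather than reproducing our exact architecture; note zero kernels give the identity flow, matching the initialization just described.

```python
import jax.numpy as jnp

def checkerboard(shape, parity):
    # Checkerboard mask: 1 on sites whose coordinate sum has the given parity.
    ii, jj = jnp.meshgrid(jnp.arange(shape[0]), jnp.arange(shape[1]), indexing="ij")
    return ((ii + jj) % 2 == parity).astype(jnp.float32)

def periodic_conv3x3(x, kernel):
    # 3x3 convolution with periodic (toroidal) boundaries via jnp.roll.
    out = jnp.zeros_like(x)
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            out = out + kernel[di + 1, dj + 1] * jnp.roll(x, (di, dj), axis=(0, 1))
    return out

def coupling_forward(x, kernel_s, kernel_t, parity):
    # Affine coupling with checkerboard masking: sites of one parity are
    # updated conditioned on the frozen sites of the other parity.
    mask = checkerboard(x.shape, parity)
    frozen = x * mask
    s = jnp.tanh(periodic_conv3x3(frozen, kernel_s)) * (1.0 - mask)  # log-scale
    t = periodic_conv3x3(frozen, kernel_t) * (1.0 - mask)            # shift
    y = frozen + (1.0 - mask) * (x * jnp.exp(s) + t)
    return y, jnp.sum(s)  # log|det| is the sum of log-scales on updated sites
```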
|
| 749 |
+
|
| 750 |
+
SMC used 90 transitions. Since the masking pattern required two coupling layers to change each site we used two coupling layers per transition in CRAFT. CRAFT used 10 transitions. VI, which has no annealing, used 20 coupling layers. We confirmed there was no benefit to increasing the number of VI coupling layers. We used 10 HMC steps per annealing transition with 10 leapfrog steps for SMC and CRAFT. Direct HMC used 10 leapfrog iterations per step.
|
| 751 |
+
|
| 752 |
+
# J. CRAFT to AFT algorithm comparison
|
| 753 |
+
|
| 754 |
+
In this section we describe the two different variants of AFT from (Arbel et al., 2021) for clarity and completeness. We also describe the line difference between CRAFT and simple AFT. Note that the full source code for each method is also available at https://github.com/deepmind/annealed_flow_transport. Based on equation (7) we define:
|
| 755 |
+
|
| 756 |
+
$$
|
| 757 |
+
\mathcal {L} _ {k} ^ {N} (T) := \operatorname {KL} \left[ T _ {k} ^ {\#} \pi_ {k - 1} | | \pi_ {k} \right] - \log \left(\frac {Z _ {k}}{Z _ {k - 1}}\right) \approx \sum_ {i} W _ {k - 1} ^ {i} D _ {k} \left(X _ {k - 1} ^ {i}\right) \tag {55}
|
| 758 |
+
$$
|
| 759 |
+
|
| 760 |
+
which from equation (8) has approximate gradient given by
|
| 761 |
+
|
| 762 |
+
$$
|
| 763 |
+
\sum_ {i} W _ {k - 1} ^ {i} \frac {\partial D _ {k} \left(X _ {k - 1} ^ {i}\right)}{\partial \theta_ {k}}. \tag {56}
|
| 764 |
+
$$
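In code, (55) and (56) amount to differentiating a weighted sum; a minimal JAX sketch, where `D_k(theta, x)` is a hypothetical callable for the integrand $D_k$ of equation (7) evaluated at flow parameters $\theta$.

```python
import jax
import jax.numpy as jnp

def aft_loss(theta, particles, weights, D_k):
    # The weighted objective (55); jax.grad of this reproduces the approximate
    # gradient (56), since the weights W_{k-1}^i do not depend on T_k.
    return jnp.sum(weights * jax.vmap(lambda x: D_k(theta, x))(particles))

aft_grad = jax.grad(aft_loss)  # evaluates (56)
```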
|
| 765 |
+
|
| 766 |
+
Algorithm 5 Simple AFT: from Arbel, Matthews and Doucet 2021. This corresponds to Algorithm 1 of the prior paper.
|
| 767 |
+
1: Input: number of particles $N$ , unnormalized annealed targets $\{\gamma_k\}_{k=0}^K$ such that $\gamma_0 = \pi_0$ and $\gamma_K = \gamma$ , resampling threshold $A \in [1/N, 1)$ .
|
| 768 |
+
2: Output: Approximations $\pi_K^N$ and $Z_K^N$ of $\pi$ and $Z$ .
|
| 769 |
+
3: Sample $X_0^i \sim \pi_0$ and set $W_0^i = \frac{1}{N}$ and $Z_0^N = 1$ .
|
| 770 |
+
4: for $k = 1, \ldots, K$ do
|
| 771 |
+
5: Solve $T_k \leftarrow \operatorname*{argmin}_{T \in \mathcal{T}} \mathcal{L}_k^N(T)$ using e.g. SGD.
|
| 772 |
+
6: $(\pi_k^N, Z_k^N) \gets \text{SMC-NF-step}(\pi_{k-1}^N, Z_{k-1}^N, T_k)$
|
| 773 |
+
7: end for
|
| 774 |
+
|
| 775 |
+
Algorithm 6 CRAFT to simple AFT line difference: additions for CRAFT shown in green and removals from simple AFT shown in red
|
| 776 |
+
1: Input: number of particles $N$ , unnormalized annealed targets $\{\gamma_k\}_{k=0}^K$ such that $\gamma_0 = \pi_0$ and $\gamma_K = \gamma$ , resampling threshold $A \in [1/N, 1)$ .
|
| 777 |
+
2: Output: Length $J$ sequence of approximations $\pi_K^N$ and $Z_K^N$ of $\pi$ and $Z$ .
|
| 778 |
+
3: for $j = 1, \ldots, J$ do
|
| 779 |
+
4: Sample $X_0^i \sim \pi_0$ and set $W_0^i = \frac{1}{N}$ and $Z_0^N = 1$ .
|
| 780 |
+
5: for $k = 1, \ldots, K$ do
|
| 781 |
+
6: Solve $T_k \leftarrow \operatorname*{argmin}_{T \in \mathcal{T}} \mathcal{L}_k^N(T)$ using e.g. SGD.
|
| 782 |
+
7: $\hat{h} \gets \text{flow-grad}(T_k, \pi_{k-1}^N)$ using eqn (8).
|
| 783 |
+
8: $(\pi_k^N, Z_k^N) \gets \text{SMC-NF-step}(\pi_{k-1}^N, Z_{k-1}^N, T_k)$
|
| 784 |
+
9: Apply gradient update $\hat{h}$ to flow $T_k$
|
| 785 |
+
10: end for
|
| 786 |
+
11: Yield $(\pi_K^N, Z_K^N)$ and continue for loop.
|
| 787 |
+
12: end for
|
| 788 |
+
|
| 789 |
+
Algorithm 7 Practical AFT: from Arbel, Matthews and Doucet 2021. This corresponds to Algorithm 2 of the prior paper.
|
| 790 |
+
1: Input: number of training, test and validation particles $N_{\mathrm{train}}$ , $N_{\mathrm{test}}$ , $N_{\mathrm{val}}$ , unnormalized annealed targets $\{\gamma_k\}_{k=0}^K$ such that $\gamma_0 = \pi_0$ and $\gamma_K = \gamma$ , resampling thresholds $A_a \in [1/N_a, 1)$ for $a \in \{\mathrm{train}, \mathrm{test}, \mathrm{val}\}$ , number of training iterations $J$ .
|
| 791 |
+
2: Output: Approximations $\pi_{K,\mathrm{test}}^{N_{\mathrm{test}}}$ and $Z_{K,\mathrm{test}}^{N_{\mathrm{test}}}$ of $\pi$ and $Z$ .
|
| 792 |
+
3: for $a \in \{\mathrm{train}, \mathrm{test}, \mathrm{val}\}$ do
|
| 793 |
+
4: Sample $X_0^{i,a} \sim \pi_0$ and set $W_0^{i,a} \gets \frac{1}{N_a}$ and $Z_0^{N,a} \gets 1$ .
|
| 794 |
+
5: end for
|
| 795 |
+
6: for $k = 1, \ldots, K$ do
|
| 796 |
+
7: Learn the flow $T_k \gets$ EarlyStopLearnFlow(J, $X_{k-1,\mathrm{train}}^{N_{\mathrm{train}}}$ , $W_{k-1,\mathrm{train}}^{N_{\mathrm{train}}}$ , $X_{k-1,\mathrm{val}}^{N_{\mathrm{val}}}$ , $W_{k-1,\mathrm{val}}^{N_{\mathrm{val}}}$ )
|
| 797 |
+
8: for $a \in \{\mathrm{train}, \mathrm{test}, \mathrm{val}\}$ do
|
| 798 |
+
9: $\left(\pi_{k,a}^{N_a}, Z_{k,a}^{N_a}\right) \gets$ SMC-NF-step $\left(\pi_{k-1,a}^{N_a}, Z_{k-1,a}^{N_a}, T_k\right)$
|
| 799 |
+
10: end for
|
| 800 |
+
11: end for
|
| 801 |
+
|
| 802 |
+
Algorithm 8 Subroutine EarlyStopLearnFlow for practical AFT.
|
| 803 |
+
1: Input: Number of training iterations $J$ , training and validation particles and weights $\left\{X_{k-1}^{i,\text{train}}, W_{k-1}^{i,\text{train}}\right\}_{i=1}^{N_{\text{train}}}$ and $\left\{X_{k-1}^{i,\text{val}}, W_{k-1}^{i,\text{val}}\right\}_{i=1}^{N_{\text{val}}}$ .
|
| 804 |
+
2: Output: Estimated flow $T_k$
|
| 805 |
+
3: Initialize flow to identity $T_k = I_D$ .
|
| 806 |
+
4: Initialize list of flows $\mathcal{T}_{opt} \gets \{T_k\}$ .
|
| 807 |
+
5: Initialize list of validation losses $\mathcal{E} \gets \left\{\sum_{i=1}^{N_{\text{val}}} W_{k-1}^{i,\text{val}} D_k(X_{k-1}^{i,\text{val}})\right\}$
|
| 808 |
+
6: for $j = 1, \ldots, J$ do
|
| 809 |
+
7: Compute training loss using (55) $\mathcal{L}_k^{N_{\text{train}}} (T_k) \gets \sum_{i=1}^{N_{\text{train}}} W_{k-1}^{i,\text{train}} D_k(X_{k-1}^{i,\text{train}})$ .
|
| 810 |
+
8: Update $T_k$ using SGD step on $\mathcal{L}_k^{N_{\text{train}}} (T_k)$ with approx. gradient (56).
|
| 811 |
+
9: Update list of flows $\mathcal{T}_{opt} \gets \mathcal{T}_{opt} \cup \{T_k\}$
|
| 812 |
+
10: Update list of validation losses $\mathcal{E} \gets \mathcal{E} \cup \left\{\sum_{i=1}^{N_{\text{val}}} W_{k-1}^{i,\text{val}} D_k(X_{k-1}^{i,\text{val}})\right\}$
|
| 813 |
+
11: end for
|
| 814 |
+
12: Return flow with smallest validation error from the list of flows $\mathcal{T}_{opt}$ .
|
2201.13xxx/2201.13117/images.zip
ADDED
|
@@ -0,0 +1,3 @@
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:fdd3afe9747724b479b0a97b71bc316cb5f9163548b73801373d32985afc33ee
|
| 3 |
+
size 920858
|
2201.13xxx/2201.13117/layout.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2201.13xxx/2201.13125/22a9b67e-4248-4898-877b-81213525c31c_content_list.json
ADDED
|
@@ -0,0 +1,1346 @@
|
| 1 |
+
[
|
| 2 |
+
{
|
| 3 |
+
"type": "text",
|
| 4 |
+
"text": "Corpus for Automatic Structuring of Legal Documents",
|
| 5 |
+
"text_level": 1,
|
| 6 |
+
"bbox": [
|
| 7 |
+
216,
|
| 8 |
+
95,
|
| 9 |
+
784,
|
| 10 |
+
114
|
| 11 |
+
],
|
| 12 |
+
"page_idx": 0
|
| 13 |
+
},
|
| 14 |
+
{
|
| 15 |
+
"type": "text",
|
| 16 |
+
"text": "Prathamesh Kalamkar $^{1,2,*}$ , Aman Tiwari $^{1,2,*}$ , Astha Agarwal $^{1,2,*}$ , Saurabh Karn $^{3,*}$ , Smita Gupta $^{3}$ , Vivek Raghavan $^{1}$ , Ashutosh Modi $^{4}$",
|
| 17 |
+
"text_level": 1,
|
| 18 |
+
"bbox": [
|
| 19 |
+
144,
|
| 20 |
+
134,
|
| 21 |
+
855,
|
| 22 |
+
167
|
| 23 |
+
],
|
| 24 |
+
"page_idx": 0
|
| 25 |
+
},
|
| 26 |
+
{
|
| 27 |
+
"type": "text",
|
| 28 |
+
"text": "$^{1}$ EkStep Foundation, $^{2}$ Thoughtworks Technologies India Pvt Ltd.,",
|
| 29 |
+
"bbox": [
|
| 30 |
+
278,
|
| 31 |
+
167,
|
| 32 |
+
724,
|
| 33 |
+
181
|
| 34 |
+
],
|
| 35 |
+
"page_idx": 0
|
| 36 |
+
},
|
| 37 |
+
{
|
| 38 |
+
"type": "text",
|
| 39 |
+
"text": "$^{3}$ Agami, $^{4}$ Indian Institute of Technology Kanpur (IIT-K)",
|
| 40 |
+
"bbox": [
|
| 41 |
+
305,
|
| 42 |
+
181,
|
| 43 |
+
695,
|
| 44 |
+
196
|
| 45 |
+
],
|
| 46 |
+
"page_idx": 0
|
| 47 |
+
},
|
| 48 |
+
{
|
| 49 |
+
"type": "text",
|
| 50 |
+
"text": "{prathamk, aman.tiwari, astha.agarwal} @ thoughtworks.com,",
|
| 51 |
+
"bbox": [
|
| 52 |
+
295,
|
| 53 |
+
196,
|
| 54 |
+
707,
|
| 55 |
+
210
|
| 56 |
+
],
|
| 57 |
+
"page_idx": 0
|
| 58 |
+
},
|
| 59 |
+
{
|
| 60 |
+
"type": "text",
|
| 61 |
+
"text": "{saurabh, smita} @agami.in, Vivek@ekstep.org, ashutoshm@cse.iitk.ac.in",
|
| 62 |
+
"bbox": [
|
| 63 |
+
253,
|
| 64 |
+
211,
|
| 65 |
+
749,
|
| 66 |
+
224
|
| 67 |
+
],
|
| 68 |
+
"page_idx": 0
|
| 69 |
+
},
|
| 70 |
+
{
|
| 71 |
+
"type": "text",
|
| 72 |
+
"text": "Abstract",
|
| 73 |
+
"text_level": 1,
|
| 74 |
+
"bbox": [
|
| 75 |
+
465,
|
| 76 |
+
228,
|
| 77 |
+
532,
|
| 78 |
+
241
|
| 79 |
+
],
|
| 80 |
+
"page_idx": 0
|
| 81 |
+
},
|
| 82 |
+
{
|
| 83 |
+
"type": "text",
|
| 84 |
+
"text": "In populous countries, pending legal cases have been growing exponentially. There is a need for developing techniques for processing and organizing legal documents. In this paper, we introduce a new corpus for structuring legal documents. In particular, we introduce a corpus of legal judgment documents in English that are segmented into topical and coherent parts. Each of these parts is annotated with a label coming from a list of pre-defined Rhetorical Roles. We develop baseline models for automatically predicting rhetorical roles in a legal document based on the annotated corpus. Further, we show the application of rhetorical roles to improve performance on the tasks of summarization and legal judgment prediction. We release the corpus and baseline model code along with the paper.",
|
| 85 |
+
"bbox": [
|
| 86 |
+
114,
|
| 87 |
+
243,
|
| 88 |
+
884,
|
| 89 |
+
335
|
| 90 |
+
],
|
| 91 |
+
"page_idx": 0
|
| 92 |
+
},
|
| 93 |
+
{
|
| 94 |
+
"type": "text",
|
| 95 |
+
"text": "Keywords: Legal NLP, Rhetorical Roles, Legal Document Segmentation",
|
| 96 |
+
"bbox": [
|
| 97 |
+
115,
|
| 98 |
+
347,
|
| 99 |
+
561,
|
| 100 |
+
361
|
| 101 |
+
],
|
| 102 |
+
"page_idx": 0
|
| 103 |
+
},
|
| 104 |
+
{
|
| 105 |
+
"type": "text",
|
| 106 |
+
"text": "1. Introduction",
|
| 107 |
+
"text_level": 1,
|
| 108 |
+
"bbox": [
|
| 109 |
+
228,
|
| 110 |
+
378,
|
| 111 |
+
376,
|
| 112 |
+
392
|
| 113 |
+
],
|
| 114 |
+
"page_idx": 0
|
| 115 |
+
},
|
| 116 |
+
{
|
| 117 |
+
"type": "text",
|
| 118 |
+
"text": "In populous countries (e.g., India), pending legal cases have been growing exponentially. For example, according to India's National Judicial Data Grid, as of December 2021, there are approximately 40 million cases pending in various courts of the country (National Judicial Data Grid, 2021). India follows a common-law system; consequently, due to subjectivity involved in the legal process, it may not be possible to automate the entire judicial pipeline completely; nevertheless, many intermediate tasks can be automated to augment legal practitioners, and hence expedite the system. For example, legal documents can be processed with the help of Natural Language Processing (NLP) techniques to organize and structure the data to be amenable to automatic search and retrieval. However, legal texts are different from commonly occurring texts typically used to train NLP models. Legal documents are quite long, running into tens (sometimes hundreds) of pages. Long documents make automatic processing challenging as information is spread throughout the document (Malik et al., 2021b). Another challenge with legal documents is the use of different lexicons. Though legal documents use natural language (e.g., English), many commonly occurring words/terms have different legal connotations. The use of different lexicons makes it challenging to adapt existing NLP models to legal texts (Malik et al., 2021b). Moreover, in countries like India, legal documents are manually typed and are highly unstructured and noisy (e.g., spelling and grammatical mistakes). Above mentioned challenges make it difficult to apply existing NLP models and techniques directly, which calls for the development of legal domain-specific techniques.",
|
| 119 |
+
"bbox": [
|
| 120 |
+
114,
|
| 121 |
+
397,
|
| 122 |
+
489,
|
| 123 |
+
866
|
| 124 |
+
],
|
| 125 |
+
"page_idx": 0
|
| 126 |
+
},
|
| 127 |
+
{
|
| 128 |
+
"type": "text",
|
| 129 |
+
"text": "Existing state-of-the-art models in NLP are data-driven",
|
| 130 |
+
"bbox": [
|
| 131 |
+
115,
|
| 132 |
+
866,
|
| 133 |
+
487,
|
| 134 |
+
879
|
| 135 |
+
],
|
| 136 |
+
"page_idx": 0
|
| 137 |
+
},
|
| 138 |
+
{
|
| 139 |
+
"type": "text",
|
| 140 |
+
"text": "and are trained on annotated corpora. However, the legal domain suffers from the deficiency of availability of annotated corpora. It has hindered the growth of the Legal NLP domain. For example, much of the recent success in the computer vision community can be owed to the creation and availability of annotated vision corpora such as ImageNet (Deng et al., 2009; Russakovsky et al., 2013; Russakovsky et al., 2015). In this paper, we contribute to creating annotated legal text corpora. In particular, we create a new corpus of Indian legal judgments in English that are structured and annotated with topically coherent semantic units. Since legal documents are long and unstructured, these can be divided into topically coherent parts (e.g., facts, arguments) referred to as Rhetorical Roles (Saravanan et al., 2008; Bhattacharya et al., 2019b; Malik et al., 2021a). In this paper, with the help of legal experts, we annotate legal documents with 12 different Rhetorical Roles (RRs) (details in §3). An example text annotated with some of the RRs is shown in Figure 1. As shown in the figure, an unstructured legal judgment document is segmented into semantically coherent parts, and each part is annotated with a rhetorical role label such as preamble, fact, ratio, etc. We experimented with different levels of granularity (phrase level, sentence level, paragraph level) for annotating RRs and decided to go for sentence-level RR annotations based on initial experiments. Each sentence in a legal document is annotated with a rhetorical role label in the proposed corpus. Typically, consecutive sentences can have a similar role in a judgment document. The rhetorical role corpus is part of a general open-source effort of creating various legal corpora for promoting the development and bench-marking of legal NLP systems. This project is called BUILDNyAI. $^{1}$ We make the following contribu",
|
| 141 |
+
"bbox": [
|
| 142 |
+
509,
|
| 143 |
+
379,
|
| 144 |
+
884,
|
| 145 |
+
878
|
| 146 |
+
],
|
| 147 |
+
"page_idx": 0
|
| 148 |
+
},
|
| 149 |
+
{
|
| 150 |
+
"type": "aside_text",
|
| 151 |
+
"text": "arXiv:2201.13125v2 [cs.CL] 19 Sep 2022",
|
| 152 |
+
"bbox": [
|
| 153 |
+
21,
|
| 154 |
+
309,
|
| 155 |
+
60,
|
| 156 |
+
724
|
| 157 |
+
],
|
| 158 |
+
"page_idx": 0
|
| 159 |
+
},
|
| 160 |
+
{
|
| 161 |
+
"type": "page_footnote",
|
| 162 |
+
"text": "* Authors contributed equally",
|
| 163 |
+
"bbox": [
|
| 164 |
+
144,
|
| 165 |
+
895,
|
| 166 |
+
327,
|
| 167 |
+
909
|
| 168 |
+
],
|
| 169 |
+
"page_idx": 0
|
| 170 |
+
},
|
| 171 |
+
{
|
| 172 |
+
"type": "page_footnote",
|
| 173 |
+
"text": "1The word BUILDNyAI is a code-mixed (English+Hindi)",
|
| 174 |
+
"bbox": [
|
| 175 |
+
532,
|
| 176 |
+
894,
|
| 177 |
+
882,
|
| 178 |
+
909
|
| 179 |
+
],
|
| 180 |
+
"page_idx": 0
|
| 181 |
+
},
|
| 182 |
+
{
|
| 183 |
+
"type": "text",
|
| 184 |
+
"text": "IN THE COURT OF THE V ADDL SESSIONS JUDGE, MYSORE. Dated this the 23rd day of May 2013 ... The Petitioner is a businessman and he is permanent resident of Mysore City... On behalf of the Prosecution the learned Public Prosecutor has filed objection to the bail Petition stating that, there ...Now, the points that arise for consideration of the Court are: 1. Whether the Petitioner has made out sufficient grounds to release him on Anticipatory Bail? ... Heard the arguments advanced by the learned advocate for the Petitioner and the learned Public Prosecutor... Considering all these aspects, the Court is of the view that, ...Point No.2: For the foregoing reasons and in view of my above discussions, I proceed to pass the following ...The High Court by its order dated October 26, 1982 set aside the order of the Tribunal and also the assessment on the ground ...The petitioners are falsely implicated and the charge sheet has been filed against the petitioners merely ...My findings on the above points are as follows: Point No.1: In the Positive Point No.2 : As per final order for the following...In a decision reported in (2013) 1 KCCR 334 case of K.Ramachandra Reddy Vs. State of Karnataka by the Station House Officer...The decision of the Andhra Pradesh High Court ... are not relevant for purposes of deciding the question which has arisen before us...",
|
| 185 |
+
"bbox": [
|
| 186 |
+
147,
|
| 187 |
+
124,
|
| 188 |
+
472,
|
| 189 |
+
300
|
| 190 |
+
],
|
| 191 |
+
"page_idx": 1
|
| 192 |
+
},
|
| 193 |
+
{
|
| 194 |
+
"type": "image",
|
| 195 |
+
"img_path": "images/dcd3269db746ddc21519e802e22edab5a3fb418159e658bc7c3edbac5bfe92aa.jpg",
|
| 196 |
+
"image_caption": [
|
| 197 |
+
"Figure 1: Example of document segmentation via Rhetorical Roles labels. On the left is excerpt from a legal document and on the right is document segmented and labelled with rhetorical role labels."
|
| 198 |
+
],
|
| 199 |
+
"image_footnote": [],
|
| 200 |
+
"bbox": [
|
| 201 |
+
472,
|
| 202 |
+
78,
|
| 203 |
+
860,
|
| 204 |
+
354
|
| 205 |
+
],
|
| 206 |
+
"page_idx": 1
|
| 207 |
+
},
|
| 208 |
+
{
|
| 209 |
+
"type": "text",
|
| 210 |
+
"text": "tions in this paper:",
|
| 211 |
+
"text_level": 1,
|
| 212 |
+
"bbox": [
|
| 213 |
+
115,
|
| 214 |
+
441,
|
| 215 |
+
243,
|
| 216 |
+
455
|
| 217 |
+
],
|
| 218 |
+
"page_idx": 1
|
| 219 |
+
},
|
| 220 |
+
{
"type": "list",
"sub_type": "text",
"list_items": [
"- We create a corpus of 354 Indian legal documents annotated with rhetorical roles. The corpus has 40,305 sentences annotated with 12 different RRs. To the best of our knowledge, this is the largest corpus of legal documents annotated with RRs.",
"- In order to be of practical value, using the corpus, we develop a transformer-based baseline model for automatically annotating legal documents with sentence-level RRs.",
"- We show two use-cases for RRs. In particular, we show applications of RRs to the task of legal case summarization and legal judgment prediction.",
"- We release the corpus and the model implementations: https://legal-nlp-ekstep.github.io/Competitions/Rhetorical-Role/"
],
"bbox": [
134,
455,
487,
682
],
"page_idx": 1
},
{
"type": "text",
"text": "2. Related Work",
"text_level": 1,
"bbox": [
221,
695,
381,
709
],
"page_idx": 1
},
{
"type": "text",
"text": "In recent times, there has been a lot of work in the area of legal text processing. Different tasks and techniques have been proposed, for example, Prior Case Retrieval (Jackson et al., 2003), Summarization (Moens et al., 1999; Saravanan et al., 2007), Case Prediction (Malik et al., 2021b; Chalkidis et al., 2019; Strickson and De La Iglesia, 2020a; Sulea et al., 2017; Kapoor et al., 2022), Argument Mining (Wyner et al., 2010; Moens et al., 2007), Information Extraction and Retrieval (Tran",
"bbox": [
114,
714,
489,
843
],
"page_idx": 1
},
{
"type": "text",
"text": "term having the English word BUILD and the Hindi word nyAI (short for nyayi, which means justice). The project is hosted at https://legal-nlp-ekstep.github.io/Competitions/Rhetorical-Role/",
"bbox": [
114,
856,
489,
909
],
"page_idx": 1
},
{
"type": "text",
"text": "et al., 2019; Grabmair et al., 2011; Tran et al., 2019), and Event Extraction (Lagos et al., 2010; Maxwell et al., 2009; Lagos et al., 2010).",
"bbox": [
509,
439,
884,
483
],
"page_idx": 1
},
{
"type": "text",
"text": "Recently, efforts have been made to develop corpora that could aid various legal NLP tasks; for example, Malik et al. (2021b) have released a corpus of 35K Indian Supreme Court documents for the task of judgment prediction and explanation. Chalkidis et al. (2019) have released 11,478 legal documents corresponding to the European Court of Human Rights (ECHR). Strickson and De La Iglesia (2020b) have proposed a corpus of 4,959 UK Supreme Court documents. Xiao et al. (2018) have created a large-scale corpus of 2.68 million criminal case documents and released the CAIL (Chinese AI and Law Challenge) dataset for judgment prediction. A new multilingual dataset of European Union (EU) legal documents has been recently released by Chalkidis et al. (2021).",
"bbox": [
509,
489,
884,
703
],
"page_idx": 1
},
{
"type": "text",
"text": "Research in rhetorical roles for legal text processing has been active in the past few years. Farzindar and Lapalme (2004) and Hachey and Grover (2006) have leveraged rhetorical roles to create summaries of legal texts. Saravanan et al. (2008) proposed a CRF-based model using hand-crafted features for segmenting documents using seven different roles. Bhatia (2014) performed a Genre Analysis of Legal Texts to arrive at seven rhetorical categories. Bhattacharya et al. (2019b) have proposed a CRF-BiLSTM model for automatically assigning rhetorical roles to sentences in Indian legal documents. Malik et al. (2021a) have created an RR corpus annotated with 13 fine-grained roles, and they have further developed a multi-task learning based model",
"bbox": [
509,
709,
884,
910
],
"page_idx": 1
},
{
"type": "text",
"text": "for predicting RRs. In this paper, we also propose a corpus of English Indian legal judgment documents annotated with Rhetorical Roles; however, we annotate the documents with a more extensive set of 12 rhetorical role labels and a NONE label (in case none of the 12 labels is applicable). Moreover, to the best of our knowledge, we create the largest corpus of 354 documents (vs. 100 documents in the previous RR corpus by Malik et al. (2021a)), with 40,315 sentences annotated with 13 $(12 + \\mathrm{NONE})$ different types of rhetorical role labels. We propose state-of-the-art transformer models for RR prediction and show the use case of RRs for case summarization and legal judgment prediction.",
"bbox": [
114,
74,
489,
259
],
"page_idx": 2
},
{
"type": "text",
"text": "Recent success in almost every area of NLP has been due to transformer-based neural architectures (Wang et al., 2018). We do not discuss the details of transformer architectures here and refer the reader to the survey on transformers by Tay et al. (2020). We develop transformer-based baseline models for automatically segmenting legal documents into RR units.",
"bbox": [
114,
260,
489,
360
],
"page_idx": 2
},
{
"type": "text",
"text": "3. Rhetorical Roles Corpus",
"text_level": 1,
"bbox": [
176,
375,
428,
392
],
"page_idx": 2
},
{
"type": "text",
"text": "As outlined earlier, legal documents are typically long, and information is spread throughout the document. In order to make the automatic processing of documents easier, documents are divided into topically coherent segments referred to as Rhetorical Roles (Malik et al., 2021a). In this paper, we propose the use of 12 RRs and a NONE label. We started with the list of RR labels proposed by Bhattacharya et al. (2019b); however, we found some of the RRs to be ambiguous; hence, after elaborate discussions with law professors, we split some of the RRs (arguments and precedents) to arrive at the list of 12 main roles. Details and definitions for each of the RRs are as follows:",
"bbox": [
114,
395,
489,
580
],
"page_idx": 2
},
{
"type": "text",
"text": "- Preamble (PREAMBLE): This covers the metadata related to the legal judgment document. A typical judgment would start with the court name, the details of parties, lawyers and judges' names, and a headnote (summary). This section typically ends with a keyword like JUDGMENT or ORDER. Some documents also have HEADNOTES and ACTS sections in the beginning. These are also part of the Preamble.",
"bbox": [
134,
582,
489,
709
],
"page_idx": 2
},
{
"type": "text",
"text": "- Facts (FAC): This corresponds to the facts of the case. It refers to the chronology of events that led to filing the case and how it evolved (e.g., First Information Report (FIR) at a police station, filing an appeal to the Magistrate, etc.), depositions and proceedings of the current court, and the summary of lower court proceedings.",
"bbox": [
134,
709,
489,
808
],
"page_idx": 2
},
{
"type": "text",
"text": "- Ruling by Lower Court (RLC): Cases are not directly filed in the higher courts but are appealed from lower courts. Consequently, the documents contain judgments given by the lower courts (Trial Court, High Court) based on the present appeal (to the Supreme Court or high court). The lower court's verdict, analysis, and the ratio behind the",
"bbox": [
134,
809,
489,
909
],
"page_idx": 2
},
{
"type": "text",
"text": "judgment by the lower court is annotated with this label.",
"bbox": [
542,
74,
882,
101
],
"page_idx": 2
},
{
"type": "list",
"sub_type": "text",
"list_items": [
"- Issues (ISSUE): Some judgments mention the key points on which the verdict needs to be delivered. Such Legal Questions Framed by the Court are ISSUES.",
"- Argument by Petitioner (ARG_PETITIONER): Arguments by the petitioners' lawyers. Precedent cases argued by petitioner lawyers fall under this category, but when the court discusses them later, they belong to either the relied / not relied upon category.",
"- Argument by Respondent (ARG_RESPONDENT): Arguments by the respondents' lawyers. Precedent cases argued by respondent lawyers fall under this, but when the court discusses them later, they belong to either the relied / not relied category.",
"- Analysis (ANALYSIS): These are the views of the court. This includes the court's discussion of the evidence, the facts presented, prior cases, and statutes, discussions of how the law is applicable or not applicable to the current case, and observations (non-binding) from the court. It is the parent tag for three tags: PRE_RELIED, PRE_NOT_RELIED, and STATUTE, i.e., every statement which belongs to these three tags should also be marked as ANALYSIS.",
"- Statute (STA): This includes texts in which the court discusses established laws, which can come from a mixture of sources: Acts, Sections, Articles, Rules, Orders, Notices, Notifications, and quotations directly from the bare act. A statute will have both the tags Analysis + Statute.",
"- Precedent Relied (PRE_RELIED): Texts in which the court discusses prior case documents, discussions, and decisions which were relied upon by the court for the final decision. A precedent will have both the tags Analysis + Precedent.",
"- Precedent Not Relied (PRE_NOT_RELIED): Texts in which the court discusses prior case documents, discussions, and decisions which were not relied upon by the court for the final decision. This could be due to the fact that the situation in that case is not relevant to the current case.",
"- Ratio of the decision (RATIO): This includes the main reason given for the application of any legal principle to the legal issue. It is the result of the analysis by the court. It typically appears right before the final decision. It is not the same as the \"Ratio Decidendi\" taught in the legal academic curriculum.",
"- Ruling by Present Court (RPC): Final decision + conclusion + order of the Court following from the natural/logical outcome of the rationale.",
"- NONE: If a sentence does not belong to any of the above categories, it is labeled as NONE."
],
"bbox": [
529,
103,
884,
897
],
"page_idx": 2
},
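For working with these labels programmatically, the definitions above map to the following minimal Python constant; only the label strings are taken from the corpus, and the ordering here is arbitrary:

```python
from enum import Enum

class RhetoricalRole(str, Enum):
    """The 12 rhetorical roles defined above, plus the NONE fallback label."""
    PREAMBLE = "PREAMBLE"
    FAC = "FAC"                        # Facts
    RLC = "RLC"                        # Ruling by Lower Court
    ISSUE = "ISSUE"
    ARG_PETITIONER = "ARG_PETITIONER"
    ARG_RESPONDENT = "ARG_RESPONDENT"
    ANALYSIS = "ANALYSIS"
    STA = "STA"                        # Statute
    PRE_RELIED = "PRE_RELIED"          # Precedent Relied
    PRE_NOT_RELIED = "PRE_NOT_RELIED"  # Precedent Not Relied
    RATIO = "RATIO"
    RPC = "RPC"                        # Ruling by Present Court
    NONE = "NONE"

assert len(RhetoricalRole) == 13  # 12 roles + NONE, as stated above
```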
{
"type": "table",
"img_path": "images/fe1c20d9b2ffe75934f36b969302b5d80595668b06e61948db7023accb8351e5.jpg",
"table_caption": [],
"table_footnote": [
"Table 1: Corpus Statistics: The corpus is split into train, val and test. The table shows the number of documents, sentences, tokens and the average number of tokens per document."
],
"table_body": "<table><tr><td>Dataset</td><td>Docs</td><td>Sentences</td><td>Tokens</td><td>Avg Tokens</td></tr><tr><td>Train</td><td>247</td><td>28986</td><td>938K</td><td>3797</td></tr><tr><td>Validation</td><td>30</td><td>2879</td><td>88K</td><td>2947</td></tr><tr><td>Test (in-domain)</td><td>50</td><td>4158</td><td>134K</td><td>2681</td></tr><tr><td>Test (out-domain)</td><td>27</td><td>4292</td><td>127K</td><td>4722</td></tr><tr><td>Total</td><td>354</td><td>40315</td><td>1.3M</td><td>3638</td></tr></table>",
"bbox": [
117,
72,
485,
181
],
"page_idx": 3
},
{
"type": "text",
"text": "3.1. Corpus Documents",
"text_level": 1,
"bbox": [
115,
261,
319,
275
],
"page_idx": 3
},
{
"type": "text",
"text": "The corpus consists of legal judgment documents from the Supreme Court of India, High Courts in different Indian states, and some district-level courts. Raw judgment text files were scraped from Indian court websites.$^{2}$ The data has a mix of Supreme Court judgments $(40\\%)$ , High Court judgments $(40\\%)$ and district court judgments $(20\\%)$ . To develop baseline models, we divided the dataset into train, validation and test sets. The test set was further divided into in-domain and out-of-domain parts. The train, validation and test (in-domain) datasets contain annotated judgments belonging to tax and criminal cases. Test (out-of-domain) contains annotated judgments from 3 domains: Motor Vehicles Act (9), Industrial and Labour law (8) and Land and Property law (10). The statistics of the corpus are shown in Table 1. Table 2 gives the number of sentences for each role in the entire corpus. Qualified law experts annotated the test data with cross-checks.",
"bbox": [
114,
279,
489,
533
],
"page_idx": 3
},
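As an illustration of how counts like those in Tables 1 and 2 can be tallied, here is a sketch that assumes a hypothetical release format, a JSON list of documents with (sentence, label) pairs under an "annotations" key; the actual released format may differ:

```python
import json
from collections import Counter

role_counts = Counter()
with open("train.json") as f:   # hypothetical file name for the train split
    documents = json.load(f)

num_sentences = 0
for doc in documents:
    for sentence_text, rr_label in doc["annotations"]:  # hypothetical field name
        role_counts[rr_label] += 1
        num_sentences += 1

print(f"{len(documents)} docs, {num_sentences} sentences")  # cf. Table 1
for role, count in role_counts.most_common():
    print(f"{role}\t{count}")                               # cf. Table 2
```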
{
"type": "table",
"img_path": "images/ee4eb8186968bc088036d5266830c7e0cee4525dcd55f69eb218dd17e8565a55.jpg",
"table_caption": [],
"table_footnote": [
"Table 2: Role-wise sentence count in the entire corpus"
],
"table_body": "<table><tr><td>Rhetorical Role</td><td>Sentences</td></tr><tr><td>ANALYSIS</td><td>14300</td></tr><tr><td>ARG_PETITIONER</td><td>1771</td></tr><tr><td>ARG_RESPONDENT</td><td>1068</td></tr><tr><td>FAC</td><td>8045</td></tr><tr><td>ISSUE</td><td>535</td></tr><tr><td>NONE</td><td>2037</td></tr><tr><td>PREAMBLE</td><td>6116</td></tr><tr><td>PRE_NOT_RELIED</td><td>217</td></tr><tr><td>PRE_RELIED</td><td>1934</td></tr><tr><td>RATIO</td><td>1014</td></tr><tr><td>RLC</td><td>1081</td></tr><tr><td>RPC</td><td>1562</td></tr><tr><td>STA</td><td>625</td></tr><tr><td>Overall</td><td>40305</td></tr></table>",
"bbox": [
181,
543,
423,
751
],
"page_idx": 3
},
{
"type": "text",
"text": "3.2. Annotation Process",
"text_level": 1,
"bbox": [
115,
794,
319,
808
],
"page_idx": 3
},
{
"type": "text",
"text": "The annotation process was designed in consultation with legal experts (law professors and legal practitioners). Given the nature of the task, the RR annotations",
"bbox": [
114,
812,
489,
854
],
"page_idx": 3
},
{
"type": "text",
"text": "require a deep understanding of the law and the legal process. Consequently, we involved law students and legal practitioners in annotating the documents. The process involved annotating each sentence in a given document with one of the 12 RR + NONE labels described earlier. We experimented with different levels of granularity (phrase level, sentence level, paragraph level, etc.) for annotating the documents with RRs. Pilot experiments indicated sentence-level RR annotation to be appropriate as it maintains the balance (with regard to semantic coherence) between too short and too long texts. The legal documents were split into sentences using the spaCy library (spaCy, 2021). Rhetorical role annotation is not a trivial task; we faced two main challenges in the annotation activity: firstly, the availability of a large group of legal experts and, secondly, motivating the legal experts to perform annotation consistently while maintaining quality. We performed the annotation activity via crowdsourcing as described next.",
"bbox": [
509,
74,
884,
344
],
"page_idx": 3
},
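A minimal sketch of the sentence-splitting step mentioned above, assuming spaCy with the en_core_web_sm model installed; the exact spaCy pipeline the authors used is not named in the text:

```python
import spacy

# en_core_web_sm is an assumption; the paper only says "spaCy library".
nlp = spacy.load("en_core_web_sm")

def split_into_sentences(judgment_text: str) -> list[str]:
    """Segment a raw judgment into sentences, the unit of RR annotation."""
    doc = nlp(judgment_text)
    return [sent.text.strip() for sent in doc.sents if sent.text.strip()]

sentences = split_into_sentences(
    "The Petitioner is a businessman. He is a permanent resident of Mysore City."
)
print(sentences)  # two sentences, each to receive one rhetorical role label
```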
{
"type": "text",
"text": "3.3. Data Annotation Pipeline",
"text_level": 1,
"bbox": [
510,
357,
763,
373
],
"page_idx": 3
},
{
"type": "text",
"text": "Corpus documents were annotated via a crowdsourcing activity. We invited law students from various law schools across the country to volunteer for the data annotation exercise. We created processes to onboard student volunteers and introduced them to the entire activity and its goal. Filtering was carried out at multiple stages to retain the most motivated and consistent (from the perspective of the quality of the annotations) students. The entire pipeline is shown in Figure 2. We describe each stage of the pipeline in the next sections.",
"bbox": [
509,
376,
882,
520
],
"page_idx": 3
},
{
"type": "image",
"img_path": "images/df7ca02e50ecbc8aea398a207099d326bbe92ee3902d6a155fabf2d76f210f41.jpg",
"image_caption": [
"Figure 2: Data Annotation Pipeline"
],
"image_footnote": [],
"bbox": [
557,
533,
838,
556
],
"page_idx": 3
},
{
"type": "text",
"text": "3.3.1. Student Selection",
"text_level": 1,
"bbox": [
510,
608,
695,
621
],
"page_idx": 3
},
{
"type": "text",
"text": "We did a nationwide call for volunteers through a network of law students. The application required students to describe their motivation. A basic screening was done to eliminate applications that were partially filled. Finally, after filtering, we selected an initial group of 50 students. The selected students were then onboarded and were motivated by explaining the big picture of the impact of their contribution. The data annotations were done voluntarily by law students from multiple Indian law universities. Interaction with the law students revealed that they were motivated to learn more about AI and contribute towards the development of the AI field, and hence they volunteered for the activity. In order to smoothly conduct the annotation activity via crowdsourcing, we organized the volunteers in a hierarchical structure based on their experience and performance during a pilot study. The organizational structure for this exercise is shown in Figure 3.",
"bbox": [
509,
623,
884,
878
],
"page_idx": 3
},
{
"type": "text",
"text": "Project Administrators: They designed data collection and communication processes, built tools for data",
"bbox": [
509,
879,
882,
909
],
"page_idx": 3
},
{
"type": "page_footnote",
"text": "$^{2}$ https://main.sci.gov.in/; https://ecourts.gov.in/ecourts_home/static/highcourts.php",
"bbox": [
115,
866,
487,
909
],
"page_idx": 3
},
{
"type": "image",
"img_path": "images/ce81e0b9f46056e078afe78c7dc4558e36973acddf3803887fb106485509937a.jpg",
"image_caption": [
"Figure 3: Organization Structure"
],
"image_footnote": [],
"bbox": [
188,
72,
421,
200
],
"page_idx": 4
},
{
"type": "text",
"text": "collection, and supervised the overall activity. This group included law experts and the authors of the paper.",
"bbox": [
114,
244,
487,
273
],
"page_idx": 4
},
{
"type": "text",
"text": "Project Coordinators: They mentored and resolved the doubts of the students. They were responsible for assuring the quality of the data. Coordinators identified and rectified conceptual errors among the students. Further, the coordinators assisted the administrators during the adjudication process.",
"bbox": [
114,
274,
487,
359
],
"page_idx": 4
},
{
"type": "text",
"text": "Student Volunteers: They annotated the data and also provided feedback on the entire process. Volunteers were in constant communication with the coordinators. At later stages of annotation, some of the best-performing students assisted in the adjudication process (§3.3.5). Best-performing students were selected based on two criteria: timely submissions and ground truth agreement score. Students were assessed on whether they completed the task within a stipulated time at each annotation stage. Furthermore, each batch of annotation documents consisted of sentences for which true (gold) RR labels were known a priori (also §3.3.4). Students were assessed on their performance on the ground truth (sentences with gold RR labels), and students who were correct on at least 90% of the ground truth sentences were considered for the best-performing category.",
"bbox": [
114,
360,
487,
586
],
"page_idx": 4
},
{
"type": "text",
"text": "Before beginning the entire activity, we conducted a small pilot to assess the feasibility of crowdsourcing with student volunteers. Volunteers who completed the MOOC, calibration and annotation exercises with satisfactory performance were then invited to become project coordinators for the subsequent data collection phase. The chance to become a coordinator further provided positive reinforcement for the efforts, thus keeping the students well motivated. In the end, we selected eight students as project coordinators.",
"bbox": [
114,
587,
487,
728
],
"page_idx": 4
},
{
"type": "text",
"text": "3.3.2. MOOC",
"text_level": 1,
"bbox": [
115,
738,
230,
751
],
"page_idx": 4
},
{
"type": "text",
"text": "Law students do not have an understanding of the workings of AI. We designed a MOOC (Massive Open Online Course)$^{3}$ for the annotators. The MOOC explained AI technologies to the law students, described the process of building datasets for AI algorithms, and explained the concept of the rhetorical role. Students were expected to complete the MOOC in a stipulated amount of time and complete the associated",
"bbox": [
114,
752,
487,
866
],
"page_idx": 4
},
{
"type": "image",
"img_path": "images/f5ce3179bd5a35b4ae0883e2ed1082ff5be127d33b79b2112c0615b833fda951.jpg",
"image_caption": [
"Figure 4: Ground Truth Score Histogram"
],
"image_footnote": [],
"bbox": [
588,
76,
806,
172
],
"page_idx": 4
},
{
"type": "text",
"text": "quiz, which checked for a basic understanding of the rhetorical role definitions.",
"bbox": [
509,
217,
882,
244
],
"page_idx": 4
},
{
"type": "text",
"text": "3.3.3. Calibration",
"text_level": 1,
"bbox": [
510,
256,
653,
269
],
"page_idx": 4
},
{
"type": "text",
"text": "Since, in the initial stages, students can differ in their understanding of RRs, we calibrated the students to bring them to a common ground. Calibration focused on shaping a common understanding of the definitions among students. Students were asked to annotate three judgments that experts had already annotated. The sentences that differed from the expert (gold) annotations were highlighted, and students were asked to calibrate their annotations. Calibration was an iterative process, and it was carried out until the students reached the level of the expert annotations.",
"bbox": [
509,
272,
882,
414
],
"page_idx": 4
},
{
"type": "text",
"text": "3.3.4. Data Annotation",
"text_level": 1,
"bbox": [
510,
426,
690,
439
],
"page_idx": 4
},
{
"type": "text",
"text": "In the end, 35 out of the 50 selected students qualified at the calibration stage, and this was the final pool that annotated the entire corpus. Each student annotated 24 documents, and three students annotated each document. We did not observe any student dropout after the calibration stage. On average, it took about 40 minutes to annotate a single document. The entire annotation activity took around six weeks. Students annotated the train and validation documents ($= 277$), and experts annotated the 77 test documents. As described earlier, during the annotation process, each student was also randomly assigned four documents (chosen randomly with replacement from the test set) for which gold (ground truth) annotations were known to coordinators and administrators but not to the students. The performance of students (referred to as the Ground Truth Score) on these gold documents was assessed. The ground truth score is the percentage of sentences in gold documents that are correctly annotated. The average ground truth score over all students was $85\\%$ . Figure 4 shows the histogram of ground truth scores per judgment. It shows that the majority of documents are in the 90 to 100 percent range, indicative of annotations consistent with the ground truth documents. Note that the documents shown in Figure 4 (y-axis) are chosen randomly (with replacement) from the test set, and hence there is overlap between documents across different batches. Furthermore, coordinators provided feedback to students with lower scores to improve their overall annotation quality.",
"bbox": [
509,
441,
882,
854
],
"page_idx": 4
},
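The Ground Truth Score defined above is a simple percentage over the gold-labelled sentences; a sketch:

```python
def ground_truth_score(student_labels, gold_labels):
    """Percentage of gold-labelled sentences annotated correctly; students at
    or above 90% qualified for the best-performing pool (see §3.3.1)."""
    correct = sum(s == g for s, g in zip(student_labels, gold_labels))
    return 100.0 * correct / len(gold_labels)

print(ground_truth_score(["FAC", "FAC", "RLC"], ["FAC", "ANALYSIS", "RLC"]))  # ~66.7
```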
{
"type": "text",
"text": "3.3.5. Adjudication",
"text_level": 1,
"bbox": [
510,
865,
665,
879
],
"page_idx": 4
},
{
"type": "text",
"text": "A majority voting scheme was used to decide the final RR label. However, in some instances, annotators as",
"bbox": [
509,
879,
882,
908
],
"page_idx": 4
},
{
"type": "page_footnote",
"text": "$^{3}$ https://www.youtube.com/playlist?list=PL1z52lLL6eWnDnc3Wgfcu6neczruU3fFw0",
"bbox": [
115,
879,
473,
908
],
"page_idx": 4
},
{
"type": "text",
"text": "signed three different labels; such documents were further sent for adjudication. The adjudication was done by experts, project coordinators, and some of the best-performing students (§3.3.1).",
"bbox": [
115,
74,
489,
134
],
"page_idx": 5
},
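A minimal sketch of the majority-vote rule and the adjudication trigger just described, assuming exactly three annotators per sentence as stated:

```python
from collections import Counter

def aggregate_labels(labels):
    """Majority vote over the three annotators' labels for one sentence.

    Returns (final_label, needs_adjudication); when all three annotators
    assign different labels there is no majority, and the document is sent
    to the adjudication step described above.
    """
    (label, count), = Counter(labels).most_common(1)
    if count == 1:  # three distinct labels: no majority
        return None, True
    return label, False

print(aggregate_labels(["FAC", "FAC", "ANALYSIS"]))  # ('FAC', False)
print(aggregate_labels(["FAC", "RLC", "ANALYSIS"]))  # (None, True)
```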
{
"type": "text",
"text": "3.3.6. Annotation Quality Assessment",
"text_level": 1,
"bbox": [
115,
145,
401,
158
],
"page_idx": 5
},
{
"type": "text",
"text": "Final annotation quality was evaluated using Fleiss' Kappa (Fleiss et al., 2013). Overall, the Fleiss Kappa score was 0.59, pointing towards moderate agreement. We saw high agreement amongst annotators on PREAMBLE, RPC, NONE, and ISSUE. There was medium agreement on FACTS, RLC, ANALYSIS, PRECEDENT, and ARGUMENTS. RATIO was the most ambiguous role. ANALYSIS was very often confused with FACTS and ARGUMENTS. In a judgment, a judge emphasizes some of the facts, which, as per the definitions, are considered as the analysis role; however, annotators often confuse them with the facts role. Moreover, sometimes the judge may mention arguments and give their opinion on them; this, as per the definitions, is the analysis role, but annotators sometimes confuse it with the argument role. FACTS was sometimes confused with RLC (Ruling by Lower Court).",
"bbox": [
114,
160,
489,
401
],
"page_idx": 5
},
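Fleiss' kappa over the annotators' labels can be computed with statsmodels; a small sketch with toy data, where the real input would be the corpus's items-by-raters label matrix:

```python
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# Toy matrix: rows are sentences, columns are the three annotators,
# values are rhetorical role ids (0..12); real data comes from the corpus.
ratings = np.array([
    [0, 0, 0],
    [1, 1, 2],
    [3, 3, 3],
    [2, 1, 0],
])

counts, _ = aggregate_raters(ratings)  # items x categories count table
print(fleiss_kappa(counts))            # the paper reports 0.59 overall
```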
{
"type": "text",
"text": "4. RR Prediction Baseline Models",
"text_level": 1,
"bbox": [
147,
418,
457,
434
],
"page_idx": 5
},
{
"type": "text",
"text": "The end goal behind this work has been to encourage the development of systems that can segment a new legal document automatically in terms of rhetorical roles. Towards this goal, we experimented with some baseline models. Since transformer-based models (Wolf et al., 2020) have shown state-of-the-art (SOTA) performance on most NLP tasks, including tasks in the legal NLP domain (Malik et al., 2021b), we mainly experimented with them. In the RR prediction task, given a legal document, the task is to predict the RR label for each sentence in the document. We pose this as a multi-class sequence prediction problem. We initially experimented with variants of the model by Bhattacharya et al. (2019b). In particular, we use a CRF (Conditional Random Field) model for RR prediction. The features for this CRF model come from a transformer, i.e., the BERT-BASE (Devlin et al., 2018) model is used to get sentence embeddings corresponding to the CLS token. These sentence embeddings are then passed through the CRF layer to get the final predictions. We call this model BERT_CRF. We also tried the architecture proposed by Cohan et al. (2019), which captures contextual dependencies using only BERT, without the need for hierarchical encoding using a CRF. We call this model BERT_only. After experiments with vanilla transformer models, we finally created the baseline system using the SciBERT-HSLN architecture (Brack et al., 2021). Figure 5 shows the overall architecture of the proposed model. In the proposed model, each sentence is passed through the BERT-BASE model to get word embeddings; these embeddings are further processed by a Bi-LSTM layer followed by an attention-based pooling layer to get sentence representations $\\{s_1,s_2,\\dots s_n\\}$ .",
"bbox": [
114,
439,
489,
910
],
"page_idx": 5
},
{
"type": "image",
"img_path": "images/3d1fbaa4a99215c85a85c9bf82bb82f79ba23fc0eeea060a6b7de228751dee27.jpg",
"image_caption": [
"Figure 5: RR Prediction Baseline model inspired by Brack et al. (2021)"
],
"image_footnote": [],
"bbox": [
594,
74,
811,
218
],
"page_idx": 5
},
{
"type": "table",
"img_path": "images/6e65ff8d94fb064c8319fbeb9691ccfa704fca64c1e56e5a83f97c96b56b0a17.jpg",
"table_caption": [],
"table_footnote": [],
"table_body": "<table><tr><td>Model</td><td>Precision</td><td>Recall</td><td>F1</td></tr><tr><td>BERT_CRF</td><td>0.24</td><td>0.24</td><td>0.23</td></tr><tr><td>BERT_only</td><td>0.67</td><td>0.68</td><td>0.67</td></tr><tr><td>SciBERT-HSLN</td><td>0.79</td><td>0.80</td><td>0.79</td></tr></table>",
"bbox": [
539,
291,
857,
349
],
"page_idx": 5
},
{
"type": "text",
"text": "Table 3: Performance of models on test (in-domain) data",
"bbox": [
509,
357,
882,
385
],
"page_idx": 5
},
{
"type": "text",
"text": "The Context Enrichment layer encodes contextual information by taking the sequence of sentence representations and producing contextualized sentence representations $\\{c_1,c_2,\\dots c_n\\}$ . This is followed by MLP layers and a CRF layer that leverage the distributed representation features to predict the RR label for each sentence via softmax activation.",
"bbox": [
509,
409,
882,
507
],
"page_idx": 5
},
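To make the architecture description above concrete, here is a compact PyTorch sketch, assuming the Hugging Face transformers library. The bert-base-uncased checkpoint, the hidden size, and the plain softmax classification head (standing in for the baseline's MLP + CRF output layer) are simplifying assumptions, not the authors' exact implementation:

```python
import torch
import torch.nn as nn
from transformers import AutoModel

class HSLNSketch(nn.Module):
    """Sketch of the SciBERT-HSLN-style baseline: BERT word embeddings ->
    BiLSTM + attention pooling (sentence encoder) -> BiLSTM over sentences
    (context enrichment) -> per-sentence RR classifier."""

    def __init__(self, num_labels=13, encoder_name="bert-base-uncased", hidden=256):
        super().__init__()
        self.bert = AutoModel.from_pretrained(encoder_name)
        dim = self.bert.config.hidden_size
        self.word_lstm = nn.LSTM(dim, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)   # attention-based pooling
        self.sent_lstm = nn.LSTM(2 * hidden, hidden, batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * hidden, num_labels)

    def forward(self, input_ids, attention_mask):
        # input_ids: (num_sentences, max_tokens) for ONE document
        words = self.bert(input_ids=input_ids, attention_mask=attention_mask).last_hidden_state
        h, _ = self.word_lstm(words)                                   # (S, T, 2H)
        scores = self.attn(h).masked_fill(attention_mask.unsqueeze(-1) == 0, -1e9)
        alpha = torch.softmax(scores, dim=1)
        sents = (alpha * h).sum(dim=1)                                 # s_1..s_n
        ctx, _ = self.sent_lstm(sents.unsqueeze(0))                    # c_1..c_n
        return self.classifier(ctx.squeeze(0))                         # (S, num_labels)

# Usage sketch: tokenize each sentence of one document with the matching
# tokenizer, then labels = model(input_ids, attention_mask).argmax(-1).
```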
{
"type": "text",
"text": "Results: The performance of the different models was tested on the test (in-domain) data, and the results are given in Table 3. We use the standard weighted F1 score metric for evaluation. As can be observed, the BERT_CRF model performs the worst, and the BERT_only model performs worse than the proposed SciBERT-HSLN model, which achieved a weighted F1 score of $79\\%$ . This is perhaps because SciBERT-HSLN, being a sequential model, can capture longer-range dependencies between sentences in a document. The results of the model on the test set for each of the RR labels are shown in Table 4. Figure 6 shows the confusion matrix for the SciBERT-HSLN model. As can be observed from Table 4 and Figure 6, ARGUMENT-based roles are misclassified very often, confused between the two types of ARGUMENTS and also sometimes confused with FACTS and ANALYSIS. PREAMBLE is almost perfectly classified. As can be seen, PRECEDENT NOT RELIED is completely misclassified and confused with PRECEDENT RELIED and ANALYSIS. RATIO is often confused with ANALYSIS, and this trend is similar to what was observed for the annotators as well. Similar to what was observed for the annotators, RPC, PREAMBLE, NONE and ISSUE are classified with decent F1 scores. STATUTES are also not well classified: many times a judge mentions some laws in their opinion, and the model tends to learn these spurious patterns as analysis and misclassifies actual stat-",
"bbox": [
509,
511,
884,
910
],
"page_idx": 5
},
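The weighted F1 metric used in Tables 3 and 4 can be computed with scikit-learn; a small self-contained example with hypothetical gold and predicted labels:

```python
from sklearn.metrics import classification_report, f1_score

# Hypothetical gold vs. predicted RR labels for a handful of sentences.
y_true = ["PREAMBLE", "FAC", "ANALYSIS", "RATIO", "RPC"]
y_pred = ["PREAMBLE", "FAC", "ANALYSIS", "ANALYSIS", "RPC"]

# The tables above report the weighted average across the 13 labels.
print(f1_score(y_true, y_pred, average="weighted"))
print(classification_report(y_true, y_pred, zero_division=0))
```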
{
"type": "table",
"img_path": "images/4c8e9c62d9c398172f611722c0ffe60fe8925d4cfbd3e6a00b9818217edef5ed.jpg",
"table_caption": [
"Table 4: F1 scores of the RR baseline model for each of the rhetorical roles on test data"
],
"table_footnote": [],
"table_body": "<table><tr><td>Rhetorical Role</td><td>Precision</td><td>Recall</td><td>F1</td></tr><tr><td>ANALYSIS</td><td>0.77</td><td>0.89</td><td>0.83</td></tr><tr><td>ARG_PETITIONER</td><td>0.60</td><td>0.64</td><td>0.62</td></tr><tr><td>ARG_RESPONDENT</td><td>0.84</td><td>0.41</td><td>0.55</td></tr><tr><td>FAC</td><td>0.80</td><td>0.84</td><td>0.82</td></tr><tr><td>ISSUE</td><td>0.93</td><td>0.87</td><td>0.90</td></tr><tr><td>NONE</td><td>0.85</td><td>0.84</td><td>0.85</td></tr><tr><td>PREAMBLE</td><td>0.96</td><td>0.98</td><td>0.97</td></tr><tr><td>PRE_NOT_RELIED</td><td>0.00</td><td>0.00</td><td>0.00</td></tr><tr><td>PRE_RELIED</td><td>0.79</td><td>0.60</td><td>0.68</td></tr><tr><td>RATIO</td><td>0.53</td><td>0.56</td><td>0.54</td></tr><tr><td>RLC</td><td>0.75</td><td>0.45</td><td>0.57</td></tr><tr><td>RPC</td><td>0.78</td><td>0.87</td><td>0.82</td></tr><tr><td>STA</td><td>0.77</td><td>0.54</td><td>0.64</td></tr><tr><td>Overall</td><td>0.79</td><td>0.80</td><td>0.79</td></tr></table>",
"bbox": [
126,
71,
478,
280
],
"page_idx": 6
},
{
"type": "image",
"img_path": "images/762edb6787a6fecc5122ada59b43ef8e4a519afa09d44f69381f7aecad738a27.jpg",
"image_caption": [
"Figure 6: Confusion Matrix for SciBERT-HSLN model predictions on the test data"
],
"image_footnote": [],
"bbox": [
124,
338,
480,
549
],
"page_idx": 6
},
{
"type": "text",
"text": "utes as analysis. We have also created a leaderboard for the task of RR prediction where other researchers can experiment with various approaches.$^{4}$",
"bbox": [
114,
609,
487,
653
],
"page_idx": 6
},
{
"type": "text",
"text": "Results on test (out-domain) data: In order to check whether the baseline model trained on Criminal and Tax cases generalizes to other domains, we tested the baseline model on 27 judgments from Motor Vehicles, Industrial and Labour, and Land and Property cases. The weighted F1 reduced to 0.70. This degradation in performance is mainly due to the different styles of writing in the judgments.",
"bbox": [
114,
653,
487,
766
],
"page_idx": 6
},
{
"type": "text",
"text": "5. Applications of Rhetorical Roles Prediction Task",
"text_level": 1,
"bbox": [
142,
775,
460,
806
],
"page_idx": 6
},
{
"type": "text",
"text": "The purpose of creating a rhetorical role corpus is to enable automated understanding of legal documents by segmenting them into topically coherent units. This can be helpful in various applications such as legal document",
"bbox": [
115,
810,
489,
868
],
"page_idx": 6
},
{
"type": "table",
"img_path": "images/15aa4a63c10a9216a6638c955f8397bf817b5042384fff8bf649a64d1a67c76d.jpg",
"table_caption": [],
"table_footnote": [],
"table_body": "<table><tr><td>Model</td><td>ROUGE-1</td><td>ROUGE-2</td><td>ROUGE-L</td></tr><tr><td>BERTSUM</td><td>0.60</td><td>0.42</td><td>0.59</td></tr><tr><td>BERTSUM RR</td><td>0.62</td><td>0.46</td><td>0.61</td></tr></table>",
"bbox": [
512,
71,
882,
129
],
"page_idx": 6
},
{
"type": "text",
"text": "Table 5: Extractive Summarization Results",
"bbox": [
549,
137,
843,
151
],
"page_idx": 6
},
{
"type": "text",
"text": "summarization (Bhattacharya et al., 2019a) and legal judgment prediction (Malik et al., 2021b). In this paper, we explore both use-cases. We experimented with how rhetorical role prediction could help create abstractive and extractive summaries of Indian court judgments and predict the judgment outcome based on the judgment text.",
"bbox": [
509,
162,
882,
263
],
"page_idx": 6
},
{
"type": "text",
"text": "5.1. Extractive Summarization of Court Judgments using Rhetorical Roles",
"text_level": 1,
"bbox": [
510,
274,
842,
303
],
"page_idx": 6
},
{
"type": "text",
"text": "We explored the task of extractive summarization. For a given legal document, the task requires extracting the salient sentences that would summarize the document. We experimented with the LawBriEFs corpus consisting of 285 extractive summaries of Indian court judgments prepared by law students from a National Law University in India. The corpus was created by providing judgment documents to law students, followed by a questionnaire that required them to pick salient sentences that would answer the questions and, in the process, create the summaries. The questions pertained to facts, arguments, issues, ratio, and decisions. We wanted to experiment with how rhetorical roles could be helpful in extracting summaries.",
"bbox": [
509,
307,
882,
507
],
"page_idx": 6
},
{
"type": "text",
"text": "We finetuned the BERTSUM (Liu and Lapata, 2019) model on the LawBriEFs data to pick the top $20\\%$ of the sentences as summaries. Since the judgments are much longer than the 512-token limit of BERTSUM, we created non-overlapping chunks of 512 tokens, resulting in 3151 chunks of training data from 235 judgments and 827 chunks from 50 judgments as test data. We then trained another model, which also takes as input a rhetorical role for each sentence. We concatenated the 768-dimensional sentence vector from the CLS token with the one-hot encoded sentence rhetorical roles. The idea is that if certain rhetorical roles are more important than others while creating summaries, then the model will learn those. We call this model BERTSUM RR. Discussion with legal experts revealed that ISSUE, RATIO, and RPC are important in a summary and must always be selected without the need for summarizing. So we copied all the sentences with predicted rhetorical roles ISSUE, RATIO and RPC regardless of whether they are present in the top $20\\%$ of sentences. Model performance, evaluated using ROUGE scores (Lin, 2004), is compared in Table 5. The results indicate that rhetorical roles are useful in selecting better summary sentences.",
"bbox": [
509,
508,
882,
834
],
"page_idx": 6
},
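A sketch of the two ideas in the paragraph above: concatenating the CLS sentence vector with a one-hot rhetorical role, and force-including ISSUE/RATIO/RPC sentences in the extract. The role ordering and function names here are illustrative assumptions:

```python
import numpy as np

ROLES = ["ANALYSIS", "ARG_PETITIONER", "ARG_RESPONDENT", "FAC", "ISSUE", "NONE",
         "PREAMBLE", "PRE_NOT_RELIED", "PRE_RELIED", "RATIO", "RLC", "RPC", "STA"]
ALWAYS_KEEP = {"ISSUE", "RATIO", "RPC"}  # per the legal experts' advice above

def rr_features(cls_vec, role):
    """768-dim CLS sentence vector concatenated with a one-hot rhetorical role."""
    one_hot = np.zeros(len(ROLES), dtype=np.float32)
    one_hot[ROLES.index(role)] = 1.0
    return np.concatenate([cls_vec, one_hot])  # 768 + 13 dims

def select_summary(sentences, roles, scores, ratio=0.20):
    """Top-`ratio` sentences by model score, plus all ISSUE/RATIO/RPC sentences."""
    k = max(1, int(ratio * len(sentences)))
    top = set(np.argsort(scores)[::-1][:k])
    keep = [i for i in range(len(sentences)) if i in top or roles[i] in ALWAYS_KEEP]
    return [sentences[i] for i in sorted(keep)]
```

The resulting extracts can then be scored against the LawBriEFs references with any standard ROUGE implementation, as in Table 5.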
{
"type": "text",
"text": "5.2. Abstractive Summarization of Court Judgments using Rhetorical Roles",
"text_level": 1,
"bbox": [
510,
846,
852,
876
],
"page_idx": 6
},
{
"type": "text",
"text": "The task of abstractive summarization requires generating concise text summaries of legal documents. For",
"bbox": [
509,
879,
882,
909
],
"page_idx": 6
},
{
"type": "page_footnote",
"text": "$^{4}$ https://legal-nlp-ekstep.github.io/Competitions/Rhetorical-Role/",
"bbox": [
115,
881,
465,
908
],
"page_idx": 6
},
{
"type": "text",
"text": "our experiments, we considered 50 randomly selected documents from the LawBriEFs dataset (as described in §5.1) as test data. For this task we used the pre-trained Legal Pegasus model.$^{5}$ Legal Pegasus is a version of Pegasus (Zhang et al., 2020) fine-tuned on a US securities litigation dataset.$^{6}$ We used the pre-trained Legal Pegasus model for generating abstractive summaries for the baseline. In particular, we split the document into non-overlapping chunks of 1024 tokens, and each chunk was passed through the model to generate summaries. The final summary was obtained by concatenating the summaries of all chunks. This constituted the baseline model. We wanted to see how RRs could help generate better summaries. Towards this goal, we segmented the document in terms of rhetorical roles, and each of the segments was passed separately through the Legal Pegasus model to generate summaries. The final summary was obtained by concatenating the summaries corresponding to each of the rhetorical roles in the order they appear in the document. This corresponds to the Legal Pegasus RR model. Both models were compared on the test set, and the ROUGE scores for both models are shown in Table 6. As can be observed in Table 6, the use of rhetorical roles helps to improve performance on the task of abstractive summarization.",
"bbox": [
117,
74,
485,
443
],
"page_idx": 7
},
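A sketch of the Legal Pegasus RR procedure described above, assuming the transformers library and the public nsi319/legal-pegasus checkpoint that footnote 5 appears to point to; the beam size and summary length are illustrative assumptions:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

name = "nsi319/legal-pegasus"  # assumed checkpoint, per footnote 5
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForSeq2SeqLM.from_pretrained(name)

def summarize(text, max_input=1024):
    """Summarize one chunk/segment, truncated to the 1024-token input limit."""
    inputs = tok(text, truncation=True, max_length=max_input, return_tensors="pt")
    ids = model.generate(**inputs, num_beams=4, max_length=256)
    return tok.decode(ids[0], skip_special_tokens=True)

def summarize_by_role(segments):
    """segments: (role, segment_text) pairs in document order; one summary per
    RR segment, concatenated, as in the Legal Pegasus RR variant above."""
    return " ".join(summarize(text) for _, text in segments)
```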
{
"type": "table",
"img_path": "images/494df5b4673b9e3c30d7fd71786cecdb348c52434503bac05c61e7fdc023d21a.jpg",
"table_caption": [
"Table 6: Abstractive Summarization Results"
],
"table_footnote": [],
"table_body": "<table><tr><td>Model</td><td>ROUGE-1</td><td>ROUGE-2</td><td>ROUGE-L</td></tr><tr><td>Legal Pegasus</td><td>0.55</td><td>0.34</td><td>0.47</td></tr><tr><td>Legal Pegasus RR</td><td>0.56</td><td>0.36</td><td>0.48</td></tr></table>",
"bbox": [
117,
455,
485,
511
],
"page_idx": 7
},
{
"type": "text",
"text": "5.3. Court Judgment Prediction using Rhetorical Roles",
"text_level": 1,
"bbox": [
117,
552,
428,
580
],
"page_idx": 7
},
{
"type": "text",
"text": "Malik et al. (2021b) created the corpus (ILDC: Indian Legal Documents Corpus) and the task (CJPE: Court Judgment Prediction and Explanation) for predicting and explaining court judgments based on the judgment texts. For the judgment prediction task, it is essential to identify which sentences provide hints about the final decision and to use that filtered data as input for prediction. We predicted the rhetorical role for each sentence of the train and test data using the baseline rhetorical role model. In the ILDC dataset, we removed the sentences with RPC and RATIO tags, making the task more challenging. We also removed the judgments for which no ANALYSIS was predicted. Note that the ILDC dataset is already anonymized and takes care of the biases and ethical concerns associated with the task of judgment prediction. Moreover, we use judgment prediction only as a use case and do not believe that an automated system could replace a human judge; rather,",
"bbox": [
117,
583,
485,
839
],
"page_idx": 7
},
{
"type": "text",
"text": "such a system could augment a human and expedite legal processes, especially in highly populated countries like India.",
"bbox": [
514,
74,
880,
115
],
"page_idx": 7
},
{
"type": "text",
"text": "For the task of judgment prediction, the training data had 5044 judgments, and the test data had 977 judgments. The idea is to filter the training data using rhetorical roles to check the impact on model performance, keeping the model architecture the same. We used XLNet on the ILDC single model proposed in Malik et al. (2021b) to predict the judgment outcome from the last 512 tokens of the judgment text. We call this approach XLNet_last512. The model ran for 13 epochs and was then early-stopped. In another experiment, we trained the same architecture to predict the judgment outcome from the last 512 tokens of the ANALYSIS role sentences. We call this model XLNet_last512_Analysis. The model ran for 12 epochs and was then early-stopped. The model performance comparison is given in Table 7. As observed from the results, filtering the input text for the ANALYSIS role improves the prediction.",
"bbox": [
514,
117,
880,
357
],
"page_idx": 7
},
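A rough sketch of the XLNet_last512_Analysis setup described above, assuming the transformers library; the fine-tuning loop on ILDC is omitted, and the binary label convention is an assumption:

```python
import torch
from transformers import XLNetForSequenceClassification, XLNetTokenizer

tok = XLNetTokenizer.from_pretrained("xlnet-base-cased")
model = XLNetForSequenceClassification.from_pretrained("xlnet-base-cased", num_labels=2)

def predict_outcome(sentences, predicted_roles):
    """Mirror XLNet_last512_Analysis: keep only sentences predicted as
    ANALYSIS, then classify the last 512 tokens of the filtered text."""
    text = " ".join(s for s, r in zip(sentences, predicted_roles) if r == "ANALYSIS")
    ids = tok(text, return_tensors="pt")["input_ids"][0][-512:]
    with torch.no_grad():
        logits = model(input_ids=ids.unsqueeze(0)).logits
    return int(logits.argmax(-1))  # assumed: 1 = claim accepted, 0 = rejected
```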
{
"type": "table",
"img_path": "images/ae0f67900e0f792f33dae9125cadbc8b79ccf08fe35c78690193285e2d4e834f.jpg",
"table_caption": [],
"table_footnote": [],
"table_body": "<table><tr><td>Model</td><td>Precision</td><td>Recall</td><td>F1</td></tr><tr><td>XLNet_last512</td><td>0.76</td><td>0.49</td><td>0.59</td></tr><tr><td>XLNet_last512_Analysis</td><td>0.71</td><td>0.55</td><td>0.62</td></tr></table>",
"bbox": [
514,
370,
880,
413
],
"page_idx": 7
},
{
"type": "text",
"text": "Table 7: Judgment Prediction Results",
"bbox": [
571,
423,
823,
436
],
"page_idx": 7
},
{
"type": "text",
"text": "6. Conclusion and Future Directions",
"text_level": 1,
"bbox": [
532,
458,
860,
470
],
"page_idx": 7
},
{
"type": "text",
"text": "In this paper, we proposed a new corpus of legal judgment documents annotated with 13 different Rhetorical Roles. The corpus was created via crowdsourcing involving law students. We also proposed baseline models for automatic rhetorical role prediction in a legal document. For some of the roles, the model shows trends in predicting the roles similar to those of the human annotators. Nevertheless, there is scope for further improvement, and we have created a leaderboard for the task so that researchers from the community can contribute towards improving the RR prediction system. We also showed two applications of rhetorical roles: summarization and judgment prediction. For both use-cases, the use of rhetorical roles helps to improve the results. We have released the corpus and the baseline models and encourage the community to use these to develop other legal applications as well.",
"bbox": [
514,
476,
880,
717
],
"page_idx": 7
},
{
"type": "text",
"text": "Acknowledgements",
"text_level": 1,
"bbox": [
615,
730,
781,
746
],
"page_idx": 7
},
{
"type": "text",
"text": "We thank the EkStep Foundation for funding this work. We thank all the law experts, student volunteers, and coordinators for contributing to the data annotation. We thank LawBriEFs for sharing the summaries. The author Ashutosh Modi would like to acknowledge the support of Google Research India via the Faculty Research Award Grant 2021.",
"bbox": [
514,
749,
880,
847
],
"page_idx": 7
},
{
"type": "text",
"text": "7. Bibliographical References",
"text_level": 1,
"bbox": [
564,
860,
831,
876
],
"page_idx": 7
},
{
"type": "text",
"text": "Bhatia, V. K. (2014). Analysing genre: Language use in professional settings. Routledge.",
"bbox": [
514,
881,
880,
909
],
"page_idx": 7
},
{
"type": "page_footnote",
"text": "$^{5}$ https://huggingface.co/nsi319/legal-pegasus $^{6}$ https://www.sec.gov/litigation/litreleases.htm",
"bbox": [
117,
854,
423,
907
],
"page_idx": 7
},
{
"type": "list",
"sub_type": "ref_text",
"list_items": [
"Bhattacharya, P., Hiware, K., Rajgaria, S., Pochhi, N., Ghosh, K., and Ghosh, S. (2019a). A comparative study of summarization algorithms applied to legal case judgments. In European Conference on Information Retrieval, pages 413-428. Springer.",
"Bhattacharya, P., Paul, S., Ghosh, K., Ghosh, S., and Wyner, A. (2019b). Identification of rhetorical roles of sentences in Indian legal judgments.",
"Brack, A., Hoppe, A., Buschermohle, P., and Ewerth, R. (2021). Sequential sentence classification in research papers using cross-domain multi-task learning.",
"Chalkidis, I., Androutsopoulos, I., and Aletras, N. (2019). Neural legal judgment prediction in English. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4317-4323, Florence, Italy, July. Association for Computational Linguistics.",
"Chalkidis, I., Fergadiotis, M., and Androutsopoulos, I. (2021). MultiEURLEX - a multi-lingual and multi-label legal document classification dataset for zero-shot cross-lingual transfer. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 6974-6996, Online and Punta Cana, Dominican Republic, November. Association for Computational Linguistics.",
"Cohan, A., Beltagy, I., King, D., Dalvi, B., and Weld, D. (2019). Pretrained language models for sequential sentence classification. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP).",
"Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., and Fei-Fei, L. (2009). ImageNet: A Large-Scale Hierarchical Image Database. In CVPR09.",
"Devlin, J., Chang, M., Lee, K., and Toutanova, K. (2018). BERT: Pre-training of deep bidirectional transformers for language understanding. CoRR, abs/1810.04805.",
"Farzindar, A. and Lapalme, G. (2004). LetSum, an automatic legal text summarizing system.",
"Fleiss, J. L., Levin, B., and Paik, M. C. (2013). Statistical methods for rates and proportions. John Wiley & Sons.",
"Grabmair, M., Ashley, K. D., Hwa, R., and Sweeney, P. M. (2011). Toward extracting information from public health statutes using text classification machine learning. In Legal Knowledge and Information Systems, pages 73-82. IOS Press.",
|
| 1267 |
+
"Hachey, B. and Grover, C. (2006). Extractive summarisation of legal texts. Artificial Intelligence and Law, 14(4):305-345.",
|
| 1268 |
+
"Jackson, P., Al-Kofahi, K., Tyrrell, A., and Vachher, A. (2003). Information extraction from case law and retrieval of prior cases. Artificial Intelligence, 150(1-2):239-290.",
|
| 1269 |
+
"Kapoor, A., Dhawan, M., Goel, A., Arjun, T., Agrawal,"
|
| 1270 |
+
],
|
| 1271 |
+
"bbox": [
|
| 1272 |
+
117,
|
| 1273 |
+
74,
|
| 1274 |
+
487,
|
| 1275 |
+
909
|
| 1276 |
+
],
|
| 1277 |
+
"page_idx": 8
|
| 1278 |
+
},
|
| 1279 |
+
{
|
| 1280 |
+
"type": "list",
|
| 1281 |
+
"sub_type": "ref_text",
|
| 1282 |
+
"list_items": [
|
| 1283 |
+
"V., Agrawal, A., Bhattacharya, A., Kumaraguru, P., and Modi, A. (2022). HLDC: Hindi Legal Documents Corpus. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2022. Association for Computational Linguistics.",
|
| 1284 |
+
"Lagos, N., Segond, F., Castellani, S., and O?Neill, J. (2010). Event extraction for legal case building and reasoning. In International Conference on Intelligent Information Processing, pages 92-101. Springer.",
|
| 1285 |
+
"Lin, C.-Y. (2004). ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out, pages 74-81, Barcelona, Spain, July. Association for Computational Linguistics.",
|
| 1286 |
+
"Liu, Y. and Lapata, M. (2019). Text summarization with pretrained encoders.",
|
| 1287 |
+
"Malik, V., Sanjay, R., Guha, S. K., Nigam, S. K., Hazarika, A., Bhattacharya, A., and Modi, A. (2021a). Semantic Segmentation of Legal Documents via Rhetorical Roles. CoRR, abs/2112.01836.",
|
| 1288 |
+
"Malik, V., Sanjay, R., Nigam, S. K., Ghosh, K., Guha, S. K., Bhattacharya, A., and Modi, A. (2021b). ILDC for CJPE: Indian legal documents corpus for court judgment prediction and explanation. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 4046-4062, Online, August. Association for Computational Linguistics.",
|
| 1289 |
+
"Maxwell, K. T., Oberlander, J., and Lavrenko, V. (2009). Evaluation of semantic events for legal case retrieval. In Proceedings of the WSDM'09 Workshop on Exploiting Semantic Annotations in Information Retrieval, pages 39-41.",
|
| 1290 |
+
"Moens, M.-F., Uytendaele, C., and Dumortier, J. (1999). Abstracting of legal cases: the potential of clustering based on the selection of representative objects. Journal of the American Society for Information Science, 50(2):151-161.",
|
| 1291 |
+
"Moens, M.-F., Boiy, E., Palau, R. M., and Reed, C. (2007). Automatic detection of arguments in legal texts. In Proceedings of the 11th international conference on Artificial intelligence and law, pages 225-230.",
|
| 1292 |
+
"National Judicial Data Grid. (2021). National judicial data grid statistics. https://www.njdg.ecourts.gov.in/njdgnew/index.php.",
|
| 1293 |
+
"Russakovsky, O., Deng, J., Huang, Z., Berg, A. C., and Fei-Fei, L. (2013). Detecting avocados to zucchini: what have we done, and where are we going? In International Conference on Computer Vision (ICCV).",
|
| 1294 |
+
"Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., Berg, A. C., and Fei-Fei, L. (2015). ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision (IJCV), 115(3):211-252."
|
| 1295 |
+
],
|
| 1296 |
+
"bbox": [
|
| 1297 |
+
512,
|
| 1298 |
+
74,
|
| 1299 |
+
882,
|
| 1300 |
+
909
|
| 1301 |
+
],
|
| 1302 |
+
"page_idx": 8
|
| 1303 |
+
},
|
| 1304 |
+
{
|
| 1305 |
+
"type": "list",
|
| 1306 |
+
"sub_type": "ref_text",
|
| 1307 |
+
"list_items": [
|
| 1308 |
+
"Saravanan, M., Ravindran, B., and Raman, S. (2007). Using legal ontology for query enhancement in generating a document summary. Frontiers In Artificial Intelligence and Applications, 165:171.",
|
| 1309 |
+
"Saravanan, M., Ravindran, B., and Raman, S. (2008). Automatic identification of rhetorical roles using conditional random fields for legal document summarization. In Proceedings of the Third International Joint Conference on Natural Language Processing: Volume-I.",
|
| 1310 |
+
"spaCy. (2021). spaCy Toolkit. https://spacy.io/.",
|
| 1311 |
+
"Strickson, B. and De La Iglesia, B. (2020a). Legal Judgement Prediction for UK Courts. In Proceedings of the 2020 The 3rd International Conference on Information Science and System, pages 204-209, Cambridge United Kingdom, March. ACM.",
|
| 1312 |
+
"Strickson, B. and De La Iglesia, B. (2020b). Legal Judgement Prediction for UK Courts. In Proceedings of the 2020 The 3rd International Conference on Information Science and System, pages 204-209, Cambridge United Kingdom, March. ACM.",
|
| 1313 |
+
"Sulea, O.-M., Zampieri, M., Vela, M., and van Genabith, J. (2017). Predicting the law area and decisions of French Supreme Court cases. In Proceedings of the International Conference on Advances in Natural Language Processing, RANLP 2017, pages 716-722, Varna, Bulgaria, September. INCOMA Ltd.",
|
| 1314 |
+
"Tay, Y., Dehghani, M., Bahri, D., and Metzler, D. (2020). Efficient transformers: A survey. arXiv preprint arXiv:2009.06732.",
|
| 1315 |
+
"Tran, V., Nguyen, M. L., and Satoh, K. (2019). Building legal case retrieval systems with lexical matching and summarization using a pre-trained phrase scoring model. In Proceedings of the Seventeenth International Conference on Artificial Intelligence and Law, pages 275-282.",
|
| 1316 |
+
"Wang, A., Singh, A., Michael, J., Hill, F., Levy, O., and Bowman, S. R. (2018). Glue: A multi-task benchmark and analysis platform for natural language understanding. arXiv preprint arXiv:1804.07461.",
|
| 1317 |
+
"Wolf, T., Debut, L., Sanh, V., Chaumont, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtopicz, M., Davison, J., Shleifer, S., von Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., and Rush, A. (2020). Transformers: State-of-the-Art Natural Language Processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online, October. Association for Computational Linguistics.",
|
| 1318 |
+
"Wyner, A., Mochales-Palau, R., Moens, M.-F., and Milward, D. (2010). Approaches to text mining arguments from legal cases. In Semantic processing of legal texts, pages 60-79. Springer.",
|
| 1319 |
+
"Xiao, C., Zhong, H., Guo, Z., Tu, C., Liu, Z., Sun, M.,"
|
| 1320 |
+
],
|
| 1321 |
+
"bbox": [
|
| 1322 |
+
117,
|
| 1323 |
+
74,
|
| 1324 |
+
489,
|
| 1325 |
+
909
|
| 1326 |
+
],
|
| 1327 |
+
"page_idx": 9
|
| 1328 |
+
},
|
| 1329 |
+
{
|
| 1330 |
+
"type": "list",
|
| 1331 |
+
"sub_type": "ref_text",
|
| 1332 |
+
"list_items": [
|
| 1333 |
+
"Feng, Y., Han, X., Hu, Z., Wang, H., et al. (2018).",
|
| 1334 |
+
"Cail2018: A large-scale legal dataset for judgment prediction. arXiv preprint arXiv:1807.02478.",
|
| 1335 |
+
"Zhang, J., Zhao, Y., Saleh, M., and Liu, P. J. (2020).",
|
| 1336 |
+
"Pegasus: Pre-training with extracted gap-sentences for abstractive summarization."
|
| 1337 |
+
],
|
| 1338 |
+
"bbox": [
|
| 1339 |
+
512,
|
| 1340 |
+
74,
|
| 1341 |
+
882,
|
| 1342 |
+
159
|
| 1343 |
+
],
|
| 1344 |
+
"page_idx": 9
|
| 1345 |
+
}
|
| 1346 |
+
]
|
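The extracted text above describes the XLNet_last512_Analysis baseline: keep only ANALYSIS-role sentences and feed the last 512 tokens to the classifier. Below is a minimal sketch of that preprocessing step, not the authors' code: the input file path and the "sentences"/"text"/"label" field names are hypothetical stand-ins for a judgment annotated with the 13 rhetorical roles, while the XLNet tokenizer follows the model named in the paper.

```python
# Minimal sketch (not the authors' implementation) of the input filtering
# behind XLNet_last512_Analysis: keep ANALYSIS-role sentences, then
# truncate to the last 512 XLNet tokens.
import json
from transformers import AutoTokenizer  # assumes `transformers` is installed

tokenizer = AutoTokenizer.from_pretrained("xlnet-base-cased")

def last_512_analysis_tokens(sentences):
    # `sentences` is assumed to be a list of dicts with hypothetical keys
    # "text" and "label", where "label" is one of the 13 rhetorical roles.
    analysis = " ".join(s["text"] for s in sentences if s["label"] == "ANALYSIS")
    ids = tokenizer.encode(analysis, add_special_tokens=False)
    return tokenizer.decode(ids[-512:])  # only the trailing 512 tokens survive

# Hypothetical usage on one annotated judgment:
with open("judgment_with_rr_labels.json") as f:
    doc = json.load(f)
model_input = last_512_analysis_tokens(doc["sentences"])
```

Per Table 7 above, this filtering trades some precision (0.76 to 0.71) for recall (0.49 to 0.55) and a net F1 gain, consistent with the ANALYSIS sentences carrying most of the outcome-relevant signal.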
2201.13xxx/2201.13125/22a9b67e-4248-4898-877b-81213525c31c_model.json
ADDED
@@ -0,0 +1,1947 @@
[
[
{
"type": "aside_text",
"bbox": [0.023, 0.31, 0.061, 0.725],
"angle": 270,
"content": "arXiv:2201.13125v2 [cs.CL] 19 Sep 2022"
},
{
"type": "title",
"bbox": [0.217, 0.096, 0.785, 0.115],
"angle": 0,
"content": "Corpus for Automatic Structuring of Legal Documents"
},
{
"type": "title",
"bbox": [0.145, 0.135, 0.857, 0.168],
"angle": 0,
"content": "Prathamesh Kalamkar\\(^{1,2,*}\\), Aman Tiwari\\(^{1,2,*}\\), Astha Agarwal\\(^{1,2,*}\\), Saurabh Karn\\(^{3,*}\\), Smita Gupta\\(^{3}\\), Vivek Raghavan\\(^{1}\\), Ashutosh Modi\\(^{4}\\)"
},
{
"type": "text",
"bbox": [0.28, 0.168, 0.725, 0.182],
"angle": 0,
"content": "\\(^{1}\\)EkStep Foundation, \\(^{2}\\)Thoughtworks Technologies India Pvt Ltd.,"
},
{
"type": "text",
"bbox": [0.306, 0.182, 0.697, 0.197],
"angle": 0,
"content": "\\(^{3}\\)Agami, \\(^{4}\\)Indian Institute of Technology Kanpur (IIT-K)"
},
{
"type": "text",
"bbox": [0.296, 0.197, 0.709, 0.211],
"angle": 0,
"content": "{prathamk, aman.tiwari, astha.agarwal}@thoughtworks.com,"
},
{
"type": "text",
"bbox": [0.254, 0.212, 0.75, 0.225],
"angle": 0,
"content": "{saurabh, smita}@agami.in, Vivek@ekstep.org, ashutoshm@cse.iitk.ac.in"
},
{
"type": "title",
"bbox": [0.467, 0.229, 0.534, 0.242],
"angle": 0,
"content": "Abstract"
},
{
"type": "text",
"bbox": [0.115, 0.244, 0.885, 0.336],
"angle": 0,
"content": "In populous countries, pending legal cases have been growing exponentially. There is a need for developing techniques for processing and organizing legal documents. In this paper, we introduce a new corpus for structuring legal documents. In particular, we introduce a corpus of legal judgment documents in English that are segmented into topical and coherent parts. Each of these parts is annotated with a label coming from a list of pre-defined Rhetorical Roles. We develop baseline models for automatically predicting rhetorical roles in a legal document based on the annotated corpus. Further, we show the application of rhetorical roles to improve performance on the tasks of summarization and legal judgment prediction. We release the corpus and baseline model code along with the paper."
},
{
"type": "text",
"bbox": [0.116, 0.348, 0.562, 0.362],
"angle": 0,
"content": "Keywords: Legal NLP, Rhetorical Roles, Legal Document Segmentation"
},
{
"type": "title",
"bbox": [0.229, 0.379, 0.377, 0.393],
"angle": 0,
"content": "1. Introduction"
},
{
"type": "text",
"bbox": [0.115, 0.398, 0.49, 0.867],
"angle": 0,
"content": "In populous countries (e.g., India), pending legal cases have been growing exponentially. For example, according to India's National Judicial Data Grid, as of December 2021, there are approximately 40 million cases pending in various courts of the country (National Judicial Data Grid, 2021). India follows a common-law system; consequently, due to subjectivity involved in the legal process, it may not be possible to automate the entire judicial pipeline completely; nevertheless, many intermediate tasks can be automated to augment legal practitioners, and hence expedite the system. For example, legal documents can be processed with the help of Natural Language Processing (NLP) techniques to organize and structure the data to be amenable to automatic search and retrieval. However, legal texts are different from commonly occurring texts typically used to train NLP models. Legal documents are quite long, running into tens (sometimes hundreds) of pages. Long documents make automatic processing challenging as information is spread throughout the document (Malik et al., 2021b). Another challenge with legal documents is the use of different lexicons. Though legal documents use natural language (e.g., English), many commonly occurring words/terms have different legal connotations. The use of different lexicons makes it challenging to adapt existing NLP models to legal texts (Malik et al., 2021b). Moreover, in countries like India, legal documents are manually typed and are highly unstructured and noisy (e.g., spelling and grammatical mistakes). The above-mentioned challenges make it difficult to apply existing NLP models and techniques directly, which calls for the development of legal domain-specific techniques."
},
{
"type": "text",
"bbox": [0.116, 0.867, 0.489, 0.881],
"angle": 0,
"content": "Existing state-of-the-art models in NLP are data-driven"
},
{
"type": "text",
"bbox": [0.51, 0.38, 0.885, 0.879],
"angle": 0,
"content": "and are trained on annotated corpora. However, the legal domain suffers from the deficiency of availability of annotated corpora. It has hindered the growth of the Legal NLP domain. For example, much of the recent success in the computer vision community can be owed to the creation and availability of annotated vision corpora such as ImageNet (Deng et al., 2009; Russakovsky et al., 2013; Russakovsky et al., 2015). In this paper, we contribute to creating annotated legal text corpora. In particular, we create a new corpus of Indian legal judgments in English that are structured and annotated with topically coherent semantic units. Since legal documents are long and unstructured, these can be divided into topically coherent parts (e.g., facts, arguments) referred to as Rhetorical Roles (Saravanan et al., 2008; Bhattacharya et al., 2019b; Malik et al., 2021a). In this paper, with the help of legal experts, we annotate legal documents with 12 different Rhetorical Roles (RRs) (details in §3). An example text annotated with some of the RRs is shown in Figure 1. As shown in the figure, an unstructured legal judgment document is segmented into semantically coherent parts, and each part is annotated with a rhetorical role label such as preamble, fact, ratio, etc. We experimented with different levels of granularity (phrase level, sentence level, paragraph level) for annotating RRs and decided to go for sentence-level RR annotations based on initial experiments. Each sentence in a legal document is annotated with a rhetorical role label in the proposed corpus. Typically, consecutive sentences can have a similar role in a judgment document. The rhetorical role corpus is part of a general open-source effort of creating various legal corpora for promoting the development and bench-marking of legal NLP systems. This project is called BUILDNyAI.\\(^{1}\\) We make the following contribu"
},
{
"type": "page_footnote",
"bbox": [0.145, 0.896, 0.329, 0.91],
"angle": 0,
"content": "* Authors contributed equally"
},
{
"type": "page_footnote",
"bbox": [0.533, 0.895, 0.884, 0.91],
"angle": 0,
"content": "\\(^{1}\\)The word BUILDNyAI is a code-mixed (English+Hindi)"
}
],
[
{
"type": "text",
"bbox": [0.148, 0.125, 0.473, 0.301],
"angle": 0,
"content": "IN THE COURT OF THE V ADDL SESSIONS JUDGE, MYSORE. Dated this the 23rd day of May 2013 ... The Petitioner is a businessman and he is permanent resident of Mysore City... On behalf of the Prosecution the learned Public Prosecutor has filed objection to the bail Petition stating that, there ...Now, the points that arise for consideration of the Court are: 1. Whether the Petitioner has made out sufficient grounds to release him on Anticipatory Bail? ... Heard the arguments advanced by the learned advocate for the Petitioner and the learned Public Prosecutor... Considering all these aspects, the Court is of the view that, ...Point No.2: For the foregoing reasons and in view of my above discussions, I proceed to pass the following ...The High Court by its order dated October 26, 1982 set aside the order of the Tribunal and also the assessment on the ground ...The petitioners are falsely implicated and the charge sheet has been filed against the petitioners merely ...My findings on the above points are as follows: Point No.1: In the Positive Point No.2 : As per final order for the following...In a decision reported in (2013) 1 KCCR 334 case of K.Ramachandra Reddy Vs. State of Karnataka by the Station House Officer...The decision of the Andhra Pradesh High Court ... are not relevant for purposes of deciding the question which has arisen before us..."
},
{
"type": "image",
"bbox": [0.473, 0.079, 0.862, 0.355],
"angle": 0,
"content": null
},
{
"type": "image_caption",
"bbox": [0.115, 0.378, 0.884, 0.406],
"angle": 0,
"content": "Figure 1: Example of document segmentation via Rhetorical Roles labels. On the left is an excerpt from a legal document and on the right is the document segmented and labelled with rhetorical role labels."
},
{
"type": "title",
"bbox": [0.116, 0.442, 0.245, 0.456],
"angle": 0,
"content": "tions in this paper:"
},
{
"type": "text",
"bbox": [0.136, 0.456, 0.489, 0.526],
"angle": 0,
"content": "- We create a corpus of 354 Indian legal documents annotated with rhetorical roles. The corpus has 40,305 sentences annotated with 12 different RRs. To the best of our knowledge, this is the largest corpus of legal documents annotated with RRs."
},
{
"type": "text",
"bbox": [0.136, 0.527, 0.489, 0.582],
"angle": 0,
"content": "- In order to be of practical value, using the corpus, we develop a transformer-based baseline model for automatically annotating legal documents with sentence-level RR."
},
{
"type": "text",
"bbox": [0.136, 0.584, 0.489, 0.627],
"angle": 0,
"content": "- We show two use-cases for RRs. In particular, we show applications of RRs to the task of legal case summarization and legal judgment prediction."
},
{
"type": "text",
"bbox": [0.136, 0.627, 0.489, 0.683],
"angle": 0,
"content": "- We release the corpus and the model implementations: https://legal-nlp-ekstep.github.io/Competitions/Rhetorical-Role/"
},
{
"type": "list",
"bbox": [0.136, 0.456, 0.489, 0.683],
"angle": 0,
"content": null
},
{
"type": "title",
"bbox": [0.222, 0.696, 0.383, 0.71],
"angle": 0,
"content": "2. Related Work"
},
{
"type": "text",
"bbox": [0.115, 0.715, 0.49, 0.844],
"angle": 0,
"content": "In recent times, there has been a lot of work in the area of legal text processing. Different tasks and techniques have been proposed. For example, Prior Case Retrieval (Jackson et al., 2003), Summarization (Moens et al., 1999; Saravanan et al., 2007), Case Prediction (Malik et al., 2021b; Chalkidis et al., 2019; Strickson and De La Iglesia, 2020a; Sulea et al., 2017; Kapoor et al., 2022), Argument Mining (Wyner et al., 2010; Moens et al., 2007), Information Extraction and Retrieval (Tran"
},
{
"type": "text",
"bbox": [0.115, 0.857, 0.49, 0.91],
"angle": 0,
"content": "term having English word BUILD and Hindi word nyAI (short for nyayi, which means justice). The project is hosted at https://legal-nlp-ekstep.github.io/Competitions/Rhetorical-Role/"
},
{
"type": "text",
"bbox": [0.51, 0.441, 0.885, 0.485],
"angle": 0,
"content": "et al., 2019; Grabmair et al., 2011; Tran et al., 2019), and Event Extraction (Lagos et al., 2010; Maxwell et al., 2009; Lagos et al., 2010)."
},
{
"type": "text",
"bbox": [0.51, 0.491, 0.885, 0.705],
"angle": 0,
"content": "Recently, efforts have been made to develop corpora that could aid various legal NLP tasks; for example, Malik et al. (2021b) have released a corpus of 35K Indian Supreme Court documents for the task of judgment prediction and explanation. Chalkidis et al. (2019) have released 11,478 legal documents corresponding to the European Court of Human Rights (ECHR). Strickson and De La Iglesia (2020b) have proposed a corpus of 4,959 UK Supreme Court documents. Xiao et al. (2018) have created a large-scale corpus of 2.68 million criminal case documents and released CAIL (Chinese AI and Law Challenge) dataset for judgment prediction. A new multilingual dataset of European Union (EU) legal documents has been recently released by Chalkidis et al. (2021)."
},
{
"type": "text",
"bbox": [0.51, 0.71, 0.885, 0.911],
"angle": 0,
"content": "Research in rhetorical roles for legal text processing has been active in the past few years. Farzindar and Lapalme (2004) and Hachey and Grover (2006) have leveraged rhetorical roles to create summaries of legal texts. Saravanan et al. (2008) proposed a CRF-based model using hand-crafted features for segmenting documents using seven different roles. Bhatia (2014) created Genre Analysis of Legal Texts to create seven rhetorical categories. Bhattacharya et al. (2019b) have proposed a CRF-BiLSTM model for automatically assigning rhetorical roles to sentences in Indian legal documents. Malik et al. (2021a) have created an RR corpus annotated with 13 fine-grained roles, and further they have developed a multi-task learning based model"
}
],
[
{
"type": "text",
"bbox": [0.115, 0.075, 0.49, 0.26],
"angle": 0,
"content": "for predicting RR. In this paper, we also propose a corpus of English Indian legal judgment documents annotated with Rhetorical Roles; however, we annotate the documents with a more extensive set of 12 rhetorical role labels and a NONE label (in the case none of the 12 labels are applicable). Moreover, to the best of our knowledge, we create the largest corpus of 354 documents (vs. 100 documents in previous RR corpus by Malik et al. (2021a)), with 40,315 sentences annotated with 13 \\((12 + \\mathrm{NONE})\\) different types of rhetorical role labels. We propose state-of-the-art transformer models for RR prediction and show the use case of RRs for case summarization and legal judgment prediction."
},
{
"type": "text",
"bbox": [0.115, 0.261, 0.49, 0.361],
"angle": 0,
"content": "Recent success in almost every area in NLP has been due to transformer-based neural architectures (Wang et al., 2018). We do not discuss the details of transformer architectures here and refer the reader to the survey on transformers by Tay et al. (2020). We develop transformer-based baseline models for automatically segmenting legal documents into RRs units."
},
{
"type": "title",
"bbox": [0.177, 0.376, 0.43, 0.393],
"angle": 0,
"content": "3. Rhetorical Roles Corpus"
},
{
"type": "text",
"bbox": [0.115, 0.397, 0.49, 0.581],
"angle": 0,
"content": "As outlined earlier, legal documents are typically long, and information is spread throughout the document. In order to make the automatic processing of documents easier, documents are divided into topically coherent segments referred to as Rhetorical Roles (Malik et al., 2021a). In this paper, we propose the use of 12 RRs and a NONE label. We started with the list of RR labels proposed by Bhattacharya et al. (2019b); however, we found some of the RR to be ambiguous, hence after having elaborate discussions with law professors, we split some of the RRs (arguments and precedents) to arrive at the list of 12 main roles. Details and definitions for each of the RR are as follows:"
},
{
"type": "text",
"bbox": [0.136, 0.583, 0.49, 0.71],
"angle": 0,
"content": "- Preamble (PREAMBLE): This covers the metadata related to the legal judgment document. A typical judgment would start with the court name, the details of parties, lawyers and judges' names, headnote (summary). This section typically would end with a keyword like (JUDGMENT or ORDER). Some documents also have HEADNOTES, ACTS sections in the beginning. These are also part of the Preamble."
},
{
"type": "text",
"bbox": [0.136, 0.711, 0.49, 0.809],
"angle": 0,
"content": "- Facts (FAC): This corresponds to the facts of the case. It refers to the chronology of events that led to filing the case and how it evolved (e.g., First Information Report (FIR) at a police station, filing an appeal to the Magistrate, etc.) Depositions and proceedings of the current court, and summary of lower court proceedings."
},
{
"type": "text",
"bbox": [0.136, 0.81, 0.49, 0.91],
"angle": 0,
"content": "- Ruling by Lower Court (RLC): Cases are not directly filed in the higher courts but are appealed from lower courts. Consequently, the documents contain judgments given by the lower courts (Trial Court, High Court) based on the present appeal (to the Supreme Court or high court). The lower court's verdict, analysis, and the ratio behind the"
},
{
"type": "text",
"bbox": [0.543, 0.075, 0.884, 0.102],
"angle": 0,
"content": "judgment by the lower court is annotated with this label."
},
{
"type": "text",
"bbox": [0.531, 0.104, 0.884, 0.159],
"angle": 0,
"content": "- Issues (ISSUE): Some judgments mention the key points on which the verdict needs to be delivered. Such Legal Questions Framed by the Court are ISSUES."
},
{
"type": "text",
"bbox": [0.532, 0.161, 0.885, 0.26],
"angle": 0,
"content": "- Argument by Petitioner (ARGPETITIONER): Arguments by petitioners' lawyers. Precedent cases argued by petitioner lawyers fall under this category, but when the court discusses them later, then they belong to either the relied / not relied upon category."
},
{
"type": "text",
"bbox": [0.532, 0.261, 0.884, 0.345],
"angle": 0,
"content": "- Argument by Respondent (ARG_RESPONDENT): Arguments by respondents' lawyers. Precedent cases argued by respondent lawyers fall under this, but when the court discusses them later, they belong to either the relied / not relied category."
},
{
"type": "text",
"bbox": [0.532, 0.346, 0.884, 0.486],
"angle": 0,
"content": "- Analysis (ANALYSIS): These are views of the court. This includes courts' discussion on the evidence, facts presented, prior cases, and statutes. Discussions on how the law is applicable or not applicable to the current case. Observations (non-binding) from the court. It is the parent tag for three tags: PRE_RELIED, PRE_NOT_RELIED, and STATUTE, i.e., every statement which belongs to these three tags should also be marked as ANALYSIS."
},
{
"type": "text",
"bbox": [0.532, 0.487, 0.884, 0.572],
"angle": 0,
"content": "- Statute (STA): This includes texts in which the court discusses established laws, that can come from a mixture of sources: Acts, Sections, Articles, Rules, Order, Notices, Notifications, and Quotations directly from the bare act. The statute will have both the tags Analysis + Statute."
},
{
"type": "text",
"bbox": [0.531, 0.573, 0.884, 0.643],
"angle": 0,
"content": "- Precedent Relied (PRE_RELIED): Texts in which the court discusses prior case documents, discussions and decisions which were relied upon by the court for final decisions. Precedent will have both the tags Analysis + Precedent."
},
{
"type": "text",
"bbox": [0.531, 0.644, 0.884, 0.728],
"angle": 0,
"content": "- Precedent Not Relied (PRE_NOT_RELIED): Texts in which the court discusses prior case documents, discussions and decisions which were not relied upon by the court for final decisions. It could be due to the fact that the situation, in that case, is not relevant to the current case."
},
{
"type": "text",
"bbox": [0.531, 0.729, 0.884, 0.827],
"angle": 0,
"content": "- Ratio of the decision (Ratio): This includes the main reason given for the application of any legal principle to the legal issue. It is the result of the analysis by the court. It typically appears right before the final decision. It is not the same as \"Ratio Decidendi\" taught in the legal academic curriculum."
},
{
"type": "text",
"bbox": [0.531, 0.828, 0.884, 0.87],
"angle": 0,
"content": "- Ruling by Present Court (RPC): Final decision + conclusion + order of the Court following from the natural/logical outcome of the rationale."
},
{
"type": "text",
"bbox": [0.531, 0.871, 0.885, 0.898],
"angle": 0,
"content": "- NONE: If a sentence does not belong to any of the above categories, it is labeled as NONE."
},
{
"type": "list",
"bbox": [0.531, 0.104, 0.885, 0.898],
"angle": 0,
"content": null
}
],
[
{
"type": "table",
"bbox": [0.118, 0.073, 0.487, 0.182],
"angle": 0,
"content": "<table><tr><td>Dataset</td><td>Docs</td><td>Sentences</td><td>Tokens</td><td>Avg Tokens</td></tr><tr><td>Train</td><td>247</td><td>28986</td><td>938K</td><td>3797</td></tr><tr><td>Validation</td><td>30</td><td>2879</td><td>88K</td><td>2947</td></tr><tr><td>Test (in-domain)</td><td>50</td><td>4158</td><td>134K</td><td>2681</td></tr><tr><td>Test (out-domain)</td><td>27</td><td>4292</td><td>127K</td><td>4722</td></tr><tr><td>Total</td><td>354</td><td>40315</td><td>1.3M</td><td>3638</td></tr></table>"
},
{
"type": "table_footnote",
"bbox": [0.115, 0.192, 0.49, 0.248],
"angle": 0,
"content": "Table 1: Corpus Statistics: The corpus is split into train, val and test. The table shows number of documents, sentences, tokens and average number of tokens per document."
},
{
"type": "title",
"bbox": [0.116, 0.262, 0.32, 0.277],
"angle": 0,
"content": "3.1. Corpus Documents"
},
{
"type": "text",
"bbox": [0.115, 0.28, 0.49, 0.535],
"angle": 0,
"content": "The corpus consists of legal judgment documents from the Supreme Court of India, High Courts in different Indian states, and some district-level courts. Raw judgment text files were scraped from Indian Court websites.\\(^{2}\\) Data has a mix of Supreme Court judgments \\((40\\%)\\), High Courts judgments \\((40\\%)\\) and district court judgments \\((20\\%)\\). To develop baseline models, we divided the dataset into train and validation. The test set was further divided into in-domain and out-of-domain. Train, validation and test (in-domain) datasets contain annotated judgments belonging to tax and criminal cases. Test (out-domain) contains annotated judgements from 3 domains: Motor Vehicles Act (9), Industrial and Labour law (8) and Land and Property law (10). The statistics of the corpus are shown in Table 1. Table 2 gives the number of sentences for each role in the entire corpus. Qualified law experts annotated test data with cross checks."
},
{
"type": "table",
"bbox": [0.182, 0.544, 0.425, 0.752],
"angle": 0,
"content": "<table><tr><td>Rhetorical Role</td><td>Sentences</td></tr><tr><td>ANALYSIS</td><td>14300</td></tr><tr><td>ARG PETITIONER</td><td>1771</td></tr><tr><td>ARG RESPONDENT</td><td>1068</td></tr><tr><td>FAC</td><td>8045</td></tr><tr><td>ISSUE</td><td>535</td></tr><tr><td>NONE</td><td>2037</td></tr><tr><td>PREAMBLE</td><td>6116</td></tr><tr><td>PRE NOT RELIED</td><td>217</td></tr><tr><td>PRE RELIED</td><td>1934</td></tr><tr><td>RATIO</td><td>1014</td></tr><tr><td>RLC</td><td>1081</td></tr><tr><td>RPC</td><td>1562</td></tr><tr><td>STA</td><td>625</td></tr><tr><td>Overall</td><td>40305</td></tr></table>"
},
{
"type": "table_footnote",
"bbox": [0.117, 0.762, 0.486, 0.776],
"angle": 0,
"content": "Table 2: Role-wise sentence count in the entire corpus"
},
{
"type": "title",
"bbox": [0.116, 0.795, 0.321, 0.809],
"angle": 0,
"content": "3.2. Annotation Process"
},
{
"type": "text",
"bbox": [0.115, 0.813, 0.49, 0.856],
"angle": 0,
"content": "The annotation process was designed in consultation with legal experts (law professors and legal practitioners). Given the nature of the task, the RR annotations"
},
{
"type": "text",
"bbox": [0.51, 0.075, 0.885, 0.345],
"angle": 0,
"content": "require a deep understanding of the law and the legal process. Consequently, we involved law students and legal practitioners in annotating the documents. The process involved annotating each sentence in a given document with one of the 12 RR + None labels described earlier. We experimented with different levels of granularity (phrase level, sentence level, paragraph level, etc.) for annotating the documents with RR. Pilot experiments indicated sentence level RR annotation to be appropriate as it maintains the balance (with regard to semantic coherence) between too short and too long texts. The legal documents were split using the spaCy library (spaCy, 2021). Rhetorical role annotation is not a trivial task; we faced two main challenges in the annotation activity: availability of a large group of legal experts and, secondly, motivating the legal experts to perform annotation consistently while maintaining quality. We performed the annotation activity via crowdsourcing as described next."
},
{
"type": "title",
"bbox": [0.511, 0.358, 0.764, 0.374],
"angle": 0,
"content": "3.3. Data Annotation Pipeline"
},
{
"type": "text",
"bbox": [0.51, 0.378, 0.884, 0.521],
"angle": 0,
"content": "Corpus documents were annotated via a crowdsourcing activity. We invited law students from various law schools across the country to volunteer for the data annotation exercise. We created processes to onboard student volunteers and introduced them to the entire activity and its goal. Filtering was carried out at multiple stages to retain the most motivated and consistent (from the perspective of quality of the annotations) students. The entire pipeline is shown in Figure 2. We describe each stage of the pipeline in the next sections."
},
{
"type": "image",
"bbox": [0.558, 0.534, 0.84, 0.557],
"angle": 0,
"content": null
},
{
"type": "image_caption",
"bbox": [0.577, 0.571, 0.819, 0.585],
"angle": 0,
"content": "Figure 2: Data Annotation Pipeline"
},
{
"type": "title",
"bbox": [0.511, 0.609, 0.696, 0.622],
"angle": 0,
"content": "3.3.1. Student Selection"
},
{
"type": "text",
"bbox": [0.51, 0.624, 0.885, 0.879],
"angle": 0,
"content": "We did a nationwide call for volunteers through a network of law students. The application required students to describe their motivation. A basic screening was done to eliminate applications that were partially filled. Finally, after filtering, we selected an initial group of 50 students. The selected students were then on-boarded and were motivated by explaining the big picture of the impact of their contribution. The data annotations were done voluntarily by law students from multiple Indian law universities. Interaction with the law students revealed that they were motivated to learn more about AI and contribute towards the development of the AI field, and hence they volunteered for the activity. In order to smoothly conduct the annotation activity via crowdsourcing, we organized the volunteers in a hierarchical structure based on their experience and performance during a pilot study. The organizational structure for this exercise is shown in Figure 3."
},
{
"type": "text",
"bbox": [0.51, 0.881, 0.884, 0.91],
"angle": 0,
"content": "Project Administrators: They designed data collection and communication processes, built tools for data"
},
{
"type": "page_footnote",
"bbox": [0.116, 0.868, 0.488, 0.91],
"angle": 0,
"content": "\\(^{2}\\)https://main.sci.gov.in/; https://ecourts.gov.in/ecourts_home/static/highcourts.php"
}
],
[
{
"type": "image",
"bbox": [0.189, 0.073, 0.422, 0.201],
"angle": 0,
"content": null
},
{
"type": "image_caption",
"bbox": [0.19, 0.215, 0.416, 0.23],
"angle": 0,
"content": "Figure 3: Organization Structure"
},
{
"type": "text",
"bbox": [0.115, 0.246, 0.489, 0.274],
"angle": 0,
"content": "collection, and supervised the overall activity. This group included law experts and authors of the paper."
},
{
"type": "text",
"bbox": [0.115, 0.275, 0.489, 0.36],
"angle": 0,
"content": "Project Coordinators: They mentored and resolved the doubts of the students. They were responsible for assuring the quality of the data. Coordinators identified and rectified conceptual errors among the students. Further, the coordinators assisted the administrators during the adjudication process."
},
{
"type": "text",
"bbox": [0.115, 0.361, 0.489, 0.587],
"angle": 0,
"content": "Student Volunteers: They annotated the data and also provided feedback on the entire process. Volunteers were in constant communication with the coordinators. At later stages of annotations, some of the best-performing students assisted in the adjudication process (§3.3.5). Best performing students were selected based on two criteria: timely submissions and ground truth agreement score. Students were assessed if they completed the task within a stipulated time at each annotation stage. Furthermore, each batch of annotation document consisted of sentences for which true (gold) RR labels were known apriori (also §3.3.4). Students were assessed for their performance on the ground truth (sentences with gold RR labels), and students who were correct on at least 90% of ground truth sentences were considered for the best performing category."
},
{
"type": "text",
"bbox": [0.115, 0.588, 0.489, 0.73],
"angle": 0,
"content": "Before beginning the entire activity, we conducted a small pilot to assess the feasibility of crowdsourcing with student volunteers. Volunteers who completed MOOC, calibration and annotation exercises with satisfactory performance were then invited to become project coordinators for the subsequent data collection phase. The chance to become coordinator further provided positive reinforcement for the efforts, thus keeping the students well motivated. In the end, we selected eight students as project coordinators."
|
| 813 |
+
},
|
| 814 |
+
{
|
| 815 |
+
"type": "title",
|
| 816 |
+
"bbox": [
|
| 817 |
+
0.116,
|
| 818 |
+
0.739,
|
| 819 |
+
0.231,
|
| 820 |
+
0.752
|
| 821 |
+
],
|
| 822 |
+
"angle": 0,
|
| 823 |
+
"content": "3.3.2. MOOC"
|
| 824 |
+
},
|
| 825 |
+
{
|
| 826 |
+
"type": "text",
|
| 827 |
+
"bbox": [
|
| 828 |
+
0.115,
|
| 829 |
+
0.753,
|
| 830 |
+
0.489,
|
| 831 |
+
0.868
|
| 832 |
+
],
|
| 833 |
+
"angle": 0,
|
| 834 |
+
"content": "Law students do not have an understanding of the workings of AI. We designed a MOOC (Massive Open Online Course)<sup>3</sup> for the annotators. The MOOC explained the AI technologies to the law students, described the process of building datasets for AI algorithms, and explained the concept of the rhetorical role. Students were expected to complete the MOOC in a stipulated amount of time and complete the associated"
|
| 835 |
+
},
|
| 836 |
+
{
|
| 837 |
+
"type": "image",
|
| 838 |
+
"bbox": [
|
| 839 |
+
0.589,
|
| 840 |
+
0.077,
|
| 841 |
+
0.808,
|
| 842 |
+
0.173
|
| 843 |
+
],
|
| 844 |
+
"angle": 0,
|
| 845 |
+
"content": null
|
| 846 |
+
},
|
| 847 |
+
{
|
| 848 |
+
"type": "image_caption",
|
| 849 |
+
"bbox": [
|
| 850 |
+
0.557,
|
| 851 |
+
0.188,
|
| 852 |
+
0.839,
|
| 853 |
+
0.203
|
| 854 |
+
],
|
| 855 |
+
"angle": 0,
|
| 856 |
+
"content": "Figure 4: Ground Truth Score Histogram"
|
| 857 |
+
},
|
| 858 |
+
{
|
| 859 |
+
"type": "text",
|
| 860 |
+
"bbox": [
|
| 861 |
+
0.51,
|
| 862 |
+
0.218,
|
| 863 |
+
0.884,
|
| 864 |
+
0.246
|
| 865 |
+
],
|
| 866 |
+
"angle": 0,
|
| 867 |
+
"content": "quiz, which checked for a basic understanding of the rhetorical role definitions."
|
| 868 |
+
},
|
| 869 |
+
{
|
| 870 |
+
"type": "title",
|
| 871 |
+
"bbox": [
|
| 872 |
+
0.511,
|
| 873 |
+
0.258,
|
| 874 |
+
0.655,
|
| 875 |
+
0.271
|
| 876 |
+
],
|
| 877 |
+
"angle": 0,
|
| 878 |
+
"content": "3.3.3. Calibration"
|
| 879 |
+
},
|
| 880 |
+
{
|
| 881 |
+
"type": "text",
|
| 882 |
+
"bbox": [
|
| 883 |
+
0.51,
|
| 884 |
+
0.273,
|
| 885 |
+
0.884,
|
| 886 |
+
0.416
|
| 887 |
+
],
|
| 888 |
+
"angle": 0,
|
| 889 |
+
"content": "Since in the initial stages, students can differ in understanding RRs. We calibrated the students to bring them to a common ground. Calibration focused on shaping a common understanding of definitions among students. Students were asked to annotate three judgments that experts had already annotated. The sentences that differed from expert (gold) annotations were highlighted, and students were asked to calibrate their annotations. Calibration was an iterative process, and it was carried out till students came at the level of expert annotations."
|
| 890 |
+
},
|
| 891 |
+
{
|
| 892 |
+
"type": "title",
|
| 893 |
+
"bbox": [
|
| 894 |
+
0.511,
|
| 895 |
+
0.428,
|
| 896 |
+
0.692,
|
| 897 |
+
0.44
|
| 898 |
+
],
|
| 899 |
+
"angle": 0,
|
| 900 |
+
"content": "3.3.4. Data Annotation"
|
| 901 |
+
},
|
| 902 |
+
{
|
| 903 |
+
"type": "text",
|
| 904 |
+
"bbox": [
|
| 905 |
+
0.51,
|
| 906 |
+
0.442,
|
| 907 |
+
0.884,
|
| 908 |
+
0.855
|
| 909 |
+
],
|
| 910 |
+
"angle": 0,
|
| 911 |
+
"content": "In the end, 35 out of 50 selected students qualified for the calibration stage, and this was the final pool that annotated the entire corpus. Each student annotated 24 documents, and three students annotated each document. We did not observe any student dropout after the calibration stage. On average, it took about 40 minutes to annotate a single document. The entire annotation activity took around six weeks. Students annotated train and validation documents (\\(= 277\\)), and experts annotated 77 test documents. As described earlier, during the annotation process, each student was also randomly assigned four documents (chosen randomly with replacement from the test set) for which gold (ground truth) annotations were known to coordinators and administrators but not to the students. The performance of students (referred to as Ground Truth Score) on these gold documents was assessed. Ground truth score is the percentage of sentences in gold documents that are correctly annotated. The average ground truth score for all students was \\(85\\%\\). Figure 4 shows histogram of ground truth scores for a judgment. It shows that the majority of documents are in the 90 to 100 percent range, indicative of consistent annotations with ground truth docs. Note that documents shown in Figure 4 (y-axis) are chosen randomly (with replacement) from the test set and hence there is overlap between documents across different batches. Furthermore, coordinators provided feedback to students with lower scores to improve their overall annotation quality."
|
| 912 |
+
},
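The Ground Truth Score described above reduces to per-sentence accuracy over the gold documents; a minimal sketch (the helper name is ours, not the paper's):

```python
def ground_truth_score(predicted: list[str], gold: list[str]) -> float:
    """Percentage of sentences in a gold document annotated correctly.

    Hypothetical helper mirroring the metric defined above: both lists
    hold one RR label per sentence of a gold (ground-truth) document.
    """
    assert len(predicted) == len(gold) and gold
    correct = sum(p == g for p, g in zip(predicted, gold))
    return 100.0 * correct / len(gold)

# Students scoring >= 90 on such gold sentences were considered
# for the best-performing category (see Section 3.3.1).
```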
|
| 913 |
+
{
|
| 914 |
+
"type": "title",
|
| 915 |
+
"bbox": [
|
| 916 |
+
0.511,
|
| 917 |
+
0.866,
|
| 918 |
+
0.667,
|
| 919 |
+
0.88
|
| 920 |
+
],
|
| 921 |
+
"angle": 0,
|
| 922 |
+
"content": "3.3.5. Adjudication"
|
| 923 |
+
},
|
| 924 |
+
{
|
| 925 |
+
"type": "text",
|
| 926 |
+
"bbox": [
|
| 927 |
+
0.51,
|
| 928 |
+
0.881,
|
| 929 |
+
0.884,
|
| 930 |
+
0.909
|
| 931 |
+
],
|
| 932 |
+
"angle": 0,
|
| 933 |
+
"content": "A majority voting scheme was used to decide the final RR label. However, in some instances, annotators as"
|
| 934 |
+
},
|
| 935 |
+
{
|
| 936 |
+
"type": "page_footnote",
|
| 937 |
+
"bbox": [
|
| 938 |
+
0.116,
|
| 939 |
+
0.881,
|
| 940 |
+
0.474,
|
| 941 |
+
0.909
|
| 942 |
+
],
|
| 943 |
+
"angle": 0,
|
| 944 |
+
"content": "\\(^3\\)https://www.youtube.com/playlist? list=PL1z52lLL6eWnDnc3Wgfcu6neczruU3fFw0"
|
| 945 |
+
}
|
| 946 |
+
],
|
| 947 |
+
[
|
| 948 |
+
{
|
| 949 |
+
"type": "text",
|
| 950 |
+
"bbox": [
|
| 951 |
+
0.116,
|
| 952 |
+
0.075,
|
| 953 |
+
0.49,
|
| 954 |
+
0.135
|
| 955 |
+
],
|
| 956 |
+
"angle": 0,
|
| 957 |
+
"content": "signed three different labels; such documents were further sent for adjudication. The adjudication was done by experts, project coordinators, and some of the best-performing students (§3.3.1)."
|
| 958 |
+
},
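The adjudication trigger described above can be expressed compactly; a minimal sketch with a hypothetical helper name:

```python
from collections import Counter

def final_rr_label(labels: tuple[str, str, str]) -> str | None:
    """Majority vote over the three annotators of a sentence.

    Hypothetical helper mirroring the scheme above: returns the majority
    label when at least two annotators agree, and None when all three
    disagree, signalling that the document needs expert adjudication.
    """
    label, count = Counter(labels).most_common(1)[0]
    return label if count >= 2 else None
```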
|
| 959 |
+
{
|
| 960 |
+
"type": "title",
|
| 961 |
+
"bbox": [
|
| 962 |
+
0.116,
|
| 963 |
+
0.146,
|
| 964 |
+
0.403,
|
| 965 |
+
0.159
|
| 966 |
+
],
|
| 967 |
+
"angle": 0,
|
| 968 |
+
"content": "3.3.6. Annotation Quality Assessment"
|
| 969 |
+
},
|
| 970 |
+
{
|
| 971 |
+
"type": "text",
|
| 972 |
+
"bbox": [
|
| 973 |
+
0.115,
|
| 974 |
+
0.161,
|
| 975 |
+
0.49,
|
| 976 |
+
0.403
|
| 977 |
+
],
|
| 978 |
+
"angle": 0,
|
| 979 |
+
"content": "Final annotation quality was evaluated using Fleiss Kappa (Fleiss et al., 2013). Overall, Fleiss Kappa score was 0.59, pointing towards moderate agreement. We saw high agreement amongst annotators on PREAMBLE, RPC, NONE, and ISSUE. There were medium agreements on FACTS, RLC, ANALYSIS, PRECEDENT, and ARGUMENTS. RATIO was the most ambiguous role. ANALYSIS was very often confused with FACTS and ARGUMENTS. In a judgment, a judge emphasizes some of the facts, which as per definition, are considered as analysis role; however, annotators often confuse them as facts role. Moreover, sometimes the judge may mention arguments and give their opinion on it; this, as per definition, is the analysis role, but annotators sometimes confuse it with the argument role. FACTS was sometimes confused with RLC (Ruling by Lower Court)."
|
| 980 |
+
},
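The Fleiss Kappa computation referenced above is available off the shelf, e.g. in statsmodels; a minimal sketch with toy ratings (the library choice is ours):

```python
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# One row per sentence, one column per annotator (3 annotators here);
# values are RR label ids. Toy data for illustration only.
ratings = np.array([[0, 0, 1],
                    [2, 2, 2],
                    [1, 0, 1]])
table, _ = aggregate_raters(ratings)       # sentence x category counts
print(f"Fleiss kappa: {fleiss_kappa(table, method='fleiss'):.2f}")
# The paper reports an overall score of 0.59 (moderate agreement).
```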
|
| 981 |
+
{
|
| 982 |
+
"type": "title",
|
| 983 |
+
"bbox": [
|
| 984 |
+
0.148,
|
| 985 |
+
0.419,
|
| 986 |
+
0.458,
|
| 987 |
+
0.435
|
| 988 |
+
],
|
| 989 |
+
"angle": 0,
|
| 990 |
+
"content": "4. RR Prediction Baseline Models"
|
| 991 |
+
},
|
| 992 |
+
{
|
| 993 |
+
"type": "text",
|
| 994 |
+
"bbox": [
|
| 995 |
+
0.115,
|
| 996 |
+
0.44,
|
| 997 |
+
0.49,
|
| 998 |
+
0.911
|
| 999 |
+
],
|
| 1000 |
+
"angle": 0,
|
| 1001 |
+
"content": "The end goal behind this work has been to encourage the development of systems that can segment a new legal document automatically in terms of rhetorical roles. Towards this goal, we experimented with some baseline models. Since transformer-based models (Wolf et al., 2020) have shown state-of-the-art (SOTA) performance on most of the NLP tasks, including the tasks in legal NLP domain (Malik et al., 2021b), we mainly experimented with them. In the RR prediction task, given a legal document, the task is to predict the RR label for each sentence in the document. We pose this as a multiclass sequence prediction problem. We initially experimented with variants of the model by Bhattacharya et al. (2019b). In particular, we use a CRF (Conditional Random Field) model for RR prediction. The features for this CRF model come from a transformer, i.e., the BERT-BASE (Devlin et al., 2018) model is used to get sentence embeddings corresponding to the CLS token. These sentence embeddings are then passed through the CRF layer to get final predictions. We call this model BERT_CRF. We also tried the architecture proposed by Cohan et al. (2019) which captures contextual dependencies using only BERT without the need for hierarchical encoding using a CRF. We call this model BERT_only. After experiments with vanilla transformer models, we finally created the baseline system using the SciBERT-HSLN architecture (Brack et al., 2021). Figure 5 shows the overall architecture of the proposed model. In the proposed model, each sentence is passed through BERT BASE model to get word embeddings, these embeddings are further processed by Bi-LSTM layer followed by attention-based pooling layer to get sentence representations \\(\\{s_1,s_2,\\dots s_n\\}\\)."
|
| 1002 |
+
},
|
| 1003 |
+
{
|
| 1004 |
+
"type": "image",
|
| 1005 |
+
"bbox": [
|
| 1006 |
+
0.596,
|
| 1007 |
+
0.075,
|
| 1008 |
+
0.813,
|
| 1009 |
+
0.219
|
| 1010 |
+
],
|
| 1011 |
+
"angle": 0,
|
| 1012 |
+
"content": null
|
| 1013 |
+
},
|
| 1014 |
+
{
|
| 1015 |
+
"type": "image_caption",
|
| 1016 |
+
"bbox": [
|
| 1017 |
+
0.51,
|
| 1018 |
+
0.234,
|
| 1019 |
+
0.885,
|
| 1020 |
+
0.264
|
| 1021 |
+
],
|
| 1022 |
+
"angle": 0,
|
| 1023 |
+
"content": "Figure 5: RR Prediction Baseline model inspired by Brack et al. (2021)"
|
| 1024 |
+
},
|
| 1025 |
+
{
|
| 1026 |
+
"type": "table",
|
| 1027 |
+
"bbox": [
|
| 1028 |
+
0.541,
|
| 1029 |
+
0.292,
|
| 1030 |
+
0.858,
|
| 1031 |
+
0.35
|
| 1032 |
+
],
|
| 1033 |
+
"angle": 0,
|
| 1034 |
+
"content": "<table><tr><td>Model</td><td>Precision</td><td>Recall</td><td>F1</td></tr><tr><td>BERT_CRF</td><td>0.24</td><td>0.24</td><td>0.23</td></tr><tr><td>BERT_only</td><td>0.67</td><td>0.68</td><td>0.67</td></tr><tr><td>SciBERT-HSLN</td><td>0.79</td><td>0.80</td><td>0.79</td></tr></table>"
|
| 1035 |
+
},
|
| 1036 |
+
{
|
| 1037 |
+
"type": "table_caption",
|
| 1038 |
+
"bbox": [
|
| 1039 |
+
0.51,
|
| 1040 |
+
0.359,
|
| 1041 |
+
0.884,
|
| 1042 |
+
0.386
|
| 1043 |
+
],
|
| 1044 |
+
"angle": 0,
|
| 1045 |
+
"content": "Table 3: Performance of models on test (in-domain) data"
|
| 1046 |
+
},
|
| 1047 |
+
{
|
| 1048 |
+
"type": "text",
|
| 1049 |
+
"bbox": [
|
| 1050 |
+
0.51,
|
| 1051 |
+
0.41,
|
| 1052 |
+
0.884,
|
| 1053 |
+
0.508
|
| 1054 |
+
],
|
| 1055 |
+
"angle": 0,
|
| 1056 |
+
"content": "Context Enrichment layer encodes the contextual information, by taking sequence of sentence representations, resulting in contextualized sentence representations: \\(\\{c_1,c_2,\\dots ,c_n\\}\\). This is followed by MLP layers and CRF that leverage the distributed representation features to predict the RR label for each sentence via softmax activation."
|
| 1057 |
+
},
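A minimal PyTorch sketch of this hierarchy (word embeddings, then a Bi-LSTM with attention pooling, then a context Bi-LSTM, then a per-sentence classifier) is given below; the BERT encoder and the CRF decoding layer are elided, and all layer sizes are illustrative assumptions rather than the paper's settings:

```python
import torch
import torch.nn as nn

class HSLNSketch(nn.Module):
    """Illustrative skeleton of the SciBERT-HSLN-style baseline.

    Takes pre-computed word embeddings (e.g. from BERT-BASE), shaped
    (num_sentences, num_words, emb_dim) for one document, and returns
    per-sentence RR logits. A CRF layer would replace the plain linear
    read-out in the full model; the sizes below are assumptions.
    """
    def __init__(self, emb_dim=768, hidden=256, num_roles=13):
        super().__init__()
        self.word_lstm = nn.LSTM(emb_dim, hidden,
                                 batch_first=True, bidirectional=True)
        self.att = nn.Linear(2 * hidden, 1)          # attention-based pooling
        self.ctx_lstm = nn.LSTM(2 * hidden, hidden,  # context enrichment
                                batch_first=True, bidirectional=True)
        self.mlp = nn.Sequential(nn.Linear(2 * hidden, hidden), nn.ReLU(),
                                 nn.Linear(hidden, num_roles))

    def forward(self, word_embs):                    # (sents, words, emb)
        h, _ = self.word_lstm(word_embs)             # (sents, words, 2h)
        w = torch.softmax(self.att(h), dim=1)        # per-word weights
        sent_reprs = (w * h).sum(dim=1)              # s_1 ... s_n
        ctx, _ = self.ctx_lstm(sent_reprs.unsqueeze(0))   # c_1 ... c_n
        return self.mlp(ctx.squeeze(0))              # per-sentence logits
```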
|
| 1058 |
+
{
|
| 1059 |
+
"type": "text",
|
| 1060 |
+
"bbox": [
|
| 1061 |
+
0.51,
|
| 1062 |
+
0.512,
|
| 1063 |
+
0.885,
|
| 1064 |
+
0.911
|
| 1065 |
+
],
|
| 1066 |
+
"angle": 0,
|
| 1067 |
+
"content": "Results: The performance of different models was tested on test(in-domain) data and results are given in Table 3. We use standard weighted F1 score metric for evaluation. As can be observed, the BERT_CRF model performs the worst, and the BERT_only model performs worse than the proposed model SciBERT-HSLN, which achieved a weighted F1 score of \\(78\\%\\). It is perhaps because SciBERT-HSLN, being a sequential model, can capture longer range dependencies between sentences in a document. The results of the model on the test set for each of the RR labels are shown in Table 4. Figure 6 shows the confusion matrix for the SciBERT-HSLN model. As can be observed from Table 4 and Figure 6, ARGUMENTS based roles are miss-classified very often and confused among the two types of ARGUMENTS and also sometimes confused with FACTS and ANALYSIS. PREAMBLE is almost perfectly classified. As can be seen, PRECEDENT NOT RELIED is completely miss-classified and confused with PRECEDENT RELIED and ANALYSIS. RATIO is often confused with ANALYSIS, and this trend is similar to what was observed for annotators as well. Similar to what was observed for annotators, RPC, PREAMBLE, NONE and ISSUE are classified with decent F1 scores. STATUES are also not well classified as many times a judge mentions some laws in their opinion and model tends to learn these spurious patterns as analysis and miss-classifies actual stat-"
|
| 1068 |
+
}
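The weighted F1 evaluation mentioned in the results can be computed with scikit-learn; a minimal sketch over toy labels (the library choice is ours):

```python
from sklearn.metrics import precision_recall_fscore_support

# One RR label per sentence over the whole test set; toy labels only.
y_true = ["FAC", "ANALYSIS", "RATIO", "RPC"]
y_pred = ["FAC", "ANALYSIS", "ANALYSIS", "RPC"]
p, r, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, average="weighted", zero_division=0)
print(f"P={p:.2f} R={r:.2f} weighted-F1={f1:.2f}")
```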
|
| 1069 |
+
],
|
| 1070 |
+
[
|
| 1071 |
+
{
|
| 1072 |
+
"type": "table",
|
| 1073 |
+
"bbox": [
|
| 1074 |
+
0.127,
|
| 1075 |
+
0.072,
|
| 1076 |
+
0.48,
|
| 1077 |
+
0.281
|
| 1078 |
+
],
|
| 1079 |
+
"angle": 0,
|
| 1080 |
+
"content": "<table><tr><td>Rhetorical Role</td><td>Precision</td><td>Recall</td><td>F1</td></tr><tr><td>ANALYSIS</td><td>0.77</td><td>0.89</td><td>0.83</td></tr><tr><td>ARGPETITIONER</td><td>0.60</td><td>0.64</td><td>0.62</td></tr><tr><td>ARGRESPONDENT</td><td>0.84</td><td>0.41</td><td>0.55</td></tr><tr><td>FAC</td><td>0.80</td><td>0.84</td><td>0.82</td></tr><tr><td>ISSUE</td><td>0.93</td><td>0.87</td><td>0.90</td></tr><tr><td>NONE</td><td>0.85</td><td>0.84</td><td>0.85</td></tr><tr><td>PREAMBLE</td><td>0.96</td><td>0.98</td><td>0.97</td></tr><tr><td>PRE_NOT_RELIED</td><td>0.00</td><td>0.00</td><td>0.00</td></tr><tr><td>PRE.RelIED</td><td>0.79</td><td>0.60</td><td>0.68</td></tr><tr><td>RATIO</td><td>0.53</td><td>0.56</td><td>0.54</td></tr><tr><td>RLC</td><td>0.75</td><td>0.45</td><td>0.57</td></tr><tr><td>RPC</td><td>0.78</td><td>0.87</td><td>0.82</td></tr><tr><td>STA</td><td>0.77</td><td>0.54</td><td>0.64</td></tr><tr><td>Overall</td><td>0.79</td><td>0.80</td><td>0.79</td></tr></table>"
|
| 1081 |
+
},
|
| 1082 |
+
{
|
| 1083 |
+
"type": "table_caption",
|
| 1084 |
+
"bbox": [
|
| 1085 |
+
0.116,
|
| 1086 |
+
0.29,
|
| 1087 |
+
0.49,
|
| 1088 |
+
0.318
|
| 1089 |
+
],
|
| 1090 |
+
"angle": 0,
|
| 1091 |
+
"content": "Table 4: F1 scores of RR baseline model for each of the rhetorical role on test data"
|
| 1092 |
+
},
|
| 1093 |
+
{
|
| 1094 |
+
"type": "image",
|
| 1095 |
+
"bbox": [
|
| 1096 |
+
0.126,
|
| 1097 |
+
0.34,
|
| 1098 |
+
0.481,
|
| 1099 |
+
0.55
|
| 1100 |
+
],
|
| 1101 |
+
"angle": 0,
|
| 1102 |
+
"content": null
|
| 1103 |
+
},
|
| 1104 |
+
{
|
| 1105 |
+
"type": "image_caption",
|
| 1106 |
+
"bbox": [
|
| 1107 |
+
0.116,
|
| 1108 |
+
0.567,
|
| 1109 |
+
0.489,
|
| 1110 |
+
0.595
|
| 1111 |
+
],
|
| 1112 |
+
"angle": 0,
|
| 1113 |
+
"content": "Figure 6: Confusion Matrix for SciBERT-HSLN model predictions on the test data"
|
| 1114 |
+
},
|
| 1115 |
+
{
|
| 1116 |
+
"type": "text",
|
| 1117 |
+
"bbox": [
|
| 1118 |
+
0.115,
|
| 1119 |
+
0.611,
|
| 1120 |
+
0.489,
|
| 1121 |
+
0.654
|
| 1122 |
+
],
|
| 1123 |
+
"angle": 0,
|
| 1124 |
+
"content": "ues as analysis. We have also created a leaderboard for the task of RR prediction where other researchers can experiment with various approaches."
|
| 1125 |
+
},
|
| 1126 |
+
{
|
| 1127 |
+
"type": "text",
|
| 1128 |
+
"bbox": [
|
| 1129 |
+
0.115,
|
| 1130 |
+
0.654,
|
| 1131 |
+
0.489,
|
| 1132 |
+
0.768
|
| 1133 |
+
],
|
| 1134 |
+
"angle": 0,
|
| 1135 |
+
"content": "Results on test (out-domain) data: In order to check if the baseline model trained on Criminal and Tax cases generalized to other domains, we tested the baseline model on 27 judgments from Motor Vehicles, Industrial and Labour and Land and Property cases. Weighted F1 reduced to 0.70. This degradation in performance is mainly due to different style of writing in the judgments."
|
| 1136 |
+
},
|
| 1137 |
+
{
|
| 1138 |
+
"type": "title",
|
| 1139 |
+
"bbox": [
|
| 1140 |
+
0.144,
|
| 1141 |
+
0.776,
|
| 1142 |
+
0.462,
|
| 1143 |
+
0.807
|
| 1144 |
+
],
|
| 1145 |
+
"angle": 0,
|
| 1146 |
+
"content": "5. Applications of Rhetorical Roles Prediction Task"
|
| 1147 |
+
},
|
| 1148 |
+
{
|
| 1149 |
+
"type": "text",
|
| 1150 |
+
"bbox": [
|
| 1151 |
+
0.116,
|
| 1152 |
+
0.812,
|
| 1153 |
+
0.49,
|
| 1154 |
+
0.869
|
| 1155 |
+
],
|
| 1156 |
+
"angle": 0,
|
| 1157 |
+
"content": "The purpose of creating a rhetorical role corpus is to enable automated understanding of legal documents by segmenting them into topically coherent units. This can be helpful in various applications such legal document"
|
| 1158 |
+
},
|
| 1159 |
+
{
|
| 1160 |
+
"type": "table",
|
| 1161 |
+
"bbox": [
|
| 1162 |
+
0.514,
|
| 1163 |
+
0.072,
|
| 1164 |
+
0.883,
|
| 1165 |
+
0.13
|
| 1166 |
+
],
|
| 1167 |
+
"angle": 0,
|
| 1168 |
+
"content": "<table><tr><td>Model</td><td>ROUGE-1</td><td>ROUGE-2</td><td>ROUGE-L</td></tr><tr><td>BERTSUM</td><td>0.60</td><td>0.42</td><td>0.59</td></tr><tr><td>BERTSUM RR</td><td>0.62</td><td>0.46</td><td>0.61</td></tr></table>"
|
| 1169 |
+
},
|
| 1170 |
+
{
|
| 1171 |
+
"type": "table_caption",
|
| 1172 |
+
"bbox": [
|
| 1173 |
+
0.551,
|
| 1174 |
+
0.139,
|
| 1175 |
+
0.844,
|
| 1176 |
+
0.152
|
| 1177 |
+
],
|
| 1178 |
+
"angle": 0,
|
| 1179 |
+
"content": "Table 5: Extractive Summarization Results"
|
| 1180 |
+
},
|
| 1181 |
+
{
|
| 1182 |
+
"type": "text",
|
| 1183 |
+
"bbox": [
|
| 1184 |
+
0.51,
|
| 1185 |
+
0.164,
|
| 1186 |
+
0.884,
|
| 1187 |
+
0.264
|
| 1188 |
+
],
|
| 1189 |
+
"angle": 0,
|
| 1190 |
+
"content": "summarization (Bhattacharya et al., 2019a), and legal judgment prediction (Malik et al., 2021b). In this paper, we explore both the use-cases. We experimented with how rhetorical roles prediction could help create abstractive, extractive summaries of Indian court judgments and predict the judgment outcome based on the judgment text."
|
| 1191 |
+
},
|
| 1192 |
+
{
|
| 1193 |
+
"type": "title",
|
| 1194 |
+
"bbox": [
|
| 1195 |
+
0.511,
|
| 1196 |
+
0.275,
|
| 1197 |
+
0.843,
|
| 1198 |
+
0.304
|
| 1199 |
+
],
|
| 1200 |
+
"angle": 0,
|
| 1201 |
+
"content": "5.1. Extractive Summarization of Court Judgments using Rhetorical Roles"
|
| 1202 |
+
},
|
| 1203 |
+
{
|
| 1204 |
+
"type": "text",
|
| 1205 |
+
"bbox": [
|
| 1206 |
+
0.51,
|
| 1207 |
+
0.309,
|
| 1208 |
+
0.884,
|
| 1209 |
+
0.508
|
| 1210 |
+
],
|
| 1211 |
+
"angle": 0,
|
| 1212 |
+
"content": "We explored the task of extractive summarization. For a given legal document, the task requires extracting the salient sentences that would summarize the document. We experimented with the LawBriEFs corpus consisting of 285 extractive summaries of Indian court judgments prepared by law students from a National Law University in India. The corpus was created by providing judgment documents to law students, followed by a questionnaire that required them to pick salient sentences that would answer the questions and, in the process, create the summaries. The questions pertained to facts, arguments, issues, ratio, and decisions. We wanted to experiment with how rhetorical roles could be helpful in extracting summaries."
|
| 1213 |
+
},
|
| 1214 |
+
{
|
| 1215 |
+
"type": "text",
|
| 1216 |
+
"bbox": [
|
| 1217 |
+
0.51,
|
| 1218 |
+
0.509,
|
| 1219 |
+
0.884,
|
| 1220 |
+
0.835
|
| 1221 |
+
],
|
| 1222 |
+
"angle": 0,
|
| 1223 |
+
"content": "We finetuned BERTSUM (Liu and Lapata, 2019) model on the Lawbriefs data to pick up the top \\(20\\%\\) of the sentences as summaries. Since the judgments are much longer than 512 token limits of BERTSUM, we created non-overlapping chunks of 512 tokens and created 3151 chunks in training data from 235 judgments and 827 chunks from 50 judgments as test data. We then trained another model, which also takes as input a rhetorical role for each sentence. We concatenated 768-dimensional sentence vector from CLS token to one-hot encoded sentence rhetorical roles. The idea is that if certain rhetorical roles are more important than others while creating summaries, then the model will learn those. We call this model BERTSUM RR. Discussion with Legal Experts revealed that ISSUE, RATIO, and RPC are important in summary and must always be selected without the need of summarizing. So we copied all the sentences with predicted rhetorical roles ISSUE, RATIO and RPC regardless of whether they are present in the top \\(20\\%\\) sentences. Model performance evaluated using ROUGE scores (Lin, 2004) are compared in Table 5. Results indicate that rhetorical roles are useful in selecting better summary sentences."
|
| 1224 |
+
},
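A minimal sketch of the two BERTSUM RR ingredients described above, i.e. appending a one-hot RR vector to the 768-dimensional CLS sentence vector and always copying ISSUE, RATIO, and RPC sentences into the summary; the helper names and the 13-role count are our assumptions:

```python
import torch

NUM_ROLES = 13
FORCED = {"ISSUE", "RATIO", "RPC"}     # always copied into the summary

def rr_feature(cls_vec: torch.Tensor, role_id: int) -> torch.Tensor:
    """Concatenate a 768-dim CLS vector with a one-hot RR vector."""
    one_hot = torch.zeros(NUM_ROLES)
    one_hot[role_id] = 1.0
    return torch.cat([cls_vec, one_hot])            # 768 + 13 dims

def select_summary(sentences, roles, scores, ratio=0.20):
    """Top-`ratio` sentences by model score, plus all forced-role ones."""
    k = max(1, int(len(sentences) * ratio))
    top = set(sorted(range(len(sentences)),
                     key=lambda i: scores[i], reverse=True)[:k])
    return [s for i, s in enumerate(sentences)
            if i in top or roles[i] in FORCED]
```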
|
| 1225 |
+
{
|
| 1226 |
+
"type": "title",
|
| 1227 |
+
"bbox": [
|
| 1228 |
+
0.511,
|
| 1229 |
+
0.847,
|
| 1230 |
+
0.853,
|
| 1231 |
+
0.877
|
| 1232 |
+
],
|
| 1233 |
+
"angle": 0,
|
| 1234 |
+
"content": "5.2. Abstractive Summarization of Court Judgments using Rhetorical Roles"
|
| 1235 |
+
},
|
| 1236 |
+
{
|
| 1237 |
+
"type": "text",
|
| 1238 |
+
"bbox": [
|
| 1239 |
+
0.51,
|
| 1240 |
+
0.881,
|
| 1241 |
+
0.884,
|
| 1242 |
+
0.91
|
| 1243 |
+
],
|
| 1244 |
+
"angle": 0,
|
| 1245 |
+
"content": "The task of abstractive summarization requires generating concise text summaries of legal documents. For"
|
| 1246 |
+
},
|
| 1247 |
+
{
|
| 1248 |
+
"type": "page_footnote",
|
| 1249 |
+
"bbox": [
|
| 1250 |
+
0.116,
|
| 1251 |
+
0.882,
|
| 1252 |
+
0.466,
|
| 1253 |
+
0.909
|
| 1254 |
+
],
|
| 1255 |
+
"angle": 0,
|
| 1256 |
+
"content": "4https://legal-nlp-ekstep.github.io/ Competitions/Rhetorical-Role/"
|
| 1257 |
+
}
|
| 1258 |
+
],
|
| 1259 |
+
[
|
| 1260 |
+
{
|
| 1261 |
+
"type": "text",
|
| 1262 |
+
"bbox": [
|
| 1263 |
+
0.119,
|
| 1264 |
+
0.076,
|
| 1265 |
+
0.487,
|
| 1266 |
+
0.444
|
| 1267 |
+
],
|
| 1268 |
+
"angle": 0,
|
| 1269 |
+
"content": "our experiments, we considered 50 randomly selected documents from the Law Briefs dataset (as described in 5.1) as test data. For this task we used pre-trained Legal Pegasus model.5 Legal Pegasus is fine-tuned version of Pegasus (Zhang et al., 2020) on US securities litigation dataset.6 We used the pre-trained Legal Pegasus model for generating abstractive summaries for the baseline. In particular, we split the document into non-overlapping chunks of 1024 tokens, and each chunk was passed through the model to generate summaries. The final summary was obtained by concatenating summaries of each chunk. It constituted the baseline model. We wanted to see how RR could help generate better summaries. Towards this goal, we segmented the document in terms of rhetorical roles, and each of the segments was passed separately through the Legal Pegasus model to generate summaries. The final summary was obtained by concatenating the summaries corresponding to each of the rhetorical roles in the order they appear in the document. This corresponds to the Legal Pegasus RR model. Both models are compared on the test set and ROUGE scores for both the model are shown in Table 6. As can be observed in Table 6 use of rhetorical roles helps to improve the performance on the task of abstractive summarizing."
|
| 1270 |
+
},
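A minimal sketch of the chunk-and-concatenate baseline described above, using the Hugging Face transformers API; the checkpoint id is taken from footnote 5 and should be treated as an assumption, as should the generation settings:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

CKPT = "nsi319/legal-pegasus"   # checkpoint id per footnote 5 (assumption)
tok = AutoTokenizer.from_pretrained(CKPT)
model = AutoModelForSeq2SeqLM.from_pretrained(CKPT)

def summarize_chunks(text: str, chunk_len: int = 1024) -> str:
    """Baseline scheme: non-overlapping 1024-token chunks, each summarized
    independently, with the chunk summaries concatenated in order."""
    ids = tok(text, return_tensors="pt").input_ids[0]
    parts = []
    for i in range(0, len(ids), chunk_len):
        chunk = ids[i:i + chunk_len].unsqueeze(0)
        out = model.generate(chunk, max_length=256, num_beams=4)
        parts.append(tok.decode(out[0], skip_special_tokens=True))
    return " ".join(parts)

# The Legal Pegasus RR variant instead calls summarize_chunks on each
# rhetorical-role segment and concatenates the per-role summaries in order.
```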
|
| 1271 |
+
{
|
| 1272 |
+
"type": "table",
|
| 1273 |
+
"bbox": [
|
| 1274 |
+
0.119,
|
| 1275 |
+
0.456,
|
| 1276 |
+
0.487,
|
| 1277 |
+
0.512
|
| 1278 |
+
],
|
| 1279 |
+
"angle": 0,
|
| 1280 |
+
"content": "<table><tr><td>Model</td><td>ROUGE-1</td><td>ROUGE-2</td><td>ROUGE-L</td></tr><tr><td>Legal Pegasus</td><td>0.55</td><td>0.34</td><td>0.47</td></tr><tr><td>Legal Pegasus RR</td><td>0.56</td><td>0.36</td><td>0.48</td></tr></table>"
|
| 1281 |
+
},
|
| 1282 |
+
{
|
| 1283 |
+
"type": "table_caption",
|
| 1284 |
+
"bbox": [
|
| 1285 |
+
0.155,
|
| 1286 |
+
0.522,
|
| 1287 |
+
0.451,
|
| 1288 |
+
0.535
|
| 1289 |
+
],
|
| 1290 |
+
"angle": 0,
|
| 1291 |
+
"content": "Table 6: Abstractive Summarization Results"
|
| 1292 |
+
},
|
| 1293 |
+
{
|
| 1294 |
+
"type": "title",
|
| 1295 |
+
"bbox": [
|
| 1296 |
+
0.119,
|
| 1297 |
+
0.553,
|
| 1298 |
+
0.43,
|
| 1299 |
+
0.581
|
| 1300 |
+
],
|
| 1301 |
+
"angle": 0,
|
| 1302 |
+
"content": "5.3. Court Judgment Prediction using Rhetorical Roles"
|
| 1303 |
+
},
|
| 1304 |
+
{
|
| 1305 |
+
"type": "text",
|
| 1306 |
+
"bbox": [
|
| 1307 |
+
0.119,
|
| 1308 |
+
0.585,
|
| 1309 |
+
0.487,
|
| 1310 |
+
0.84
|
| 1311 |
+
],
|
| 1312 |
+
"angle": 0,
|
| 1313 |
+
"content": "Malik et al. (2021b) created the corpus (ILDC: Indian Legal Documents Corpus) and the task (CJPE: Court Judgment Prediction and Explanation) for predicting and explaining the court judgments based on legal judgment texts. It is essential for the judgment prediction task to identify which sentences provide hints about the final decision and use that filtered data as input for prediction. We predicted rhetorical role for each sentence of the train, test data using the baseline rhetorical role model. In the ILDC dataset, we removed the sentences with RPC and RATIO tags making the task more challenging. We also removed the judgments for which no ANALYSIS was predicted. Note that the ILDC dataset is already anonymized and takes care of the biases and ethical concerns associated with the task of judgment prediction. Moreover, we use judgment prediction only as a use case and do not believe that an automated system could remove a human judge; rather,"
|
| 1314 |
+
},
|
| 1315 |
+
{
|
| 1316 |
+
"type": "text",
|
| 1317 |
+
"bbox": [
|
| 1318 |
+
0.515,
|
| 1319 |
+
0.076,
|
| 1320 |
+
0.882,
|
| 1321 |
+
0.116
|
| 1322 |
+
],
|
| 1323 |
+
"angle": 0,
|
| 1324 |
+
"content": "such a system could augment a human and expedite legal processes, especially in highly populated countries like India."
|
| 1325 |
+
},
|
| 1326 |
+
{
|
| 1327 |
+
"type": "text",
|
| 1328 |
+
"bbox": [
|
| 1329 |
+
0.515,
|
| 1330 |
+
0.118,
|
| 1331 |
+
0.882,
|
| 1332 |
+
0.359
|
| 1333 |
+
],
|
| 1334 |
+
"angle": 0,
|
| 1335 |
+
"content": "For the task of judgment prediction, training data had 5044 judgments, and test data had 977 judgments. The idea is to filter the training data using rhetorical roles to check the impact on model performance, keeping the model architecture the same. We used XLNet on the ILDC single model proposed in Malik et al. (2021b) to predict the judgment outcome on the last 512 tokens of the judgment text. We call this approach XLNet_last512. The model ran for 13 epochs, and then it was early stopped. In another experiment, we trained the same architecture to predict judgment outcome on the last 512 tokens of ANALYSIS role sentences. We call this model as XLNet_last512_Analysis. The model ran for 12 epochs, and then it was early stopped. The model performance comparison are given in Table 7. As observed from the results, filtering the input text for the ANALYSIS role improves the prediction."
|
| 1336 |
+
},
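A minimal sketch of the input preparation implied above, i.e. optionally keeping only ANALYSIS sentences and truncating to the last 512 tokens before classification; the tokenizer checkpoint and the helper name are assumptions, and the actual pipeline is defined in Malik et al. (2021b):

```python
from transformers import XLNetTokenizer

tok = XLNetTokenizer.from_pretrained("xlnet-base-cased")  # assumed checkpoint

def last_512_input(sentences, roles=None, keep_role="ANALYSIS"):
    """Keep only `keep_role` sentences (if roles are given), then return
    the last 512 token ids, mirroring XLNet_last512(_Analysis) above."""
    if roles is not None:
        sentences = [s for s, r in zip(sentences, roles) if r == keep_role]
    ids = tok(" ".join(sentences), add_special_tokens=False).input_ids
    return ids[-512:]   # fed to the XLNet sequence classifier downstream
```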
|
| 1337 |
+
{
|
| 1338 |
+
"type": "table",
|
| 1339 |
+
"bbox": [
|
| 1340 |
+
0.515,
|
| 1341 |
+
0.371,
|
| 1342 |
+
0.882,
|
| 1343 |
+
0.414
|
| 1344 |
+
],
|
| 1345 |
+
"angle": 0,
|
| 1346 |
+
"content": "<table><tr><td>Model</td><td>Precision</td><td>Recall</td><td>F1</td></tr><tr><td>XLNet_last512</td><td>0.76</td><td>0.49</td><td>0.59</td></tr><tr><td>XLNet_last512_Analysis</td><td>0.71</td><td>0.55</td><td>0.62</td></tr></table>"
|
| 1347 |
+
},
|
| 1348 |
+
{
|
| 1349 |
+
"type": "table_caption",
|
| 1350 |
+
"bbox": [
|
| 1351 |
+
0.573,
|
| 1352 |
+
0.424,
|
| 1353 |
+
0.824,
|
| 1354 |
+
0.437
|
| 1355 |
+
],
|
| 1356 |
+
"angle": 0,
|
| 1357 |
+
"content": "Table 7: Judgment prediction Results"
|
| 1358 |
+
},
|
| 1359 |
+
{
|
| 1360 |
+
"type": "title",
|
| 1361 |
+
"bbox": [
|
| 1362 |
+
0.534,
|
| 1363 |
+
0.459,
|
| 1364 |
+
0.862,
|
| 1365 |
+
0.472
|
| 1366 |
+
],
|
| 1367 |
+
"angle": 0,
|
| 1368 |
+
"content": "6. Conclusion and Future Directions"
|
| 1369 |
+
},
|
| 1370 |
+
{
|
| 1371 |
+
"type": "text",
|
| 1372 |
+
"bbox": [
|
| 1373 |
+
0.515,
|
| 1374 |
+
0.478,
|
| 1375 |
+
0.882,
|
| 1376 |
+
0.718
|
| 1377 |
+
],
|
| 1378 |
+
"angle": 0,
|
| 1379 |
+
"content": "In this paper, we proposed a new corpus of legal judgment documents annotated with 13 different Rhetorical Roles. The corpus was created via crowdsourcing involving law students. We also proposed baseline models for automatic rhetorical role prediction in a legal document. For some of the roles, the model shows similar trends in predicting the roles as human annotators. Nevertheless, there is scope for further improvement and we have created a leaderboard for the task, so that researchers from community can contribute towards improving the RR prediction system. We also showed two applications of rhetorical roles: summarization and judgment prediction. For both the usecases use of rhetorical role helps to improve results. We have released the corpus and the baseline models and encourage the community to use these to develop other legal applications as well."
|
| 1380 |
+
},
|
| 1381 |
+
{
|
| 1382 |
+
"type": "title",
|
| 1383 |
+
"bbox": [
|
| 1384 |
+
0.616,
|
| 1385 |
+
0.731,
|
| 1386 |
+
0.782,
|
| 1387 |
+
0.747
|
| 1388 |
+
],
|
| 1389 |
+
"angle": 0,
|
| 1390 |
+
"content": "Acknowledgements"
|
| 1391 |
+
},
|
| 1392 |
+
{
|
| 1393 |
+
"type": "text",
|
| 1394 |
+
"bbox": [
|
| 1395 |
+
0.515,
|
| 1396 |
+
0.75,
|
| 1397 |
+
0.882,
|
| 1398 |
+
0.848
|
| 1399 |
+
],
|
| 1400 |
+
"angle": 0,
|
| 1401 |
+
"content": "We thank EkStep Foundation for funding this work. We thank all the law experts, student volunteers, and coordinators for contributing to data annotation. We thank LawBriEFs for sharing the summaries. The author Ashutosh Modi would like to acknowledge the support of Google Research India via the Faculty Research Award Grant 2021."
|
| 1402 |
+
},
|
| 1403 |
+
{
|
| 1404 |
+
"type": "title",
|
| 1405 |
+
"bbox": [
|
| 1406 |
+
0.565,
|
| 1407 |
+
0.861,
|
| 1408 |
+
0.832,
|
| 1409 |
+
0.877
|
| 1410 |
+
],
|
| 1411 |
+
"angle": 0,
|
| 1412 |
+
"content": "7. Bibliographical References"
|
| 1413 |
+
},
|
| 1414 |
+
{
|
| 1415 |
+
"type": "text",
|
| 1416 |
+
"bbox": [
|
| 1417 |
+
0.515,
|
| 1418 |
+
0.882,
|
| 1419 |
+
0.882,
|
| 1420 |
+
0.91
|
| 1421 |
+
],
|
| 1422 |
+
"angle": 0,
|
| 1423 |
+
"content": "Bhatia, V. K. (2014). Analysing genre: Language use in professional settings. Routledge."
|
| 1424 |
+
},
|
| 1425 |
+
{
|
| 1426 |
+
"type": "page_footnote",
|
| 1427 |
+
"bbox": [
|
| 1428 |
+
0.119,
|
| 1429 |
+
0.856,
|
| 1430 |
+
0.424,
|
| 1431 |
+
0.908
|
| 1432 |
+
],
|
| 1433 |
+
"angle": 0,
|
| 1434 |
+
"content": "\\(^{5}\\)https://huggingface.co/ansi319/ legal-pegasus \\({ }^{6}\\)https://www.sec.gov/litigation/ litreleases.htm"
|
| 1435 |
+
}
|
| 1436 |
+
],
|
| 1437 |
+
[
|
| 1438 |
+
{
|
| 1439 |
+
"type": "ref_text",
|
| 1440 |
+
"bbox": [
|
| 1441 |
+
0.118,
|
| 1442 |
+
0.075,
|
| 1443 |
+
0.489,
|
| 1444 |
+
0.147
|
| 1445 |
+
],
|
| 1446 |
+
"angle": 0,
|
| 1447 |
+
"content": "Bhattacharya, P., Hiware, K., Rajgaria, S., Pochhi, N., Ghosh, K., and Ghosh, S. (2019a). A comparative study of summarization algorithms applied to legal case judgments. In European Conference on Information Retrieval, pages 413-428. Springer."
|
| 1448 |
+
},
|
| 1449 |
+
{
|
| 1450 |
+
"type": "ref_text",
|
| 1451 |
+
"bbox": [
|
| 1452 |
+
0.118,
|
| 1453 |
+
0.148,
|
| 1454 |
+
0.489,
|
| 1455 |
+
0.19
|
| 1456 |
+
],
|
| 1457 |
+
"angle": 0,
|
| 1458 |
+
"content": "Bhattacharya, P., Paul, S., Ghosh, K., Ghosh, S., and Wyner, A. (2019b). Identification of rhetorical roles of sentences in indian legal judgments."
|
| 1459 |
+
},
|
| 1460 |
+
{
|
| 1461 |
+
"type": "ref_text",
|
| 1462 |
+
"bbox": [
|
| 1463 |
+
0.118,
|
| 1464 |
+
0.191,
|
| 1465 |
+
0.489,
|
| 1466 |
+
0.247
|
| 1467 |
+
],
|
| 1468 |
+
"angle": 0,
|
| 1469 |
+
"content": "Brack, A., Hoppe, A., Buschermohle, P., and Ewerth, R. (2021). Sequential sentence classification in research papers using cross-domain multi-task learning."
|
| 1470 |
+
},
|
| 1471 |
+
{
|
| 1472 |
+
"type": "ref_text",
|
| 1473 |
+
"bbox": [
|
| 1474 |
+
0.118,
|
| 1475 |
+
0.249,
|
| 1476 |
+
0.489,
|
| 1477 |
+
0.333
|
| 1478 |
+
],
|
| 1479 |
+
"angle": 0,
|
| 1480 |
+
"content": "Chalkidis, I., Androutsopoulos, I., and Aletras, N. (2019). Neural legal judgment prediction in English. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4317-4323, Florence, Italy, July. Association for Computational Linguistics."
|
| 1481 |
+
},
|
| 1482 |
+
{
|
| 1483 |
+
"type": "ref_text",
|
| 1484 |
+
"bbox": [
|
| 1485 |
+
0.118,
|
| 1486 |
+
0.335,
|
| 1487 |
+
0.489,
|
| 1488 |
+
0.447
|
| 1489 |
+
],
|
| 1490 |
+
"angle": 0,
|
| 1491 |
+
"content": "Chalkidis, I., Fergadiotis, M., and Androutsopoulos, I. (2021). MultiEURLEX - a multi-lingual and multi-label legal document classification dataset for zero-shot cross-lingual transfer. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 6974–6996, Online and Punta Cana, Dominican Republic, November. Association for Computational Linguistics."
|
| 1492 |
+
},
|
| 1493 |
+
{
|
| 1494 |
+
"type": "ref_text",
|
| 1495 |
+
"bbox": [
|
| 1496 |
+
0.118,
|
| 1497 |
+
0.449,
|
| 1498 |
+
0.489,
|
| 1499 |
+
0.547
|
| 1500 |
+
],
|
| 1501 |
+
"angle": 0,
|
| 1502 |
+
"content": "Cohan, A., Beltagy, I., King, D., Dalvi, B., and Weld, D. (2019). Pretrained language models for sequential sentence classification. Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)."
|
| 1503 |
+
},
|
| 1504 |
+
{
|
| 1505 |
+
"type": "ref_text",
|
| 1506 |
+
"bbox": [
|
| 1507 |
+
0.118,
|
| 1508 |
+
0.549,
|
| 1509 |
+
0.489,
|
| 1510 |
+
0.591
|
| 1511 |
+
],
|
| 1512 |
+
"angle": 0,
|
| 1513 |
+
"content": "Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., and Fei-Fei, L. (2009). ImageNet: A Large-Scale Hierarchical Image Database. In CVPR09."
|
| 1514 |
+
},
|
| 1515 |
+
{
|
| 1516 |
+
"type": "ref_text",
|
| 1517 |
+
"bbox": [
|
| 1518 |
+
0.118,
|
| 1519 |
+
0.593,
|
| 1520 |
+
0.489,
|
| 1521 |
+
0.648
|
| 1522 |
+
],
|
| 1523 |
+
"angle": 0,
|
| 1524 |
+
"content": "Devlin, J., Chang, M., Lee, K., and Toutanova, K. (2018). BERT: pre-training of deep bidirectional transformers for language understanding. CoRR, abs/1810.04805."
|
| 1525 |
+
},
|
| 1526 |
+
{
|
| 1527 |
+
"type": "ref_text",
|
| 1528 |
+
"bbox": [
|
| 1529 |
+
0.118,
|
| 1530 |
+
0.65,
|
| 1531 |
+
0.489,
|
| 1532 |
+
0.678
|
| 1533 |
+
],
|
| 1534 |
+
"angle": 0,
|
| 1535 |
+
"content": "Farzindar, A. and Lapalme, G. (2004). Letsum, an automatic legal text summarizing system."
|
| 1536 |
+
},
|
| 1537 |
+
{
|
| 1538 |
+
"type": "ref_text",
|
| 1539 |
+
"bbox": [
|
| 1540 |
+
0.118,
|
| 1541 |
+
0.679,
|
| 1542 |
+
0.489,
|
| 1543 |
+
0.72
|
| 1544 |
+
],
|
| 1545 |
+
"angle": 0,
|
| 1546 |
+
"content": "Fleiss, J. L., Levin, B., and Paik, M. C. (2013). Statistical methods for rates and proportions. John wiley & sons."
|
| 1547 |
+
},
|
| 1548 |
+
{
|
| 1549 |
+
"type": "ref_text",
|
| 1550 |
+
"bbox": [
|
| 1551 |
+
0.118,
|
| 1552 |
+
0.723,
|
| 1553 |
+
0.489,
|
| 1554 |
+
0.793
|
| 1555 |
+
],
|
| 1556 |
+
"angle": 0,
|
| 1557 |
+
"content": "Grabmair, M., Ashley, K. D., Hwa, R., and Sweeney, P. M. (2011). Toward extracting information from public health statutes using text classification machine learning. In Legal Knowledge and Information Systems, pages 73-82. IOS Press."
|
| 1558 |
+
},
|
| 1559 |
+
{
|
| 1560 |
+
"type": "ref_text",
|
| 1561 |
+
"bbox": [
|
| 1562 |
+
0.118,
|
| 1563 |
+
0.795,
|
| 1564 |
+
0.489,
|
| 1565 |
+
0.836
|
| 1566 |
+
],
|
| 1567 |
+
"angle": 0,
|
| 1568 |
+
"content": "Hachey, B. and Grover, C. (2006). Extractive summarisation of legal texts. Artificial Intelligence and Law, 14(4):305-345."
|
| 1569 |
+
},
|
| 1570 |
+
{
|
| 1571 |
+
"type": "ref_text",
|
| 1572 |
+
"bbox": [
|
| 1573 |
+
0.118,
|
| 1574 |
+
0.838,
|
| 1575 |
+
0.489,
|
| 1576 |
+
0.893
|
| 1577 |
+
],
|
| 1578 |
+
"angle": 0,
|
| 1579 |
+
"content": "Jackson, P., Al-Kofahi, K., Tyrrell, A., and Vachher, A. (2003). Information extraction from case law and retrieval of prior cases. Artificial Intelligence, 150(1-2):239-290."
|
| 1580 |
+
},
|
| 1581 |
+
{
|
| 1582 |
+
"type": "ref_text",
|
| 1583 |
+
"bbox": [
|
| 1584 |
+
0.118,
|
| 1585 |
+
0.895,
|
| 1586 |
+
0.489,
|
| 1587 |
+
0.91
|
| 1588 |
+
],
|
| 1589 |
+
"angle": 0,
|
| 1590 |
+
"content": "Kapoor, A., Dhawan, M., Goel, A., Arjun, T., Agrawal,"
|
| 1591 |
+
},
|
| 1592 |
+
{
|
| 1593 |
+
"type": "list",
|
| 1594 |
+
"bbox": [
|
| 1595 |
+
0.118,
|
| 1596 |
+
0.075,
|
| 1597 |
+
0.489,
|
| 1598 |
+
0.91
|
| 1599 |
+
],
|
| 1600 |
+
"angle": 0,
|
| 1601 |
+
"content": null
|
| 1602 |
+
},
|
| 1603 |
+
{
|
| 1604 |
+
"type": "ref_text",
|
| 1605 |
+
"bbox": [
|
| 1606 |
+
0.531,
|
| 1607 |
+
0.075,
|
| 1608 |
+
0.884,
|
| 1609 |
+
0.147
|
| 1610 |
+
],
|
| 1611 |
+
"angle": 0,
|
| 1612 |
+
"content": "V., Agrawal, A., Bhattacharya, A., Kumaraguru, P., and Modi, A. (2022). HLDC: Hindi Legal Documents Corpus. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2022. Association for Computational Linguistics."
|
| 1613 |
+
},
|
| 1614 |
+
{
|
| 1615 |
+
"type": "ref_text",
|
| 1616 |
+
"bbox": [
|
| 1617 |
+
0.514,
|
| 1618 |
+
0.148,
|
| 1619 |
+
0.883,
|
| 1620 |
+
0.218
|
| 1621 |
+
],
|
| 1622 |
+
"angle": 0,
|
| 1623 |
+
"content": "Lagos, N., Segond, F., Castellani, S., and O?Neill, J. (2010). Event extraction for legal case building and reasoning. In International Conference on Intelligent Information Processing, pages 92-101. Springer."
|
| 1624 |
+
},
|
| 1625 |
+
{
|
| 1626 |
+
"type": "ref_text",
|
| 1627 |
+
"bbox": [
|
| 1628 |
+
0.513,
|
| 1629 |
+
0.22,
|
| 1630 |
+
0.883,
|
| 1631 |
+
0.276
|
| 1632 |
+
],
|
| 1633 |
+
"angle": 0,
|
| 1634 |
+
"content": "Lin, C.-Y. (2004). ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out, pages 74-81, Barcelona, Spain, July. Association for Computational Linguistics."
|
| 1635 |
+
},
|
| 1636 |
+
{
|
| 1637 |
+
"type": "ref_text",
|
| 1638 |
+
"bbox": [
|
| 1639 |
+
0.514,
|
| 1640 |
+
0.278,
|
| 1641 |
+
0.883,
|
| 1642 |
+
0.305
|
| 1643 |
+
],
|
| 1644 |
+
"angle": 0,
|
| 1645 |
+
"content": "Liu, Y. and Lapata, M. (2019). Text summarization with pretrained encoders."
|
| 1646 |
+
},
|
| 1647 |
+
{
|
| 1648 |
+
"type": "ref_text",
|
| 1649 |
+
"bbox": [
|
| 1650 |
+
0.514,
|
| 1651 |
+
0.307,
|
| 1652 |
+
0.883,
|
| 1653 |
+
0.362
|
| 1654 |
+
],
|
| 1655 |
+
"angle": 0,
|
| 1656 |
+
"content": "Malik, V., Sanjay, R., Guha, S. K., Nigam, S. K., Hazarika, A., Bhattacharya, A., and Modi, A. (2021a). Semantic Segmentation of Legal Documents via Rhetorical Roles. CoRR, abs/2112.01836."
|
| 1657 |
+
},
|
| 1658 |
+
{
|
| 1659 |
+
"type": "ref_text",
|
| 1660 |
+
"bbox": [
|
| 1661 |
+
0.514,
|
| 1662 |
+
0.365,
|
| 1663 |
+
0.883,
|
| 1664 |
+
0.506
|
| 1665 |
+
],
|
| 1666 |
+
"angle": 0,
|
| 1667 |
+
"content": "Malik, V., Sanjay, R., Nigam, S. K., Ghosh, K., Guha, S. K., Bhattacharya, A., and Modi, A. (2021b). ILDC for CJPE: Indian legal documents corpus for court judgment prediction and explanation. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 4046-4062, Online, August. Association for Computational Linguistics."
|
| 1668 |
+
},
|
| 1669 |
+
{
|
| 1670 |
+
"type": "ref_text",
|
| 1671 |
+
"bbox": [
|
| 1672 |
+
0.514,
|
| 1673 |
+
0.508,
|
| 1674 |
+
0.883,
|
| 1675 |
+
0.578
|
| 1676 |
+
],
|
| 1677 |
+
"angle": 0,
|
| 1678 |
+
"content": "Maxwell, K. T., Oberlander, J., and Lavrenko, V. (2009). Evaluation of semantic events for legal case retrieval. In Proceedings of the WSDM'09 Workshop on Exploiting Semantic Annotations in Information Retrieval, pages 39-41."
|
| 1679 |
+
},
|
| 1680 |
+
{
|
| 1681 |
+
"type": "ref_text",
|
| 1682 |
+
"bbox": [
|
| 1683 |
+
0.514,
|
| 1684 |
+
0.58,
|
| 1685 |
+
0.883,
|
| 1686 |
+
0.649
|
| 1687 |
+
],
|
| 1688 |
+
"angle": 0,
|
| 1689 |
+
"content": "Moens, M.-F., Uytendaele, C., and Dumortier, J. (1999). Abstracting of legal cases: the potential of clustering based on the selection of representative objects. Journal of the American Society for Information Science, 50(2):151-161."
|
| 1690 |
+
},
|
| 1691 |
+
{
|
| 1692 |
+
"type": "ref_text",
|
| 1693 |
+
"bbox": [
|
| 1694 |
+
0.514,
|
| 1695 |
+
0.651,
|
| 1696 |
+
0.883,
|
| 1697 |
+
0.72
|
| 1698 |
+
],
|
| 1699 |
+
"angle": 0,
|
| 1700 |
+
"content": "Moens, M.-F., Boiy, E., Palau, R. M., and Reed, C. (2007). Automatic detection of arguments in legal texts. In Proceedings of the 11th international conference on Artificial intelligence and law, pages 225-230."
|
| 1701 |
+
},
|
| 1702 |
+
{
|
| 1703 |
+
"type": "ref_text",
|
| 1704 |
+
"bbox": [
|
| 1705 |
+
0.514,
|
| 1706 |
+
0.723,
|
| 1707 |
+
0.883,
|
| 1708 |
+
0.766
|
| 1709 |
+
],
|
| 1710 |
+
"angle": 0,
|
| 1711 |
+
"content": "National Judicial Data Grid. (2021). National judicial data grid statistics. https://www.njdg.ecourts.gov.in/njdgnew/index.php."
|
| 1712 |
+
},
|
| 1713 |
+
{
|
| 1714 |
+
"type": "ref_text",
|
| 1715 |
+
"bbox": [
|
| 1716 |
+
0.514,
|
| 1717 |
+
0.767,
|
| 1718 |
+
0.883,
|
| 1719 |
+
0.823
|
| 1720 |
+
],
|
| 1721 |
+
"angle": 0,
|
| 1722 |
+
"content": "Russakovsky, O., Deng, J., Huang, Z., Berg, A. C., and Fei-Fei, L. (2013). Detecting avocados to zucchini: what have we done, and where are we going? In International Conference on Computer Vision (ICCV)."
|
| 1723 |
+
},
|
| 1724 |
+
{
|
| 1725 |
+
"type": "ref_text",
|
| 1726 |
+
"bbox": [
|
| 1727 |
+
0.514,
|
| 1728 |
+
0.825,
|
| 1729 |
+
0.883,
|
| 1730 |
+
0.91
|
| 1731 |
+
],
|
| 1732 |
+
"angle": 0,
|
| 1733 |
+
"content": "Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., Berg, A. C., and Fei-Fei, L. (2015). ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision (IJCV), 115(3):211-252."
|
| 1734 |
+
},
|
| 1735 |
+
{
|
| 1736 |
+
"type": "list",
|
| 1737 |
+
"bbox": [
|
| 1738 |
+
0.513,
|
| 1739 |
+
0.075,
|
| 1740 |
+
0.884,
|
| 1741 |
+
0.91
|
| 1742 |
+
],
|
| 1743 |
+
"angle": 0,
|
| 1744 |
+
"content": null
|
| 1745 |
+
}
|
| 1746 |
+
],
|
| 1747 |
+
[
|
| 1748 |
+
{
|
| 1749 |
+
"type": "ref_text",
|
| 1750 |
+
"bbox": [
|
| 1751 |
+
0.118,
|
| 1752 |
+
0.075,
|
| 1753 |
+
0.489,
|
| 1754 |
+
0.132
|
| 1755 |
+
],
|
| 1756 |
+
"angle": 0,
|
| 1757 |
+
"content": "Saravanan, M., Ravindran, B., and Raman, S. (2007). Using legal ontology for query enhancement in generating a document summary. Frontiers In Artificial Intelligence and Applications, 165:171."
|
| 1758 |
+
},
|
| 1759 |
+
{
|
| 1760 |
+
"type": "ref_text",
|
| 1761 |
+
"bbox": [
|
| 1762 |
+
0.118,
|
| 1763 |
+
0.134,
|
| 1764 |
+
0.489,
|
| 1765 |
+
0.218
|
| 1766 |
+
],
|
| 1767 |
+
"angle": 0,
|
| 1768 |
+
"content": "Saravanan, M., Ravindran, B., and Raman, S. (2008). Automatic identification of rhetorical roles using conditional random fields for legal document summarization. In Proceedings of the Third International Joint Conference on Natural Language Processing: Volume-I."
|
| 1769 |
+
},
|
| 1770 |
+
{
|
| 1771 |
+
"type": "ref_text",
|
| 1772 |
+
"bbox": [
|
| 1773 |
+
0.118,
|
| 1774 |
+
0.22,
|
| 1775 |
+
0.487,
|
| 1776 |
+
0.247
|
| 1777 |
+
],
|
| 1778 |
+
"angle": 0,
|
| 1779 |
+
"content": "spaCy. (2021). spaCy Toolkit. https://spacy.io/."
|
| 1780 |
+
},
|
| 1781 |
+
{
|
| 1782 |
+
"type": "ref_text",
|
| 1783 |
+
"bbox": [
|
| 1784 |
+
0.118,
|
| 1785 |
+
0.249,
|
| 1786 |
+
0.489,
|
| 1787 |
+
0.319
|
| 1788 |
+
],
|
| 1789 |
+
"angle": 0,
|
| 1790 |
+
"content": "Strickson, B. and De La Iglesia, B. (2020a). Legal Judgement Prediction for UK Courts. In Proceedings of the 2020 The 3rd International Conference on Information Science and System, pages 204-209, Cambridge United Kingdom, March. ACM."
|
| 1791 |
+
},
|
| 1792 |
+
{
|
| 1793 |
+
"type": "ref_text",
|
| 1794 |
+
"bbox": [
|
| 1795 |
+
0.118,
|
| 1796 |
+
0.321,
|
| 1797 |
+
0.489,
|
| 1798 |
+
0.391
|
| 1799 |
+
],
|
| 1800 |
+
"angle": 0,
|
| 1801 |
+
"content": "Strickson, B. and De La Iglesia, B. (2020b). Legal Judgement Prediction for UK Courts. In Proceedings of the 2020 The 3rd International Conference on Information Science and System, pages 204-209, Cambridge United Kingdom, March. ACM."
|
| 1802 |
+
},
|
| 1803 |
+
{
|
| 1804 |
+
"type": "ref_text",
|
| 1805 |
+
"bbox": [
|
| 1806 |
+
0.118,
|
| 1807 |
+
0.393,
|
| 1808 |
+
0.489,
|
| 1809 |
+
0.49
|
| 1810 |
+
],
|
| 1811 |
+
"angle": 0,
|
| 1812 |
+
"content": "Sulea, O.-M., Zampieri, M., Vela, M., and van Genabith, J. (2017). Predicting the law area and decisions of French Supreme Court cases. In Proceedings of the International Conference on Advances in Natural Language Processing, RANLP 2017, pages 716-722, Varna, Bulgaria, September. INCOMA Ltd."
|
| 1813 |
+
},
|
| 1814 |
+
{
|
| 1815 |
+
"type": "ref_text",
|
| 1816 |
+
"bbox": [
|
| 1817 |
+
0.118,
|
| 1818 |
+
0.493,
|
| 1819 |
+
0.489,
|
| 1820 |
+
0.535
|
| 1821 |
+
],
|
| 1822 |
+
"angle": 0,
|
| 1823 |
+
"content": "Tay, Y., Dehghani, M., Bahri, D., and Metzler, D. (2020). Efficient transformers: A survey. arXiv preprint arXiv:2009.06732."
|
| 1824 |
+
},
|
| 1825 |
+
{
|
| 1826 |
+
"type": "ref_text",
|
| 1827 |
+
"bbox": [
|
| 1828 |
+
0.118,
|
| 1829 |
+
0.537,
|
| 1830 |
+
0.489,
|
| 1831 |
+
0.621
|
| 1832 |
+
],
|
| 1833 |
+
"angle": 0,
|
| 1834 |
+
"content": "Tran, V., Nguyen, M. L., and Satoh, K. (2019). Building legal case retrieval systems with lexical matching and summarization using a pre-trained phrase scoring model. In Proceedings of the Seventeenth International Conference on Artificial Intelligence and Law, pages 275-282."
|
| 1835 |
+
},
|
| 1836 |
+
{
|
| 1837 |
+
"type": "ref_text",
|
| 1838 |
+
"bbox": [
|
| 1839 |
+
0.118,
|
| 1840 |
+
0.623,
|
| 1841 |
+
0.489,
|
| 1842 |
+
0.679
|
| 1843 |
+
],
|
| 1844 |
+
"angle": 0,
|
| 1845 |
+
"content": "Wang, A., Singh, A., Michael, J., Hill, F., Levy, O., and Bowman, S. R. (2018). Glue: A multi-task benchmark and analysis platform for natural language understanding. arXiv preprint arXiv:1804.07461."
|
| 1846 |
+
},
|
| 1847 |
+
{
|
| 1848 |
+
"type": "ref_text",
|
| 1849 |
+
"bbox": [
|
| 1850 |
+
0.118,
|
| 1851 |
+
0.681,
|
| 1852 |
+
0.489,
|
| 1853 |
+
0.836
|
| 1854 |
+
],
|
| 1855 |
+
"angle": 0,
|
| 1856 |
+
"content": "Wolf, T., Debut, L., Sanh, V., Chaumont, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtopicz, M., Davison, J., Shleifer, S., von Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., and Rush, A. (2020). Transformers: State-of-the-Art Natural Language Processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online, October. Association for Computational Linguistics."
|
| 1857 |
+
},
|
| 1858 |
+
{
|
| 1859 |
+
"type": "ref_text",
|
| 1860 |
+
"bbox": [
|
| 1861 |
+
0.118,
|
| 1862 |
+
0.838,
|
| 1863 |
+
0.49,
|
| 1864 |
+
0.894
|
| 1865 |
+
],
|
| 1866 |
+
"angle": 0,
|
| 1867 |
+
"content": "Wyner, A., Mochales-Palau, R., Moens, M.-F., and Milward, D. (2010). Approaches to text mining arguments from legal cases. In Semantic processing of legal texts, pages 60-79. Springer."
|
| 1868 |
+
},
|
| 1869 |
+
{
|
| 1870 |
+
"type": "ref_text",
|
| 1871 |
+
"bbox": [
|
| 1872 |
+
0.118,
|
| 1873 |
+
0.896,
|
| 1874 |
+
0.489,
|
| 1875 |
+
0.91
|
| 1876 |
+
],
|
| 1877 |
+
"angle": 0,
|
| 1878 |
+
"content": "Xiao, C., Zhong, H., Guo, Z., Tu, C., Liu, Z., Sun, M.,"
|
| 1879 |
+
},
|
| 1880 |
+
{
|
| 1881 |
+
"type": "list",
|
| 1882 |
+
"bbox": [
|
| 1883 |
+
0.118,
|
| 1884 |
+
0.075,
|
| 1885 |
+
0.49,
|
| 1886 |
+
0.91
|
| 1887 |
+
],
|
| 1888 |
+
"angle": 0,
|
| 1889 |
+
"content": null
|
| 1890 |
+
},
|
| 1891 |
+
{
|
| 1892 |
+
"type": "ref_text",
|
| 1893 |
+
"bbox": [
|
| 1894 |
+
0.53,
|
| 1895 |
+
0.075,
|
| 1896 |
+
0.883,
|
| 1897 |
+
0.09
|
| 1898 |
+
],
|
| 1899 |
+
"angle": 0,
|
| 1900 |
+
"content": "Feng, Y., Han, X., Hu, Z., Wang, H., et al. (2018)."
|
| 1901 |
+
},
|
| 1902 |
+
{
|
| 1903 |
+
"type": "ref_text",
|
| 1904 |
+
"bbox": [
|
| 1905 |
+
0.53,
|
| 1906 |
+
0.091,
|
| 1907 |
+
0.883,
|
| 1908 |
+
0.118
|
| 1909 |
+
],
|
| 1910 |
+
"angle": 0,
|
| 1911 |
+
"content": "Cail2018: A large-scale legal dataset for judgment prediction. arXiv preprint arXiv:1807.02478."
|
| 1912 |
+
},
|
| 1913 |
+
{
|
| 1914 |
+
"type": "ref_text",
|
| 1915 |
+
"bbox": [
|
| 1916 |
+
0.514,
|
| 1917 |
+
0.12,
|
| 1918 |
+
0.883,
|
| 1919 |
+
0.133
|
| 1920 |
+
],
|
| 1921 |
+
"angle": 0,
|
| 1922 |
+
"content": "Zhang, J., Zhao, Y., Saleh, M., and Liu, P. J. (2020)."
|
| 1923 |
+
},
|
| 1924 |
+
{
|
| 1925 |
+
"type": "ref_text",
|
| 1926 |
+
"bbox": [
|
| 1927 |
+
0.53,
|
| 1928 |
+
0.134,
|
| 1929 |
+
0.883,
|
| 1930 |
+
0.16
|
| 1931 |
+
],
|
| 1932 |
+
"angle": 0,
|
| 1933 |
+
"content": "Pegasus: Pre-training with extracted gap-sentences for abstractive summarization."
|
| 1934 |
+
},
|
| 1935 |
+
{
|
| 1936 |
+
"type": "list",
|
| 1937 |
+
"bbox": [
|
| 1938 |
+
0.514,
|
| 1939 |
+
0.075,
|
| 1940 |
+
0.883,
|
| 1941 |
+
0.16
|
| 1942 |
+
],
|
| 1943 |
+
"angle": 0,
|
| 1944 |
+
"content": null
|
| 1945 |
+
}
|
| 1946 |
+
]
|
| 1947 |
+
]
|
2201.13xxx/2201.13125/22a9b67e-4248-4898-877b-81213525c31c_origin.pdf
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:5894bca6d3d25f064b71a84b6fc4d07b5d2e45cfe18a9817d82fcb41b83b5eae
|
| 3 |
+
size 334991
|
2201.13xxx/2201.13125/full.md
ADDED
|
@@ -0,0 +1,267 @@
| 1 |
+
# Corpus for Automatic Structuring of Legal Documents
|
| 2 |
+
|
| 3 |
+
# Prathamesh Kalamkar $^{1,2,*}$, Aman Tiwari $^{1,2,*}$, Astha Agarwal $^{1,2,*}$, Saurabh Karn $^{3,*}$, Smita Gupta $^{3}$, Vivek Raghavan $^{1}$, Ashutosh Modi $^{4}$
|
| 4 |
+
|
| 5 |
+
$^{1}$ EkStep Foundation, $^{2}$ Thoughtworks Technologies India Pvt Ltd.,
|
| 6 |
+
|
| 7 |
+
$^{3}$ Agami, $^{4}$ Indian Institute of Technology Kanpur (IIT-K)
|
| 8 |
+
|
| 9 |
+
{prathamk, aman.tiwari, astha.agarwal}@thoughtworks.com,
|
| 10 |
+
|
| 11 |
+
{saurabh, smita}@agami.in, Vivek@ekstep.org, ashutoshm@cse.iitk.ac.in
|
| 12 |
+
|
| 13 |
+
# Abstract
|
| 14 |
+
|
| 15 |
+
In populous countries, pending legal cases have been growing exponentially. There is a need for developing techniques for processing and organizing legal documents. In this paper, we introduce a new corpus for structuring legal documents. In particular, we introduce a corpus of legal judgment documents in English that are segmented into topical and coherent parts. Each of these parts is annotated with a label coming from a list of pre-defined Rhetorical Roles. We develop baseline models for automatically predicting rhetorical roles in a legal document based on the annotated corpus. Further, we show the application of rhetorical roles to improve performance on the tasks of summarization and legal judgment prediction. We release the corpus and baseline model code along with the paper.
|
| 16 |
+
|
| 17 |
+
Keywords: Legal NLP, Rhetorical Roles, Legal Document Segmentation
|
| 18 |
+
|
| 19 |
+
# 1. Introduction
|
| 20 |
+
|
| 21 |
+
In populous countries (e.g., India), pending legal cases have been growing exponentially. For example, according to India's National Judicial Data Grid, as of December 2021, there are approximately 40 million cases pending in various courts of the country (National Judicial Data Grid, 2021). India follows a common-law system; consequently, due to the subjectivity involved in the legal process, it may not be possible to completely automate the entire judicial pipeline; nevertheless, many intermediate tasks can be automated to augment legal practitioners and hence expedite the system. For example, legal documents can be processed with the help of Natural Language Processing (NLP) techniques to organize and structure the data so that it is amenable to automatic search and retrieval. However, legal texts are different from the commonly occurring texts typically used to train NLP models. Legal documents are quite long, running into tens (sometimes hundreds) of pages. Long documents make automatic processing challenging, as information is spread throughout the document (Malik et al., 2021b). Another challenge with legal documents is the use of a different lexicon. Though legal documents use natural language (e.g., English), many commonly occurring words/terms have different legal connotations. The use of a different lexicon makes it challenging to adapt existing NLP models to legal texts (Malik et al., 2021b). Moreover, in countries like India, legal documents are manually typed and are highly unstructured and noisy (e.g., spelling and grammatical mistakes). The above-mentioned challenges make it difficult to apply existing NLP models and techniques directly, which calls for the development of legal domain-specific techniques.
|
| 22 |
+
|
| 23 |
+
Existing state-of-the-art models in NLP are data-driven
|
| 24 |
+
|
| 25 |
+
and are trained on annotated corpora. However, the legal domain suffers from a scarcity of annotated corpora, which has hindered the growth of the legal NLP domain. For example, much of the recent success in the computer vision community can be attributed to the creation and availability of annotated vision corpora such as ImageNet (Deng et al., 2009; Russakovsky et al., 2013; Russakovsky et al., 2015). In this paper, we contribute to creating annotated legal text corpora. In particular, we create a new corpus of Indian legal judgments in English that are structured and annotated with topically coherent semantic units. Since legal documents are long and unstructured, they can be divided into topically coherent parts (e.g., facts, arguments) referred to as Rhetorical Roles (Saravanan et al., 2008; Bhattacharya et al., 2019b; Malik et al., 2021a). In this paper, with the help of legal experts, we annotate legal documents with 12 different Rhetorical Roles (RRs) (details in §3). An example text annotated with some of the RRs is shown in Figure 1. As shown in the figure, an unstructured legal judgment document is segmented into semantically coherent parts, and each part is annotated with a rhetorical role label such as preamble, fact, ratio, etc. We experimented with different levels of granularity (phrase level, sentence level, paragraph level) for annotating RRs and, based on initial experiments, decided to go with sentence-level RR annotations. Each sentence in a legal document is annotated with a rhetorical role label in the proposed corpus. Typically, consecutive sentences have a similar role in a judgment document. The rhetorical role corpus is part of a general open-source effort of creating various legal corpora for promoting the development and benchmarking of legal NLP systems. This project is called BUILDNyAI. $^{1}$
|
| 26 |
+
|
| 27 |
+
IN THE COURT OF THE V ADDL SESSIONS JUDGE, MYSORE. Dated this the 23rd day of May 2013 ... The Petitioner is a businessman and he is permanent resident of Mysore City... On behalf of the Prosecution the learned Public Prosecutor has filed objection to the bail Petition stating that, there ...Now, the points that arise for consideration of the Court are: 1. Whether the Petitioner has made out sufficient grounds to release him on Anticipatory Bail? ... Heard the arguments advanced by the learned advocate for the Petitioner and the learned Public Prosecutor... Considering all these aspects, the Court is of the view that, ...Point No.2: For the foregoing reasons and in view of my above discussions, I proceed to pass the following ...The High Court by its order dated October 26, 1982 set aside the order of the Tribunal and also the assessment on the ground ...The petitioners are falsely implicated and the charge sheet has been filed against the petitioners merely ...My findings on the above points are as follows: Point No.1: In the Positive Point No.2 : As per final order for the following...In a decision reported in (2013) 1 KCCR 334 case of K.Ramachandra Reddy Vs. State of Karnataka by the Station House Officer...The decision of the Andhra Pradesh High Court ... are not relevant for purposes of deciding the question which has arisen before us...
|
| 28 |
+
|
| 29 |
+

|
| 30 |
+
Figure 1: Example of document segmentation via Rhetorical Role labels. On the left is an excerpt from a legal document; on the right is the document segmented and labelled with rhetorical role labels.
|
| 31 |
+
|
| 32 |
+
We make the following contributions in this paper:
|
| 33 |
+
|
| 34 |
+
- We create a corpus of 354 Indian legal documents annotated with rhetorical roles. The corpus has 40,305 sentences annotated with 12 different RRs. To the best of our knowledge, this is the largest corpus of legal documents annotated with RRs.
|
| 35 |
+
- In order to be of practical value, using the corpus, we develop a transformer-based baseline model for automatically annotating legal documents with sentence-level RRs.
|
| 36 |
+
- We show two use-cases for RRs. In particular, we show applications of RRs to the task of legal case summarization and legal judgment prediction.
|
| 37 |
+
- We release the corpus and the model implementations: https://legal-nlp-ekstep.github.io/Competitions/Rhetorical-Role/
|
| 38 |
+
|
| 39 |
+
# 2. Related Work
|
| 40 |
+
|
| 41 |
+
In recent times, there has been a lot of work in the area of legal text processing, and different tasks and techniques have been proposed: for example, Prior Case Retrieval (Jackson et al., 2003), Summarization (Moens et al., 1999; Saravanan et al., 2007), Case Prediction (Malik et al., 2021b; Chalkidis et al., 2019; Strickson and De La Iglesia, 2020a; Sulea et al., 2017; Kapoor et al., 2022), Argument Mining (Wyner et al., 2010; Moens et al., 2007), Information Extraction and Retrieval (Tran
|
| 42 |
+
|
| 43 |
+
$^{1}$ BUILDNyAI is a term combining the English word BUILD and the Hindi word nyAI (short for nyayi, which means justice). The project is hosted at https://legal-nlp-ekstep.github.io/Competitions/Rhetorical-Role/
|
| 44 |
+
|
| 45 |
+
et al., 2019; Grabmair et al., 2011), and Event Extraction (Lagos et al., 2010; Maxwell et al., 2009).
|
| 46 |
+
|
| 47 |
+
Recently, efforts have been made to develop corpora that could aid various legal NLP tasks; for example, Malik et al. (2021b) have released a corpus of 35K Indian Supreme Court documents for the task of judgment prediction and explanation. Chalkidis et al. (2019) have released 11,478 legal documents corresponding to the European Court of Human Rights (ECHR). Strickson and De La Iglesia (2020b) have proposed a corpus of 4,959 UK Supreme Court documents. Xiao et al. (2018) have created a large-scale corpus of 2.68 million criminal case documents and released CAIL (Chinese AI and Law Challenge) dataset for judgment prediction. A new multilingual dataset of European Union (EU) legal documents has been recently released by Chalkidis et al. (2021).
|
| 48 |
+
|
| 49 |
+
Research in rhetorical roles for legal text processing has been active in the past few years. Farzindar and Lapalme (2004) and Hachey and Grover (2006) have leveraged rhetorical roles to create summaries of legal texts. Saravanan et al. (2008) proposed a CRF-based model using hand-crafted features for segmenting documents using seven different roles. Bhatia (2014) carried out a genre analysis of legal texts to arrive at seven rhetorical categories. Bhattacharya et al. (2019b) have proposed a CRF-BiLSTM model for automatically assigning rhetorical roles to sentences in Indian legal documents. Malik et al. (2021a) have created an RR corpus annotated with 13 fine-grained roles and have further developed a multi-task learning based model
|
| 50 |
+
|
| 51 |
+
for predicting RRs. In this paper, we also propose a corpus of English Indian legal judgment documents annotated with Rhetorical Roles; however, we annotate the documents with a more extensive set of 12 rhetorical role labels and a NONE label (in case none of the 12 labels applies). Moreover, to the best of our knowledge, we create the largest corpus of 354 documents (vs. 100 documents in the previous RR corpus by Malik et al. (2021a)), with 40,315 sentences annotated with 13 $(12 + \mathrm{NONE})$ different types of rhetorical role labels. We propose state-of-the-art transformer models for RR prediction and show the use case of RRs for case summarization and legal judgment prediction.
|
| 52 |
+
|
| 53 |
+
Recent success in almost every area of NLP has been due to transformer-based neural architectures (Wang et al., 2018). We do not discuss the details of transformer architectures here and refer the reader to the survey on transformers by Tay et al. (2020). We develop transformer-based baseline models for automatically segmenting legal documents into RR units.
|
| 54 |
+
|
| 55 |
+
# 3. Rhetorical Roles Corpus
|
| 56 |
+
|
| 57 |
+
As outlined earlier, legal documents are typically long, and information is spread throughout the document. In order to make the automatic processing of documents easier, documents are divided into topically coherent segments referred to as Rhetorical Roles (Malik et al., 2021a). In this paper, we propose the use of 12 RRs and a NONE label. We started with the list of RR labels proposed by Bhattacharya et al. (2019b); however, we found some of the RRs to be ambiguous. Hence, after elaborate discussions with law professors, we split some of the RRs (arguments and precedents) to arrive at the list of 12 main roles. Details and definitions for each of the RRs are as follows:
|
| 58 |
+
|
| 59 |
+
- Preamble (PREAMBLE): This covers the metadata related to the legal judgment document. A typical judgment starts with the court name, the details of the parties, lawyers' and judges' names, and the headnote (summary). This section typically ends with a keyword like JUDGMENT or ORDER. Some documents also have HEADNOTES and ACTS sections in the beginning; these are also part of the Preamble.
|
| 60 |
+
|
| 61 |
+
- Facts (FAC): This corresponds to the facts of the case. It refers to the chronology of events that led to the filing of the case and how it evolved (e.g., a First Information Report (FIR) at a police station, filing an appeal to the Magistrate, etc.). It also covers depositions and proceedings of the current court and summaries of lower court proceedings.
|
| 62 |
+
|
| 63 |
+
- Ruling by Lower Court (RLC): Cases are not directly filed in the higher courts but are appealed from lower courts. Consequently, the documents contain judgments given by the lower courts (Trial Court, High Court) based on the present appeal (to the Supreme Court or high court). The lower court's verdict, analysis, and the ratio behind the
|
| 64 |
+
|
| 65 |
+
judgment by the lower court are annotated with this label.
|
| 66 |
+
|
| 67 |
+
- Issues (ISSUE): Some judgments mention the key points on which the verdict needs to be delivered. Such legal questions framed by the court are marked as ISSUE.
|
| 68 |
+
- Argument by Petitioner (ARG_PETITIONER): Arguments by the petitioners' lawyers. Precedent cases argued by petitioner lawyers fall under this category, but when the court discusses them later, they belong to either the relied or not relied upon category.
|
| 69 |
+
- Argument by Respondent (ARG_RESPONDENT): Arguments by the respondents' lawyers. Precedent cases argued by respondent lawyers fall under this category, but when the court discusses them later, they belong to either the relied or not relied upon category.
|
| 70 |
+
- Analysis (ANALYSIS): These are the views of the court. This includes the court's discussion of the evidence, facts presented, prior cases, and statutes; discussions of how the law is or is not applicable to the current case; and observations (non-binding) from the court. It is the parent tag for three tags: PRE_RELIED, PRE_NOT_RELIED, and STA, i.e., every statement that belongs to these three tags should also be marked as ANALYSIS.
|
| 71 |
+
- Statute (STA): This includes texts in which the court discusses established laws, which can come from a mixture of sources: Acts, Sections, Articles, Rules, Orders, Notices, Notifications, and quotations directly from the bare act. A statute sentence carries both tags: Analysis + Statute.
|
| 72 |
+
- Precedent Relied (PRE_RELIED): Texts in which the court discusses prior case documents, discussions, and decisions that were relied upon by the court for its final decision. A precedent sentence carries both tags: Analysis + Precedent.
|
| 73 |
+
- Precedent Not Relied (PRE_NOT_RELIED): Texts in which the court discusses prior case documents, discussions, and decisions that were not relied upon by the court for its final decision, for example, because the situation in that case is not relevant to the current case.
|
| 74 |
+
- Ratio of the decision (RATIO): This includes the main reason given for the application of any legal principle to the legal issue. It is the result of the court's analysis and typically appears right before the final decision. It is not the same as the "Ratio Decidendi" taught in the legal academic curriculum.
|
| 75 |
+
- Ruling by Present Court (RPC): The final decision, conclusion, and order of the court, following as the natural/logical outcome of the rationale.
|
| 76 |
+
- NONE: If a sentence does not belong to any of the above categories, it is labeled as NONE.
|
| 77 |
+
|
| 78 |
+
<table><tr><td>Dataset</td><td>Docs</td><td>Sentences</td><td>Tokens</td><td>Avg Tokens</td></tr><tr><td>Train</td><td>247</td><td>28986</td><td>938K</td><td>3797</td></tr><tr><td>Validation</td><td>30</td><td>2879</td><td>88K</td><td>2947</td></tr><tr><td>Test (in-domain)</td><td>50</td><td>4158</td><td>134K</td><td>2681</td></tr><tr><td>Test (out-of-domain)</td><td>27</td><td>4292</td><td>127K</td><td>4722</td></tr><tr><td>Total</td><td>354</td><td>40315</td><td>1.3M</td><td>3638</td></tr></table>
|
| 79 |
+
|
| 80 |
+
Table 1: Corpus statistics. The corpus is split into train, validation, and test sets; the table shows the number of documents, sentences, and tokens, and the average number of tokens per document.
|
| 81 |
+
|
| 82 |
+
# 3.1. Corpus Documents
|
| 83 |
+
|
| 84 |
+
The corpus consists of legal judgment documents from the Supreme Court of India, High Courts in different Indian states, and some district-level courts. Raw judgment text files were scraped from Indian court websites. $^{2}$ The data has a mix of Supreme Court judgments $(40\%)$ , High Court judgments $(40\%)$ , and district court judgments $(20\%)$ . To develop baseline models, we divided the dataset into train, validation, and test sets; the test set was further divided into in-domain and out-of-domain parts. The train, validation, and test (in-domain) sets contain annotated judgments belonging to tax and criminal cases. The test (out-of-domain) set contains annotated judgments from three domains: Motor Vehicles Act (9), Industrial and Labour law (8), and Land and Property law (10). The statistics of the corpus are shown in Table 1, and Table 2 gives the number of sentences for each role in the entire corpus. Qualified law experts annotated the test data with cross-checks.
|
| 85 |
+
|
| 86 |
+
<table><tr><td>Rhetorical Role</td><td>Sentences</td></tr><tr><td>ANALYSIS</td><td>14300</td></tr><tr><td>ARG_PETITIONER</td><td>1771</td></tr><tr><td>ARG_RESPONDENT</td><td>1068</td></tr><tr><td>FAC</td><td>8045</td></tr><tr><td>ISSUE</td><td>535</td></tr><tr><td>NONE</td><td>2037</td></tr><tr><td>PREAMBLE</td><td>6116</td></tr><tr><td>PRE_NOT_RELIED</td><td>217</td></tr><tr><td>PRE_RELIED</td><td>1934</td></tr><tr><td>RATIO</td><td>1014</td></tr><tr><td>RLC</td><td>1081</td></tr><tr><td>RPC</td><td>1562</td></tr><tr><td>STA</td><td>625</td></tr><tr><td>Overall</td><td>40305</td></tr></table>
|
| 87 |
+
|
| 88 |
+
Table 2: Role-wise sentence count in the entire corpus
|
| 89 |
+
|
| 90 |
+
# 3.2. Annotation Process
|
| 91 |
+
|
| 92 |
+
The annotation process was designed in consultation with legal experts (law professors and legal practitioners). Given the nature of the task, the RR annotations
|
| 93 |
+
|
| 94 |
+
require a deep understanding of the law and the legal process. Consequently, we involved law students and legal practitioners in annotating the documents. The process involved annotating each sentence in a given document with one of the 12 RR + NONE labels described earlier. We experimented with different levels of granularity (phrase level, sentence level, paragraph level, etc.) for annotating the documents with RRs. Pilot experiments indicated sentence-level RR annotation to be appropriate, as it maintains the balance (with regard to semantic coherence) between too short and too long texts. The legal documents were split into sentences using the spaCy library (spaCy, 2021). Rhetorical role annotation is not a trivial task; we faced two main challenges in the annotation activity: firstly, the availability of a large group of legal experts, and secondly, motivating the legal experts to perform annotations consistently while maintaining quality. We performed the annotation activity via crowdsourcing, as described next.
|
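To make the pre-processing step concrete, the sentence splitting described above can be sketched as follows. This is an illustrative reconstruction, not the paper's released code; the choice of the `en_core_web_sm` model is our assumption (the paper only names the spaCy library).

```python
# Minimal sketch of sentence-level pre-processing with spaCy.
import spacy

# Assumed model: any English pipeline with a parser or sentencizer works.
nlp = spacy.load("en_core_web_sm")

def split_into_sentences(judgment_text: str) -> list[str]:
    """Split a raw judgment into the sentence units that annotators label with RRs."""
    doc = nlp(judgment_text)
    return [sent.text.strip() for sent in doc.sents if sent.text.strip()]
```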
| 95 |
+
|
| 96 |
+
# 3.3. Data Annotation Pipeline
|
| 97 |
+
|
| 98 |
+
Corpus documents were annotated via a crowdsourcing activity. We invited law students from various law schools across the country to volunteer for the data annotation exercise. We created processes to onboard student volunteers and introduced them to the entire activity and its goal. Filtering was carried out at multiple stages to retain the most motivated and most consistent students (in terms of annotation quality). The entire pipeline is shown in Figure 2. We describe each stage of the pipeline in the next sections.
|
| 99 |
+
|
| 100 |
+

|
| 101 |
+
Figure 2: Data Annotation Pipeline
|
| 102 |
+
|
| 103 |
+
# 3.3.1. Student Selection
|
| 104 |
+
|
| 105 |
+
We made a nationwide call for volunteers through a network of law students. The application required students to describe their motivation, and a basic screening was done to eliminate partially filled applications. After filtering, we selected an initial group of 50 students. The selected students were then onboarded and motivated by explaining the big picture of the impact of their contribution. The data annotations were done voluntarily by law students from multiple Indian law universities. Interaction with the law students revealed that they were motivated to learn more about AI and to contribute towards the development of the AI field, and hence they volunteered for the activity. In order to smoothly conduct the annotation activity via crowdsourcing, we organized the volunteers in a hierarchical structure based on their experience and performance during a pilot study. The organizational structure for this exercise is shown in Figure 3.
|
| 106 |
+
|
| 107 |
+
Project Administrators: They designed data collection and communication processes, built tools for data
|
| 108 |
+
|
| 109 |
+

|
| 110 |
+
Figure 3: Organization Structure
|
| 111 |
+
|
| 112 |
+
collection, and supervised the overall activity. This group included law experts and the authors of the paper.
|
| 113 |
+
|
| 114 |
+
Project Coordinators: They mentored the students and resolved their doubts. They were responsible for assuring the quality of the data: coordinators identified and rectified conceptual errors among the students and further assisted the administrators during the adjudication process.
|
| 115 |
+
|
| 116 |
+
Student Volunteers: They annotated the data and also provided feedback on the entire process. Volunteers were in constant communication with the coordinators. At later stages of annotation, some of the best-performing students assisted in the adjudication process (§3.3.5). Best-performing students were selected based on two criteria: timely submissions and the ground truth agreement score. Students were assessed on whether they completed the task within a stipulated time at each annotation stage. Furthermore, each batch of annotation documents contained sentences for which true (gold) RR labels were known a priori (see also §3.3.4). Students were assessed on their performance on this ground truth (sentences with gold RR labels), and students who were correct on at least 90% of the ground truth sentences were considered for the best-performing category.
|
| 117 |
+
|
| 118 |
+
Before beginning the entire activity, we conducted a small pilot to assess the feasibility of crowdsourcing with student volunteers. Volunteers who completed the MOOC, calibration, and annotation exercises with satisfactory performance were then invited to become project coordinators for the subsequent data collection phase. The chance to become a coordinator provided further positive reinforcement for their efforts, thus keeping the students well motivated. In the end, we selected eight students as project coordinators.
|
| 119 |
+
|
| 120 |
+
# 3.3.2. MOOC
|
| 121 |
+
|
| 122 |
+
Law students typically do not have an understanding of the workings of AI. We therefore designed a MOOC (Massive Open Online Course)<sup>3</sup> for the annotators. The MOOC explained AI technologies to the law students, described the process of building datasets for AI algorithms, and explained the concept of a rhetorical role. Students were expected to complete the MOOC in a stipulated amount of time and complete the associated
|
| 123 |
+
|
| 124 |
+

|
| 125 |
+
Figure 4: Ground Truth Score Histogram
|
| 126 |
+
|
| 127 |
+
quiz, which checked for a basic understanding of the rhetorical role definitions.
|
| 128 |
+
|
| 129 |
+
# 3.3.3. Calibration
|
| 130 |
+
|
| 131 |
+
In the initial stages, students can differ in their understanding of RRs, so we calibrated the students to bring them to common ground. Calibration focused on shaping a common understanding of the definitions among students. Students were asked to annotate three judgments that experts had already annotated. The sentences that differed from the expert (gold) annotations were highlighted, and students were asked to calibrate their annotations. Calibration was an iterative process and was carried out until students reached the level of the expert annotations.
|
| 132 |
+
|
| 133 |
+
# 3.3.4. Data Annotation
|
| 134 |
+
|
| 135 |
+
In the end, 35 out of the 50 selected students qualified through the calibration stage, and this was the final pool that annotated the entire corpus. Each student annotated 24 documents, and each document was annotated by three students. We did not observe any student dropout after the calibration stage. On average, it took about 40 minutes to annotate a single document, and the entire annotation activity took around six weeks. Students annotated the train and validation documents (277), and experts annotated the 77 test documents. As described earlier, during the annotation process, each student was also assigned four documents (chosen randomly with replacement from the test set) for which gold (ground truth) annotations were known to coordinators and administrators but not to the students. The performance of students (referred to as the Ground Truth Score) on these gold documents was assessed: the ground truth score is the percentage of sentences in gold documents that are correctly annotated. The average ground truth score across all students was $85\%$ . Figure 4 shows the histogram of ground truth scores per judgment; the majority of documents are in the 90 to 100 percent range, indicative of annotations consistent with the ground truth documents. Note that the documents shown in Figure 4 (y-axis) were chosen randomly (with replacement) from the test set, and hence there is overlap between documents across different batches. Furthermore, coordinators provided feedback to students with lower scores to improve their overall annotation quality.
|
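The Ground Truth Score described above is a simple per-document accuracy; a minimal sketch (with illustrative names of our own, not from the released code) is:

```python
# Ground Truth Score: the percentage of sentences in gold documents
# that a student annotated with the correct RR label.
def ground_truth_score(student_labels: list[str], gold_labels: list[str]) -> float:
    assert len(student_labels) == len(gold_labels)
    correct = sum(s == g for s, g in zip(student_labels, gold_labels))
    return 100.0 * correct / len(gold_labels)

# Example: agreeing on 17 of 20 gold sentences yields a score of 85%.
print(ground_truth_score(["FAC"] * 17 + ["RATIO"] * 3,
                         ["FAC"] * 17 + ["ANALYSIS"] * 3))  # 85.0
```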
| 136 |
+
|
| 137 |
+
# 3.3.5. Adjudication
|
| 138 |
+
|
| 139 |
+
A majority voting scheme was used to decide the final RR label. However, in some instances, the three annotators
|
| 140 |
+
|
| 141 |
+
assigned three different labels; such documents were further sent for adjudication. The adjudication was done by experts, project coordinators, and some of the best-performing students (§3.3.1).
|
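A sketch of the majority-voting rule with the adjudication fallback (three annotators per sentence; the function name is ours):

```python
# Majority vote over three annotators; full disagreement triggers adjudication.
from collections import Counter

def resolve_label(annotations: list[str]) -> str | None:
    """Return the majority RR label, or None when all three annotators
    disagree (such sentences are escalated to adjudication)."""
    label, count = Counter(annotations).most_common(1)[0]
    return label if count >= 2 else None

print(resolve_label(["FAC", "FAC", "ANALYSIS"]))   # 'FAC'
print(resolve_label(["FAC", "RATIO", "ANALYSIS"])) # None -> adjudication
```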
| 142 |
+
|
| 143 |
+
# 3.3.6. Annotation Quality Assessment
|
| 144 |
+
|
| 145 |
+
Final annotation quality was evaluated using Fleiss' kappa (Fleiss et al., 2013). The overall Fleiss' kappa score was 0.59, pointing towards moderate agreement. We saw high agreement amongst annotators on PREAMBLE, RPC, NONE, and ISSUE. There was medium agreement on FACTS, RLC, ANALYSIS, PRECEDENT, and ARGUMENTS. RATIO was the most ambiguous role, and ANALYSIS was very often confused with FACTS and ARGUMENTS. In a judgment, a judge emphasizes some of the facts, which, as per the definitions, is considered the analysis role; however, annotators often confused such sentences with the facts role. Moreover, sometimes the judge may mention arguments and give their opinion on them; as per the definitions, this is the analysis role, but annotators sometimes confused it with the argument role. FACTS was also sometimes confused with RLC (Ruling by Lower Court).
|
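For reference, Fleiss' kappa over the three-annotator labels can be computed as below. This is a hedged sketch using statsmodels (an assumed dependency; the paper does not name its tooling), with toy data:

```python
# Inter-annotator agreement via Fleiss' kappa.
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# rows = sentences, columns = annotators, values = RR label ids (e.g., 0..12)
ratings = np.array([
    [0, 0, 0],  # all three annotators agree
    [0, 0, 1],  # two agree
    [1, 2, 3],  # all disagree -> adjudicated
])
table, _ = aggregate_raters(ratings)  # sentences x categories count matrix
print(fleiss_kappa(table))            # kappa in [-1, 1]; 0.59 reported overall
```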
| 146 |
+
|
| 147 |
+
# 4. RR Prediction Baseline Models
|
| 148 |
+
|
| 149 |
+
The end goal behind this work is to encourage the development of systems that can automatically segment a new legal document in terms of rhetorical roles. Towards this goal, we experimented with some baseline models. Since transformer-based models (Wolf et al., 2020) have shown state-of-the-art (SOTA) performance on most NLP tasks, including tasks in the legal NLP domain (Malik et al., 2021b), we mainly experimented with them. In the RR prediction task, given a legal document, the task is to predict the RR label for each sentence in the document; we pose this as a multi-class sequence prediction problem. We initially experimented with variants of the model by Bhattacharya et al. (2019b). In particular, we use a CRF (Conditional Random Field) model for RR prediction. The features for this CRF model come from a transformer, i.e., the BERT-BASE (Devlin et al., 2018) model is used to get sentence embeddings corresponding to the CLS token. These sentence embeddings are then passed through the CRF layer to get the final predictions. We call this model BERT_CRF. We also tried the architecture proposed by Cohan et al. (2019), which captures contextual dependencies using only BERT, without the need for hierarchical encoding using a CRF. We call this model BERT_only. After the experiments with vanilla transformer models, we finally created the baseline system using the SciBERT-HSLN architecture (Brack et al., 2021). Figure 5 shows the overall architecture of the proposed model. In the proposed model, each sentence is passed through the BERT-BASE model to get word embeddings; these embeddings are further processed by a Bi-LSTM layer followed by an attention-based pooling layer to get sentence representations $\{s_1,s_2,\dots s_n\}$ .
|
| 150 |
+
|
| 151 |
+

|
| 152 |
+
Figure 5: RR Prediction Baseline model inspired by Brack et al. (2021)
|
| 153 |
+
|
| 154 |
+
<table><tr><td>Model</td><td>Precision</td><td>Recall</td><td>F1</td></tr><tr><td>BERT_CRF</td><td>0.24</td><td>0.24</td><td>0.23</td></tr><tr><td>BERT_only</td><td>0.67</td><td>0.68</td><td>0.67</td></tr><tr><td>SciBERT-HSLN</td><td>0.79</td><td>0.80</td><td>0.79</td></tr></table>
|
| 155 |
+
|
| 156 |
+
Table 3: Performance of models on test (in-domain) data
|
| 157 |
+
|
| 158 |
+
The Context Enrichment layer encodes the contextual information by taking the sequence of sentence representations, resulting in contextualized sentence representations $\{c_1,c_2,\dots ,c_n\}$ . This is followed by MLP layers and a CRF layer that leverage these distributed representation features to predict the RR label for each sentence.
|
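The architecture just described can be condensed into the following PyTorch sketch: token embeddings from BERT, a word-level Bi-LSTM with attention pooling producing $s_i$, a sentence-level Bi-LSTM producing $c_i$, and a per-sentence classification head. Hyperparameters, masking, and the CRF head are omitted or assumed; see Brack et al. (2021) for the full SciBERT-HSLN model.

```python
# Condensed sketch of the HSLN-style RR baseline (not the released implementation).
import torch
import torch.nn as nn
from transformers import AutoModel

class HSLNSketch(nn.Module):
    def __init__(self, n_labels: int = 13, hidden: int = 256):
        super().__init__()
        self.bert = AutoModel.from_pretrained("bert-base-uncased")  # stand-in encoder
        self.word_lstm = nn.LSTM(768, hidden, bidirectional=True, batch_first=True)
        self.attn = nn.Linear(2 * hidden, 1)         # attention-based pooling
        self.ctx_lstm = nn.LSTM(2 * hidden, hidden, bidirectional=True, batch_first=True)
        self.head = nn.Linear(2 * hidden, n_labels)  # a CRF layer would sit on top

    def forward(self, input_ids, attention_mask):
        # input_ids: (n_sentences, max_tokens) for one document
        with torch.no_grad():
            tok = self.bert(input_ids, attention_mask=attention_mask).last_hidden_state
        h, _ = self.word_lstm(tok)                   # (n_sent, tokens, 2*hidden)
        w = torch.softmax(self.attn(h), dim=1)       # per-token attention weights
        sents = (w * h).sum(dim=1)                   # sentence representations s_i
        ctx, _ = self.ctx_lstm(sents.unsqueeze(0))   # contextualized c_i
        return self.head(ctx.squeeze(0))             # per-sentence label scores
```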
| 159 |
+
|
| 160 |
+
Results: The performance of the different models was tested on the test (in-domain) data, and the results are given in Table 3. We use the standard weighted F1 score metric for evaluation. As can be observed, the BERT_CRF model performs the worst, and the BERT_only model performs worse than the proposed SciBERT-HSLN model, which achieved a weighted F1 score of 0.79. This is perhaps because SciBERT-HSLN, being a sequential model, can capture longer-range dependencies between sentences in a document. The results of the model on the test set for each of the RR labels are shown in Table 4, and Figure 6 shows the confusion matrix for the SciBERT-HSLN model. As can be observed from Table 4 and Figure 6, ARGUMENT-based roles are misclassified very often, confused between the two types of ARGUMENTS and also sometimes with FACTS and ANALYSIS. PREAMBLE is almost perfectly classified. PRECEDENT NOT RELIED is completely misclassified, confused with PRECEDENT RELIED and ANALYSIS. RATIO is often confused with ANALYSIS, a trend similar to what was observed for the annotators. Also similar to the annotators, RPC, PREAMBLE, NONE, and ISSUE are classified with decent F1 scores. STATUTES are also not well classified, as a judge often mentions some laws in their opinion; the model tends to learn these spurious patterns as ANALYSIS and misclassifies actual stat-
|
| 161 |
+
|
| 162 |
+
<table><tr><td>Rhetorical Role</td><td>Precision</td><td>Recall</td><td>F1</td></tr><tr><td>ANALYSIS</td><td>0.77</td><td>0.89</td><td>0.83</td></tr><tr><td>ARG_PETITIONER</td><td>0.60</td><td>0.64</td><td>0.62</td></tr><tr><td>ARG_RESPONDENT</td><td>0.84</td><td>0.41</td><td>0.55</td></tr><tr><td>FAC</td><td>0.80</td><td>0.84</td><td>0.82</td></tr><tr><td>ISSUE</td><td>0.93</td><td>0.87</td><td>0.90</td></tr><tr><td>NONE</td><td>0.85</td><td>0.84</td><td>0.85</td></tr><tr><td>PREAMBLE</td><td>0.96</td><td>0.98</td><td>0.97</td></tr><tr><td>PRE_NOT_RELIED</td><td>0.00</td><td>0.00</td><td>0.00</td></tr><tr><td>PRE_RELIED</td><td>0.79</td><td>0.60</td><td>0.68</td></tr><tr><td>RATIO</td><td>0.53</td><td>0.56</td><td>0.54</td></tr><tr><td>RLC</td><td>0.75</td><td>0.45</td><td>0.57</td></tr><tr><td>RPC</td><td>0.78</td><td>0.87</td><td>0.82</td></tr><tr><td>STA</td><td>0.77</td><td>0.54</td><td>0.64</td></tr><tr><td>Overall</td><td>0.79</td><td>0.80</td><td>0.79</td></tr></table>

Table 4: Precision, recall, and F1 scores of the RR baseline model for each rhetorical role on the test data
|
| 163 |
+
|
| 164 |
+

|
| 165 |
+
Figure 6: Confusion Matrix for SciBERT-HSLN model predictions on the test data
|
| 166 |
+
|
| 167 |
+
utes as ANALYSIS. We have also created a leaderboard for the task of RR prediction where other researchers can experiment with various approaches.
|
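The weighted F1 reported above is the standard weighted average over per-sentence RR predictions; a minimal evaluation sketch (with placeholder labels) is:

```python
# Weighted precision/recall/F1 over per-sentence RR predictions.
from sklearn.metrics import precision_recall_fscore_support

y_true = ["PREAMBLE", "FAC", "ANALYSIS", "RATIO", "RPC"]
y_pred = ["PREAMBLE", "FAC", "ANALYSIS", "ANALYSIS", "RPC"]

p, r, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, average="weighted", zero_division=0)
print(f"weighted P={p:.2f} R={r:.2f} F1={f1:.2f}")
```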
| 168 |
+
|
| 169 |
+
Results on test (out-of-domain) data: In order to check whether the baseline model trained on criminal and tax cases generalizes to other domains, we tested the baseline model on 27 judgments from Motor Vehicles, Industrial and Labour, and Land and Property cases. The weighted F1 reduced to 0.70. This degradation in performance is mainly due to the different styles of writing in these judgments.
|
| 170 |
+
|
| 171 |
+
# 5. Applications of Rhetorical Roles Prediction Task
|
| 172 |
+
|
| 173 |
+
The purpose of creating a rhetorical role corpus is to enable automated understanding of legal documents by segmenting them into topically coherent units. This can be helpful in various applications such as legal document
|
| 174 |
+
|
| 175 |
+
|
| 176 |
+
|
| 177 |
+
|
| 178 |
+
|
| 179 |
+
|
| 180 |
+
|
| 181 |
+
summarization (Bhattacharya et al., 2019a) and legal judgment prediction (Malik et al., 2021b). In this paper, we explore both use cases: we experimented with how rhetorical role prediction could help create extractive and abstractive summaries of Indian court judgments and predict the judgment outcome from the judgment text.
|
| 182 |
+
|
| 183 |
+
# 5.1. Extractive Summarization of Court Judgments using Rhetorical Roles
|
| 184 |
+
|
| 185 |
+
We explored the task of extractive summarization: for a given legal document, the task requires extracting the salient sentences that summarize the document. We experimented with the LawBriefs corpus, consisting of 285 extractive summaries of Indian court judgments prepared by law students from a National Law University in India. The corpus was created by providing judgment documents to law students, followed by a questionnaire that required them to pick the salient sentences that would answer the questions and, in the process, create the summaries. The questions pertained to facts, arguments, issues, ratio, and decisions. We wanted to experiment with how rhetorical roles could be helpful in extracting summaries.
|
| 186 |
+
|
| 187 |
+
We finetuned the BERTSUM (Liu and Lapata, 2019) model on the LawBriefs data to pick the top $20\%$ of sentences as the summary. Since the judgments are much longer than BERTSUM's 512-token limit, we created non-overlapping chunks of 512 tokens: 3151 chunks in the training data from 235 judgments, and 827 chunks from 50 judgments as test data. We then trained another model, which also takes as input the rhetorical role of each sentence: we concatenated the 768-dimensional sentence vector from the CLS token with the one-hot encoded sentence rhetorical role. The idea is that if certain rhetorical roles are more important than others for creating summaries, then the model will learn this. We call this model BERTSUM RR. Discussions with legal experts revealed that ISSUE, RATIO, and RPC sentences are important in a summary and must always be selected without summarization, so we copied all sentences with the predicted rhetorical roles ISSUE, RATIO, and RPC regardless of whether they were present in the top $20\%$ of sentences. Model performance, evaluated using ROUGE scores (Lin, 2004), is compared in Table 5. The results indicate that rhetorical roles are useful in selecting better summary sentences.

<table><tr><td>Model</td><td>ROUGE-1</td><td>ROUGE-2</td><td>ROUGE-L</td></tr><tr><td>BERTSUM</td><td>0.60</td><td>0.42</td><td>0.59</td></tr><tr><td>BERTSUM RR</td><td>0.62</td><td>0.46</td><td>0.61</td></tr></table>

Table 5: Extractive Summarization Results
|
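The two RR-specific ingredients of BERTSUM RR described above are (1) appending a one-hot rhetorical-role vector to each sentence's CLS embedding and (2) always copying ISSUE/RATIO/RPC sentences into the extractive summary. A hedged sketch with illustrative names (not the released code):

```python
import numpy as np

ROLES = ["ANALYSIS", "ARG_PETITIONER", "ARG_RESPONDENT", "FAC", "ISSUE", "NONE",
         "PREAMBLE", "PRE_NOT_RELIED", "PRE_RELIED", "RATIO", "RLC", "RPC", "STA"]
ALWAYS_KEEP = {"ISSUE", "RATIO", "RPC"}

def augment(cls_vec: np.ndarray, role: str) -> np.ndarray:
    """768-dim CLS vector -> 781-dim vector with the one-hot RR appended."""
    one_hot = np.eye(len(ROLES))[ROLES.index(role)]
    return np.concatenate([cls_vec, one_hot])

def select_summary(sentences, roles, scores, top_frac=0.20):
    """Top-20% sentences by model score, plus all ISSUE/RATIO/RPC sentences."""
    k = max(1, int(top_frac * len(sentences)))
    top = set(np.argsort(scores)[-k:])
    keep = top | {i for i, r in enumerate(roles) if r in ALWAYS_KEEP}
    return [sentences[i] for i in sorted(keep)]  # preserve document order
```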
| 188 |
+
|
| 189 |
+
# 5.2. Abstractive Summarization of Court Judgments using Rhetorical Roles
|
| 190 |
+
|
| 191 |
+
The task of abstractive summarization requires generating concise text summaries of legal documents. For
|
| 192 |
+
|
| 193 |
+
our experiments, we considered 50 randomly selected documents from the LawBriefs dataset (as described in §5.1) as test data. For this task, we used the pre-trained Legal Pegasus model. $^{5}$ Legal Pegasus is a version of Pegasus (Zhang et al., 2020) fine-tuned on a US securities litigation dataset. $^{6}$ We used the pre-trained Legal Pegasus model to generate abstractive summaries for the baseline: we split the document into non-overlapping chunks of 1024 tokens, passed each chunk through the model to generate a summary, and obtained the final summary by concatenating the summaries of all chunks. This constituted the baseline model. We wanted to see how RRs could help generate better summaries. Towards this goal, we segmented the document in terms of rhetorical roles, and each segment was passed separately through the Legal Pegasus model to generate a summary. The final summary was obtained by concatenating the summaries corresponding to each of the rhetorical roles in the order they appear in the document. This corresponds to the Legal Pegasus RR model. Both models are compared on the test set, and ROUGE scores for both models are shown in Table 6. As can be observed in Table 6, the use of rhetorical roles helps improve performance on the task of abstractive summarization.
|
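The Legal Pegasus RR procedure (segment by predicted role, summarize each segment, concatenate in document order) can be sketched as follows. The Hugging Face checkpoint id is an assumption for illustration; the paper cites the model only via a footnote.

```python
# Per-rhetorical-role abstractive summarization with a Pegasus-style model.
from itertools import groupby
from transformers import pipeline

summarizer = pipeline("summarization", model="nsi319/legal-pegasus")  # assumed id

def summarize_by_role(sentences: list[str], roles: list[str]) -> str:
    parts = []
    # group consecutive sentences sharing the same predicted RR
    for role, group in groupby(zip(sentences, roles), key=lambda x: x[1]):
        segment = " ".join(s for s, _ in group)
        parts.append(summarizer(segment, truncation=True)[0]["summary_text"])
    return " ".join(parts)  # concatenate in document order
```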
| 194 |
+
|
| 195 |
+
<table><tr><td>Model</td><td>ROUGE-1</td><td>ROUGE-2</td><td>ROUGE-L</td></tr><tr><td>Legal Pegasus</td><td>0.55</td><td>0.34</td><td>0.47</td></tr><tr><td>Legal Pegasus RR</td><td>0.56</td><td>0.36</td><td>0.48</td></tr></table>

Table 6: Abstractive Summarization Results
|
| 196 |
+
|
| 197 |
+
# 5.3. Court Judgment Prediction using Rhetorical Roles
|
| 198 |
+
|
| 199 |
+
Malik et al. (2021b) created a corpus (ILDC: Indian Legal Documents Corpus) and a task (CJPE: Court Judgment Prediction and Explanation) for predicting and explaining court judgments based on the judgment text. For the judgment prediction task, it is essential to identify which sentences provide hints about the final decision and to use that filtered data as input for prediction. We predicted the rhetorical role of each sentence of the train and test data using the baseline rhetorical role model. In the ILDC dataset, we removed the sentences with RPC and RATIO tags, making the task more challenging. We also removed the judgments for which no ANALYSIS was predicted. Note that the ILDC dataset is already anonymized and takes care of the biases and ethical concerns associated with the task of judgment prediction. Moreover, we use judgment prediction only as a use case and do not believe that an automated system could replace a human judge; rather,
|
| 200 |
+
|
| 201 |
+
such a system could augment a human and expedite legal processes, especially in highly populated countries like India.
|
| 202 |
+
|
| 203 |
+
For the task of judgment prediction, the training data had 5044 judgments, and the test data had 977 judgments. The idea is to filter the training data using rhetorical roles to check the impact on model performance, keeping the model architecture the same. We used XLNet in the ILDC single-model setup proposed in Malik et al. (2021b) to predict the judgment outcome from the last 512 tokens of the judgment text; we call this approach XLNet_last512. The model ran for 13 epochs and was then early-stopped. In another experiment, we trained the same architecture to predict the judgment outcome from the last 512 tokens of the ANALYSIS role sentences; we call this model XLNet_last512_Analysis. This model ran for 12 epochs and was then early-stopped. The model performance comparison is given in Table 7. As observed from the results, filtering the input text for the ANALYSIS role improves the prediction.
|
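The input filtering behind XLNet_last512_Analysis (keep only sentences predicted as ANALYSIS, then feed the last 512 tokens to the classifier) can be sketched as below; the tokenizer choice is an assumption consistent with the XLNet model used.

```python
# Build the filtered, truncated input for judgment prediction.
from transformers import XLNetTokenizerFast

tokenizer = XLNetTokenizerFast.from_pretrained("xlnet-base-cased")

def last512_analysis(sentences: list[str], roles: list[str]):
    analysis_text = " ".join(s for s, r in zip(sentences, roles) if r == "ANALYSIS")
    ids = tokenizer(analysis_text, add_special_tokens=False)["input_ids"][-512:]
    # re-tokenize the kept span with special tokens for the classifier
    return tokenizer(tokenizer.decode(ids), truncation=True,
                     max_length=512, return_tensors="pt")
```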
| 204 |
+
|
| 205 |
+
|
| 206 |
+
|
| 207 |
+
<table><tr><td>Model</td><td>Precision</td><td>Recall</td><td>F1</td></tr><tr><td>XLNet_last512</td><td>0.76</td><td>0.49</td><td>0.59</td></tr><tr><td>XLNet_last512_Analysis</td><td>0.71</td><td>0.55</td><td>0.62</td></tr></table>
|
| 208 |
+
|
| 209 |
+
Table 7: Judgment Prediction Results
|
| 210 |
+
|
| 211 |
+
# 6. Conclusion and Future Directions
|
| 212 |
+
|
| 213 |
+
In this paper, we proposed a new corpus of legal judgment documents annotated with 13 different Rhetorical Roles. The corpus was created via crowdsourcing involving law students. We also proposed baseline models for automatic rhetorical role prediction in a legal document. For some of the roles, the model shows prediction trends similar to those of the human annotators. Nevertheless, there is scope for further improvement, and we have created a leaderboard for the task so that researchers from the community can contribute towards improving the RR prediction system. We also showed two applications of rhetorical roles: summarization and judgment prediction; for both use cases, the use of rhetorical roles helps improve results. We have released the corpus and the baseline models and encourage the community to use these to develop other legal applications as well.
|
| 214 |
+
|
| 215 |
+
# Acknowledgements
|
| 216 |
+
|
| 217 |
+
We thank the EkStep Foundation for funding this work. We thank all the law experts, student volunteers, and coordinators for contributing to the data annotation. We thank LawBriefs for sharing the summaries. The author Ashutosh Modi would like to acknowledge the support of Google Research India via the Faculty Research Award Grant 2021.
|
| 218 |
+
|
| 219 |
+
# 7. Bibliographical References
|
| 220 |
+
|
| 221 |
+
Bhatia, V. K. (2014). Analysing genre: Language use in professional settings. Routledge.
|
| 222 |
+
|
| 223 |
+
Bhattacharya, P., Hiware, K., Rajgaria, S., Pochhi, N., Ghosh, K., and Ghosh, S. (2019a). A comparative study of summarization algorithms applied to legal case judgments. In European Conference on Information Retrieval, pages 413-428. Springer.
|
| 224 |
+
Bhattacharya, P., Paul, S., Ghosh, K., Ghosh, S., and Wyner, A. (2019b). Identification of rhetorical roles of sentences in Indian legal judgments.
|
| 225 |
+
Brack, A., Hoppe, A., Buschermohle, P., and Ewerth, R. (2021). Sequential sentence classification in research papers using cross-domain multi-task learning.
|
| 226 |
+
Chalkidis, I., Androutsopoulos, I., and Aletras, N. (2019). Neural legal judgment prediction in English. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4317-4323, Florence, Italy, July. Association for Computational Linguistics.
|
| 227 |
+
Chalkidis, I., Fergadiotis, M., and Androutsopoulos, I. (2021). MultiEURLEX - a multi-lingual and multi-label legal document classification dataset for zero-shot cross-lingual transfer. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 6974–6996, Online and Punta Cana, Dominican Republic, November. Association for Computational Linguistics.
|
| 228 |
+
Cohan, A., Beltagy, I., King, D., Dalvi, B., and Weld, D. (2019). Pretrained language models for sequential sentence classification. Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP).
|
| 229 |
+
Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., and Fei-Fei, L. (2009). ImageNet: A Large-Scale Hierarchical Image Database. In CVPR09.
|
| 230 |
+
Devlin, J., Chang, M., Lee, K., and Toutanova, K. (2018). BERT: pre-training of deep bidirectional transformers for language understanding. CoRR, abs/1810.04805.
|
| 231 |
+
Farzindar, A. and Lapalme, G. (2004). Letsum, an automatic legal text summarizing system.
|
| 232 |
+
Fleiss, J. L., Levin, B., and Paik, M. C. (2013). Statistical methods for rates and proportions. John Wiley & Sons.
|
| 233 |
+
Grabmair, M., Ashley, K. D., Hwa, R., and Sweeney, P. M. (2011). Toward extracting information from public health statutes using text classification machine learning. In Legal Knowledge and Information Systems, pages 73-82. IOS Press.
|
| 234 |
+
Hachey, B. and Grover, C. (2006). Extractive summarisation of legal texts. Artificial Intelligence and Law, 14(4):305-345.
|
| 235 |
+
Jackson, P., Al-Kofahi, K., Tyrrell, A., and Vachher, A. (2003). Information extraction from case law and retrieval of prior cases. Artificial Intelligence, 150(1-2):239-290.
|
| 236 |
+
Kapoor, A., Dhawan, M., Goel, A., Arjun, T., Agrawal,
|
| 237 |
+
|
| 238 |
+
V., Agrawal, A., Bhattacharya, A., Kumaraguru, P., and Modi, A. (2022). HLDC: Hindi Legal Documents Corpus. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2022. Association for Computational Linguistics.
|
| 239 |
+
Lagos, N., Segond, F., Castellani, S., and O'Neill, J. (2010). Event extraction for legal case building and reasoning. In International Conference on Intelligent Information Processing, pages 92-101. Springer.
|
| 240 |
+
Lin, C.-Y. (2004). ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out, pages 74-81, Barcelona, Spain, July. Association for Computational Linguistics.
|
| 241 |
+
Liu, Y. and Lapata, M. (2019). Text summarization with pretrained encoders.
|
| 242 |
+
Malik, V., Sanjay, R., Guha, S. K., Nigam, S. K., Hazarika, A., Bhattacharya, A., and Modi, A. (2021a). Semantic Segmentation of Legal Documents via Rhetorical Roles. CoRR, abs/2112.01836.
|
| 243 |
+
Malik, V., Sanjay, R., Nigam, S. K., Ghosh, K., Guha, S. K., Bhattacharya, A., and Modi, A. (2021b). ILDC for CJPE: Indian legal documents corpus for court judgment prediction and explanation. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 4046-4062, Online, August. Association for Computational Linguistics.
|
| 244 |
+
Maxwell, K. T., Oberlander, J., and Lavrenko, V. (2009). Evaluation of semantic events for legal case retrieval. In Proceedings of the WSDM'09 Workshop on Exploiting Semantic Annotations in Information Retrieval, pages 39-41.
|
| 245 |
+
Moens, M.-F., Uytendaele, C., and Dumortier, J. (1999). Abstracting of legal cases: the potential of clustering based on the selection of representative objects. Journal of the American Society for Information Science, 50(2):151-161.
|
| 246 |
+
Moens, M.-F., Boiy, E., Palau, R. M., and Reed, C. (2007). Automatic detection of arguments in legal texts. In Proceedings of the 11th international conference on Artificial intelligence and law, pages 225-230.
|
| 247 |
+
National Judicial Data Grid. (2021). National judicial data grid statistics. https://www.njdg.ecourts.gov.in/njdgnew/index.php.
|
| 248 |
+
Russakovsky, O., Deng, J., Huang, Z., Berg, A. C., and Fei-Fei, L. (2013). Detecting avocados to zucchini: what have we done, and where are we going? In International Conference on Computer Vision (ICCV).
|
| 249 |
+
Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., Berg, A. C., and Fei-Fei, L. (2015). ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision (IJCV), 115(3):211-252.
|
| 250 |
+
|
| 251 |
+
Saravanan, M., Ravindran, B., and Raman, S. (2007). Using legal ontology for query enhancement in generating a document summary. Frontiers In Artificial Intelligence and Applications, 165:171.
|
| 252 |
+
Saravanan, M., Ravindran, B., and Raman, S. (2008). Automatic identification of rhetorical roles using conditional random fields for legal document summarization. In Proceedings of the Third International Joint Conference on Natural Language Processing: Volume-I.
|
| 253 |
+
spaCy. (2021). spaCy Toolkit. https://spacy.io/.
|
| 254 |
+
Strickson, B. and De La Iglesia, B. (2020a). Legal Judgement Prediction for UK Courts. In Proceedings of the 2020 The 3rd International Conference on Information Science and System, pages 204-209, Cambridge United Kingdom, March. ACM.
|
| 255 |
+
Strickson, B. and De La Iglesia, B. (2020b). Legal Judgement Prediction for UK Courts. In Proceedings of the 2020 The 3rd International Conference on Information Science and System, pages 204-209, Cambridge United Kingdom, March. ACM.
|
| 256 |
+
Sulea, O.-M., Zampieri, M., Vela, M., and van Genabith, J. (2017). Predicting the law area and decisions of French Supreme Court cases. In Proceedings of the International Conference on Advances in Natural Language Processing, RANLP 2017, pages 716-722, Varna, Bulgaria, September. INCOMA Ltd.
|
| 257 |
+
Tay, Y., Dehghani, M., Bahri, D., and Metzler, D. (2020). Efficient transformers: A survey. arXiv preprint arXiv:2009.06732.
|
| 258 |
+
Tran, V., Nguyen, M. L., and Satoh, K. (2019). Building legal case retrieval systems with lexical matching and summarization using a pre-trained phrase scoring model. In Proceedings of the Seventeenth International Conference on Artificial Intelligence and Law, pages 275-282.
|
| 259 |
+
Wang, A., Singh, A., Michael, J., Hill, F., Levy, O., and Bowman, S. R. (2018). Glue: A multi-task benchmark and analysis platform for natural language understanding. arXiv preprint arXiv:1804.07461.
|
| 260 |
+
Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Davison, J., Shleifer, S., von Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C., Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., and Rush, A. (2020). Transformers: State-of-the-Art Natural Language Processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online, October. Association for Computational Linguistics.
|
| 261 |
+
Wyner, A., Mochales-Palau, R., Moens, M.-F., and Milward, D. (2010). Approaches to text mining arguments from legal cases. In Semantic processing of legal texts, pages 60-79. Springer.
|
| 262 |
+
Xiao, C., Zhong, H., Guo, Z., Tu, C., Liu, Z., Sun, M.,
|
| 263 |
+
|
| 264 |
+
Feng, Y., Han, X., Hu, Z., Wang, H., et al. (2018).
|
| 265 |
+
CAIL2018: A large-scale legal dataset for judgment prediction. arXiv preprint arXiv:1807.02478.
|
| 266 |
+
Zhang, J., Zhao, Y., Saleh, M., and Liu, P. J. (2020).
|
| 267 |
+
Pegasus: Pre-training with extracted gap-sentences for abstractive summarization.
|
2201.13xxx/2201.13125/images.zip
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:685a098ac9c5a5d3961748d86017409dd55512a64470d481ca109e758015b95b
|
| 3 |
+
size 355891
|
2201.13xxx/2201.13125/layout.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2201.13xxx/2201.13143/abce0a26-20db-491d-836f-c008291aceaf_content_list.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2201.13xxx/2201.13143/abce0a26-20db-491d-836f-c008291aceaf_model.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2201.13xxx/2201.13143/abce0a26-20db-491d-836f-c008291aceaf_origin.pdf
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:ea97569d67385052563f1141f8dcb1b39c836de3fdf518351511916101222466
|
| 3 |
+
size 3991102
|
2201.13xxx/2201.13143/full.md
ADDED
|
@@ -0,0 +1,373 @@
| 1 |
+
# CoTV: Cooperative Control for Traffic Light Signals and Connected Autonomous Vehicles using Deep Reinforcement Learning
|
| 2 |
+
|
| 3 |
+
Jiaying Guo, Student Member, IEEE, Long Cheng, Senior Member, IEEE, and Shen Wang, Member, IEEE
|
| 4 |
+
|
| 5 |
+
Abstract—Reducing travel time alone is insufficient to support the development of future smart transportation systems. To align with the United Nations Sustainable Development Goals (UN-SDG), further reductions in fuel consumption and emissions, improvements in traffic safety, and ease of infrastructure deployment and maintenance should also be considered. Different from existing work that optimizes the control of either traffic light signals (to improve intersection throughput) or vehicle speed (to stabilize traffic), this paper presents a multi-agent Deep Reinforcement Learning (DRL) system called CoTV, which Cooperatively controls both Traffic light signals and Connected Autonomous Vehicles (CAV). CoTV can therefore balance reductions in travel time, fuel consumption, and emissions. At the same time, CoTV is easy to deploy, as each traffic light controller cooperates with only one CAV, the one nearest to it, on each incoming road. This enables more efficient coordination between traffic light controllers and CAVs, leading to the convergence of CoTV's training in large-scale multi-agent scenarios where convergence is traditionally difficult. We give the detailed system design of CoTV and demonstrate its effectiveness in a simulation study using SUMO under various grid maps and realistic urban scenarios with mixed-autonomy traffic.
|
| 6 |
+
|
| 7 |
+
Index Terms—Deep Reinforcement Learning, Multi-agent System, Connected Autonomous Vehicles, Mixed-autonomy Traffic
|
| 8 |
+
|
| 9 |
+
# I. INTRODUCTION
|
| 10 |
+
|
| 11 |
+
Developing the next generation of Intelligent Transportation Systems (ITS) is one of the key ways to achieve the United Nations Sustainable Development Goals (UN-SDG) [1]. In particular, firstly, sustainable traffic requires higher efficiency to reduce the enormous monetary losses caused by excessive traffic delays. Secondly, more eco-friendly driving should be encouraged to decrease fuel consumption and gas emissions (mainly $\mathrm{CO}_{2}$ ). Thirdly, traffic safety is inherently one of the key indicators of sustainable traffic and should be enhanced by avoiding potential collisions to save lives. Last but not least, to achieve those sustainable traffic goals, easy-to-deploy ITS infrastructure is critical.
|
| 12 |
+
|
| 13 |
+
Most existing research in sustainable urban traffic control adjusts either traffic light signals or vehicle speed. Traffic light signal controllers dynamically select the best timing plan
|
| 14 |
+
|
| 15 |
+
Jiaying Guo is with the School of Computer Science, University College Dublin, Ireland, E-mail: jiaying.guo@ucdconnect.ie
|
| 16 |
+
Long Cheng is with the School of Control and Computer Engineering, North China Electric Power University, Beijing 102206, China. E-mail: lcheng@ncepu.edu.cn
|
| 17 |
+
Shen Wang is with the School of Computer Science, University College Dublin, Ireland, E-mail: shen.wang@ucd.ie
|
| 18 |
+
|
| 19 |
+

|
| 20 |
+
Fig. 1. The illustration of the motivation and goals of our proposed system CoTV. Traditionally, traffic light controllers can increase intersection throughput, thus reducing travel time and fuel, while a CAV adjusts its speed to reduce fuel and maintain a safe time gap to its surrounding traffic. Our CoTV coordinates these two different types of agents to achieve a more comprehensive set of sustainable traffic goals.
|
| 21 |
+
|
| 22 |
+
according to the real-time traffic. As shown in Fig. 1, this can directly increase intersection throughput, thus reducing travel time as well as energy consumption and emissions. A CAV can proactively control its acceleration, as shown in Fig. 1, to achieve more stable traffic nearby with relatively higher driving velocity (i.e., lower fuel consumption and gas emissions) and to keep a safe distance [2] from the surrounding traffic (i.e., longer time-to-collision). Recent research from the transportation domain attempts to explore the potential of joint control of both traffic light signals and vehicle speed. Methodologies used in such research include mixed-integer linear programming [3], the enumeration method, and the pseudo-spectral method [4]. However, these methods may not perform well in realistic traffic scenarios because their deterministic traffic control decisions are insufficient to deal with a fast-changing urban environment [5].
|
| 23 |
+
|
| 24 |
+
Unlike the aforementioned traditional methods, many researchers have demonstrated the great potential of DRL in solving traffic control challenges under complex urban scenarios. For instance, inspired by the traditional traffic signal control method MaxPressure [6], PressLight [7] achieves even better traffic efficiency improvements under various urban scenarios using DRL. Moreover, DRL-based traffic signal control can also reduce the waiting time of specific vehicles in emergency situations where traffic conditions vary quickly [8]. On the other hand, efficient and effective CAV speed
control can stabilize traffic in many complex and changing road scenarios using DRL [9], which is traditionally infeasible with optimization-based controllers. However, there is a lack of research using DRL for the joint control of both urban intersection signals and vehicle speed. Such DRL-based joint control is challenging due to the difficulty of designing a proper cooperation scheme for two different agent types (i.e., traffic light controllers and CAV). Moreover, the unpredictability of urban mixed-autonomy traffic makes it even harder for training to converge within a reasonable number of iterations.
To overcome the limitations mentioned above, we propose CoTV: a multi-agent DRL-based system that cooperatively controls traffic light signals and CAVs. CoTV balances the advantages of both traffic light controllers and CAVs to achieve more sustainable traffic, as shown in Fig.1. Concretely, the contributions of our work are as follows:
- Effective cooperation schemes between CAVs and traffic light controllers. Different from the methodology in the literature on Multi-Agent Reinforcement Learning (MARL) for traffic control, instead of using an action-dependent design [10] (i.e., the action of one agent depends on the actions of other agents in the shared environment), our cooperation schemes rely on the exchange of states between agents within the range of one intersection, including the traffic light controller and approaching CAVs. This so-called "action-independent MARL" [11] works in CoTV because the objectives of the traffic light controller and the CAV for traffic improvement are inherently complementary (rather than overlapping, i.e., all improving travel time or all reducing fuel). Thus, CoTV takes advantage of the simplicity of the action-independent MARL design in DRL training while remaining effective in improving traffic under various scenarios. The cooperation schemes of CoTV are shown to facilitate training convergence, which is challenging for independent MARL that does not include any cooperation (in either state or action). Specifically, CoTV using Proximal Policy Optimization (PPO) [12] obtains up to a $30\%$ reduction in travel time as well as in fuel consumption and $\mathrm{CO}_{2}$ emissions under varying CAV penetration rates.
- Scalable to complex urban scenarios by avoiding cooperation with excessive CAV agents. Compared with controlling all possible CAVs using MARL, the traffic light controller in CoTV selects the CAV closest to the intersection on each incoming road as the CAV agent. This idea is inspired by the observation that platooning can increase intersection throughput [13], as the leading vehicle on a given road has great potential to form a platoon with the remaining vehicles on the same road. We also demonstrate that, compared with coordinating all CAVs (CoTV*), CoTV does not compromise the efficiency improvement while significantly reducing the training time and resources used.
- Efficient communication exchange schemes between CAVs and traffic light controllers. The amount of state information exchanged between CAVs and traffic light controllers is modest. As shown in Fig.2, the communication schemes are designed so that CAVs send their speed, acceleration, and location while traffic light controllers send their current signal phase. The information exchange requires less than a 100 Kbps transmission rate, which can be achieved using Vehicle-To-Vehicle (V2V) and Vehicle-To-Infrastructure (V2I) communication infrastructure; the wireless communication technology IEEE 802.11p supports a bandwidth of 3 Mbps to 20 Mbps [14].
This paper extends our previous work [15] on cooperatively controlling traffic light signals and CAVs using DRL. The improvements include: 1) The system framework of CoTV is designed to address scalability issues, significantly reducing the number of CAV agents controlled. 2) The state and reward for agents are simplified by removing redundant traffic information; therefore, the amount of information exchanged among agents is reduced to ease the deployment of CoTV. 3) The testing scenarios are extended from a small grid map to more realistic urban scenarios. 4) We demonstrate the robustness of CoTV under different CAV penetration rates. 5) As an important requirement of achieving sustainable traffic, the effectiveness of CoTV in enhancing traffic safety is evaluated by time-to-collision [16]. 6) Two other common MARL approaches, action-dependent and independent, are compared with the action-independent MARL of CoTV in terms of policy training and traffic improvements.
# II. RELATED WORKS
This section overviews the recent related work and highlights the gaps that CoTV attempts to fill. In particular, it firstly focuses on the research in either traffic light signal control or vehicle speed control. Secondly, it discusses the recent research in the joint control of both. Lastly, it summarizes the practicability of deploying existing work in mixed-autonomy traffic and its impact on traffic efficiency and safety.
# A. Control for Either Traffic Light Signals or Vehicle Speed
Most existing research in sustainable urban traffic control adjusts either traffic light signals or vehicle speed. The Sydney Coordinated Adaptive Traffic System (SCATS) [17] is one of the earliest and most widely applied traffic light signal control systems. It can dynamically select the best signal plan from a list of pre-defined candidates that can potentially achieve better intersection throughput by improving green time efficiency. Varaiya [6] proposed a traffic light signal control scheme named MaxPressure, which was proven to maximize the throughput of the entire road network, with each traffic light controller receiving only local traffic information. On the other hand, the field experiments in [18] prove that the speed control of CAV can stabilize traffic and is beneficial in reducing braking times and fuel consumption. The Green Light Optimal Speed Advisory (GLOSA) system guides a CAV to adjust its speed according to the current traffic signal phase and the remaining distance to its approaching intersection [19]; the resulting smoother acceleration/deceleration of CAVs can further reduce fuel consumption and $\mathrm{CO}_{2}$ emissions. However, these traffic control optimization approaches rely on
deterministic formulations to make dynamic traffic problems tractable. These deterministic formulations remain static in ever-changing traffic and thus may not be flexible enough to improve realistic traffic.
DRL has been used to cope with complex traffic environments, promising better urban traffic. PressLight [7] is a DRL-based model using Deep Q-learning (DQN). It collects local real-time traffic information inspired by the traditional method MaxPressure [6] while achieving greater traffic efficiency improvement than MaxPressure. Wu et al. [9] extended the field experiments in [18] using the Trust Region Policy Optimization (TRPO) method for training CAVs in a simulated experiment; the resulting DRL-based vehicle speed controller outperforms traditional optimization controllers in traffic improvement. Various scenarios using CAVs have been tested in [20], including road merging and unsignalized intersections, where DRL-based speed control of CAVs can optimize the vehicle trajectory of the whole trip and reduce the risk of collision throughout. Compared to traditional optimization methods with deterministic solutions, the DRL methods used in our proposed CoTV learn from trial and error in interaction with the environment to train optimal policies under various traffic scenarios, and are thus more capable of performing adaptive traffic control and generalizing well under fast-changing urban road scenarios.
# B. Joint Control for Traffic Light Signals and Vehicle Speed
Traditional optimization-based methods have been applied to jointly control traffic light signals and vehicle speed. Yu et al. [3] developed mixed-integer linear programming for optimizing vehicle trajectories and traffic signals simultaneously at isolated intersections; the phase sequence and duration of traffic light signals are coordinated with vehicle arrival times at the intersections. A two-level model for traffic light controllers and CAVs was proposed using the enumeration method and the pseudo-spectral method [4]: the first level coordinates CAVs and traffic light controllers to minimize travel time, and the second level regulates CAV trajectories to reduce fuel consumption. The same system targets were adopted in the cooperative optimization model of [21], which uses a mixed-integer non-linear program with inherently high computational complexity.
To the best of our knowledge, DRL methods for the joint control of traffic light signals and CAVs have not been well studied. Joint control using DRL faces several challenges common to multi-agent systems [22]: (1) Every agent, traffic light controller or CAV, proactively interacts with the same environment simultaneously, causing a non-stationary environment that brings more uncertainty to training convergence. (2) A large number of agents causes scalability issues due to an exponential increase in the computational cost of joint actions. (3) The rewards of agents can assess the system at different spatial scales in the environment: individual, regional, or global. Reward design is critical for DRL agents due to its high correlation with achieving system goals. For example, traffic light controllers explicitly coordinate traffic around intersections, while each CAV mainly affects its surrounding
traffic. The proposed model in this paper attempts to overcome these difficulties and utilize the advantages of DRL methods to control traffic light signals and CAVs cooperatively.
# C. Efficiency and Safety for Mixed-Autonomy Traffic
The development of CAVs, which are expected to improve traffic, is thriving in both academia and industry. However, their deployment must undergo a gradual mixed transition from introductory, to established, to prevalent [23] as the CAV penetration rate grows. Existing work shows that mixing CAVs into traffic still brings uncertainty. Mixed-autonomy experiments on motorways, simplified from intersections with conflicting traffic movements, were conducted in [24]. Similar work was tested in single-lane facilities, where CAVs can enhance traffic safety by keeping a larger gap from the surrounding vehicles [25]. However, a low penetration rate (less than $10\%$) causes more conflicts in urban scenarios [26]. On the other hand, CAVs have the potential to improve traffic efficiency but cannot guarantee a higher average speed than traditional vehicles, depending on the network type and traffic conditions. The experiments in a ring scenario in [27] show that a CAV penetration rate greater than $20\%$ allows all vehicles to reach higher speeds and stabilizes the flow, and a penetration rate between $20\%$ and $40\%$ can yield near-maximum improvements [26]. Overall, a high CAV penetration rate can improve both traffic efficiency and safety in mixed-autonomy traffic in various scenarios.
This work advances the state of the art in assessing DRL-based mixed-autonomy control under dynamic urban road scenarios with multiple intersections. Moreover, our system CoTV chooses only a small fraction of CAVs to cooperate with traffic light controllers, namely those with great potential to guide the rest of the vehicles. This makes the deployment of CoTV practical and easy to scale.
# III. SYSTEM OVERVIEW
This section explains the design of our system CoTV. Firstly, we outline the system design goals. Then, the system components (i.e., traffic light controllers and CAV) are presented with the design of their action, state, and reward; the cooperation schemes between the two types of agents using Vehicle-To-Everything (V2X) communications are elaborated as well. Thirdly, we explain the training process of CoTV using PPO, during which parameter sharing is applied to all agents of the same type to perform the learned policy. Additionally, we also present the considerations for ease of deployment in designing CoTV. The code of this study is open-sourced<sup>1</sup>.
# A. System Design Goals
The proposed model CoTV aims to achieve the following goals, which are also shown in Fig.1:
- Reduced travel time: Travel time is the metric that end road users care about the most. Our system should reduce the travel time of a vehicle with a given route. This goal

Fig. 2. The DRL design of CoTV. Two types of agents, traffic light controllers and CAV, interact with the environment according to the state information exchanged via V2X communications.
is traditionally achieved by traffic light signal control that can increase intersection throughput.
- Lower fuel consumption and $\mathrm{CO}_{2}$ emissions: Sustainable traffic goals encourage eco-friendly driving behaviors. This goal is traditionally achieved by the speed control of CAV that can stabilize traffic flow. Smart traffic light control can also partly contribute to achieving this goal by reducing the number of stop-and-go maneuvers.
- Longer time-to-collision: Safety is a crucial consideration in sustainable traffic system design. Reducing the risk of collision can be achieved by maintaining a longer time-to-collision [16], with sufficient time to moderately decelerate. CAV can proactively keep a safe distance from the surrounding traffic. Thus, ITS using CAV has the potential to achieve higher traffic safety.
- Easy to deploy: Our system CoTV requires V2X communication infrastructure to support the information exchange needed for cooperative control. Meanwhile, scalability issues should be addressed as the number of agents increases. Efficient communication schemes among traffic light controllers and a reduced number of CAV agents are the key to achieving this goal.
# B. System Components
Our proposed system assumes that all vehicles are connected, including CAV and Human-Driven Vehicles (HDV) (details can be found in Table II). The V2X communication is also assumed to be perfect, with no packet loss and no latency. The main components of CoTV are traffic light controllers and CAV, as shown in Fig.2. The design of their action, state, and reward is described as follows, while the V2X communication schemes involved are shown in Fig.3:
# 1) Traffic light controller:
- Action: We limit the action of the traffic light controller to a binary set, where "1" represents switching to the next phase at the next timestep while "0" means keeping the current phase unchanged. As opposed to other common action definitions in the literature, such as phase selection [11], the phase switch [28] we choose is more manageable for the model training process.

Fig. 3. V2X communication schemes in CoTV showing how traffic light controllers and CAVs use V2I and V2V. This implements state exchange and cooperative control. CAV agents of CoTV are highlighted in blue.
- State: The state of the traffic light controller involves three parts: the current signal phase, the traffic on the roads that this traffic light controller coordinates, and the status of the closest vehicle to the intersection on each incoming road. As shown in Fig.2, the information of the last two parts is acquired using the V2I communications infrastructure illustrated in Fig.3. The road traffic is represented by the number of vehicles on each road coordinated by the traffic light controller; these roads are divided into incoming roads and outgoing roads. The last part of the state includes, for the closest vehicle to the intersection on each incoming road, its speed, acceleration, distance to the intersection, and the name of the road where it is located.
- Reward: The reward is the penalty of intersection pressure, inspired by [6], [7]. Intersection pressure is defined as the difference between the sum of the numbers of vehicles on the incoming roads $N_{in}$ and the sum of the numbers of vehicles on the outgoing roads $N_{out}$. The intersection pressure is then normalized by the maximum road capacity $c$ to improve DRL training. The maximum road capacity $c$ indicates the maximum number of vehicles on a single road in the given road network; it is calculated by dividing the length of the longest road by the minimum space required for a vehicle (i.e., the length of a single vehicle plus the minimum distance between two adjacent vehicles). The reward of a given traffic light controller
$r_m$, Eq.(1), becomes

$$
r_m = -\frac{N_{in} - N_{out}}{c} \tag{1}
$$

We also illustrate the above reward in Fig.4. The reward function is formulated to reduce travel time, one of the system goals, by increasing intersection throughput. Minimizing intersection pressure encourages vehicles to pass through the intersection quickly while considering the remaining capacity of the outgoing roads, thus improving green light efficiency and throughput [6]. Compared with [6], [7], we also simplify the calculation of intersection pressure by not considering traffic movements (the correspondence between incoming and outgoing roads); therefore, CoTV can be easily applied in various urban scenarios with multi-directional roads. Besides, we avoid other common reward definitions in the current literature, such as queue length and waiting time [10], [28], which are unstable across different traffic flow conditions even without the influence of traffic lights.
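To make the reward concrete, the following minimal sketch computes Eq.(1) in Python; the vehicle length of 5 m and minimum gap of 2.5 m are assumed default values (not stated in this section) and only serve to reproduce the capacity $c = 40$ used in Fig.4 for a 300 m road.

```python
# Sketch of the traffic light reward in Eq. (1); plain Python, not the
# authors' FLOW/SUMO implementation.

def max_road_capacity(longest_road_m, veh_length_m=5.0, min_gap_m=2.5):
    """Maximum number of vehicles that fit on the longest road."""
    return int(longest_road_m // (veh_length_m + min_gap_m))

def tl_reward(n_incoming, n_outgoing, capacity):
    """Normalized negative intersection pressure: r_m = -(N_in - N_out) / c."""
    return -(n_incoming - n_outgoing) / capacity

c = max_road_capacity(300.0)   # 300 m road -> c = 40
print(tl_reward(25, 10, c))    # congested inflow -> -0.375
```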

Fig. 4. The illustration of the reward of a traffic light controller $r_m$ , assuming the maximum road capacity $c = 40$ .
# 2) CAV:
- Action: The action is set to be consistent with the literature [29]: a continuous action space representing the CAV acceleration in the range $[-3\,\mathrm{m/s^2}, 3\,\mathrm{m/s^2}]$.
- State: The state explicitly includes the speed and acceleration of the CAV itself and of the vehicle immediately preceding it, the distances to the preceding vehicle and to the approaching intersection, and the current signal status of the approaching traffic light controller. The CAV agent can receive information from the vehicles on the same road and from the approaching traffic light controller using V2V and V2I communication, as shown in Fig.3.
- Reward: The reward penalizes the deviation of the average speed $v$ from the maximum speed limit $v^{*}$, plus the Euclidean norm of the accelerations $a$ normalized by the vehicle's maximum acceleration $a^{*}$, as shown in Fig.5. The speeds and accelerations in the reward are those of all vehicles $K$ located on the same road as the CAV agent. The reward of a given CAV agent $r_n$, Eq.(2), becomes

$$
\begin{aligned}
r_n &= r_1 + r_2, \\
r_1 &= -\frac{\sum_{j \in K} \left(v^{*} - v_j\right)}{v^{*} \times |K|}, \quad v_j \leq v^{*}, \\
r_2 &= -\sqrt{\frac{\sum_{j \in K} \left(\frac{a_j}{a^{*}}\right)^{2}}{|K|^{2}}}, \quad a_j = \begin{cases} 0, & a_j < 0 \\ a_j, & a_j \geq 0 \end{cases}
\end{aligned} \tag{2}
$$
The first term of the reward function, $r_1$, encourages a higher average vehicle velocity while keeping it within the maximum speed limit. In this speed range, higher speed increases fuel economy, and potential collisions due to excessive speed are avoided. Moreover, collisions are generally avoided implicitly, as they lead to a significant decrease in the speed of the many following vehicles blocked by them, so such training episodes are discarded due to their low reward value. The second term, $r_2$, stabilizes acceleration to reduce fuel consumption while also inducing a large time gap between adjacent vehicles [25], enabling high-speed collision-free driving. Our reward function for CAV agents thus encourages better speed control, facilitating the cooperative control of CoTV to reduce fuel consumption and $\mathrm{CO}_2$ emissions and improve traffic safety.
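As a companion to Eq.(2), the sketch below evaluates the CAV reward for a list of (speed, acceleration) pairs; the constants $v^{*} = 15$ and $a^{*} = 9$ follow Fig.5, and the input list is assumed to come from V2V messages rather than any specific SUMO/FLOW API.

```python
import math

def cav_reward(vehicles, v_max=15.0, a_max=9.0):
    """Eq. (2): vehicles is a list of (speed, acceleration) pairs for all
    vehicles K on the agent's road."""
    k = len(vehicles)
    # r1: average shortfall from the speed limit (v_j is capped at v_max)
    r1 = -sum(v_max - min(v, v_max) for v, _ in vehicles) / (v_max * k)
    # r2: norm of positive accelerations; decelerations are clipped to 0
    r2 = -math.sqrt(sum((max(a, 0.0) / a_max) ** 2 for _, a in vehicles) / k**2)
    return r1 + r2

print(cav_reward([(12.0, 1.5), (14.0, 0.0), (9.0, 2.0)]))
```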

Fig. 5. Illustration of the CAV reward $r_n$ , assuming the maximum speed limit $v^* = 15$ , and the vehicle's maximum acceleration $a^* = 9$ . CAV agents of CoTV are highlighted in blue.
# C. Training Process
Algorithm 1 presents the training process of CoTV; its outcome is the policy functions for the traffic light agents $M$ and CAV agents $N$, $\pi_{TL}$ and $\pi_{CAV}$. A trained policy $\pi$ is expected to guide agents to select an appropriate action $a$ from the action set in a given state $s$ so as to maximize the accumulated reward $r$. We predefine the termination condition as the number of training iterations $I$. In each iteration, $E$ episodes run in parallel, and each episode lasts $H$ timesteps. DRL trajectory data $\tau$ is collected per simulation timestep in each episode to extend the training batch $B$, which is then sampled over $K$ epochs to update the traffic light and CAV policy parameters $\theta_{TL}$ and $\theta_{CAV}$ through gradient descent. Specifically, the traffic light controllers of CoTV select the CAV closest to the intersection on each incoming road as the CAV agent, as in Line 10 of Algorithm 1; these CAV agents have the potential to increase intersection throughput by forming a platoon with the remaining vehicles on the same road [13]. The communication exchange schemes for the cooperative control of CoTV take effect when agents receive their states, in Line 13 of Algorithm 1: traffic light controllers and CAVs exchange, respectively, the current signal status of the traffic light and the speed, acceleration, and location of the CAV.
We choose the PPO algorithm [12] for the following reasons. PPO has the advantage of being easy to implement while achieving monotonic reward improvement. DQN is the common algorithm for training traffic light controllers [7], [28] and is efficient for discrete actions (e.g., a binary set of signal phase adjustments), but it does not perform well on continuous actions (e.g., a vehicle acceleration taking any real value within a certain range) [30]. In contrast, PPO can be applied to scenarios with either discrete or continuous actions. On the other hand, compared with traffic light signals that have a pre-defined phase sequence, the initial driving behavior of a DRL-controlled CAV exhibits many unreasonable stop-and-go and standstill maneuvers; the constrained policy update of PPO aims to improve the reward monotonically, which makes training CAVs more stable and performs better than the Asynchronous Advantage Actor-Critic used in [11]. Although TRPO can also constrain the policy update, PPO is easier to implement and simpler in its data sampling, which helps the cooperation of traffic light controllers and CAVs.
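For reference, the clipped surrogate objective that PPO maximizes [12] is, with probability ratio $r_t(\theta) = \pi_\theta(a_t \mid s_t) / \pi_{\theta_{old}}(a_t \mid s_t)$, advantage estimate $\hat{A}_t$, and clipping parameter $\epsilon$:

$$
L^{CLIP}(\theta) = \hat{\mathbb{E}}_t\left[\min\left(r_t(\theta)\hat{A}_t,\ \mathrm{clip}\left(r_t(\theta), 1-\epsilon, 1+\epsilon\right)\hat{A}_t\right)\right]
$$

The clipping keeps each policy update close to the previous policy, which is the property that stabilizes the initially erratic CAV behavior described above.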
When interacting with the environment, CoTV applies parameter sharing [9] to all agents of the same type in the multi-agent DRL system, which speeds up training convergence and benefits from shared experience, especially in large-scale applications [31].
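The following is a minimal sketch of type-level parameter sharing (our own illustration, not the authors' FLOW/RLlib code): all agents of one type act through a single shared policy object, so the experience of every agent updates the same parameters.

```python
class SharedPolicy:
    """Stand-in for one set of PPO weights shared by all agents of a type."""
    def __init__(self, name):
        self.name, self.samples_seen = name, 0

    def update(self, batch):  # stand-in for a PPO gradient step
        self.samples_seen += len(batch)

policies = {"TL": SharedPolicy("pi_TL"), "CAV": SharedPolicy("pi_CAV")}

def policy_for(agent_id):
    # Agent IDs are assumed to be prefixed by type, e.g. "tl_3", "cav_12".
    return policies["TL"] if agent_id.startswith("tl") else policies["CAV"]

# Trajectories from different agents of the same type train one shared policy:
for agent_id, batch in [("tl_0", [1, 2]), ("tl_1", [3]), ("cav_7", [4, 5])]:
    policy_for(agent_id).update(batch)
print(policies["TL"].samples_seen, policies["CAV"].samples_seen)  # 3 2
```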
# D. Considerations for "easy-to-deploy"
Firstly, CoTV is designed to be deployed at the major junctions of urban scenarios, which requires minimal upgrades to existing adaptive traffic light systems (e.g., SCATS, SCOOT, etc.). This deployment strategy covers the broader arterial roads that carry the majority of traffic with the minimum possible number of intersection controllers. Lane-changing operations are not considered in the action space of the CoTV agent design for simplicity; however, lane-changing operations are permitted in the evaluation of CoTV shown in Section V. Secondly, compared with controlling all possible CAVs with DRL, the traffic light controller of CoTV selects only the CAV closest to the intersection on each incoming road to cooperate with, which significantly reduces the training time and resources used and thus alleviates scalability issues. Meanwhile, the cooperation schemes among agents (i.e., the traffic light controller and the approaching CAV agents) rely only on the exchange of states, not actions; the action of a given agent is selected independently of other agents' actions. Therefore, CoTV avoids the exponentially increasing complexity of joint actions
# Algorithm 1 Training Process of CoTV using PPO

# Require:

1: Obtain the set of traffic light agents to control, $M$

2: Set the number of episodes in parallel to $E$, and the time horizon for each episode to $H$

3: Initialize the policy parameter for each type of agent, $\theta_{TL}$ for traffic light controllers and $\theta_{CAV}$ for CAV, through parameter sharing

4: Initialize sample batch $B = \emptyset$

5: Set the number of epochs for mini-batch updates in one iteration as $K$

# Ensure:

6: for iteration $= 1,2,\dots,I$ do

7: for episode $= 1,2,\dots,E$ do in parallel

8: for timestep $h = 0,1,\dots,H$ do

9: for each traffic light agent $m$ in $M$ do

10: Add the closest CAV $n$ to the intersection on each incoming road to the CAV agent set $N$

11: end for

12: for each agent $i$ in $M + N$ do

13: Run policy $\pi_{TL}$ or $\pi_{CAV}$ in the environment

14: Collect trajectories $\tau = (s_{h-1,i}, a_{h,i}, s_{h,i}, r_{h,i})$

15: Extend $B$ with $\tau$

16: end for

17: end for

18: Compute advantage estimates $\hat{A}_1, \dots, \hat{A}_H$

19: end for

20: Update $\theta_{TL}$ and $\theta_{CAV}$ in the policies $\pi_{TL},\pi_{CAV}$ using the advantage estimates $\hat{A}$, with $K$ epochs of mini-batch sampling from $B$, and then reset $B = \emptyset$

21: end for
for MARL using an action-dependent design [22]. Besides, the amount of information exchanged in CoTV is modest compared with high-dimensional transmission data (e.g., image representations describing traffic features) [28], [32]. Specifically, as shown in Fig.3, the information from a CAV involves speed, acceleration, and location, whose size is approximately 40 Bytes if encoded as floating-point numbers, while a traffic light controller sends its current signal phase, about 8 Bytes if encoded as integers. This information plus headers still requires less than 100 Kbps. This transmission demand is met by the V2I and V2V communications infrastructure [14] using IEEE 802.11p, which supports between 3 and 20 Mbps. Additionally, all the information exchanged over the vehicular network occurs within the range of a single intersection (i.e., the single-hop range of about 300 meters), which improves the robustness of CoTV rather than having it rely heavily on large-scale (i.e., multi-hop) network conditions [11].
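A back-of-the-envelope check of this V2X load is sketched below; the 1 Hz update rate and the 60-Byte per-message header overhead are our assumptions, as only the ~40 B and ~8 B payload estimates appear above.

```python
CAV_PAYLOAD_B = 40   # speed, acceleration, location as floats
TL_PAYLOAD_B = 8     # current signal phase as an integer
HEADER_B = 60        # assumed MAC/IP/transport overhead per message
RATE_HZ = 1          # assumed one state message per simulation second

per_cav_bps = (CAV_PAYLOAD_B + HEADER_B) * 8 * RATE_HZ
per_tl_bps = (TL_PAYLOAD_B + HEADER_B) * 8 * RATE_HZ
# Even 100 CAVs around one intersection plus the controller stay well
# below the 100 Kbps budget:
print((100 * per_cav_bps + per_tl_bps) / 1e3, "Kbps")  # 80.544 Kbps
```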
# IV. EVALUATION METHODOLOGY
In this section, we introduce the evaluation methodology, which includes the simulation settings, the metrics used for evaluation, and an overview of the methods compared against our proposed CoTV.
# A. Simulation Scenarios
The simulation platform used in this work is Simulation of Urban MObility (SUMO)$^2$, one of the most widely used open-source microscopic traffic simulators. Our model design and implementation are based on FLOW$^3$, which provides DRL-related APIs to work with SUMO dynamically.
We first clarify some concepts relating to the time horizons. We set 1 simulation timestep equal to 1 simulation second. One episode refers to a full run of a single simulation scenario, which is set to 720 simulation timesteps. At the end of each iteration, after 18 episodes have run in parallel, CoTV updates the parameters of the PPO algorithm. In total, we terminate the training process of CoTV after 150 iterations.
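The training budget implied by these settings can be checked with a few lines of arithmetic (ours, not a FLOW/RLlib configuration):

```python
TIMESTEPS_PER_EPISODE = 720
EPISODES_PER_ITERATION = 18
ITERATIONS = 150

batch_timesteps = TIMESTEPS_PER_EPISODE * EPISODES_PER_ITERATION
print(batch_timesteps)               # 12960 timesteps per PPO update
print(batch_timesteps * ITERATIONS)  # 1944000 environment timesteps overall
```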
For testing scenarios, firstly, we demonstrate the effectiveness of CoTV under a simple 1x1 grid map with a single intersection. Then, we show CoTV can be scalable to more consecutive intersections under a 1x6 grid map. Lastly, we validate the effectiveness of CoTV using a subset of the realistic urban scenario of Dublin city, Ireland. Table I summarizes the settings of traffic in each scenario.
TABLE I TRAFFIC SETTINGS IN THE THREE TEST SCENARIOS.
<table><tr><td>Scenario</td><td>Traffic generation duration (seconds)</td><td>Total number of vehicles</td></tr><tr><td>1x1 grid<sup>1</sup></td><td>300</td><td>70</td></tr><tr><td>1x6 grid</td><td>300</td><td>240</td></tr><tr><td>Dublin</td><td>400</td><td>275</td></tr></table>
$^{1}$ In an $a \times b$ grid, $a$ is the number of rows and $b$ is the number of columns.
1) $1 \times 1$ grid map: In our $1 \times 1$ grid map, each edge has two roads in opposite directions. To make this map closer to a real urban scenario, we set the road length to 300 meters and the maximum speed limit to $15 \, \mathrm{m/s}$ ($= 54 \, \mathrm{km/h}$). As shown in Fig.6, we generate different go-straight traffic flows in four directions: $\mathrm{N} \rightarrow \mathrm{S}$ (from north to south), $\mathrm{S} \rightarrow \mathrm{N}$, $\mathrm{W} \rightarrow \mathrm{E}$ (from west to east), and $\mathrm{E} \rightarrow \mathrm{W}$. This traffic generation method is inspired by [11]. The origin and destination of each vehicle are at the end of a road on the perimeter of the network. The vehicle generation duration for each flow is approximately 300 seconds. The traffic flows $\mathrm{N} \rightarrow \mathrm{S}$ and $\mathrm{W} \rightarrow \mathrm{E}$ are relatively heavier than the other two. Specifically, the traffic flow rates, in vehicles per hour per road, are: 288 ($\mathrm{N} \rightarrow \mathrm{S}$), 240 ($\mathrm{W} \rightarrow \mathrm{E}$), 192 ($\mathrm{E} \rightarrow \mathrm{W}$), and 120 ($\mathrm{S} \rightarrow \mathrm{N}$). The two traffic flows $\mathrm{S} \rightarrow \mathrm{N}$ and $\mathrm{W} \rightarrow \mathrm{E}$ are generated at the beginning of each episode; the $\mathrm{N} \rightarrow \mathrm{S}$ flow starts to enter the network sequentially from the 45th second, and the $\mathrm{E} \rightarrow \mathrm{W}$ flow appears one minute later. The speed of each vehicle when entering the network is random. The total number of vehicles in this single-intersection scenario is thus 70.
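A quick consistency check of this demand (our arithmetic): the four flows sum to 840 vehicles per hour, generated for about 300 seconds.

```python
flows_veh_per_hour = {"N->S": 288, "W->E": 240, "E->W": 192, "S->N": 120}
duration_s = 300
total = sum(rate * duration_s / 3600 for rate in flows_veh_per_hour.values())
print(total)  # 70.0 vehicles, matching the scenario description
```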
$^{2}$ https://www.eclipse.org/sumo/
<sup>3</sup>https://flow-project.github.io

Fig. 6. The settings of traffic generation for 1x1 grid scenario. For example, $\mathrm{W}\rightarrow \mathrm{E}$ (#20, 1st sec) means there are 20 vehicles sequentially generated from the first simulation second.
2) $1 \times 6$ grid map: The $1 \times 6$ grid scenario, shown in Fig.7, contains six intersections, extending the $1 \times 1$ grid map with 5 more consecutive intersections. The road settings and traffic flow configurations are similar to those of the $1 \times 1$ grid scenario, and the additional vertical ($N \to S$ and $S \to N$) roads are allocated the corresponding traffic flows. A total of 240 vehicles are generated in this scenario.

Fig. 7. The settings of traffic generation for $1 \times 6$ grid scenario. The same settings (the number of vehicles generated, the simulation time to start traffic generation) apply for the traffic flow in the same direction.
3) Dublin map: Fig.8 illustrates the selected area of six signalized intersections in the city of Dublin. These are the main intersections connected by arterial roads, maximizing traffic improvement while remaining "easy-to-deploy" with minimal infrastructure upgrades, as mentioned in Section III-D. A variety of roads are included: exclusive go-straight, exclusive turn, and multi-directional roads. Meanwhile, the intersections come in different shapes and sizes: one three-leg intersection with four signal phases (the rightmost in Fig.8); four-leg intersections in the majority, three with four phases and one with six; and, as the most complex, a five-leg intersection with six phases (the third from the left in Fig.8). The scenario is extracted from the open data in [26] to simulate real-world traffic in Dublin city. We extracted dynamic traffic generated from 10 AM for 400 seconds, consisting of 275 vehicles allowed to drive straight or turn left or right at intersections. Each vehicle has a dedicated trip.

Fig. 8. The selected six signalized intersections area in the city of Dublin (a regional road, R111, in South Dublin). The highlighted roads are our selected testing scenario (six intersections are highlighted using red circles).
# B. Evaluation Metrics
We evaluate the sustainable traffic improvements of each scenario using the following metrics:
- Travel time (seconds): The travel time of each vehicle is the time spent in the road network until its designated trip is finished. The average travel time is calculated over the vehicles completing their trips in a scenario, which is the common measure for evaluating traffic efficiency [10].
- Delay (seconds): Delay is the difference between the actual travel time and the ideal travel time (i.e., the time spent when driving at the maximum permitted speed) for each trip. This value indicates the remaining headroom for optimizing traffic efficiency toward its upper bound, and can reflect traffic efficiency improvements more noticeably than travel time [24].
- Fuel consumption (l/100km): Fuel consumption is the average amount of fuel consumed in liters per 100 kilometers traveled. The closer the vehicle speed is to the maximum speed limit we set, and the gentler the changes in acceleration, the lower the fuel consumption is likely to be [33]. In our experiments, fuel consumption, as well as the $\mathrm{CO}_{2}$ emissions described later, is calculated using the HBEFA3/PC_G_EU4 model (i.e., a gasoline-powered Euro norm 4 passenger car modeled using the HBEFA3 [34]), which is the default vehicle emission model in $\mathrm{SUMO}^4$. This model mainly considers the instantaneous speed and acceleration of a vehicle.
- $\mathrm{CO}_{2}$ emissions (g/km): $\mathrm{CO}_{2}$ emissions are measured by the average amount of carbon dioxide emitted in grams per kilometer traveled by all vehicles. As the primary component of greenhouse gas emissions, $\mathrm{CO}_{2}$ emissions are required to be reduced to achieve sustainable traffic.
- Time-To-Collision (TTC): TTC is a widely used safety indicator [16], estimating the time required for a car to hit its preceding one. We use the default TTC threshold in SUMO, 3 seconds$^{5}$, meaning a possible collision is recognised when the time gap between two adjacent cars is less than 3 seconds. The TTC value we report is the total number of such possible rear-end collision events within a given time horizon.
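For clarity, the sketch below shows how the delay and TTC-event metrics could be computed from logged trajectories; it is our own illustration, while SUMO derives them through its own outputs using the 3-second threshold above.

```python
TTC_THRESHOLD_S = 3.0

def delay(actual_travel_time_s, trip_length_m, v_max_mps=15.0):
    """Actual minus ideal travel time at the maximum permitted speed."""
    return actual_travel_time_s - trip_length_m / v_max_mps

def ttc_event(gap_m, v_follower_mps, v_leader_mps):
    """True if the follower would reach its leader within the TTC threshold."""
    closing_speed = v_follower_mps - v_leader_mps
    return closing_speed > 0 and gap_m / closing_speed < TTC_THRESHOLD_S

print(delay(59.67, 600.0))          # 600 m trip at 15 m/s ideal -> 19.67 s
print(ttc_event(20.0, 14.0, 6.0))   # 20 m gap closing at 8 m/s -> True
```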
TABLE II SIMULATION SETTINGS OF DIFFERENT VEHICLE TYPES.
<table><tr><td colspan="2">All vehicles are CONNECTED</td></tr><tr><td>HDV (Non-CAV)</td><td>CAV</td></tr><tr><td>• Can NOT be controlled by CoTV</td><td>• Can be controlled by CoTV<sup>1</sup></td></tr><tr><td>• IDM car-following model [35]</td><td>• IDM, if not controlled by CoTV</td></tr><tr><td colspan="2">Penetration Rate = |CAV| / (|HDV| + |CAV|) × 100%</td></tr></table>
$^{1}$ Denoted as CoTV* when all CAVs are controlled by our system. Even at a $100\%$ penetration rate, this is a different case, as CoTV itself only controls the CAV closest to the approaching intersection on each road.
# C. Compared Methods
To evaluate the effectiveness of our system CoTV, the compared methods are described as follows:
- **Baseline:** This method is the baseline against which the improvement of the others is evaluated. Traffic light signals have a static timing plan that does not change with the varying traffic and thus does not require V2X communications to collect vehicle information. All vehicles are HDVs simulated by the IDM car-following model [35], as shown in Table II, which is also used for simulating HDVs in [29]. The Baseline scenario simulates most existing urban scenarios, which have no DRL-controlled traffic light controllers or CAVs. A cycle of the static traffic light signal plan contains four phases in order: Green-NS (green light for the flows N→S and S→N), Yellow-NS, Green-WE, and Yellow-WE. The duration of the green light is 40 seconds (the default value in SUMO). The yellow light typically lasts from 3 to 6 seconds [10], so we set the yellow light duration to 3 seconds, which is also the default setting in SUMO; the length of a cycle is thus 86 seconds $(40 + 3 + 40 + 3)$. The Baseline of the Dublin scenario adopts the original traffic light signal plans, whose specific settings vary by intersection: green light phase durations range from 37 to 42 seconds, yellow light phases last 3 seconds, and some intersections have a short 6-second green phase for right turns.
- FlowCAV: FlowCAV [29] is a state-of-the-art DRL-based model to control the speed of a CAV to improve fuel efficiency and reduce emissions. Each CAV observes its preceding vehicle and then regulates its speed. The reward of a single CAV is evaluated globally by the average speed and acceleration of all vehicles. In this scenario, all traffic light signals are static. There is only one CAV agent per road, which leads the following vehicles on the same road.
- PressLight: PressLight [7] is a state-of-the-art DRL-based model that controls traffic light signals to improve intersection throughput. The state of a traffic light controller includes the number of vehicles on the incoming and outgoing roads. The reward design utilizes the "pressure" to improve intersection throughput, inspired by [6]. All vehicles are HDVs and connected, as shown in Table II, and periodically broadcast their up-to-date status (e.g., location, speed, acceleration) so that any agent within communication range can aggregate it into real-time traffic information.
- GLOSA: This is an optimization-based method for jointly controlling traffic light signals and CAVs. The GLOSA system<sup>6</sup> can adjust the CAV speed considering the current traffic light phase and the current status of the CAV (a simplified sketch of such a speed advisory appears after this list). In our experiment, we combine it with adaptive traffic light controllers<sup>7</sup> to achieve joint control; phase switching is thus actuated after detecting a sufficient time gap between successive vehicles, resulting in varying phase durations. It is worth noting that all vehicles in this scenario are CAVs.
- I-CoTV: I-CoTV applies independent policy training to the two types of agents, a common and straightforward way to develop MARL. There is no cooperation design between agents in either state or action, in contrast to CoTV (action-independent MARL with cooperation schemes in the state exchange). Hence, the state of a traffic light controller involves only two parts, its current signal phase and the traffic on the roads it coordinates, without any instantaneous vehicle information as in CoTV. Correspondingly, the state of a CAV agent consists only of the speed, acceleration, and location of itself and its preceding vehicle, without the current signal of the approaching traffic light obtained via agent communication. Introducing I-CoTV aims to demonstrate that the efficient cooperation schemes of CoTV facilitate training convergence.
- M-CoTV: M-CoTV is the action-dependent MARL version of CoTV that trains the policies of traffic light controllers and CAVs considering both the action and state of another agent type within the range of one intersection. Introducing M-CoTV aims to demonstrate that CoTV takes advantage of the simplicity of action-independent MARL on policy training while efficiently achieving traffic improvements.
- CoTV*: CoTV* retains all features of CoTV, except that in CoTV* the traffic light controller interacts with all CAVs instead of only the closest one to the intersection on each incoming road. Introducing CoTV* aims to demonstrate the improvement of CoTV in alleviating scalability issues.
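The core idea behind a GLOSA-style speed advisory can be sketched as follows; this is our simplified illustration of the general principle, not the logic of the SUMO GLOSA device itself, and the speed bounds are assumed values.

```python
def advisory_speed(dist_to_stopline_m, time_to_green_s,
                   v_max=15.0, v_min=3.0):
    """Pick a speed that lets the CAV arrive at the stop line when the
    light turns green, clamped to a feasible speed range."""
    if time_to_green_s <= 0:
        return v_max                      # already green: proceed at limit
    v = dist_to_stopline_m / time_to_green_s
    return min(max(v, v_min), v_max)

print(advisory_speed(150.0, 20.0))        # 7.5 m/s reaches the line at green
```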
# V. EVALUATION RESULTS
This section first discusses how CoTV performs in traffic efficiency and safety against the four other compared methods under the grid maps and the Dublin scenario. Experiments with various CAV penetration rates are also conducted, and a comparison with other MARL methods further demonstrates the effectiveness of the cooperative control of CoTV. Secondly, we examine whether CoTV can be efficiently deployed by comparing it with CoTV* in terms of training time and traffic improvements. All numerical results shown are averaged over eighteen episodes.
# A. Traffic Efficiency & Safety
1) Comparison with state-of-the-art methods: Table III shows the traffic improvements of CoTV under a $100\%$ CAV penetration rate; the same applies to FlowCAV and GLOSA, while PressLight and the Baseline scenario use a $0\%$ CAV penetration rate, as they involve no vehicle speed control.
TABLE III COMPARISON OF COTV AGAINST BASELINE AND STATE-OF-THE-ART METHODS. PERCENTAGE CHANGES SHOWN ARE COMPARED TO BASELINE. THE BEST ACHIEVED MEASUREMENTS ARE IN BOLD.
<table><tr><td>Method</td><td>Travel time (s)</td><td>Delay (s)</td><td>Fuel (l/100km)</td><td>CO2 (g/km)</td><td>TTC</td></tr><tr><td></td><td colspan="5">1x1 grid</td></tr><tr><td>Baseline</td><td>59.67</td><td>18.76</td><td>9.29</td><td>216.05</td><td>354.00</td></tr><tr><td>FlowCAV</td><td>87.23 (+46.19%)</td><td>46.32 (+146.91%)</td><td>12.98 (+39.72%)</td><td>302.02 (+39.79%)</td><td>694.22 (+96.11%)</td></tr><tr><td>PressLight</td><td>49.81 (-16.52%)</td><td>8.90 (-52.56%)</td><td>8.64 (-7.00%)</td><td>201.00 (-6.97%)</td><td>51.61 (-85.42%)</td></tr><tr><td>GLOSA</td><td>50.95 (-14.61%)</td><td>10.03 (-46.54%)</td><td>8.65 (-6.89%)</td><td>201.34 (-6.81%)</td><td>65.83 (-81.40%)</td></tr><tr><td>CoTV</td><td><b>48.42 (-18.85%)</b></td><td><b>7.50 (-60.02%)</b></td><td><b>8.44 (-9.15%)</b></td><td><b>196.42 (-9.09%)</b></td><td><b>30.11 (-91.49%)</b></td></tr><tr><td></td><td colspan="5">1x6 grid</td></tr><tr><td>Baseline</td><td>89.99</td><td>33.13</td><td>9.54</td><td>221.97</td><td>1724.94</td></tr><tr><td>FlowCAV</td><td>172.12 (+91.27%)</td><td>118.30 (+257.08%)</td><td>17.60 (+84.49%)</td><td>409.44 (+84.46%)</td><td>5200.00 (+201.46%)</td></tr><tr><td>PressLight</td><td>77.59 (-13.78%)</td><td>21.22 (-35.95%)</td><td>8.82 (-7.55%)</td><td>205.13 (-7.59%)</td><td>676.22 (-60.80%)</td></tr><tr><td>GLOSA</td><td>68.91 (-23.42%)</td><td>12.05 (-63.63%)</td><td>7.59 (-20.44%)</td><td>176.49 (-20.49%)</td><td>252.94 (-85.34%)</td></tr><tr><td>CoTV</td><td><b>65.56 (-27.15%)</b></td><td><b>8.70 (-73.74%)</b></td><td><b>7.27 (-23.79%)</b></td><td><b>169.19 (-23.78%)</b></td><td><b>68.28 (-96.04%)</b></td></tr><tr><td></td><td colspan="5">Dublin</td></tr><tr><td>Baseline</td><td>59.33</td><td>29.17</td><td>10.98</td><td>255.53</td><td>1212.67</td></tr><tr><td>FlowCAV</td><td>59.39 (+0.10%)</td><td>29.43 (+0.89%)</td><td>11.16 (+1.64%)</td><td>259.70 (+1.63%)</td><td>1223.28 (+0.87%)</td></tr><tr><td>PressLight</td><td>44.92 (-24.29%)</td><td>14.69 (-49.64%)</td><td>8.49 (-22.68%)</td><td>197.43 (-22.74%)</td><td>463.11 (-61.81%)</td></tr><tr><td>GLOSA</td><td>45.40 (-23.48%)</td><td>15.06 (-48.37%)</td><td>8.46 (-22.95%)</td><td>196.92 (-22.94%)</td><td>545.50 (-55.02%)</td></tr><tr><td>CoTV</td><td><b>41.76 (-29.61%)</b></td><td><b>11.48 (-60.64%)</b></td><td><b>7.97 (-27.41%)</b></td><td><b>185.42 (-27.44%)</b></td><td><b>195.94 (-83.84%)</b></td></tr></table>
- Travel time & delay: As shown in Table III, CoTV achieves the shortest travel time, with up to a $30\%$ reduction compared to Baseline; PressLight and GLOSA achieve over $24\%$ and $23\%$ reductions, respectively. However, FlowCAV does not reduce travel time, due to the static traffic light plan and the absence of current traffic light signal information in its state, and its results on the grid road maps are much worse than Baseline. The further improvement of CoTV demonstrates the advantage of cooperative traffic control over controlling traffic light signals only,

Fig. 9. Travel time distributions for three test scenarios, comparing four methods. CoTV can reduce the travel time of all vehicles to be densely distributed with lower values than other methods.
meanwhile indicating that DRL-based approaches provide better adaptive traffic control than traditional approaches. Moreover, Fig.9 illustrates that, using CoTV, the travel times of all vehicles are reduced significantly and are more densely distributed around a lower value under the three scenarios, compared with the other methods. The delay results in Table III show that CoTV reduces the travel time to very close to its minimum possible value; compared with the other methods, CoTV achieves up to about a $74\%$ reduction in delay.
- Environmental indicators: CoTV achieves the best results in fuel consumption and $\mathrm{CO}_{2}$ emissions, both reduced by over $27\%$, as shown in Table III. The reduced travel time of PressLight results in less fuel consumption, and GLOSA obtains the second-best results due to its jointly optimised traffic light timings and vehicle speed. However, FlowCAV does not show any improvement in the two environmental indicators due to the complexity of urban scenarios containing intersections.
- Traffic safety: CoTV reduces TTC by up to $96\%$, as shown in Table III. PressLight and GLOSA improve traffic safety as well; however, there is a great difference in TTC between PressLight and CoTV under the 1x6 grid scenario, and the result of CoTV under the Dublin scenario is much better than those of the other two methods. Conversely, FlowCAV hurts traffic safety under the grid maps but not in the Dublin scenario. The more realistic urban scenario adds explicit complexity to enhancing safety, which further highlights the advantage of the DRL-based cooperative control of CoTV.
2) Robustness to varying CAV penetration rates: Fig.10 shows that the travel time of CoTV tends to decrease as the CAV penetration rate increases under the 1x1 grid, 1x6 grid, and Dublin scenarios. Even under a $0\%$ CAV penetration rate (i.e., the ratio of CAVs to all vehicles, as defined in Table II), which implies no vehicle speed control, the travel time that CoTV achieves is still less than that of Baseline and PressLight. In general, CoTV with
CAV speed control achieves better results, which demonstrates the effectiveness of cooperative traffic control. Similar results are obtained for the other metrics; we do not present them to save space. This demonstrates the practicability of CoTV when deployed in a realistic mixed-autonomy scenario.
3) Comparison with other MARL methods: To further demonstrate the effectiveness of the CoTV system design for cooperative control, we compare CoTV with two other common MARL methods, I-CoTV (independent, without any cooperation schemes) and M-CoTV (action-dependent, with cooperation schemes in action and state). Results under the Dublin scenario with full-autonomy traffic are shown in Table IV. CoTV achieves the best results, while I-CoTV suffers from convergence issues, resulting in the worst traffic performance. M-CoTV fails to overcome the high complexity introduced by considering other agents' actions, which affects its traffic improvements; in particular, the performance changes in fuel consumption and travel time are inconsistent in M-CoTV compared with I-CoTV, and the training time of M-CoTV increases by about $50\%$. In addition, referring to Table III, M-CoTV and I-CoTV perform better than Baseline and FlowCAV but do not surpass PressLight and GLOSA.
TABLE IV COMPARISON BETWEEN I-COTV (INDEPENDENT, WITHOUT ANY COOPERATION SCHEMES), M-COTV (ACTION-DEPENDENT, WITH COOPERATION SCHEMES IN ACTION AND STATE), AND COTV WITH FULL-AUTONOMY TRAFFIC UNDER DUBLIN SCENARIO.
<table><tr><td>Method</td><td>Travel time (s)</td><td>Fuel (l/100km)</td><td>TTC</td><td>Training time (h)</td></tr><tr><td>I-CoTV</td><td>49.21</td><td>9.19</td><td>489.78</td><td>1.36</td></tr><tr><td>M-CoTV</td><td>47.53</td><td>10.44</td><td>660.08</td><td>2.00</td></tr><tr><td>CoTV</td><td>41.76</td><td>7.97</td><td>195.94</td><td>1.33</td></tr></table>
In summary, CoTV achieves the first three system goals: reduced travel time, lower fuel consumption and $\mathrm{CO}_{2}$ emissions, and longer time-to-collision. The cooperation schemes between CAVs and traffic light controllers, which are the first contribution of this paper, overcome the difficulties of DRL-based joint control in complex urban traffic scenarios.
# B. Scalability Improvement
The second contribution of CoTV is the improvement of multi-agent system scalability by reducing the number of CAV agents controlled. Compared with CoTV*, which trains all possible CAVs, the results in Table V indicate that CoTV can reduce the training time by up to $44\%$ while still achieving comparable (sometimes slightly better) improvements in both traffic efficiency and safety under the Dublin scenario. Although CoTV* obtains better results than CoTV under the two grid maps, it is worth noting that CoTV achieves its results with each traffic light controller cooperating with only the closest CAV on each incoming road. The closest CAV has great potential to increase intersection throughput, similar to controlling only the leading vehicle to improve the traffic efficiency of a platoon [13]. Once the CAV as the leading

(a) $1 \times 1$ grid

(b) $1 \times 6$ grid
Fig. 10. The average travel time of CoTV under different CAV penetration rates in the grid and Dublin scenarios. The average travel times obtained by Baseline and PressLight are also given for comparison. Travel time tends to decrease as the CAV penetration rate increases.

(c) Dublin
vehicle is well controlled by CoTV, all its following vehicles adjust themselves accordingly. Moreover, Fig.11 indicates that both agent types of CoTV, traffic light controllers and CAVs, converge to a higher reward with a smaller standard deviation than at the start after about 60 training iterations. Thus, CoTV alleviates scalability issues without compromising traffic improvement, and the last system design goal, being easy to deploy, is achieved.
TABLE V COMPARISON BETWEEN COTV AND COTV* (CONTROLLING ALL POSSIBLE CAVS) UNDER FULL-AUTONOMY TRAFFIC.
<table><tr><td></td><td>Method</td><td>Travel time (s)</td><td>Fuel (l/100km)</td><td>TTC</td><td>Training time (h)</td></tr><tr><td>1x1</td><td>CoTV*</td><td>48.34</td><td>8.40</td><td>29.61</td><td>0.70</td></tr><tr><td>grid</td><td>CoTV</td><td>48.42</td><td>8.44</td><td>30.11</td><td>0.45</td></tr><tr><td>1x6</td><td>CoTV*</td><td>65.34</td><td>7.11</td><td>61.78</td><td>2.57</td></tr><tr><td>grid</td><td>CoTV</td><td>65.56</td><td>7.27</td><td>68.28</td><td>1.65</td></tr><tr><td rowspan="2">Dublin</td><td>CoTV*</td><td>43.42</td><td>7.98</td><td>219.67</td><td>2.37</td></tr><tr><td>CoTV</td><td>41.76</td><td>7.97</td><td>195.94</td><td>1.33</td></tr></table>

Fig. 11. Evolution of the average episode reward for the traffic light controller (TL) and CAV agents of CoTV under the Dublin scenario. The shading represents the standard deviation. After DRL training of CoTV, the rewards for both types of agents converge to higher values with smaller standard deviations than in the initial stage.
# C. Discussion
To further explore the deployment options of CoTV, we conduct experiments in a relatively large and dense urban scenario in Dublin city centre, which traditionally requires sophisticated coordination between adjacent traffic light controllers. The selected area covers nearly $1\,km^2$ with 31 signalized intersections, as shown in Fig.12; these intersections, with different road shapes and traffic light signal cycles/phases, are all controlled by CoTV. Table VI shows the traffic performance under this dense Dublin scenario with a $100\%$ CAV penetration rate. CoTV converges and obtains the best results in all evaluation metrics, which shows that it can be deployed at both major and minor junctions; nevertheless, further studies are needed to find the optimal selection of key intersections to control, so as to avoid costly deployment at all urban junctions.

Fig. 12. The selected dense urban scenario in the city centre of Dublin. There are 119 intersections in total, including 31 signalized intersections. 321 vehicles are generated from 10 AM over 400 seconds.
# VI. CONCLUSIONS AND FUTURE WORK
This paper proposes a multi-agent DRL system, CoTV, that cooperatively controls traffic light signals and CAVs to achieve sustainable traffic goals. CoTV significantly improves traffic efficiency (i.e., travel time, fuel consumption, and $\mathrm{CO}_{2}$ emissions) as well as traffic safety (i.e., time-to-collision), outperforming other DRL-based systems that control either traffic light signals or vehicle speed, as well as a traditional joint control method. Moreover, the traffic light controllers in CoTV utilize the V2I communications infrastructure to cooperate only with the closest CAV (i.e., as the leader of
TABLE VI TRAFFIC PERFORMANCE UNDER A DENSE DUBLIN SCENARIO. PERCENTAGE CHANGES SHOWN ARE COMPARED TO BASELINE. THE BEST ACHIEVED MEASUREMENTS ARE IN BOLD.
<table><tr><td>Method</td><td>Travel time (s)</td><td>Delay (s)</td><td>Fuel (l/100km)</td><td>CO2(g/km)</td><td>TTC</td></tr><tr><td>Baseline</td><td>125.40</td><td>78.84</td><td>11.22</td><td>261.09</td><td>2344.50</td></tr><tr><td rowspan="2">FlowCAV</td><td>127.71</td><td>81.14</td><td>11.06</td><td>257.03</td><td>1947.44</td></tr><tr><td>+1.84%</td><td>+2.92%</td><td>-1.43%</td><td>-1.45%</td><td>-16.94%</td></tr><tr><td rowspan="2">PressLight</td><td>114.60</td><td>68.15</td><td>9.70</td><td>225.59</td><td>1800.83</td></tr><tr><td>-8.61%</td><td>-13.56%</td><td>-13.55%</td><td>-13.60%</td><td>-23.19%</td></tr><tr><td rowspan="2">GLOSA</td><td>104.21</td><td>57.64</td><td>8.50</td><td>197.70</td><td>1193.61</td></tr><tr><td>-16.90%</td><td>-26.89%</td><td>-24.24%</td><td>-24.28%</td><td>-49.09%</td></tr><tr><td rowspan="2">CoTV</td><td>103.18</td><td>56.60</td><td>8.40</td><td>195.39</td><td>787.29</td></tr><tr><td>-17.72%</td><td>-28.21%</td><td>-25.13%</td><td>-25.16%</td><td>-66.42%</td></tr></table>
a platoon) on each incoming road, alleviating the scalability issue of multi-agent systems. This also eases deployment and allows the training process to converge within a moderate number of iterations. Experiments in various grid maps and realistic urban scenarios demonstrate the effectiveness of CoTV. Compared to the Baseline, CoTV saves up to $28\%$ in fuel consumption and $\mathrm{CO}_{2}$ emissions while reducing travel time by up to $30\%$. The robustness of CoTV is also validated under different CAV penetration rates.
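As a sanity check on these figures, the relative changes reported in Table VI follow directly from the absolute measurements. The short Python sketch below recomputes them for the CoTV row; the dictionaries are simply transcribed from the table and are not part of the CoTV system itself:

```python
# Recompute the percentage changes of Table VI relative to the Baseline.
# All values are transcribed from the table above.
baseline = {"travel time (s)": 125.40, "delay (s)": 78.84,
            "fuel (l/100km)": 11.22, "CO2 (g/km)": 261.09, "TTC": 2344.50}
cotv = {"travel time (s)": 103.18, "delay (s)": 56.60,
        "fuel (l/100km)": 8.40, "CO2 (g/km)": 195.39, "TTC": 787.29}

for metric, base in baseline.items():
    change = 100 * (cotv[metric] - base) / base
    print(f"{metric}: {change:+.2f}%")
# travel time (s): -17.72%, delay (s): -28.21%, fuel (l/100km): -25.13%,
# CO2 (g/km): -25.16%, TTC: -66.42%  (matching the table)
```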
As future work, we plan to improve the robustness of the CoTV system in more practical scenarios. First, we will relax the assumption that all vehicles are connected via V2X communications. Second, we will make CoTV resilient to varying vehicular network conditions (e.g., latency, packet loss, and bandwidth). Our long-term goal is to tackle the scalability issues of applying cooperative MARL algorithms (e.g., COMA) in complex urban traffic scenarios.
# REFERENCES
[1] U. Desa et al., "Transforming our world: The 2030 agenda for sustainable development," 2016.
[2] A. Shetty, M. Yu, A. Kurzhanskiy, O. Grembek, H. Tavafoghi, and P. Varaiya, "Safety challenges for autonomous vehicles in the absence of connectivity," Transportation Research Part C: Emerging Technologies, vol. 128, p. 103133, 2021.
[3] C. Yu, Y. Feng, H. X. Liu, W. Ma, and X. Yang, "Integrated optimization of traffic signals and vehicle trajectories at isolated urban intersections," Transportation Research Part B: Methodological, vol. 112, pp. 89-112, 2018.
[4] B. Xu, X. J. Ban, Y. Bian, W. Li, J. Wang, S. E. Li, and K. Li, "Cooperative method of traffic signal optimization and speed control of connected vehicles at isolated intersections," IEEE Transactions on Intelligent Transportation Systems, vol. 20, no. 4, pp. 1390-1403, 2018.
[5] X. Di and R. Shi, "A survey on autonomous vehicle control in the era of mixed-autonomy: From physics-based to AI-guided driving policy learning," Transportation Research Part C: Emerging Technologies, vol. 125, p. 103008, 2021.
[6] P. Varaiya, "Max pressure control of a network of signalized intersections," Transportation Research Part C: Emerging Technologies, vol. 36, pp. 177-195, 2013.
[7] H. Wei, C. Chen, G. Zheng, K. Wu, V. Gayah, K. Xu, and Z. Li, "PressLight: Learning max pressure control to coordinate traffic signals in arterial network," in Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, 2019, pp. 1290-1298.
[8] G. Benedetti, M. P. Fanti, A. M. Mangini, and F. Parisi, "Application of deep reinforcement learning for traffic control of road intersection with emergency vehicles," in 2021 IEEE International Conference on Systems, Man, and Cybernetics (SMC). IEEE, 2021, pp. 182-187.
[9] C. Wu, A. Kreidieh, K. Parvate, E. Vinitsky, and A. M. Bayen, "Flow: A modular learning framework for autonomy in traffic," arXiv preprint arXiv:1710.05465, 2017.
[10] H. Wei, G. Zheng, V. Gayah, and Z. Li, "A survey on traffic signal control methods," arXiv preprint arXiv:1904.08117, 2019.
[11] T. Chu, J. Wang, L. Codeca, and Z. Li, "Multi-agent deep reinforcement learning for large-scale traffic signal control," IEEE Transactions on Intelligent Transportation Systems, vol. 21, no. 3, pp. 1086-1095, 2019.
[12] J. Schulman, F. Wolski, P. Dhariwal, A. Radford, and O. Klimov, "Proximal policy optimization algorithms," arXiv preprint arXiv:1707.06347, 2017.
[13] J. Lioris, R. Pedarsani, F. Y. Tascikaraoglu, and P. Varaiya, "Doubling throughput in urban roads by platooning," IFAC-PapersOnLine, vol. 49, no. 3, pp. 49-54, 2016.
[14] P. K. Singh, S. K. Nandi, and S. Nandi, "A tutorial survey on vehicular communication state of the art, and future research directions," Vehicular Communications, vol. 18, p. 100164, 2019.
[15] J. Guo and S. Wang, "Poster: Can traffic lights and CAV work together using deep reinforcement learning?" in 2021 IEEE Vehicular Networking Conference (VNC). IEEE, 2021, pp. 127-128.
[16] D. N. Lee, "A theory of visual control of braking based on information about time-to-collision," Perception, vol. 5, no. 4, pp. 437-459, 1976.
[17] A. G. Sims and K. W. Dobinson, "The Sydney coordinated adaptive traffic (SCAT) system philosophy and benefits," IEEE Transactions on Vehicular Technology, vol. 29, no. 2, pp. 130-137, 1980.
[18] R. E. Stern, S. Cui, M. L. Delle Monache, R. Bhadani, M. Bunting, M. Churchill, N. Hamilton, H. Pohlmann, F. Wu, B. Piccoli et al., "Dissipation of stop-and-go waves via control of autonomous vehicles: Field experiments," Transportation Research Part C: Emerging Technologies, vol. 89, pp. 205-221, 2018.
[19] H. Suzuki and Y. Marumo, "A new approach to green light optimal speed advisory (GLOSA) systems for high-density traffic flow," in 2018 21st International Conference on Intelligent Transportation Systems (ITSC). IEEE, 2018, pp. 362-367.
[20] E. Vinitsky, A. Kreidieh, L. Le Flem, N. Kheterpal, K. Jang, C. Wu, F. Wu, R. Liaw, E. Liang, and A. M. Bayen, "Benchmarks for reinforcement learning in mixed-autonomy traffic," in Conference on Robot Learning. PMLR, 2018, pp. 399-409.
[21] M. Tajalli and A. Hajbabaie, "Traffic signal timing and trajectory optimization in a mixed autonomy traffic stream," IEEE Transactions on Intelligent Transportation Systems, 2021.
[22] P. Hernandez-Leal, B. Kartal, and M. E. Taylor, "A survey and critique of multiagent deep reinforcement learning," Autonomous Agents and Multi-Agent Systems, vol. 33, no. 6, pp. 750-797, 2019.
[23] J. Olstam, F. Johansson, A. Alessandrini, P. Sukennik, J. Lohmiller, and M. Friedrich, "An approach for handling uncertainties related to behaviour and vehicle mixes in traffic simulation experiments with automated vehicles," Journal of Advanced Transportation, vol. 2020, 2020.
[24] I. Postigo, J. Olstam, and C. Rydergren, "Effects on traffic performance due to heterogeneity of automated vehicles on motorways: A microscopic simulation study," in VEHITS, 2021, pp. 142-151.
[25] A. Sharma, Z. Zheng, J. Kim, A. Bhaskar, and M. M. Haque, "Assessing traffic disturbance, efficiency, and safety of the mixed traffic flow of connected vehicles and traditional vehicles by considering human factors," Transportation Research Part C: Emerging Technologies, vol. 124, p. 102934, 2021.
[26] M. Guériau and I. Dusparic, "Quantifying the impact of connected and autonomous vehicles on traffic efficiency and safety in mixed traffic," in 2020 IEEE 23rd International Conference on Intelligent Transportation Systems (ITSC). IEEE, 2020, pp. 1-8.
[27] H. Wei, X. Liu, L. Mashayekhy, and K. Decker, "Mixed-autonomy traffic control with proximal policy optimization," in 2019 IEEE Vehicular Networking Conference (VNC). IEEE, 2019, pp. 1-8.
[28] H. Wei, G. Zheng, H. Yao, and Z. Li, "IntelliLight: A reinforcement learning approach for intelligent traffic light control," in Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, 2018, pp. 2496-2505.
[29] C. Wu, A. Kreidieh, K. Parvate, E. Vinitsky, and A. M. Bayen, "Flow: Architecture and benchmarking for reinforcement learning in traffic control," arXiv preprint arXiv:1710.05465, p. 10, 2017.
[30] T. P. Lillicrap, J. J. Hunt, A. Pritzel, N. Heess, T. Erez, Y. Tassa, D. Silver, and D. Wierstra, "Continuous control with deep reinforcement learning," arXiv preprint arXiv:1509.02971, 2015.
[31] C. Chen, H. Wei, N. Xu, G. Zheng, M. Yang, Y. Xiong, K. Xu, and Z. Li, "Toward a thousand lights: Decentralized deep reinforcement learning for large-scale traffic signal control," in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, no. 04, 2020, pp. 3414-3421.
[32] X. Liang, X. Du, G. Wang, and Z. Han, "A deep reinforcement learning network for traffic light cycle control," IEEE Transactions on Vehicular Technology, vol. 68, no. 2, pp. 1243-1253, 2019.
[33] K. Ahn, H. Rakha, A. Trani, and M. Van Aerde, "Estimating vehicle fuel consumption and emissions based on instantaneous speed and acceleration levels," Journal of Transportation Engineering, vol. 128, no. 2, pp. 182-190, 2002.
[34] M. Keller, S. Hausberger, C. Matzer, P. Wüthrich, and B. Notter, "Handbook of emission factors for road transport (HBEFA) 3.1," Technical Report, INFRAS, 2010.
[35] M. Treiber and A. Kesting, "The intelligent driver model with stochasticity - new insights into traffic flow oscillations," Transportation Research Procedia, vol. 23, pp. 174-187, 2017.



Shen Wang (Member, IEEE) is currently an Assistant Professor with the School of Computer Science, University College Dublin, Ireland. He received the M.Eng. degree from Wuhan University, China, and the Ph.D. degree from Dublin City University, Ireland. Dr. Wang is a member of the IEEE and has been involved in several EU projects as a co-PI and as a WP and task leader, working on big trajectory data streaming for air traffic control and trustworthy AI for intelligent cybersecurity systems. Some key industry partners of his applied research are IBM Research Brazil, Boeing Research and Technology Europe, and Huawei Ireland Research Centre. He is the recipient of the IEEE Intelligent Transportation Systems Society Young Professionals Travelling Fellowship 2022. His research interests include connected autonomous vehicles, explainable artificial intelligence, and security and privacy for mobile networks.



Jiaying Guo (Student Member, IEEE) is a Ph.D. student at the School of Computer Science, University College Dublin, Ireland. She received the B.Sc. degree from University College Dublin and the B.Eng. degree from Beijing University of Technology, China, in 2020. Her research interests include multi-agent systems for intelligent transportation, reinforcement learning, mixed-autonomy traffic, and connected autonomous vehicles.



Long Cheng is a Full Professor in the School of Control and Computer Engineering at North China Electric Power University in Beijing, and also a Visiting Professor at the Insight SFI Research Centre for Data Analytics in Dublin. He received the B.E. degree from Harbin Institute of Technology, China, in 2007, the M.Sc. degree from the University of Duisburg-Essen, Germany, in 2010, and the Ph.D. degree from the National University of Ireland Maynooth in 2014. He was an Assistant Professor at Dublin City University and a Marie Curie Fellow at University College Dublin, and has also worked at organizations such as Huawei Technologies Germany, IBM Research Dublin, TU Dresden, and TU Eindhoven. He has published around 60 papers in journals and conferences such as TPDS, TON, TC, TSC, TASE, TCAD, TCC, TBD, TITS, TVLSI, JPDC, IEEE Network, CIKM, ICPP, CCGrid, and Euro-Par. His research focuses on distributed systems, deep learning, cloud computing, and process mining. Prof. Cheng is a Senior Member of the IEEE and an associate editor of the Journal of Cloud Computing.
2201.13xxx/2201.13143/images.zip
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d06919b55003af9d0764255f92ef7d04eade9d98e4b5c886dd25f22da57d935e
size 601971

2201.13xxx/2201.13143/layout.json
ADDED
The diff for this file is too large to render.
See raw diff

2201.13xxx/2201.13148/2c84f44d-f098-4430-8e8c-b79d28977a5f_content_list.json
ADDED
@@ -0,0 +1,934 @@
| 1 |
+
[
|
| 2 |
+
{
|
| 3 |
+
"type": "text",
|
| 4 |
+
"text": "THRESHOLD INDEPENDENT EVALUATION OF SOUND EVENT DETECTION SCORES",
|
| 5 |
+
"text_level": 1,
|
| 6 |
+
"bbox": [
|
| 7 |
+
104,
|
| 8 |
+
114,
|
| 9 |
+
895,
|
| 10 |
+
132
|
| 11 |
+
],
|
| 12 |
+
"page_idx": 0
|
| 13 |
+
},
|
| 14 |
+
{
|
| 15 |
+
"type": "text",
|
| 16 |
+
"text": "Janek Ebbers, Reinhold Haeb-Umbach",
|
| 17 |
+
"bbox": [
|
| 18 |
+
163,
|
| 19 |
+
152,
|
| 20 |
+
475,
|
| 21 |
+
169
|
| 22 |
+
],
|
| 23 |
+
"page_idx": 0
|
| 24 |
+
},
|
| 25 |
+
{
|
| 26 |
+
"type": "text",
|
| 27 |
+
"text": "Paderborn University, \nDepartment of Communications Engineering, 33098 Paderborn, Germany, {ebbers,haeb} @ nt.upb.de",
|
| 28 |
+
"bbox": [
|
| 29 |
+
138,
|
| 30 |
+
186,
|
| 31 |
+
501,
|
| 32 |
+
256
|
| 33 |
+
],
|
| 34 |
+
"page_idx": 0
|
| 35 |
+
},
|
| 36 |
+
{
|
| 37 |
+
"type": "text",
|
| 38 |
+
"text": "Romain Serizel",
|
| 39 |
+
"bbox": [
|
| 40 |
+
674,
|
| 41 |
+
152,
|
| 42 |
+
800,
|
| 43 |
+
167
|
| 44 |
+
],
|
| 45 |
+
"page_idx": 0
|
| 46 |
+
},
|
| 47 |
+
{
|
| 48 |
+
"type": "text",
|
| 49 |
+
"text": "Université de Lorraine, CNRS, Inria, Loria, F-54000 Nancy, France, romain.serizel@loria.fr",
|
| 50 |
+
"bbox": [
|
| 51 |
+
614,
|
| 52 |
+
186,
|
| 53 |
+
859,
|
| 54 |
+
253
|
| 55 |
+
],
|
| 56 |
+
"page_idx": 0
|
| 57 |
+
},
|
| 58 |
+
{
|
| 59 |
+
"type": "text",
|
| 60 |
+
"text": "ABSTRACT",
|
| 61 |
+
"text_level": 1,
|
| 62 |
+
"bbox": [
|
| 63 |
+
243,
|
| 64 |
+
286,
|
| 65 |
+
328,
|
| 66 |
+
299
|
| 67 |
+
],
|
| 68 |
+
"page_idx": 0
|
| 69 |
+
},
|
| 70 |
+
{
|
| 71 |
+
"type": "text",
|
| 72 |
+
"text": "Performing an adequate evaluation of sound event detection (SED) systems is far from trivial and is still subject to ongoing research. The recently proposed polyphonic sound detection (PSD)-receiver operating characteristic (ROC) and PSD score (PSDS) make an important step into the direction of an evaluation of SED systems which is independent from a certain decision threshold. This allows to obtain a more complete picture of the overall system behavior which is less biased by threshold tuning. Yet, the PSD-ROC is currently only approximated using a finite set of thresholds. The choice of the thresholds used in approximation, however, can have a severe impact on the resulting PSDS. In this paper we propose a method which allows for computing system performance on an evaluation set for all possible thresholds jointly, enabling accurate computation not only of the PSD-ROC and PSDS but also of other collar-based and intersection-based performance curves. It further allows to select the threshold which best fulfills the requirements of a given application. Source code is publicly available in our SED evaluation package sed Scores.eval<sup>1</sup>.",
|
| 73 |
+
"bbox": [
|
| 74 |
+
81,
|
| 75 |
+
301,
|
| 76 |
+
488,
|
| 77 |
+
540
|
| 78 |
+
],
|
| 79 |
+
"page_idx": 0
|
| 80 |
+
},
|
| 81 |
+
{
|
| 82 |
+
"type": "text",
|
| 83 |
+
"text": "Index Terms—sound event detection, polyphonic sound detection, evaluation, threshold independent, roc",
|
| 84 |
+
"bbox": [
|
| 85 |
+
83,
|
| 86 |
+
541,
|
| 87 |
+
488,
|
| 88 |
+
569
|
| 89 |
+
],
|
| 90 |
+
"page_idx": 0
|
| 91 |
+
},
|
| 92 |
+
{
|
| 93 |
+
"type": "text",
|
| 94 |
+
"text": "1. INTRODUCTION",
|
| 95 |
+
"text_level": 1,
|
| 96 |
+
"bbox": [
|
| 97 |
+
215,
|
| 98 |
+
582,
|
| 99 |
+
357,
|
| 100 |
+
595
|
| 101 |
+
],
|
| 102 |
+
"page_idx": 0
|
| 103 |
+
},
|
| 104 |
+
{
|
| 105 |
+
"type": "text",
|
| 106 |
+
"text": "Recently, there is a rapid progress in Machine Listening aiming to imitate by machines the human ability to recognize, distinguish and interpret sounds [1]. The progress is driven by the annual Detection and Classification of Acoustic Scenes and Events (DCASE) challenges<sup>2</sup> and the releases of large-scale sound databases such as Google's AudioSet [2] and FSD50k [3].",
|
| 107 |
+
"bbox": [
|
| 108 |
+
81,
|
| 109 |
+
603,
|
| 110 |
+
488,
|
| 111 |
+
684
|
| 112 |
+
],
|
| 113 |
+
"page_idx": 0
|
| 114 |
+
},
|
| 115 |
+
{
|
| 116 |
+
"type": "text",
|
| 117 |
+
"text": "For a successful development of such systems an adequate evaluation of the system's operating behavior is crucial, where, ideally, the evaluation metric correlates to the user satisfaction during system application [4].",
|
| 118 |
+
"bbox": [
|
| 119 |
+
81,
|
| 120 |
+
683,
|
| 121 |
+
488,
|
| 122 |
+
736
|
| 123 |
+
],
|
| 124 |
+
"page_idx": 0
|
| 125 |
+
},
|
| 126 |
+
{
|
| 127 |
+
"type": "text",
|
| 128 |
+
"text": "In this paper we are concerned with the evaluation of sound event detection (SED) systems [5]. SED aims to recognize sound events in audio signals together with their onset and offset time. One particular challenge in SED is that labeling of ground truth event onset and offset times, referred to as strong labels, is expensive and time-consuming. Therefore, many systems aim to learn SED from weakly labeled data [6, 7], which only indicate the presence or absence of a sound event in an audio signal without providing its onset and offset times, and unlabeled data [8, 9]. Synthetically generated",
|
| 129 |
+
"bbox": [
|
| 130 |
+
81,
|
| 131 |
+
736,
|
| 132 |
+
490,
|
| 133 |
+
854
|
| 134 |
+
],
|
| 135 |
+
"page_idx": 0
|
| 136 |
+
},
|
| 137 |
+
{
|
| 138 |
+
"type": "text",
|
| 139 |
+
"text": "soundscapes are another alternative to produce cheap strongly annotated data [10, 11]. Here, an insightful evaluation of systems is particularly important to be able to draw conclusions about the system's learning behavior w.r.t. the temporal localization of sounds.",
|
| 140 |
+
"bbox": [
|
| 141 |
+
506,
|
| 142 |
+
286,
|
| 143 |
+
913,
|
| 144 |
+
339
|
| 145 |
+
],
|
| 146 |
+
"page_idx": 0
|
| 147 |
+
},
|
| 148 |
+
{
|
| 149 |
+
"type": "text",
|
| 150 |
+
"text": "Due to the temporal component of sound events, however, the adequate evaluation of SED performance is far from trivial. Traditional approaches perform segment-based and collar-based (event-based) evaluation [12] for only a single operating point (decision threshold). Further, segment-based evaluation does not sufficiently evaluate a system's capability of providing connected detections, whereas collar-based evaluation is sensitive to ambiguities in the definition of the ground truth event boundaries.",
|
| 151 |
+
"bbox": [
|
| 152 |
+
506,
|
| 153 |
+
339,
|
| 154 |
+
913,
|
| 155 |
+
444
|
| 156 |
+
],
|
| 157 |
+
"page_idx": 0
|
| 158 |
+
},
|
| 159 |
+
{
|
| 160 |
+
"type": "text",
|
| 161 |
+
"text": "More recently, Bilen et al. [13] proposed the polyphonic sound detection (PSD)-receiver operating characteristic (ROC) curve and PSD score (PSDS), which is an important step towards an evaluation of SED systems which is independent of specific decision thresholds and therefore provides a more complete picture of the system's overall operating behavior and is less biased by a specific tuning of the decision thresholds.",
|
| 162 |
+
"bbox": [
|
| 163 |
+
506,
|
| 164 |
+
444,
|
| 165 |
+
913,
|
| 166 |
+
536
|
| 167 |
+
],
|
| 168 |
+
"page_idx": 0
|
| 169 |
+
},
|
| 170 |
+
{
|
| 171 |
+
"type": "text",
|
| 172 |
+
"text": "However, PSD-ROC curves are only approximated so far due to the lack of a method which efficiently evaluates the system's performance for all possible decision thresholds. The approximation of the PSD-ROC curve can significantly underestimate the system's PSDS as we will show in Sec. 5.",
|
| 173 |
+
"bbox": [
|
| 174 |
+
506,
|
| 175 |
+
536,
|
| 176 |
+
913,
|
| 177 |
+
602
|
| 178 |
+
],
|
| 179 |
+
"page_idx": 0
|
| 180 |
+
},
|
| 181 |
+
{
|
| 182 |
+
"type": "text",
|
| 183 |
+
"text": "In this paper, we therefore present such a method to efficiently compute the system's performance for all possible decision thresholds jointly, which allows us to accurately compute the PSD-ROC and PSDS. Further, it can also be used to compute other intersection-based and collar-based performance curves such as precision-recall (PR)-curves. The presented approach can be understood as a generalization of the method used for single instance evaluation<sup>3</sup> to more sophisticated evaluations such as collar-based or intersection-based evaluations. It is based on the definition of changes in the intermediate statistics that occur when the decision threshold falls below a certain score, which we refer to as deltas in the following. Then, absolute values can be obtained for all possible thresholds by performing a cumulative sum over the deltas.",
|
| 184 |
+
"bbox": [
|
| 185 |
+
506,
|
| 186 |
+
603,
|
| 187 |
+
913,
|
| 188 |
+
773
|
| 189 |
+
],
|
| 190 |
+
"page_idx": 0
|
| 191 |
+
},
|
| 192 |
+
{
|
| 193 |
+
"type": "text",
|
| 194 |
+
"text": "The rest of the paper is structured as follows. Sec. 2 reviews current threshold-dependent approaches for SED evaluation. Sec. 3 describes commonly used threshold-independent evaluation methods for single instance evaluation $^{3}$ as well as the recently proposed PSD for the threshold-independent evaluation of SED. Then, we present our proposed approach for the accurate computation of PSD-ROC and other performance curves in Sec. 4. Finally we present experiments in Sec. 5 and draw conclusions in Sec. 6.",
|
| 195 |
+
"bbox": [
|
| 196 |
+
506,
|
| 197 |
+
773,
|
| 198 |
+
913,
|
| 199 |
+
878
|
| 200 |
+
],
|
| 201 |
+
"page_idx": 0
|
| 202 |
+
},
|
| 203 |
+
{
|
| 204 |
+
"type": "page_footnote",
|
| 205 |
+
"text": "Funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - 282835863.",
|
| 206 |
+
"bbox": [
|
| 207 |
+
83,
|
| 208 |
+
864,
|
| 209 |
+
486,
|
| 210 |
+
887
|
| 211 |
+
],
|
| 212 |
+
"page_idx": 0
|
| 213 |
+
},
|
| 214 |
+
{
|
| 215 |
+
"type": "page_footnote",
|
| 216 |
+
"text": "https://github.com/fgnt/sed Scores_eval",
|
| 217 |
+
"bbox": [
|
| 218 |
+
102,
|
| 219 |
+
888,
|
| 220 |
+
416,
|
| 221 |
+
900
|
| 222 |
+
],
|
| 223 |
+
"page_idx": 0
|
| 224 |
+
},
|
| 225 |
+
{
|
| 226 |
+
"type": "page_footnote",
|
| 227 |
+
"text": "<sup>2</sup>http://dcase.community/events#challenges",
|
| 228 |
+
"bbox": [
|
| 229 |
+
102,
|
| 230 |
+
901,
|
| 231 |
+
424,
|
| 232 |
+
912
|
| 233 |
+
],
|
| 234 |
+
"page_idx": 0
|
| 235 |
+
},
|
| 236 |
+
{
|
| 237 |
+
"type": "page_footnote",
|
| 238 |
+
"text": "3By single instance evaluation we refer to an evaluation where each classified instance is evaluated with its own target.",
|
| 239 |
+
"bbox": [
|
| 240 |
+
509,
|
| 241 |
+
887,
|
| 242 |
+
913,
|
| 243 |
+
912
|
| 244 |
+
],
|
| 245 |
+
"page_idx": 0
|
| 246 |
+
},
|
| 247 |
+
{
|
| 248 |
+
"type": "aside_text",
|
| 249 |
+
"text": "arXiv:2201.13148v1 [eess.AS] 31 Jan 2022",
|
| 250 |
+
"bbox": [
|
| 251 |
+
22,
|
| 252 |
+
261,
|
| 253 |
+
57,
|
| 254 |
+
720
|
| 255 |
+
],
|
| 256 |
+
"page_idx": 0
|
| 257 |
+
},
|
| 258 |
+
{
|
| 259 |
+
"type": "text",
|
| 260 |
+
"text": "2. SOUND EVENT DETECTION EVALUATION",
|
| 261 |
+
"text_level": 1,
|
| 262 |
+
"bbox": [
|
| 263 |
+
125,
|
| 264 |
+
90,
|
| 265 |
+
446,
|
| 266 |
+
104
|
| 267 |
+
],
|
| 268 |
+
"page_idx": 1
|
| 269 |
+
},
|
| 270 |
+
{
|
| 271 |
+
"type": "text",
|
| 272 |
+
"text": "SED [1, 5] can be seen as a multi-label classification problem, where the system performs classifications at multiple points in time which usually happens in a frame-based manner. When a classification score $y_{t}$ exceeds a certain decision threshold it is marked as positive. Connected positive classifications are merged into a detected event $(\\hat{t}_{\\mathrm{on},i},\\hat{t}_{\\mathrm{off},i},\\hat{c}_i)$ with $\\hat{t}_{\\mathrm{on},i},\\hat{t}_{\\mathrm{off},i},\\hat{c}_i$ being the onset time, offset time and class label, respectively, of the $i$ -th detection.",
|
| 273 |
+
"bbox": [
|
| 274 |
+
81,
|
| 275 |
+
117,
|
| 276 |
+
486,
|
| 277 |
+
209
|
| 278 |
+
],
|
| 279 |
+
"page_idx": 1
|
| 280 |
+
},
|
| 281 |
+
{
|
| 282 |
+
"type": "text",
|
| 283 |
+
"text": "As in other classification tasks the evaluation is based on true positive (TP), false positive (FP) and false negative (FN) counts. The TPs count $N_{\\mathrm{TP}}$ represents the number of ground truth events that have been detected by the system. The FPs count $N_{\\mathrm{FP}}$ sums up the number of detections which do not match a ground truth event. Hence, the total number of detected events is given as $N_{\\mathrm{DP}} = N_{\\mathrm{TP}} + N_{\\mathrm{FP}}$ . The FNs count $N_{\\mathrm{FN}}$ , which is the number of ground truth events missed by the system, is given as $N_{\\mathrm{FN}} = N_{\\mathrm{GP}} - N_{\\mathrm{TP}}$ with $N_{\\mathrm{GP}}$ being the total number of ground truth events. From these intermediate statistics higher level measures can be derived such as the precision $P = N_{\\mathrm{TP}} / N_{\\mathrm{DP}}$ , the recall (TP-Rates (TPRs)) $R = N_{\\mathrm{TP}} / N_{\\mathrm{GP}}$ and FP-Rate (FPR) $\\mathrm{FPR} = N_{\\mathrm{FP}} / N_{\\mathrm{GN}}$ , where $N_{\\mathrm{GN}}$ is the total number of ground truth negative instances in the evaluation data set.",
|
| 284 |
+
"bbox": [
|
| 285 |
+
81,
|
| 286 |
+
210,
|
| 287 |
+
486,
|
| 288 |
+
380
|
| 289 |
+
],
|
| 290 |
+
"page_idx": 1
|
| 291 |
+
},
|
| 292 |
+
{
|
| 293 |
+
"type": "text",
|
| 294 |
+
"text": "Compared to single instance evaluation<sup>3</sup>, it is less obvious in SED when to classify a ground truth event as detected, i.e. TP, and when to consider a detection as FP, due to the temporal extent of the target events over multiple classification scores/frames. Currently there exist three conceptually different ways for this, which are segment-based, collar-based (event-based) and intersection-based [12, 14, 13, 15].",
|
| 295 |
+
"bbox": [
|
| 296 |
+
81,
|
| 297 |
+
381,
|
| 298 |
+
486,
|
| 299 |
+
474
|
| 300 |
+
],
|
| 301 |
+
"page_idx": 1
|
| 302 |
+
},
|
| 303 |
+
{
|
| 304 |
+
"type": "text",
|
| 305 |
+
"text": "2.1. Segment-based",
|
| 306 |
+
"text_level": 1,
|
| 307 |
+
"bbox": [
|
| 308 |
+
83,
|
| 309 |
+
492,
|
| 310 |
+
215,
|
| 311 |
+
505
|
| 312 |
+
],
|
| 313 |
+
"page_idx": 1
|
| 314 |
+
},
|
| 315 |
+
{
|
| 316 |
+
"type": "text",
|
| 317 |
+
"text": "In segment-based evaluation [12, 14], classifications and targets are defined in fixed length segments (1 s segments is a popular choice). Classifications and targets are considered positive if they are detected/labeled anywhere in the segment. This way evaluation can be treated as a single instance evaluation. However, segment-based evaluation overemphasizes the contribution of longer events which expand over multiple segments and it does not evaluate the system's capability of providing meaningful uninterrupted detections.",
|
| 318 |
+
"bbox": [
|
| 319 |
+
81,
|
| 320 |
+
508,
|
| 321 |
+
486,
|
| 322 |
+
616
|
| 323 |
+
],
|
| 324 |
+
"page_idx": 1
|
| 325 |
+
},
|
| 326 |
+
{
|
| 327 |
+
"type": "text",
|
| 328 |
+
"text": "2.2. Collar-based",
|
| 329 |
+
"text_level": 1,
|
| 330 |
+
"bbox": [
|
| 331 |
+
83,
|
| 332 |
+
633,
|
| 333 |
+
200,
|
| 334 |
+
646
|
| 335 |
+
],
|
| 336 |
+
"page_idx": 1
|
| 337 |
+
},
|
| 338 |
+
{
|
| 339 |
+
"type": "text",
|
| 340 |
+
"text": "Collar-based, a.k.a. event-based, evaluation [12, 14] compares detections $(\\hat{t}_{\\mathrm{on},i},\\hat{t}_{\\mathrm{off},i},\\hat{c}_i)$ with ground truth events $(t_{\\mathrm{on},j},t_{\\mathrm{off},j},c_j)$ directly. Only if there is a matching event pair $(i,j)$ with $c_{j} = \\hat{c}_{i}$ , $|\\hat{t}_{\\mathrm{on},i} - t_{\\mathrm{on},j}|\\leq d$ and $|\\hat{t}_{\\mathrm{off},i} - t_{\\mathrm{off},j}|\\leq d_{\\mathrm{off},j}$ , a TP is achieved. Other detections are counted as FPs. The offset collar $d_{\\mathrm{off},j} = \\max (d,rT_j)$ usually depends on the length $T_{j}$ of the ground truth event. Common choices are $d = 200~\\mathrm{ms}$ and $r = 0.2$",
|
| 341 |
+
"bbox": [
|
| 342 |
+
81,
|
| 343 |
+
651,
|
| 344 |
+
486,
|
| 345 |
+
743
|
| 346 |
+
],
|
| 347 |
+
"page_idx": 1
|
| 348 |
+
},
|
| 349 |
+
{
|
| 350 |
+
"type": "text",
|
| 351 |
+
"text": "With collar-based evaluation, each ground truth event has equal contribution to the overall performance and systems can only achieve good performance if events are detected as single connected detections. This, however, introduces sensitivity to ambiguities in the annotation. If, e.g., an annotator labeled multiple dog barks as a single event but a system detects each bark as a separate event, this results in multiple FPs and one FN.",
|
| 352 |
+
"bbox": [
|
| 353 |
+
81,
|
| 354 |
+
744,
|
| 355 |
+
486,
|
| 356 |
+
835
|
| 357 |
+
],
|
| 358 |
+
"page_idx": 1
|
| 359 |
+
},
|
| 360 |
+
{
|
| 361 |
+
"type": "text",
|
| 362 |
+
"text": "2.3. Intersection-based",
|
| 363 |
+
"text_level": 1,
|
| 364 |
+
"bbox": [
|
| 365 |
+
83,
|
| 366 |
+
854,
|
| 367 |
+
235,
|
| 368 |
+
867
|
| 369 |
+
],
|
| 370 |
+
"page_idx": 1
|
| 371 |
+
},
|
| 372 |
+
{
|
| 373 |
+
"type": "text",
|
| 374 |
+
"text": "Intersection-based evaluation [13, 15] determines the number of TPs and FPs based on intersections between detections and ground truth events. A detection tolerance criterion (DTC) classifies detections as",
|
| 375 |
+
"bbox": [
|
| 376 |
+
81,
|
| 377 |
+
872,
|
| 378 |
+
486,
|
| 379 |
+
912
|
| 380 |
+
],
|
| 381 |
+
"page_idx": 1
|
| 382 |
+
},
|
| 383 |
+
{
|
| 384 |
+
"type": "image",
|
| 385 |
+
"img_path": "images/a7ac93d1b6f8a1453db87957f91f831b5a4dd14dcdc2f10e35bd96c0d5f76e8e.jpg",
|
| 386 |
+
"image_caption": [
|
| 387 |
+
"Fig. 1. Illustration of the joint computation of intermediate statistics with single instance evaluation."
|
| 388 |
+
],
|
| 389 |
+
"image_footnote": [],
|
| 390 |
+
"bbox": [
|
| 391 |
+
526,
|
| 392 |
+
87,
|
| 393 |
+
890,
|
| 394 |
+
212
|
| 395 |
+
],
|
| 396 |
+
"page_idx": 1
|
| 397 |
+
},
|
| 398 |
+
{
|
| 399 |
+
"type": "text",
|
| 400 |
+
"text": "FP if its intersection with ground truth events of the same event class, normalized by the length of the detected event, falls below a certain DTC ratio $\\rho_{\\mathrm{DTC}}$ . Else, it is considered relevant, which, however, does not necessarily mean TP. A ground truth event is only classified TP if its intersection with relevant same class detections, normalized by the length of the ground truth event, is greater or equal to a ground truth intersection criterion (GTC) ratio $\\rho_{\\mathrm{GTC}}$ .",
|
| 401 |
+
"bbox": [
|
| 402 |
+
508,
|
| 403 |
+
266,
|
| 404 |
+
913,
|
| 405 |
+
358
|
| 406 |
+
],
|
| 407 |
+
"page_idx": 1
|
| 408 |
+
},
|
| 409 |
+
{
|
| 410 |
+
"type": "text",
|
| 411 |
+
"text": "Bilen et al. [13] further introduced cross triggers (CTs) which are FP detections matching events from another event class and, thus, may impair user experience more than standalone FPs. Note that, although the concept of CTs has been proposed in conjunction with intersection-based evaluation, it is not restricted to it and could also be transferred to segment-based and collar-based evaluations. In intersection-based evaluation the cross trigger tolerance criterion (CTTC) counts a CT between a detected event class $\\hat{c}_i$ and another event class $c$ with $c \\neq \\hat{c}_i$ if the detection intersects with ground truth events of class $c$ by at least $\\rho_{\\mathrm{CTTC}}$ .",
|
| 412 |
+
"bbox": [
|
| 413 |
+
506,
|
| 414 |
+
358,
|
| 415 |
+
913,
|
| 416 |
+
489
|
| 417 |
+
],
|
| 418 |
+
"page_idx": 1
|
| 419 |
+
},
|
| 420 |
+
{
|
| 421 |
+
"type": "text",
|
| 422 |
+
"text": "3. THRESHOLD-INDEPENDENT EVALUATION",
|
| 423 |
+
"text_level": 1,
|
| 424 |
+
"bbox": [
|
| 425 |
+
547,
|
| 426 |
+
502,
|
| 427 |
+
875,
|
| 428 |
+
513
|
| 429 |
+
],
|
| 430 |
+
"page_idx": 1
|
| 431 |
+
},
|
| 432 |
+
{
|
| 433 |
+
"type": "text",
|
| 434 |
+
"text": "The computation of above intermediate statistics, such as the TP count, depend on the decision threshold that is applied to the classifier's output scores. Consequently, metrics such as $F_{1}$ -scores and error-rates only evaluate a single threshold. A more complete picture of the classifier's performance, however, can be obtained when evaluating system performance for all possible thresholds.",
|
| 435 |
+
"bbox": [
|
| 436 |
+
506,
|
| 437 |
+
520,
|
| 438 |
+
913,
|
| 439 |
+
599
|
| 440 |
+
],
|
| 441 |
+
"page_idx": 1
|
| 442 |
+
},
|
| 443 |
+
{
|
| 444 |
+
"type": "text",
|
| 445 |
+
"text": "3.1. Single Instance Evaluation",
|
| 446 |
+
"text_level": 1,
|
| 447 |
+
"bbox": [
|
| 448 |
+
509,
|
| 449 |
+
618,
|
| 450 |
+
714,
|
| 451 |
+
631
|
| 452 |
+
],
|
| 453 |
+
"page_idx": 1
|
| 454 |
+
},
|
| 455 |
+
{
|
| 456 |
+
"type": "text",
|
| 457 |
+
"text": "In single instance evaluation<sup>3</sup>, the PR and ROC curves [16, 14] are frequently used to evaluate overall system behavior independently from a certain operating point. As the name suggests, the PR curve plots precisions over corresponding recall values which result from arbitrary decision thresholds. The ROC curve instead plots the recalls over corresponding FPRs. Frequently used metrics for system comparison are the area under the PR curve, a.k.a. average precision (AP), and the area under the ROC curve, which is often simply referred to as area under curve (AUC).",
|
| 458 |
+
"bbox": [
|
| 459 |
+
506,
|
| 460 |
+
635,
|
| 461 |
+
913,
|
| 462 |
+
753
|
| 463 |
+
],
|
| 464 |
+
"page_idx": 1
|
| 465 |
+
},
|
| 466 |
+
{
|
| 467 |
+
"type": "text",
|
| 468 |
+
"text": "Rather than making decisions and evaluating performance separately for a set of arbitrary thresholds, performance can be evaluated for all thresholds jointly by implementing a sorting of classification scores $y$ together with some predefined deltas, as it is done, e.g., in the scikit-learn toolkit [17]. Here, deltas mean changes in the intermediate statistics, such as the number of TPs, when the decision threshold moves from above an instance's classification score to below of it, i.e., when the instance moves from being classified negative to being classified positive. Then absolute values can be obtained by simply performing a cumulative sum of the deltas.",
|
| 469 |
+
"bbox": [
|
| 470 |
+
506,
|
| 471 |
+
753,
|
| 472 |
+
913,
|
| 473 |
+
886
|
| 474 |
+
],
|
| 475 |
+
"page_idx": 1
|
| 476 |
+
},
|
| 477 |
+
{
|
| 478 |
+
"type": "text",
|
| 479 |
+
"text": "This approach is illustrated in Fig. 1 for an exemplary data set with six instances. $\\Delta N_{\\mathrm{TP}}$ means the change in the TP count which,",
|
| 480 |
+
"bbox": [
|
| 481 |
+
508,
|
| 482 |
+
886,
|
| 483 |
+
913,
|
| 484 |
+
912
|
| 485 |
+
],
|
| 486 |
+
"page_idx": 1
|
| 487 |
+
},
|
| 488 |
+
{
|
| 489 |
+
"type": "image",
|
| 490 |
+
"img_path": "images/7573ce5faa865e9f95e59e96b65e7e45b8365f7643df4cfe05f342d0996a2377.jpg",
|
| 491 |
+
"image_caption": [
|
| 492 |
+
"Fig. 2. Collar-based deltas example."
|
| 493 |
+
],
|
| 494 |
+
"image_footnote": [],
|
| 495 |
+
"bbox": [
|
| 496 |
+
143,
|
| 497 |
+
84,
|
| 498 |
+
421,
|
| 499 |
+
188
|
| 500 |
+
],
|
| 501 |
+
"page_idx": 2
|
| 502 |
+
},
|
| 503 |
+
{
|
| 504 |
+
"type": "text",
|
| 505 |
+
"text": "for single instance evaluation, is simply the binary target of the instance. This is because, upon positive classification, the TP count only increases by one when the instance is labeled positive. $\\Delta N_{\\mathrm{DP}}$ represents the change in the total number of system detections. Here $\\Delta N_{\\mathrm{DP}}$ is always one as there is always one instance more being classified positive when the threshold falls below its classification score. The precisions $P = N_{\\mathrm{TP}} / N_{\\mathrm{DP}}$ can, e.g., now be read off for all decision thresholds in the third table containing the absolute values.",
|
| 506 |
+
"bbox": [
|
| 507 |
+
81,
|
| 508 |
+
223,
|
| 509 |
+
486,
|
| 510 |
+
328
|
| 511 |
+
],
|
| 512 |
+
"page_idx": 2
|
| 513 |
+
},
|
| 514 |
+
{
|
| 515 |
+
"type": "text",
|
| 516 |
+
"text": "3.2.PSD-ROC",
|
| 517 |
+
"text_level": 1,
|
| 518 |
+
"bbox": [
|
| 519 |
+
83,
|
| 520 |
+
345,
|
| 521 |
+
184,
|
| 522 |
+
357
|
| 523 |
+
],
|
| 524 |
+
"page_idx": 2
|
| 525 |
+
},
|
| 526 |
+
{
|
| 527 |
+
"type": "text",
|
| 528 |
+
"text": "To the best of our knowledge, the PSD-ROC curve proposed in [13] is currently the only threshold-independent evaluation of SED systems. It first computes, for all event classes $c$ , intersection-based ROC curves $\\mathrm{ROC}_c(\\mathrm{eFPR})$ which are monotonically increasing curves plotting TPR over effective FPR (eFPR), where the reader is referred to Bilen et al. [13] for further details about its computation. The final PSD-ROC summarizes the classwise ROC curves as",
|
| 529 |
+
"bbox": [
|
| 530 |
+
81,
|
| 531 |
+
362,
|
| 532 |
+
486,
|
| 533 |
+
455
|
| 534 |
+
],
|
| 535 |
+
"page_idx": 2
|
| 536 |
+
},
|
| 537 |
+
{
|
| 538 |
+
"type": "equation",
|
| 539 |
+
"text": "\n$$\n\\text {P S D - R O C} (\\mathrm {e F P R}) = \\mu_ {\\mathrm {T P R}} (\\mathrm {e F P R}) - \\alpha_ {\\mathrm {S T}} \\cdot \\sigma_ {\\mathrm {T P R}} (\\mathrm {e F P R}), \\tag {1}\n$$\n",
|
| 540 |
+
"text_format": "latex",
|
| 541 |
+
"bbox": [
|
| 542 |
+
119,
|
| 543 |
+
465,
|
| 544 |
+
486,
|
| 545 |
+
479
|
| 546 |
+
],
|
| 547 |
+
"page_idx": 2
|
| 548 |
+
},
|
| 549 |
+
{
|
| 550 |
+
"type": "text",
|
| 551 |
+
"text": "with $\\mu_{\\mathrm{TPR}}(\\mathrm{eFPR})$ and $\\sigma_{\\mathrm{TPR}}(\\mathrm{eFPR})$ being the mean and standard deviation over the classwise ROC curves at a certain eFPR, and where $\\alpha_{\\mathrm{ST}}$ is a parameter penalizing instability across classes. The PSDS is the normalized area under the PSD-ROC curve up to a maximal $\\mathrm{eFPR}_{\\mathrm{max}}$ .",
|
| 552 |
+
"bbox": [
|
| 553 |
+
81,
|
| 554 |
+
489,
|
| 555 |
+
486,
|
| 556 |
+
554
|
| 557 |
+
],
|
| 558 |
+
"page_idx": 2
|
| 559 |
+
},
|
| 560 |
+
{
|
| 561 |
+
"type": "text",
|
| 562 |
+
"text": "Note that the number of thresholds, which may result in a different TPR-eFPR value pair, is as high as the number of classification scores in the data set. With a system outputting scores at a rate of $50\\mathrm{Hz}$ and a rather small evaluation set of, e.g., only $1\\mathrm{h}$ , this would be $180\\mathrm{k}$ thresholds to be evaluated for each event class. Evaluating system performance for each of the thresholds separately is not feasible for obvious reasons. Therefore, due to a lack of an efficient joint computation of intersection-based TPR-eFPR value pairs for all thresholds, the PSD-ROC curve is commonly approximated with a reduced set of thresholds. For instance, the DCASE 2021 Challenge Task 4 [11] employed PSDSs using 50 linearly spaced thresholds. The approximation of PSD-ROC curves, however, can lead to a significant underestimation of the PSDS as we will demonstrate in Sec. 5. Non-linearly spaced thresholds could alleviate this to some extent, which, however, remains arbitrary and ad-hoc.",
|
| 563 |
+
"bbox": [
|
| 564 |
+
81,
|
| 565 |
+
555,
|
| 566 |
+
486,
|
| 567 |
+
752
|
| 568 |
+
],
|
| 569 |
+
"page_idx": 2
|
| 570 |
+
},
|
| 571 |
+
{
|
| 572 |
+
"type": "text",
|
| 573 |
+
"text": "4. EFFICIENT COMPUTATION OF COLLAR- AND INTERSECTION-BASED CURVES",
|
| 574 |
+
"text_level": 1,
|
| 575 |
+
"bbox": [
|
| 576 |
+
109,
|
| 577 |
+
762,
|
| 578 |
+
460,
|
| 579 |
+
787
|
| 580 |
+
],
|
| 581 |
+
"page_idx": 2
|
| 582 |
+
},
|
| 583 |
+
{
|
| 584 |
+
"type": "text",
|
| 585 |
+
"text": "In this section we present how collar-based and intersection-based intermediate statistics can be efficiently computed jointly for all possible decision thresholds. For this we follow the same approach used for the computation of single instance evaluation curves which we described in Sec. 3.1. We aim to bring all classification scores into a sorted list together with the deltas of the intermediate statistics, which appear when the decision threshold falls below the classification score. Then we are able to obtain absolute values for all operating points by a simple cumulative sum over the deltas.",
|
| 586 |
+
"bbox": [
|
| 587 |
+
81,
|
| 588 |
+
792,
|
| 589 |
+
486,
|
| 590 |
+
912
|
| 591 |
+
],
|
| 592 |
+
"page_idx": 2
|
| 593 |
+
},
|
| 594 |
+
{
|
| 595 |
+
"type": "image",
|
| 596 |
+
"img_path": "images/38295f2aaf5ed0095b09f9bba6e1c9cc163b99a968787a648e4f519e085b6101.jpg",
|
| 597 |
+
"image_caption": [
|
| 598 |
+
"Fig. 3. Intersection-based deltas example."
|
| 599 |
+
],
|
| 600 |
+
"image_footnote": [],
|
| 601 |
+
"bbox": [
|
| 602 |
+
550,
|
| 603 |
+
84,
|
| 604 |
+
866,
|
| 605 |
+
208
|
| 606 |
+
],
|
| 607 |
+
"page_idx": 2
|
| 608 |
+
},
|
| 609 |
+
{
|
| 610 |
+
"type": "text",
|
| 611 |
+
"text": "With collar-based and intersection-based evaluation, however, the computation of the deltas becomes more challenging compared to single instance evaluation, as here all scores of an audio signal have to be considered jointly and cannot be obtained instance-wise. The basic principle of the definition of the deltas is illustrated in Fig. 2 and Fig. 3.",
|
| 612 |
+
"bbox": [
|
| 613 |
+
506,
|
| 614 |
+
248,
|
| 615 |
+
913,
|
| 616 |
+
327
|
| 617 |
+
],
|
| 618 |
+
"page_idx": 2
|
| 619 |
+
},
|
| 620 |
+
{
|
| 621 |
+
"type": "text",
|
| 622 |
+
"text": "In Fig. 2 collar-based evaluation is considered. For simplicity, we here assume scores/frames to have a width of $1\\mathrm{s}$ , that target event boundaries lie exactly between two scores/frames and the on-/offset collars to be $1\\mathrm{s}$ . Starting from a decision threshold above 0.7, no event would be detected as no score lies above the threshold. When the decision threshold falls below 0.7, a detection is spawned from second 4 to 5 as the 5th score lies above the threshold. However, the distances between the detected and the true onsets and offsets are $2\\mathrm{s}$ for both, therefore not matching the collar. Hence, the newly spawned detection is a FP and we have $\\Delta N_{\\mathrm{FP}} = +1$ . When the threshold falls below 0.6, however, the detection expands from second 3 to 6 and the FP disappears ( $\\Delta N_{\\mathrm{FP}} = -1$ ) and becomes a TP detection ( $\\Delta N_{\\mathrm{TP}} = +1$ ). When the decision threshold falls below 0.5 and below 0.4, nothing changes as the collars are still matched and the detection remains TP ( $\\Delta N_{\\mathrm{TP}} = \\Delta N_{\\mathrm{FP}} = 0$ ). Finally, when the decision threshold falls below 0.3, the detection expands from 0 s to 9 s and the detected on-/offsets exceed the collar, and the TP disappears ( $\\Delta N_{\\mathrm{TP}} = -1$ ) and becomes a FP again ( $\\Delta N_{\\mathrm{FP}} = +1$ ).",
|
| 623 |
+
"bbox": [
|
| 624 |
+
506,
|
| 625 |
+
329,
|
| 626 |
+
913,
|
| 627 |
+
566
|
| 628 |
+
],
|
| 629 |
+
"page_idx": 2
|
| 630 |
+
},
|
| 631 |
+
{
|
| 632 |
+
"type": "text",
|
| 633 |
+
"text": "A slightly more advanced example is shown in Fig. 3, where we consider intersection-based evaluation including CTs. We assume $\\rho_{\\mathrm{DTC}} = \\rho_{\\mathrm{GTC}} = \\rho_{\\mathrm{CTTC}} = 0.5$ and that again all event boundaries lie exactly between two scores/frames. When the decision threshold falls below 0.8 here, a detection is spawned from 6 s to 7 s which does not overlap with the target event at all, giving us $\\Delta N_{\\mathrm{FP}} = +1$ . Further, the detected event completely lies within the ground truth event from another class (in red), giving us $\\Delta N_{\\mathrm{CT}} = +1$ . When the threshold falls below 0.7, the detection's overlap with the target event is still only $1 / 3 < \\rho_{\\mathrm{DTC}}$ . This is still a FP and therefore $\\Delta N_{\\mathrm{FP}} = 0$ . The overlap with the other class event is $2 / 3 \\geq \\rho_{\\mathrm{CTTC}}$ . Therefore there is still a CT, with $\\Delta N_{\\mathrm{CT}} = 0$ . When the threshold falls below 0.6, the detection's overlap with both the target event and the other class event is $2 / 5 < \\rho_{\\mathrm{DTC}} = \\rho_{\\mathrm{CTTC}}$ . The detection is still FP ( $\\Delta N_{\\mathrm{FP}} = 0$ ), but not a CT anymore ( $\\Delta N_{\\mathrm{CT}} = -1$ ). When the threshold falls below 0.5 the overlap with the target event becomes $1 / 2 = \\rho_{\\mathrm{DTC}}$ . The FP disappears ( $\\Delta N_{\\mathrm{FP}} = -1$ ) and becomes a TP ( $\\Delta N_{\\mathrm{TP}} = +1$ ). This remains unchanged until the decision threshold falls below 0.3, where the overlap with the ground truth event becomes only $4 / 9 < \\rho_{\\mathrm{DTC}}$ . This is a FP again (but not a CT) with $\\Delta N_{\\mathrm{TP}} = -1$ and $\\Delta N_{\\mathrm{FP}} = +1$ .",
|
| 634 |
+
"bbox": [
|
| 635 |
+
508,
|
| 636 |
+
568,
|
| 637 |
+
913,
|
| 638 |
+
843
|
| 639 |
+
],
|
| 640 |
+
"page_idx": 2
|
| 641 |
+
},
|
| 642 |
+
{
|
| 643 |
+
"type": "text",
|
| 644 |
+
"text": "The proposed approach allows for efficient and accurate computation of collar-based and intersection-based PR and ROC curves, which not only enables us to compute threshold-independent metrics such as AP and PSDS precisely, but it also allows us to find the threshold which best suits specific application requirements.",
|
| 645 |
+
"bbox": [
|
| 646 |
+
506,
|
| 647 |
+
845,
|
| 648 |
+
913,
|
| 649 |
+
912
|
| 650 |
+
],
|
| 651 |
+
"page_idx": 2
|
| 652 |
+
},
|
| 653 |
+
{
|
| 654 |
+
"type": "image",
|
| 655 |
+
"img_path": "images/c9a101e54bff1d9bddc85ae0faf7ac3baf0805981c69f0342ec43a2387d95d22.jpg",
|
| 656 |
+
"image_caption": [
|
| 657 |
+
"Fig. 4. PSD-ROC curves: The exact PSD-ROC curve being shown in blue, which becomes computable with our proposed methodology, and different approximations of the PSD-ROC curve shown in red."
|
| 658 |
+
],
|
| 659 |
+
"image_footnote": [],
|
| 660 |
+
"bbox": [
|
| 661 |
+
88,
|
| 662 |
+
89,
|
| 663 |
+
370,
|
| 664 |
+
200
|
| 665 |
+
],
|
| 666 |
+
"page_idx": 3
|
| 667 |
+
},
|
| 668 |
+
{
|
| 669 |
+
"type": "image",
|
| 670 |
+
"img_path": "images/df157f97bf6ed81b7261d6bc24dc103a53a5117c6e48bb965653052012ab6cdc.jpg",
|
| 671 |
+
"image_caption": [],
|
| 672 |
+
"image_footnote": [],
|
| 673 |
+
"bbox": [
|
| 674 |
+
385,
|
| 675 |
+
88,
|
| 676 |
+
617,
|
| 677 |
+
202
|
| 678 |
+
],
|
| 679 |
+
"page_idx": 3
|
| 680 |
+
},
|
| 681 |
+
{
|
| 682 |
+
"type": "image",
|
| 683 |
+
"img_path": "images/9cf8d66cb3eb8557b37eebef57700f06f2d66df02c549dd2843a62b5169f1c86.jpg",
|
| 684 |
+
"image_caption": [],
|
| 685 |
+
"image_footnote": [],
|
| 686 |
+
"bbox": [
|
| 687 |
+
630,
|
| 688 |
+
88,
|
| 689 |
+
862,
|
| 690 |
+
202
|
| 691 |
+
],
|
| 692 |
+
"page_idx": 3
|
| 693 |
+
},
|
| 694 |
+
{
|
| 695 |
+
"type": "text",
|
| 696 |
+
"text": "Note that the proposed methodology is rather general and can be applied to arbitrary evaluations as long as one is able to determine the deltas in the intermediate statistics for each classification score in the evaluation data set.",
|
| 697 |
+
"bbox": [
|
| 698 |
+
83,
|
| 699 |
+
251,
|
| 700 |
+
488,
|
| 701 |
+
304
|
| 702 |
+
],
|
| 703 |
+
"page_idx": 3
|
| 704 |
+
},
|
| 705 |
+
{
|
| 706 |
+
"type": "text",
|
| 707 |
+
"text": "5. EXPERIMENTS",
|
| 708 |
+
"text_level": 1,
|
| 709 |
+
"bbox": [
|
| 710 |
+
218,
|
| 711 |
+
316,
|
| 712 |
+
352,
|
| 713 |
+
328
|
| 714 |
+
],
|
| 715 |
+
"page_idx": 3
|
| 716 |
+
},
|
| 717 |
+
{
|
| 718 |
+
"type": "text",
|
| 719 |
+
"text": "In this section we demonstrate the usefulness of the proposed method for the accurate computation of threshold-independent curves and metrics as well as its potential for threshold tuning.",
|
| 720 |
+
"bbox": [
|
| 721 |
+
83,
|
| 722 |
+
334,
|
| 723 |
+
488,
|
| 724 |
+
375
|
| 725 |
+
],
|
| 726 |
+
"page_idx": 3
|
| 727 |
+
},
|
| 728 |
+
{
|
| 729 |
+
"type": "text",
|
| 730 |
+
"text": "The presented curves and metrics are evaluated for one of our single model systems developed for DCASE 2021 Challenge Task 4, which employs a forward-backward convolutional recurrent neural network (FBCRNN) for audio tagging followed by a tag-conditioned CRNN (TCCRNN) for SED [18] outputting detection scores at a rate of $50\\mathrm{Hz}$ . For more details about the system and its training, which are not relevant here, the reader is referred to Ebbers et al. [18].",
|
| 731 |
+
"bbox": [
|
| 732 |
+
83,
|
| 733 |
+
375,
|
| 734 |
+
488,
|
| 735 |
+
465
|
| 736 |
+
],
|
| 737 |
+
"page_idx": 3
|
| 738 |
+
},
|
| 739 |
+
{
|
| 740 |
+
"type": "text",
|
| 741 |
+
"text": "In the challenge, systems have been evaluated by PSDSs which have been calculated using 50 thresholds linearly spaced from 0.01 to 0.99 for PSD-ROC curve approximation. In the following we consider the scenario 1 with $\\rho_{DTC} = \\rho_{GTC} = 0.7$ , $\\alpha_{\\mathrm{CT}} = 0$ , $\\alpha_{\\mathrm{ST}} = 1$ and $\\mathrm{eFPR}_{\\mathrm{max}} = 100 / \\mathrm{h}$ and report evaluations on the public evaluation set of the DESED database [19].",
|
| 742 |
+
"bbox": [
|
| 743 |
+
83,
|
| 744 |
+
467,
|
| 745 |
+
488,
|
| 746 |
+
546
|
| 747 |
+
],
|
| 748 |
+
"page_idx": 3
|
| 749 |
+
},
|
| 750 |
+
{
|
| 751 |
+
"type": "text",
|
| 752 |
+
"text": "In Fig. 4 different PSD-ROC curves are shown. In the subplots we present different variants of PSD-ROC curve approximations (in red), which have been generated using the official psds_eval package $^{4}$ , and compare them with the accurate PSD-ROC curve (in blue), which has been generated with our newly released package sed Scores_eval $^{1}$ .",
|
| 753 |
+
"bbox": [
|
| 754 |
+
83,
|
| 755 |
+
546,
|
| 756 |
+
488,
|
| 757 |
+
625
|
| 758 |
+
],
|
| 759 |
+
"page_idx": 3
|
| 760 |
+
},
|
| 761 |
+
{
|
| 762 |
+
"type": "text",
|
| 763 |
+
"text": "During our system development for the challenge, we recognized that our system mostly produces either very small or very high scores, which, without further measures, results in the PSD-ROC being approximated only very coarsely as shown in the left subplot of Fig. 4. Compared to the accurate computation proposed here, the approximated PSDS of 0.358 significantly underestimates the true PSDS of 0.400. Even if 500 linearly spaced thresholds from 0.001 to 0.999 are used, which is shown in the middle plot, this \"step\" artifact still appears on the PSD-ROC. The PSDS computed with these thresholds results to be 0.389 which still underestimates the true PSDS.",
|
| 764 |
+
"bbox": [
|
| 765 |
+
83,
|
| 766 |
+
626,
|
| 767 |
+
488,
|
| 768 |
+
768
|
| 769 |
+
],
|
| 770 |
+
"page_idx": 3
|
| 771 |
+
},
|
| 772 |
+
{
|
| 773 |
+
"type": "text",
|
| 774 |
+
"text": "In order to obtain a smooth PSD-ROC in the challenge, we performed a non-linear transformation of our system's classification scores, such that the classification scores of ground truth positive frames in the validation set are uniformly distributed between 0 and 1. Note, that a non-linear score transformation followed by linearly spaced thresholds results to be the same as non-linearly spaced thresholds. The resulting PSD-ROC approximation with 50 thresholds is shown in red in the right plot of Fig. 4, which then comes close to the true PSD-ROC. Note, that at this point a tuning of a score",
|
| 775 |
+
"bbox": [
|
| 776 |
+
81,
|
| 777 |
+
770,
|
| 778 |
+
488,
|
| 779 |
+
888
|
| 780 |
+
],
|
| 781 |
+
"page_idx": 3
|
| 782 |
+
},
|
| 783 |
+
{
|
| 784 |
+
"type": "table",
|
| 785 |
+
"img_path": "images/b98392ec3e5078f6996c1244f13739260d4c5d544f3c1b66cabedc5bef687bb8.jpg",
|
| 786 |
+
"table_caption": [
|
| 787 |
+
"Table 1. Collar-based $F_{1}$ -score performance without and with optimal threshold tuning on validation set."
|
| 788 |
+
],
|
| 789 |
+
"table_footnote": [],
|
| 790 |
+
"table_body": "<table><tr><td>Thresholds</td><td>0.5</td><td>optimal\n(on val. set)</td></tr><tr><td>F1-score</td><td>51.8%</td><td>57.2%</td></tr></table>",
|
| 791 |
+
"bbox": [
|
| 792 |
+
589,
|
| 793 |
+
280,
|
| 794 |
+
834,
|
| 795 |
+
323
|
| 796 |
+
],
|
| 797 |
+
"page_idx": 3
|
| 798 |
+
},
|
| 799 |
+
{
|
| 800 |
+
"type": "text",
|
| 801 |
+
"text": "transformation function (or alternatively 50 thresholds) is required, which is highly undesired for a supposedly threshold-independent metric. However, with the proposed computation approach, the PSDS can be computed exactly and truly independently of a specific set of thresholds (with less computation time $^5$ ).",
|
| 802 |
+
"bbox": [
|
| 803 |
+
506,
|
| 804 |
+
335,
|
| 805 |
+
913,
|
| 806 |
+
402
|
| 807 |
+
],
|
| 808 |
+
"page_idx": 3
|
| 809 |
+
},
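The non-linear score transformation described above can be sketched roughly as follows: map raw scores through the empirical CDF of the ground-truth-positive frame scores on the validation set, so that positive-frame scores become approximately uniform on [0, 1]. This is a minimal sketch under stated assumptions; the function name and the toy beta-distributed scores are illustrative, not the authors' exact procedure.

```python
import numpy as np

def make_score_transform(val_scores, val_positive_mask):
    # Empirical CDF of ground-truth-positive frame scores on the
    # validation set; mapping scores through it makes positive-frame
    # scores approximately uniform on [0, 1].
    pos = np.sort(val_scores[val_positive_mask])
    cdf = np.arange(1, len(pos) + 1) / len(pos)

    def transform(scores):
        # Piecewise-linear interpolation of the empirical CDF; scores
        # below/above the observed range map to 0/1.
        return np.interp(scores, pos, cdf, left=0.0, right=1.0)

    return transform

# Toy scores concentrated near 0 and 1, as the text describes:
rng = np.random.default_rng(0)
val_scores = np.concatenate([rng.beta(0.2, 5, 500), rng.beta(5, 0.2, 500)])
val_positive = np.concatenate([np.zeros(500, bool), np.ones(500, bool)])
transform = make_score_transform(val_scores, val_positive)
# 50 linearly spaced thresholds on the transformed scores then correspond
# to non-linearly spaced thresholds on the raw scores.
print(transform(np.array([0.01, 0.5, 0.99])))
```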
|
| 810 |
+
{
|
| 811 |
+
"type": "text",
|
| 812 |
+
"text": "Next, we use the collar-based PR-curve to perform optimal threshold tuning for collar-based $F_{1}$ -score evaluation, which has been an additional contrastive metric in the challenge. For each event class we choose the decision threshold, which achieves the highest $F_{1}$ -score on the PR-curve of the validation set that was computed with the proposed approach. Table 1 shows collar-based $F_{1}$ -score performance on the public evaluation set comparing the threshold, which is optimal on the validation set, with simply choosing a threshold of 0.5. Note that for a fair comparison, we performed a median filter size sweep for each threshold variant separately and chose for each threshold variant and event class the filter size that performed best on the validation set. At this point it may be worth noting that median filtering before and after a thresholding yields the same detection outputs, making it similarly applicable to SED scores before computing threshold-independent curves or metrics.",
|
| 813 |
+
"bbox": [
|
| 814 |
+
508,
|
| 815 |
+
402,
|
| 816 |
+
913,
|
| 817 |
+
599
|
| 818 |
+
],
|
| 819 |
+
"page_idx": 3
|
| 820 |
+
},
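The per-class threshold selection can be sketched as below: given aligned precision/recall/threshold arrays from a jointly computed PR curve, pick the threshold whose F1 is maximal. The helper name is a hypothetical choice; the median-filter line only illustrates the commutation remark above.

```python
import numpy as np
from scipy.ndimage import median_filter

def best_threshold_from_pr(precision, recall, thresholds):
    # One operating point per array entry, e.g. one per unique score as
    # produced by a joint evaluation over all thresholds.
    f1 = 2 * precision * recall / np.maximum(precision + recall, 1e-12)
    i = int(np.argmax(f1))
    return thresholds[i], f1[i]

# Median filtering commutes with thresholding (thresholding is a monotone
# map and the median is an order statistic), so the filter can be applied
# to the scores before the threshold-independent evaluation.
scores = np.array([0.1, 0.9, 0.1, 0.8, 0.85, 0.9, 0.2])
filtered = median_filter(scores, size=3, mode='nearest')
```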
|
| 821 |
+
{
|
| 822 |
+
"type": "text",
|
| 823 |
+
"text": "It can be observed that solely by tuning the decision threshold on the validation set, performance can be improved by $5.4\\%$ . This demonstrates how threshold-dependent metrics can be biased by the tuning of an operating point. However, it also demonstrates the ability of our presented method to allow for searching the optimal operating point for a given target application.",
|
| 824 |
+
"bbox": [
|
| 825 |
+
506,
|
| 826 |
+
599,
|
| 827 |
+
913,
|
| 828 |
+
679
|
| 829 |
+
],
|
| 830 |
+
"page_idx": 3
|
| 831 |
+
},
|
| 832 |
+
{
|
| 833 |
+
"type": "text",
|
| 834 |
+
"text": "6. CONCLUSIONS",
|
| 835 |
+
"text_level": 1,
|
| 836 |
+
"bbox": [
|
| 837 |
+
645,
|
| 838 |
+
690,
|
| 839 |
+
779,
|
| 840 |
+
703
|
| 841 |
+
],
|
| 842 |
+
"page_idx": 3
|
| 843 |
+
},
|
| 844 |
+
{
|
| 845 |
+
"type": "text",
|
| 846 |
+
"text": "In this paper we presented a methodology allowing for performing accurate computation of collar-based and intersection-based PR and ROC curves. Computing these metrics on a fixed set of thresholds could lead to biased estimation of the final metric. This can result in significant performance underestimation if an unfavorable set of thresholds is chosen. Our proposed method, however, enables truly threshold-independent collar-based and intersection-based SED metrics and provides a more accurate, system independent evaluation. Further, as the method allows to efficiently compute performances for arbitrary thresholds, it allows to determine the best operating point to fulfill the requirements of a specific application. We publicly released its implementation in a python package termed sed Scores_eval<sup>1</sup>.",
|
| 847 |
+
"bbox": [
|
| 848 |
+
506,
|
| 849 |
+
708,
|
| 850 |
+
913,
|
| 851 |
+
878
|
| 852 |
+
],
|
| 853 |
+
"page_idx": 3
|
| 854 |
+
},
|
| 855 |
+
{
|
| 856 |
+
"type": "page_footnote",
|
| 857 |
+
"text": "5See https://github.com/fgnt/sed}scores_eval/blob/ main/notebooks/psds.ipynb for timings.",
|
| 858 |
+
"bbox": [
|
| 859 |
+
508,
|
| 860 |
+
887,
|
| 861 |
+
911,
|
| 862 |
+
912
|
| 863 |
+
],
|
| 864 |
+
"page_idx": 3
|
| 865 |
+
},
|
| 866 |
+
{
|
| 867 |
+
"type": "footer",
|
| 868 |
+
"text": "4https://github.com/audioanalytic/psds_eval",
|
| 869 |
+
"bbox": [
|
| 870 |
+
101,
|
| 871 |
+
898,
|
| 872 |
+
441,
|
| 873 |
+
912
|
| 874 |
+
],
|
| 875 |
+
"page_idx": 3
|
| 876 |
+
},
|
| 877 |
+
{
|
| 878 |
+
"type": "text",
|
| 879 |
+
"text": "7. REFERENCES",
|
| 880 |
+
"text_level": 1,
|
| 881 |
+
"bbox": [
|
| 882 |
+
223,
|
| 883 |
+
90,
|
| 884 |
+
349,
|
| 885 |
+
104
|
| 886 |
+
],
|
| 887 |
+
"page_idx": 4
|
| 888 |
+
},
|
| 889 |
+
{
|
| 890 |
+
"type": "list",
|
| 891 |
+
"sub_type": "ref_text",
|
| 892 |
+
"list_items": [
|
| 893 |
+
"[1] Tuomas Virtanen, Mark D Plumbley, and Dan Ellis, Computational analysis of sound scenes and events, Springer, 2018.",
|
| 894 |
+
"[2] Jort F Gemmeke, Daniel PW Ellis, Dylan Freedman, Aren Jansen, Wade Lawrence, R Channing Moore, Manoj Plakal, and Marvin Ritter, \"Audio set: An ontology and human-labeled dataset for audio events,\" in Proc. IEEE International Conference on Acoustics, Speech and Signal Processing. IEEE, 2017, pp. 776-780.",
|
| 895 |
+
"[3] Eduardo Fonseca, Xavier Favory, Jordi Pons, Frederic Font, and Xavier Serra, \"Fsd50k: an open dataset of human-labeled sound events,\" arXiv preprint arXiv:2010.00475, 2020.",
|
| 896 |
+
"[4] Sacha Krstulović, \"Audio event recognition in the smart home,\" Computational Analysis of Sound Scenes and Events, pp. 335-371, 2018.",
|
| 897 |
+
"[5] Annamaria Mesaros, Toni Heittola, Tuomas Virtanen, and Mark D Plumbley, \"Sound event detection: A tutorial,\" IEEE Signal Processing Magazine, vol. 38, no. 5, pp. 67-83, 2021.",
|
| 898 |
+
"[6] Ankit Shah, Anurag Kumar, Alexander G Hauptmann, and Bhiksha Raj, “A closer look at weak label learning for audio events,” arXiv preprint arXiv:1804.09288, 2018.",
|
| 899 |
+
"[7] Koichi Miyazaki, Tatsuya Komatsu, Tomoki Hayashi, Shinji Watanabe, Tomoki Toda, and Kazuya Takeda, \"Weakly-supervised sound event detection with self-attention,\" in Proc. IEEE International Conference on Acoustics, Speech and Signal Processing. IEEE, 2020, pp. 66-70.",
|
| 900 |
+
"[8] Lu JiaKai, “Mean teacher convolution system for dcase 2018 task 4,” Tech. Rep., Detection and Classification of Acoustic Scenes and Events Challenge, September 2018.",
|
| 901 |
+
"[9] Nicolas Turpault and Romain Serizel, “Training sound event detection on a heterogeneous dataset,” in Proc. Workshop on Detection and Classification of Acoustic Scenes and Events, 2020.",
|
| 902 |
+
"[10] Nicolas Turpault, Romain Serizel, Scott Wisdom, Hakan Erdogan, John R Hershey, Eduardo Fonseca, Prem Seetharaman, and Justin Salamon, \"Sound event detection and separation: a benchmark on desed synthetic soundscapes,\" in Proc. IEEE International Conference on Acoustics, Speech and Signal Processing. IEEE, 2021, pp. 840-844."
|
| 903 |
+
],
|
| 904 |
+
"bbox": [
|
| 905 |
+
86,
|
| 906 |
+
116,
|
| 907 |
+
488,
|
| 908 |
+
657
|
| 909 |
+
],
|
| 910 |
+
"page_idx": 4
|
| 911 |
+
},
|
| 912 |
+
{
|
| 913 |
+
"type": "list",
|
| 914 |
+
"sub_type": "ref_text",
|
| 915 |
+
"list_items": [
|
| 916 |
+
"[11] Francesca Ronchini, Romain Serizel, Nicolas Turpault, and Samuele Cornell, “The impact of non-target events in synthetic soundscapes for sound event detection,” arXiv preprint arXiv:2109.14061, 2021.",
|
| 917 |
+
"[12] Annamaria Mesaros, Toni Heittola, and Tuomas Virtanen, \"Metrics for polyphonic sound event detection,\" Applied Sciences, vol. 6, no. 6, pp. 162, 2016.",
|
| 918 |
+
"[13] Cagdas Bilen, Giacomo Ferroni, Francesco Tuveri, Juan Azcarreta, and Sacha Krstulovic, “A framework for the robust evaluation of sound event detection,” in Proc. IEEE International Conference on Acoustics, Speech and Signal Processing. IEEE, 2020, pp. 61–65.",
|
| 919 |
+
"[14] Annamaria Mesaros, Toni Heittola, and Dan Ellis, “Datasets and evaluation,” in Computational Analysis of Sound Scenes and Events, pp. 147–179. Springer, 2018.",
|
| 920 |
+
"[15] Giacomo Ferroni, Nicolas Turpault, Juan Azcarreta, Francesco Tuveri, Romain Serizel, Căgdaș Bilen, and Sacha Krstulović, “Improving sound event detection metrics: insights from dcase 2020,” in Proc. IEEE International Conference on Acoustics, Speech and Signal Processing. IEEE, 2021, pp. 631–635.",
|
| 921 |
+
"[16] Jesse Davis and Mark Goadrich, “The relationship between precision-recall and roc curves,” in Proc. 23rd international conference on Machine learning. 2006, pp. 233–240, ACM Press.",
|
| 922 |
+
"[17] F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournaepau, M. Brucher, M. Perrot, and E. Duchesnay, \"Scikit-learn: Machine learning in Python,\" Journal of Machine Learning Research, vol. 12, pp. 2825-2830, 2011.",
|
| 923 |
+
"[18] Janek Ebbers and Reinhold Haeb-Umbach, \"Self-trained audio tagging and sound event detection in domestic environments,\" Tech. Rep., Detection and Classification of Acoustic Scenes and Events Challenge, June 2021.",
|
| 924 |
+
"[19] Nicolas Turpault, Romain Serizel, Ankit Parag Shah, and Justin Salamon, \"Sound event detection in domestic environments with weakly labeled data and soundscape synthesis,\" in Proc. Workshop on Detection and Classification of Acoustic Scenes and Events, 2019."
|
| 925 |
+
],
|
| 926 |
+
"bbox": [
|
| 927 |
+
511,
|
| 928 |
+
92,
|
| 929 |
+
913,
|
| 930 |
+
642
|
| 931 |
+
],
|
| 932 |
+
"page_idx": 4
|
| 933 |
+
}
|
| 934 |
+
]
|
2201.13xxx/2201.13148/2c84f44d-f098-4430-8e8c-b79d28977a5f_model.json
ADDED
|
@@ -0,0 +1,1156 @@
|
| 1 |
+
[
|
| 2 |
+
[
|
| 3 |
+
{
|
| 4 |
+
"type": "title",
|
| 5 |
+
"bbox": [
|
| 6 |
+
0.106,
|
| 7 |
+
0.115,
|
| 8 |
+
0.896,
|
| 9 |
+
0.133
|
| 10 |
+
],
|
| 11 |
+
"angle": 0,
|
| 12 |
+
"content": "THRESHOLD INDEPENDENT EVALUATION OF SOUND EVENT DETECTION SCORES"
|
| 13 |
+
},
|
| 14 |
+
{
|
| 15 |
+
"type": "text",
|
| 16 |
+
"bbox": [
|
| 17 |
+
0.164,
|
| 18 |
+
0.153,
|
| 19 |
+
0.476,
|
| 20 |
+
0.17
|
| 21 |
+
],
|
| 22 |
+
"angle": 0,
|
| 23 |
+
"content": "Janek Ebbers, Reinhold Haeb-Umbach"
|
| 24 |
+
},
|
| 25 |
+
{
|
| 26 |
+
"type": "text",
|
| 27 |
+
"bbox": [
|
| 28 |
+
0.139,
|
| 29 |
+
0.187,
|
| 30 |
+
0.502,
|
| 31 |
+
0.257
|
| 32 |
+
],
|
| 33 |
+
"angle": 0,
|
| 34 |
+
"content": "Paderborn University, \nDepartment of Communications Engineering, 33098 Paderborn, Germany, {ebbers,haeb} @ nt.upb.de"
|
| 35 |
+
},
|
| 36 |
+
{
|
| 37 |
+
"type": "text",
|
| 38 |
+
"bbox": [
|
| 39 |
+
0.675,
|
| 40 |
+
0.153,
|
| 41 |
+
0.802,
|
| 42 |
+
0.169
|
| 43 |
+
],
|
| 44 |
+
"angle": 0,
|
| 45 |
+
"content": "Romain Serizel"
|
| 46 |
+
},
|
| 47 |
+
{
|
| 48 |
+
"type": "text",
|
| 49 |
+
"bbox": [
|
| 50 |
+
0.615,
|
| 51 |
+
0.187,
|
| 52 |
+
0.861,
|
| 53 |
+
0.254
|
| 54 |
+
],
|
| 55 |
+
"angle": 0,
|
| 56 |
+
"content": "Université de Lorraine, CNRS, Inria, Loria, F-54000 Nancy, France, romain.serizel@loria.fr"
|
| 57 |
+
},
|
| 58 |
+
{
|
| 59 |
+
"type": "title",
|
| 60 |
+
"bbox": [
|
| 61 |
+
0.245,
|
| 62 |
+
0.287,
|
| 63 |
+
0.33,
|
| 64 |
+
0.3
|
| 65 |
+
],
|
| 66 |
+
"angle": 0,
|
| 67 |
+
"content": "ABSTRACT"
|
| 68 |
+
},
|
| 69 |
+
{
|
| 70 |
+
"type": "text",
|
| 71 |
+
"bbox": [
|
| 72 |
+
0.082,
|
| 73 |
+
0.303,
|
| 74 |
+
0.49,
|
| 75 |
+
0.541
|
| 76 |
+
],
|
| 77 |
+
"angle": 0,
|
| 78 |
+
"content": "Performing an adequate evaluation of sound event detection (SED) systems is far from trivial and is still subject to ongoing research. The recently proposed polyphonic sound detection (PSD)-receiver operating characteristic (ROC) and PSD score (PSDS) make an important step into the direction of an evaluation of SED systems which is independent from a certain decision threshold. This allows to obtain a more complete picture of the overall system behavior which is less biased by threshold tuning. Yet, the PSD-ROC is currently only approximated using a finite set of thresholds. The choice of the thresholds used in approximation, however, can have a severe impact on the resulting PSDS. In this paper we propose a method which allows for computing system performance on an evaluation set for all possible thresholds jointly, enabling accurate computation not only of the PSD-ROC and PSDS but also of other collar-based and intersection-based performance curves. It further allows to select the threshold which best fulfills the requirements of a given application. Source code is publicly available in our SED evaluation package sed Scores.eval<sup>1</sup>."
|
| 79 |
+
},
|
| 80 |
+
{
|
| 81 |
+
"type": "text",
|
| 82 |
+
"bbox": [
|
| 83 |
+
0.084,
|
| 84 |
+
0.542,
|
| 85 |
+
0.489,
|
| 86 |
+
0.57
|
| 87 |
+
],
|
| 88 |
+
"angle": 0,
|
| 89 |
+
"content": "Index Terms—sound event detection, polyphonic sound detection, evaluation, threshold independent, roc"
|
| 90 |
+
},
|
| 91 |
+
{
|
| 92 |
+
"type": "title",
|
| 93 |
+
"bbox": [
|
| 94 |
+
0.216,
|
| 95 |
+
0.583,
|
| 96 |
+
0.358,
|
| 97 |
+
0.597
|
| 98 |
+
],
|
| 99 |
+
"angle": 0,
|
| 100 |
+
"content": "1. INTRODUCTION"
|
| 101 |
+
},
|
| 102 |
+
{
|
| 103 |
+
"type": "text",
|
| 104 |
+
"bbox": [
|
| 105 |
+
0.082,
|
| 106 |
+
0.604,
|
| 107 |
+
0.49,
|
| 108 |
+
0.685
|
| 109 |
+
],
|
| 110 |
+
"angle": 0,
|
| 111 |
+
"content": "Recently, there is a rapid progress in Machine Listening aiming to imitate by machines the human ability to recognize, distinguish and interpret sounds [1]. The progress is driven by the annual Detection and Classification of Acoustic Scenes and Events (DCASE) challenges<sup>2</sup> and the releases of large-scale sound databases such as Google's AudioSet [2] and FSD50k [3]."
|
| 112 |
+
},
|
| 113 |
+
{
|
| 114 |
+
"type": "text",
|
| 115 |
+
"bbox": [
|
| 116 |
+
0.082,
|
| 117 |
+
0.684,
|
| 118 |
+
0.49,
|
| 119 |
+
0.737
|
| 120 |
+
],
|
| 121 |
+
"angle": 0,
|
| 122 |
+
"content": "For a successful development of such systems an adequate evaluation of the system's operating behavior is crucial, where, ideally, the evaluation metric correlates to the user satisfaction during system application [4]."
|
| 123 |
+
},
|
| 124 |
+
{
|
| 125 |
+
"type": "text",
|
| 126 |
+
"bbox": [
|
| 127 |
+
0.082,
|
| 128 |
+
0.737,
|
| 129 |
+
0.491,
|
| 130 |
+
0.856
|
| 131 |
+
],
|
| 132 |
+
"angle": 0,
|
| 133 |
+
"content": "In this paper we are concerned with the evaluation of sound event detection (SED) systems [5]. SED aims to recognize sound events in audio signals together with their onset and offset time. One particular challenge in SED is that labeling of ground truth event onset and offset times, referred to as strong labels, is expensive and time-consuming. Therefore, many systems aim to learn SED from weakly labeled data [6, 7], which only indicate the presence or absence of a sound event in an audio signal without providing its onset and offset times, and unlabeled data [8, 9]. Synthetically generated"
|
| 134 |
+
},
|
| 135 |
+
{
|
| 136 |
+
"type": "text",
|
| 137 |
+
"bbox": [
|
| 138 |
+
0.508,
|
| 139 |
+
0.287,
|
| 140 |
+
0.915,
|
| 141 |
+
0.34
|
| 142 |
+
],
|
| 143 |
+
"angle": 0,
|
| 144 |
+
"content": "soundscapes are another alternative to produce cheap strongly annotated data [10, 11]. Here, an insightful evaluation of systems is particularly important to be able to draw conclusions about the system's learning behavior w.r.t. the temporal localization of sounds."
|
| 145 |
+
},
|
| 146 |
+
{
|
| 147 |
+
"type": "text",
|
| 148 |
+
"bbox": [
|
| 149 |
+
0.508,
|
| 150 |
+
0.34,
|
| 151 |
+
0.915,
|
| 152 |
+
0.445
|
| 153 |
+
],
|
| 154 |
+
"angle": 0,
|
| 155 |
+
"content": "Due to the temporal component of sound events, however, the adequate evaluation of SED performance is far from trivial. Traditional approaches perform segment-based and collar-based (event-based) evaluation [12] for only a single operating point (decision threshold). Further, segment-based evaluation does not sufficiently evaluate a system's capability of providing connected detections, whereas collar-based evaluation is sensitive to ambiguities in the definition of the ground truth event boundaries."
|
| 156 |
+
},
|
| 157 |
+
{
|
| 158 |
+
"type": "text",
|
| 159 |
+
"bbox": [
|
| 160 |
+
0.508,
|
| 161 |
+
0.445,
|
| 162 |
+
0.915,
|
| 163 |
+
0.537
|
| 164 |
+
],
|
| 165 |
+
"angle": 0,
|
| 166 |
+
"content": "More recently, Bilen et al. [13] proposed the polyphonic sound detection (PSD)-receiver operating characteristic (ROC) curve and PSD score (PSDS), which is an important step towards an evaluation of SED systems which is independent of specific decision thresholds and therefore provides a more complete picture of the system's overall operating behavior and is less biased by a specific tuning of the decision thresholds."
|
| 167 |
+
},
|
| 168 |
+
{
|
| 169 |
+
"type": "text",
|
| 170 |
+
"bbox": [
|
| 171 |
+
0.508,
|
| 172 |
+
0.537,
|
| 173 |
+
0.915,
|
| 174 |
+
0.603
|
| 175 |
+
],
|
| 176 |
+
"angle": 0,
|
| 177 |
+
"content": "However, PSD-ROC curves are only approximated so far due to the lack of a method which efficiently evaluates the system's performance for all possible decision thresholds. The approximation of the PSD-ROC curve can significantly underestimate the system's PSDS as we will show in Sec. 5."
|
| 178 |
+
},
|
| 179 |
+
{
|
| 180 |
+
"type": "text",
|
| 181 |
+
"bbox": [
|
| 182 |
+
0.508,
|
| 183 |
+
0.604,
|
| 184 |
+
0.915,
|
| 185 |
+
0.775
|
| 186 |
+
],
|
| 187 |
+
"angle": 0,
|
| 188 |
+
"content": "In this paper, we therefore present such a method to efficiently compute the system's performance for all possible decision thresholds jointly, which allows us to accurately compute the PSD-ROC and PSDS. Further, it can also be used to compute other intersection-based and collar-based performance curves such as precision-recall (PR)-curves. The presented approach can be understood as a generalization of the method used for single instance evaluation<sup>3</sup> to more sophisticated evaluations such as collar-based or intersection-based evaluations. It is based on the definition of changes in the intermediate statistics that occur when the decision threshold falls below a certain score, which we refer to as deltas in the following. Then, absolute values can be obtained for all possible thresholds by performing a cumulative sum over the deltas."
|
| 189 |
+
},
|
| 190 |
+
{
|
| 191 |
+
"type": "text",
|
| 192 |
+
"bbox": [
|
| 193 |
+
0.508,
|
| 194 |
+
0.775,
|
| 195 |
+
0.915,
|
| 196 |
+
0.88
|
| 197 |
+
],
|
| 198 |
+
"angle": 0,
|
| 199 |
+
"content": "The rest of the paper is structured as follows. Sec. 2 reviews current threshold-dependent approaches for SED evaluation. Sec. 3 describes commonly used threshold-independent evaluation methods for single instance evaluation\\(^{3}\\) as well as the recently proposed PSD for the threshold-independent evaluation of SED. Then, we present our proposed approach for the accurate computation of PSD-ROC and other performance curves in Sec. 4. Finally we present experiments in Sec. 5 and draw conclusions in Sec. 6."
|
| 200 |
+
},
|
| 201 |
+
{
|
| 202 |
+
"type": "page_footnote",
|
| 203 |
+
"bbox": [
|
| 204 |
+
0.084,
|
| 205 |
+
0.865,
|
| 206 |
+
0.488,
|
| 207 |
+
0.888
|
| 208 |
+
],
|
| 209 |
+
"angle": 0,
|
| 210 |
+
"content": "Funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - 282835863."
|
| 211 |
+
},
|
| 212 |
+
{
|
| 213 |
+
"type": "page_footnote",
|
| 214 |
+
"bbox": [
|
| 215 |
+
0.103,
|
| 216 |
+
0.89,
|
| 217 |
+
0.418,
|
| 218 |
+
0.901
|
| 219 |
+
],
|
| 220 |
+
"angle": 0,
|
| 221 |
+
"content": "https://github.com/fgnt/sed Scores_eval"
|
| 222 |
+
},
|
| 223 |
+
{
|
| 224 |
+
"type": "page_footnote",
|
| 225 |
+
"bbox": [
|
| 226 |
+
0.103,
|
| 227 |
+
0.902,
|
| 228 |
+
0.425,
|
| 229 |
+
0.913
|
| 230 |
+
],
|
| 231 |
+
"angle": 0,
|
| 232 |
+
"content": "<sup>2</sup>http://dcase.community/events#challenges"
|
| 233 |
+
},
|
| 234 |
+
{
|
| 235 |
+
"type": "list",
|
| 236 |
+
"bbox": [
|
| 237 |
+
0.084,
|
| 238 |
+
0.865,
|
| 239 |
+
0.488,
|
| 240 |
+
0.913
|
| 241 |
+
],
|
| 242 |
+
"angle": 0,
|
| 243 |
+
"content": null
|
| 244 |
+
},
|
| 245 |
+
{
|
| 246 |
+
"type": "page_footnote",
|
| 247 |
+
"bbox": [
|
| 248 |
+
0.51,
|
| 249 |
+
0.888,
|
| 250 |
+
0.915,
|
| 251 |
+
0.914
|
| 252 |
+
],
|
| 253 |
+
"angle": 0,
|
| 254 |
+
"content": "3By single instance evaluation we refer to an evaluation where each classified instance is evaluated with its own target."
|
| 255 |
+
},
|
| 256 |
+
{
|
| 257 |
+
"type": "aside_text",
|
| 258 |
+
"bbox": [
|
| 259 |
+
0.023,
|
| 260 |
+
0.262,
|
| 261 |
+
0.058,
|
| 262 |
+
0.721
|
| 263 |
+
],
|
| 264 |
+
"angle": 270,
|
| 265 |
+
"content": "arXiv:2201.13148v1 [eess.AS] 31 Jan 2022"
|
| 266 |
+
}
|
| 267 |
+
],
|
| 268 |
+
[
|
| 269 |
+
{
|
| 270 |
+
"type": "title",
|
| 271 |
+
"bbox": [
|
| 272 |
+
0.126,
|
| 273 |
+
0.092,
|
| 274 |
+
0.447,
|
| 275 |
+
0.106
|
| 276 |
+
],
|
| 277 |
+
"angle": 0,
|
| 278 |
+
"content": "2. SOUND EVENT DETECTION EVALUATION"
|
| 279 |
+
},
|
| 280 |
+
{
|
| 281 |
+
"type": "text",
|
| 282 |
+
"bbox": [
|
| 283 |
+
0.082,
|
| 284 |
+
0.118,
|
| 285 |
+
0.488,
|
| 286 |
+
0.21
|
| 287 |
+
],
|
| 288 |
+
"angle": 0,
|
| 289 |
+
"content": "SED [1, 5] can be seen as a multi-label classification problem, where the system performs classifications at multiple points in time which usually happens in a frame-based manner. When a classification score \\( y_{t} \\) exceeds a certain decision threshold it is marked as positive. Connected positive classifications are merged into a detected event \\( (\\hat{t}_{\\mathrm{on},i},\\hat{t}_{\\mathrm{off},i},\\hat{c}_i) \\) with \\( \\hat{t}_{\\mathrm{on},i},\\hat{t}_{\\mathrm{off},i},\\hat{c}_i \\) being the onset time, offset time and class label, respectively, of the \\( i \\)-th detection."
|
| 290 |
+
},
|
| 291 |
+
{
|
| 292 |
+
"type": "text",
|
| 293 |
+
"bbox": [
|
| 294 |
+
0.082,
|
| 295 |
+
0.211,
|
| 296 |
+
0.487,
|
| 297 |
+
0.381
|
| 298 |
+
],
|
| 299 |
+
"angle": 0,
|
| 300 |
+
"content": "As in other classification tasks the evaluation is based on true positive (TP), false positive (FP) and false negative (FN) counts. The TPs count \\(N_{\\mathrm{TP}}\\) represents the number of ground truth events that have been detected by the system. The FPs count \\(N_{\\mathrm{FP}}\\) sums up the number of detections which do not match a ground truth event. Hence, the total number of detected events is given as \\(N_{\\mathrm{DP}} = N_{\\mathrm{TP}} + N_{\\mathrm{FP}}\\). The FNs count \\(N_{\\mathrm{FN}}\\), which is the number of ground truth events missed by the system, is given as \\(N_{\\mathrm{FN}} = N_{\\mathrm{GP}} - N_{\\mathrm{TP}}\\) with \\(N_{\\mathrm{GP}}\\) being the total number of ground truth events. From these intermediate statistics higher level measures can be derived such as the precision \\(P = N_{\\mathrm{TP}} / N_{\\mathrm{DP}}\\), the recall (TP-Rates (TPRs)) \\(R = N_{\\mathrm{TP}} / N_{\\mathrm{GP}}\\) and FP-Rate (FPR) \\(\\mathrm{FPR} = N_{\\mathrm{FP}} / N_{\\mathrm{GN}}\\), where \\(N_{\\mathrm{GN}}\\) is the total number of ground truth negative instances in the evaluation data set."
|
| 301 |
+
},
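A minimal sketch of the derived measures just defined; the convention that precision is 1 when there are no detections is an assumption, not part of the definitions above.

```python
def rates(n_tp, n_fp, n_gp, n_gn):
    """Precision, recall (TPR) and FPR from the intermediate statistics."""
    n_dp = n_tp + n_fp                       # total number of detections
    precision = n_tp / n_dp if n_dp else 1.0  # assumed convention for n_dp == 0
    recall = n_tp / n_gp                      # TPR
    fpr = n_fp / n_gn
    return precision, recall, fpr
```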
|
| 302 |
+
{
|
| 303 |
+
"type": "text",
|
| 304 |
+
"bbox": [
|
| 305 |
+
0.082,
|
| 306 |
+
0.382,
|
| 307 |
+
0.487,
|
| 308 |
+
0.475
|
| 309 |
+
],
|
| 310 |
+
"angle": 0,
|
| 311 |
+
"content": "Compared to single instance evaluation<sup>3</sup>, it is less obvious in SED when to classify a ground truth event as detected, i.e. TP, and when to consider a detection as FP, due to the temporal extent of the target events over multiple classification scores/frames. Currently there exist three conceptually different ways for this, which are segment-based, collar-based (event-based) and intersection-based [12, 14, 13, 15]."
|
| 312 |
+
},
|
| 313 |
+
{
|
| 314 |
+
"type": "title",
|
| 315 |
+
"bbox": [
|
| 316 |
+
0.084,
|
| 317 |
+
0.493,
|
| 318 |
+
0.216,
|
| 319 |
+
0.506
|
| 320 |
+
],
|
| 321 |
+
"angle": 0,
|
| 322 |
+
"content": "2.1. Segment-based"
|
| 323 |
+
},
|
| 324 |
+
{
|
| 325 |
+
"type": "text",
|
| 326 |
+
"bbox": [
|
| 327 |
+
0.082,
|
| 328 |
+
0.51,
|
| 329 |
+
0.488,
|
| 330 |
+
0.617
|
| 331 |
+
],
|
| 332 |
+
"angle": 0,
|
| 333 |
+
"content": "In segment-based evaluation [12, 14], classifications and targets are defined in fixed length segments (1 s segments is a popular choice). Classifications and targets are considered positive if they are detected/labeled anywhere in the segment. This way evaluation can be treated as a single instance evaluation. However, segment-based evaluation overemphasizes the contribution of longer events which expand over multiple segments and it does not evaluate the system's capability of providing meaningful uninterrupted detections."
|
| 334 |
+
},
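A toy sketch of the segmentation step just described, marking a fixed-length segment positive if any part of a ground truth event falls inside it; function name and the example event are illustrative assumptions.

```python
import numpy as np

def segment_targets(event_list, signal_duration, seg_len=1.0):
    """Binary segment targets: positive if any event overlaps the segment."""
    n_seg = int(np.ceil(signal_duration / seg_len))
    targets = np.zeros(n_seg, dtype=bool)
    for t_on, t_off in event_list:
        first = int(t_on // seg_len)
        last = int(np.ceil(t_off / seg_len))
        targets[first:last] = True
    return targets

print(segment_targets([(0.4, 2.3)], 5.0))  # -> [ True  True  True False False]
```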
|
| 335 |
+
{
|
| 336 |
+
"type": "title",
|
| 337 |
+
"bbox": [
|
| 338 |
+
0.084,
|
| 339 |
+
0.635,
|
| 340 |
+
0.201,
|
| 341 |
+
0.647
|
| 342 |
+
],
|
| 343 |
+
"angle": 0,
|
| 344 |
+
"content": "2.2. Collar-based"
|
| 345 |
+
},
|
| 346 |
+
{
|
| 347 |
+
"type": "text",
|
| 348 |
+
"bbox": [
|
| 349 |
+
0.083,
|
| 350 |
+
0.652,
|
| 351 |
+
0.488,
|
| 352 |
+
0.744
|
| 353 |
+
],
|
| 354 |
+
"angle": 0,
|
| 355 |
+
"content": "Collar-based, a.k.a. event-based, evaluation [12, 14] compares detections \\((\\hat{t}_{\\mathrm{on},i},\\hat{t}_{\\mathrm{off},i},\\hat{c}_i)\\) with ground truth events \\((t_{\\mathrm{on},j},t_{\\mathrm{off},j},c_j)\\) directly. Only if there is a matching event pair \\((i,j)\\) with \\(c_{j} = \\hat{c}_{i}\\), \\(|\\hat{t}_{\\mathrm{on},i} - t_{\\mathrm{on},j}|\\leq d\\) and \\(|\\hat{t}_{\\mathrm{off},i} - t_{\\mathrm{off},j}|\\leq d_{\\mathrm{off},j}\\), a TP is achieved. Other detections are counted as FPs. The offset collar \\(d_{\\mathrm{off},j} = \\max (d,rT_j)\\) usually depends on the length \\(T_{j}\\) of the ground truth event. Common choices are \\(d = 200~\\mathrm{ms}\\) and \\(r = 0.2\\)"
|
| 356 |
+
},
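The matching criterion above translates directly into a small predicate; this is a sketch with hypothetical naming, not an official implementation.

```python
def collar_match(det, gt, d=0.2, r=0.2):
    """True if detection (t_on, t_off, c) matches a ground truth event
    within the onset collar d and the length-dependent offset collar."""
    (t_on_hat, t_off_hat, c_hat), (t_on, t_off, c) = det, gt
    d_off = max(d, r * (t_off - t_on))  # offset collar depends on event length
    return (c_hat == c
            and abs(t_on_hat - t_on) <= d
            and abs(t_off_hat - t_off) <= d_off)
```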
|
| 357 |
+
{
|
| 358 |
+
"type": "text",
|
| 359 |
+
"bbox": [
|
| 360 |
+
0.083,
|
| 361 |
+
0.745,
|
| 362 |
+
0.488,
|
| 363 |
+
0.837
|
| 364 |
+
],
|
| 365 |
+
"angle": 0,
|
| 366 |
+
"content": "With collar-based evaluation, each ground truth event has equal contribution to the overall performance and systems can only achieve good performance if events are detected as single connected detections. This, however, introduces sensitivity to ambiguities in the annotation. If, e.g., an annotator labeled multiple dog barks as a single event but a system detects each bark as a separate event, this results in multiple FPs and one FN."
|
| 367 |
+
},
|
| 368 |
+
{
|
| 369 |
+
"type": "title",
|
| 370 |
+
"bbox": [
|
| 371 |
+
0.084,
|
| 372 |
+
0.856,
|
| 373 |
+
0.236,
|
| 374 |
+
0.868
|
| 375 |
+
],
|
| 376 |
+
"angle": 0,
|
| 377 |
+
"content": "2.3. Intersection-based"
|
| 378 |
+
},
|
| 379 |
+
{
|
| 380 |
+
"type": "text",
|
| 381 |
+
"bbox": [
|
| 382 |
+
0.083,
|
| 383 |
+
0.873,
|
| 384 |
+
0.488,
|
| 385 |
+
0.914
|
| 386 |
+
],
|
| 387 |
+
"angle": 0,
|
| 388 |
+
"content": "Intersection-based evaluation [13, 15] determines the number of TPs and FPs based on intersections between detections and ground truth events. A detection tolerance criterion (DTC) classifies detections as"
|
| 389 |
+
},
|
| 390 |
+
{
|
| 391 |
+
"type": "image",
|
| 392 |
+
"bbox": [
|
| 393 |
+
0.527,
|
| 394 |
+
0.088,
|
| 395 |
+
0.892,
|
| 396 |
+
0.213
|
| 397 |
+
],
|
| 398 |
+
"angle": 0,
|
| 399 |
+
"content": null
|
| 400 |
+
},
|
| 401 |
+
{
|
| 402 |
+
"type": "image_caption",
|
| 403 |
+
"bbox": [
|
| 404 |
+
0.509,
|
| 405 |
+
0.223,
|
| 406 |
+
0.915,
|
| 407 |
+
0.25
|
| 408 |
+
],
|
| 409 |
+
"angle": 0,
|
| 410 |
+
"content": "Fig. 1. Illustration of the joint computation of intermediate statistics with single instance evaluation."
|
| 411 |
+
},
|
| 412 |
+
{
|
| 413 |
+
"type": "text",
|
| 414 |
+
"bbox": [
|
| 415 |
+
0.509,
|
| 416 |
+
0.267,
|
| 417 |
+
0.915,
|
| 418 |
+
0.359
|
| 419 |
+
],
|
| 420 |
+
"angle": 0,
|
| 421 |
+
"content": "FP if its intersection with ground truth events of the same event class, normalized by the length of the detected event, falls below a certain DTC ratio \\(\\rho_{\\mathrm{DTC}}\\). Else, it is considered relevant, which, however, does not necessarily mean TP. A ground truth event is only classified TP if its intersection with relevant same class detections, normalized by the length of the ground truth event, is greater or equal to a ground truth intersection criterion (GTC) ratio \\(\\rho_{\\mathrm{GTC}}\\)."
|
| 422 |
+
},
|
| 423 |
+
{
|
| 424 |
+
"type": "text",
|
| 425 |
+
"bbox": [
|
| 426 |
+
0.508,
|
| 427 |
+
0.359,
|
| 428 |
+
0.915,
|
| 429 |
+
0.491
|
| 430 |
+
],
|
| 431 |
+
"angle": 0,
|
| 432 |
+
"content": "Bilen et al. [13] further introduced cross triggers (CTs) which are FP detections matching events from another event class and, thus, may impair user experience more than standalone FPs. Note that, although the concept of CTs has been proposed in conjunction with intersection-based evaluation, it is not restricted to it and could also be transferred to segment-based and collar-based evaluations. In intersection-based evaluation the cross trigger tolerance criterion (CTTC) counts a CT between a detected event class \\(\\hat{c}_i\\) and another event class \\(c\\) with \\(c \\neq \\hat{c}_i\\) if the detection intersects with ground truth events of class \\(c\\) by at least \\(\\rho_{\\mathrm{CTTC}}\\)."
|
| 433 |
+
},
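The DTC and GTC can be sketched as two predicates over interval intersections (the CTTC is analogous, with other-class events in place of same-class ones). This simplified sketch assumes all events belong to the class under evaluation; helper names are hypothetical.

```python
def overlap(a_on, a_off, b_on, b_off):
    """Length of the intersection of two intervals."""
    return max(0.0, min(a_off, b_off) - max(a_on, b_on))

def dtc_ok(det, gt_events, rho_dtc=0.5):
    """Detection tolerance criterion: intersection with same-class ground
    truth, normalized by the detection's length, must reach rho_dtc."""
    t_on, t_off = det
    inter = sum(overlap(t_on, t_off, g_on, g_off) for g_on, g_off in gt_events)
    return inter / (t_off - t_on) >= rho_dtc

def gtc_ok(gt, relevant_dets, rho_gtc=0.5):
    """Ground truth intersection criterion: intersection with relevant
    detections, normalized by the ground truth event's length."""
    g_on, g_off = gt
    inter = sum(overlap(g_on, g_off, t_on, t_off) for t_on, t_off in relevant_dets)
    return inter / (g_off - g_on) >= rho_gtc
```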
|
| 434 |
+
{
|
| 435 |
+
"type": "title",
|
| 436 |
+
"bbox": [
|
| 437 |
+
0.548,
|
| 438 |
+
0.503,
|
| 439 |
+
0.877,
|
| 440 |
+
0.515
|
| 441 |
+
],
|
| 442 |
+
"angle": 0,
|
| 443 |
+
"content": "3. THRESHOLD-INDEPENDENT EVALUATION"
|
| 444 |
+
},
|
| 445 |
+
{
|
| 446 |
+
"type": "text",
|
| 447 |
+
"bbox": [
|
| 448 |
+
0.508,
|
| 449 |
+
0.521,
|
| 450 |
+
0.915,
|
| 451 |
+
0.601
|
| 452 |
+
],
|
| 453 |
+
"angle": 0,
|
| 454 |
+
"content": "The computation of above intermediate statistics, such as the TP count, depend on the decision threshold that is applied to the classifier's output scores. Consequently, metrics such as \\( F_{1} \\)-scores and error-rates only evaluate a single threshold. A more complete picture of the classifier's performance, however, can be obtained when evaluating system performance for all possible thresholds."
|
| 455 |
+
},
|
| 456 |
+
{
|
| 457 |
+
"type": "title",
|
| 458 |
+
"bbox": [
|
| 459 |
+
0.51,
|
| 460 |
+
0.619,
|
| 461 |
+
0.715,
|
| 462 |
+
0.632
|
| 463 |
+
],
|
| 464 |
+
"angle": 0,
|
| 465 |
+
"content": "3.1. Single Instance Evaluation"
|
| 466 |
+
},
|
| 467 |
+
{
|
| 468 |
+
"type": "text",
|
| 469 |
+
"bbox": [
|
| 470 |
+
0.508,
|
| 471 |
+
0.636,
|
| 472 |
+
0.914,
|
| 473 |
+
0.755
|
| 474 |
+
],
|
| 475 |
+
"angle": 0,
|
| 476 |
+
"content": "In single instance evaluation<sup>3</sup>, the PR and ROC curves [16, 14] are frequently used to evaluate overall system behavior independently from a certain operating point. As the name suggests, the PR curve plots precisions over corresponding recall values which result from arbitrary decision thresholds. The ROC curve instead plots the recalls over corresponding FPRs. Frequently used metrics for system comparison are the area under the PR curve, a.k.a. average precision (AP), and the area under the ROC curve, which is often simply referred to as area under curve (AUC)."
|
| 477 |
+
},
|
| 478 |
+
{
|
| 479 |
+
"type": "text",
|
| 480 |
+
"bbox": [
|
| 481 |
+
0.508,
|
| 482 |
+
0.755,
|
| 483 |
+
0.915,
|
| 484 |
+
0.887
|
| 485 |
+
],
|
| 486 |
+
"angle": 0,
|
| 487 |
+
"content": "Rather than making decisions and evaluating performance separately for a set of arbitrary thresholds, performance can be evaluated for all thresholds jointly by implementing a sorting of classification scores \\( y \\) together with some predefined deltas, as it is done, e.g., in the scikit-learn toolkit [17]. Here, deltas mean changes in the intermediate statistics, such as the number of TPs, when the decision threshold moves from above an instance's classification score to below of it, i.e., when the instance moves from being classified negative to being classified positive. Then absolute values can be obtained by simply performing a cumulative sum of the deltas."
|
| 488 |
+
},
|
| 489 |
+
{
|
| 490 |
+
"type": "text",
|
| 491 |
+
"bbox": [
|
| 492 |
+
0.509,
|
| 493 |
+
0.887,
|
| 494 |
+
0.914,
|
| 495 |
+
0.914
|
| 496 |
+
],
|
| 497 |
+
"angle": 0,
|
| 498 |
+
"content": "This approach is illustrated in Fig. 1 for an exemplary data set with six instances. \\(\\Delta N_{\\mathrm{TP}}\\) means the change in the TP count which,"
|
| 499 |
+
}
|
| 500 |
+
],
|
| 501 |
+
[
|
| 502 |
+
{
|
| 503 |
+
"type": "image",
|
| 504 |
+
"bbox": [
|
| 505 |
+
0.144,
|
| 506 |
+
0.085,
|
| 507 |
+
0.422,
|
| 508 |
+
0.189
|
| 509 |
+
],
|
| 510 |
+
"angle": 0,
|
| 511 |
+
"content": null
|
| 512 |
+
},
|
| 513 |
+
{
|
| 514 |
+
"type": "image_caption",
|
| 515 |
+
"bbox": [
|
| 516 |
+
0.178,
|
| 517 |
+
0.195,
|
| 518 |
+
0.395,
|
| 519 |
+
0.209
|
| 520 |
+
],
|
| 521 |
+
"angle": 0,
|
| 522 |
+
"content": "Fig. 2. Collar-based deltas example."
|
| 523 |
+
},
|
| 524 |
+
{
|
| 525 |
+
"type": "text",
|
| 526 |
+
"bbox": [
|
| 527 |
+
0.083,
|
| 528 |
+
0.224,
|
| 529 |
+
0.488,
|
| 530 |
+
0.329
|
| 531 |
+
],
|
| 532 |
+
"angle": 0,
|
| 533 |
+
"content": "for single instance evaluation, is simply the binary target of the instance. This is because, upon positive classification, the TP count only increases by one when the instance is labeled positive. \\(\\Delta N_{\\mathrm{DP}}\\) represents the change in the total number of system detections. Here \\(\\Delta N_{\\mathrm{DP}}\\) is always one as there is always one instance more being classified positive when the threshold falls below its classification score. The precisions \\(P = N_{\\mathrm{TP}} / N_{\\mathrm{DP}}\\) can, e.g., now be read off for all decision thresholds in the third table containing the absolute values."
|
| 534 |
+
},
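The sort-plus-cumulative-sum scheme just described can be sketched compactly; the six scores and targets below are hypothetical stand-ins for the Fig. 1 example, and tied scores are not collapsed as a full implementation would do.

```python
import numpy as np

def pr_curve(scores, targets):
    """Jointly evaluate all decision thresholds: sort scores descending,
    attach deltas (dN_TP = target, dN_DP = 1) and cumulatively sum them."""
    order = np.argsort(scores)[::-1]      # thresholds from high to low
    thresholds = scores[order]
    n_tp = np.cumsum(targets[order])      # cumulative sum of the deltas
    n_dp = np.arange(1, len(scores) + 1)  # one new detection per score
    precision = n_tp / n_dp
    recall = n_tp / targets.sum()
    return precision, recall, thresholds

scores = np.array([0.9, 0.8, 0.7, 0.6, 0.5, 0.4])
targets = np.array([1, 0, 1, 1, 0, 0])
print(pr_curve(scores, targets))
```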
|
| 535 |
+
{
|
| 536 |
+
"type": "title",
|
| 537 |
+
"bbox": [
|
| 538 |
+
0.084,
|
| 539 |
+
0.346,
|
| 540 |
+
0.186,
|
| 541 |
+
0.358
|
| 542 |
+
],
|
| 543 |
+
"angle": 0,
|
| 544 |
+
"content": "3.2.PSD-ROC"
|
| 545 |
+
},
|
| 546 |
+
{
|
| 547 |
+
"type": "text",
|
| 548 |
+
"bbox": [
|
| 549 |
+
0.083,
|
| 550 |
+
0.363,
|
| 551 |
+
0.488,
|
| 552 |
+
0.456
|
| 553 |
+
],
|
| 554 |
+
"angle": 0,
|
| 555 |
+
"content": "To the best of our knowledge, the PSD-ROC curve proposed in [13] is currently the only threshold-independent evaluation of SED systems. It first computes, for all event classes \\( c \\), intersection-based ROC curves \\( \\mathrm{ROC}_c(\\mathrm{eFPR}) \\) which are monotonically increasing curves plotting TPR over effective FPR (eFPR), where the reader is referred to Bilen et al. [13] for further details about its computation. The final PSD-ROC summarizes the classwise ROC curves as"
|
| 556 |
+
},
|
| 557 |
+
{
|
| 558 |
+
"type": "equation",
|
| 559 |
+
"bbox": [
|
| 560 |
+
0.12,
|
| 561 |
+
0.466,
|
| 562 |
+
0.488,
|
| 563 |
+
0.48
|
| 564 |
+
],
|
| 565 |
+
"angle": 0,
|
| 566 |
+
"content": "\\[\n\\text {P S D - R O C} (\\mathrm {e F P R}) = \\mu_ {\\mathrm {T P R}} (\\mathrm {e F P R}) - \\alpha_ {\\mathrm {S T}} \\cdot \\sigma_ {\\mathrm {T P R}} (\\mathrm {e F P R}), \\tag {1}\n\\]"
|
| 567 |
+
},
|
| 568 |
+
{
|
| 569 |
+
"type": "text",
|
| 570 |
+
"bbox": [
|
| 571 |
+
0.083,
|
| 572 |
+
0.49,
|
| 573 |
+
0.488,
|
| 574 |
+
0.555
|
| 575 |
+
],
|
| 576 |
+
"angle": 0,
|
| 577 |
+
"content": "with \\(\\mu_{\\mathrm{TPR}}(\\mathrm{eFPR})\\) and \\(\\sigma_{\\mathrm{TPR}}(\\mathrm{eFPR})\\) being the mean and standard deviation over the classwise ROC curves at a certain eFPR, and where \\(\\alpha_{\\mathrm{ST}}\\) is a parameter penalizing instability across classes. The PSDS is the normalized area under the PSD-ROC curve up to a maximal \\(\\mathrm{eFPR}_{\\mathrm{max}}\\)."
|
| 578 |
+
},
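Eq. (1) and the PSDS can be sketched as follows from given classwise ROC curves. Interpolating the classwise curves linearly onto a common eFPR grid and clipping negative PSD-ROC values are assumptions of this sketch, not necessarily what the official implementation does.

```python
import numpy as np

def psd_roc_and_psds(classwise_rocs, alpha_st=1.0, efpr_max=100.0, n_grid=1000):
    """classwise_rocs: list of (efpr, tpr) array pairs, one per event class,
    each monotonically increasing in eFPR."""
    grid = np.linspace(0.0, efpr_max, n_grid)
    # Interpolate each classwise curve onto the common eFPR grid.
    tprs = np.stack([np.interp(grid, e, t, left=0.0) for e, t in classwise_rocs])
    psd_roc = tprs.mean(axis=0) - alpha_st * tprs.std(axis=0)  # Eq. (1)
    # PSDS: area under the (clipped) PSD-ROC up to eFPR_max, normalized so
    # that a perfect system with TPR = 1 everywhere scores 1.
    y = np.clip(psd_roc, 0.0, None)
    area = np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(grid))  # trapezoid rule
    return psd_roc, area / efpr_max
```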
|
| 579 |
+
{
|
| 580 |
+
"type": "text",
|
| 581 |
+
"bbox": [
|
| 582 |
+
0.083,
|
| 583 |
+
0.556,
|
| 584 |
+
0.488,
|
| 585 |
+
0.753
|
| 586 |
+
],
|
| 587 |
+
"angle": 0,
|
| 588 |
+
"content": "Note that the number of thresholds, which may result in a different TPR-eFPR value pair, is as high as the number of classification scores in the data set. With a system outputting scores at a rate of \\(50\\mathrm{Hz}\\) and a rather small evaluation set of, e.g., only \\(1\\mathrm{h}\\), this would be \\(180\\mathrm{k}\\) thresholds to be evaluated for each event class. Evaluating system performance for each of the thresholds separately is not feasible for obvious reasons. Therefore, due to a lack of an efficient joint computation of intersection-based TPR-eFPR value pairs for all thresholds, the PSD-ROC curve is commonly approximated with a reduced set of thresholds. For instance, the DCASE 2021 Challenge Task 4 [11] employed PSDSs using 50 linearly spaced thresholds. The approximation of PSD-ROC curves, however, can lead to a significant underestimation of the PSDS as we will demonstrate in Sec. 5. Non-linearly spaced thresholds could alleviate this to some extent, which, however, remains arbitrary and ad-hoc."
|
| 589 |
+
},
|
| 590 |
+
{
|
| 591 |
+
"type": "title",
|
| 592 |
+
"bbox": [
|
| 593 |
+
0.111,
|
| 594 |
+
0.763,
|
| 595 |
+
0.462,
|
| 596 |
+
0.789
|
| 597 |
+
],
|
| 598 |
+
"angle": 0,
|
| 599 |
+
"content": "4. EFFICIENT COMPUTATION OF COLLAR- AND INTERSECTION-BASED CURVES"
|
| 600 |
+
},
|
| 601 |
+
{
|
| 602 |
+
"type": "text",
|
| 603 |
+
"bbox": [
|
| 604 |
+
0.083,
|
| 605 |
+
0.794,
|
| 606 |
+
0.488,
|
| 607 |
+
0.913
|
| 608 |
+
],
|
| 609 |
+
"angle": 0,
|
| 610 |
+
"content": "In this section we present how collar-based and intersection-based intermediate statistics can be efficiently computed jointly for all possible decision thresholds. For this we follow the same approach used for the computation of single instance evaluation curves which we described in Sec. 3.1. We aim to bring all classification scores into a sorted list together with the deltas of the intermediate statistics, which appear when the decision threshold falls below the classification score. Then we are able to obtain absolute values for all operating points by a simple cumulative sum over the deltas."
|
| 611 |
+
},
|
| 612 |
+
{
|
| 613 |
+
"type": "image",
|
| 614 |
+
"bbox": [
|
| 615 |
+
0.551,
|
| 616 |
+
0.085,
|
| 617 |
+
0.867,
|
| 618 |
+
0.209
|
| 619 |
+
],
|
| 620 |
+
"angle": 0,
|
| 621 |
+
"content": null
|
| 622 |
+
},
|
| 623 |
+
{
|
| 624 |
+
"type": "image_caption",
|
| 625 |
+
"bbox": [
|
| 626 |
+
0.587,
|
| 627 |
+
0.215,
|
| 628 |
+
0.836,
|
| 629 |
+
0.229
|
| 630 |
+
],
|
| 631 |
+
"angle": 0,
|
| 632 |
+
"content": "Fig. 3. Intersection-based deltas example."
|
| 633 |
+
},
|
| 634 |
+
{
|
| 635 |
+
"type": "text",
|
| 636 |
+
"bbox": [
|
| 637 |
+
0.508,
|
| 638 |
+
0.249,
|
| 639 |
+
0.914,
|
| 640 |
+
0.328
|
| 641 |
+
],
|
| 642 |
+
"angle": 0,
|
| 643 |
+
"content": "With collar-based and intersection-based evaluation, however, the computation of the deltas becomes more challenging compared to single instance evaluation, as here all scores of an audio signal have to be considered jointly and cannot be obtained instance-wise. The basic principle of the definition of the deltas is illustrated in Fig. 2 and Fig. 3."
|
| 644 |
+
},
|
| 645 |
+
{
|
| 646 |
+
"type": "text",
|
| 647 |
+
"bbox": [
|
| 648 |
+
0.508,
|
| 649 |
+
0.33,
|
| 650 |
+
0.915,
|
| 651 |
+
0.567
|
| 652 |
+
],
|
| 653 |
+
"angle": 0,
|
| 654 |
+
"content": "In Fig. 2 collar-based evaluation is considered. For simplicity, we here assume scores/frames to have a width of \\(1\\mathrm{s}\\), that target event boundaries lie exactly between two scores/frames and the on-/offset collars to be \\(1\\mathrm{s}\\). Starting from a decision threshold above 0.7, no event would be detected as no score lies above the threshold. When the decision threshold falls below 0.7, a detection is spawned from second 4 to 5 as the 5th score lies above the threshold. However, the distances between the detected and the true onsets and offsets are \\(2\\mathrm{s}\\) for both, therefore not matching the collar. Hence, the newly spawned detection is a FP and we have \\(\\Delta N_{\\mathrm{FP}} = +1\\). When the threshold falls below 0.6, however, the detection expands from second 3 to 6 and the FP disappears (\\(\\Delta N_{\\mathrm{FP}} = -1\\)) and becomes a TP detection (\\(\\Delta N_{\\mathrm{TP}} = +1\\)). When the decision threshold falls below 0.5 and below 0.4, nothing changes as the collars are still matched and the detection remains TP (\\(\\Delta N_{\\mathrm{TP}} = \\Delta N_{\\mathrm{FP}} = 0\\)). Finally, when the decision threshold falls below 0.3, the detection expands from 0 s to 9 s and the detected on-/offsets exceed the collar, and the TP disappears (\\(\\Delta N_{\\mathrm{TP}} = -1\\)) and becomes a FP again (\\(\\Delta N_{\\mathrm{FP}} = +1\\))."
|
| 655 |
+
},
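The collar-based deltas of this walkthrough can be reproduced by a brute-force sweep over the unique scores. Note this exhaustive sweep is only an illustration for tiny toy examples, not the paper's efficient method; the score values and the ground truth event (2 s to 7 s) are a reconstruction consistent with the walkthrough, since the figure itself is not shown here.

```python
import numpy as np

def collar_deltas(scores, gt_events, collar=1.0, frame_len=1.0):
    """Print dTP/dFP each time the threshold falls below a unique score."""
    def counts(thr):
        pos = np.append(scores > thr, False)
        dets, on = [], None
        for i, p in enumerate(pos):  # merge connected positives
            if p and on is None:
                on = i * frame_len
            elif not p and on is not None:
                dets.append((on, i * frame_len))
                on = None
        tp = sum(
            any(abs(t_on - g_on) <= collar and abs(t_off - g_off) <= collar
                for g_on, g_off in gt_events)
            for t_on, t_off in dets
        )
        return tp, len(dets) - tp

    prev = (0, 0)
    for s in sorted(set(scores), reverse=True):
        cur = counts(s - 1e-9)  # threshold "falls below" this score
        print(f"thr<{s}: dTP={cur[0] - prev[0]:+d}, dFP={cur[1] - prev[1]:+d}")
        prev = cur

# hypothetical 1 s frames following the walkthrough, ground truth (2 s, 7 s):
collar_deltas(np.array([.3, .3, .4, .6, .7, .6, .5, .3, .3]),
              gt_events=[(2.0, 7.0)])
```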
|
| 656 |
+
{
|
| 657 |
+
"type": "text",
|
| 658 |
+
"bbox": [
|
| 659 |
+
0.509,
|
| 660 |
+
0.569,
|
| 661 |
+
0.915,
|
| 662 |
+
0.844
|
| 663 |
+
],
|
| 664 |
+
"angle": 0,
|
| 665 |
+
"content": "A slightly more advanced example is shown in Fig. 3, where we consider intersection-based evaluation including CTs. We assume \\(\\rho_{\\mathrm{DTC}} = \\rho_{\\mathrm{GTC}} = \\rho_{\\mathrm{CTTC}} = 0.5\\) and that again all event boundaries lie exactly between two scores/frames. When the decision threshold falls below 0.8 here, a detection is spawned from 6 s to 7 s which does not overlap with the target event at all, giving us \\(\\Delta N_{\\mathrm{FP}} = +1\\). Further, the detected event completely lies within the ground truth event from another class (in red), giving us \\(\\Delta N_{\\mathrm{CT}} = +1\\). When the threshold falls below 0.7, the detection's overlap with the target event is still only \\(1 / 3 < \\rho_{\\mathrm{DTC}}\\). This is still a FP and therefore \\(\\Delta N_{\\mathrm{FP}} = 0\\). The overlap with the other class event is \\(2 / 3 \\geq \\rho_{\\mathrm{CTTC}}\\). Therefore there is still a CT, with \\(\\Delta N_{\\mathrm{CT}} = 0\\). When the threshold falls below 0.6, the detection's overlap with both the target event and the other class event is \\(2 / 5 < \\rho_{\\mathrm{DTC}} = \\rho_{\\mathrm{CTTC}}\\). The detection is still FP (\\(\\Delta N_{\\mathrm{FP}} = 0\\)), but not a CT anymore (\\(\\Delta N_{\\mathrm{CT}} = -1\\)). When the threshold falls below 0.5 the overlap with the target event becomes \\(1 / 2 = \\rho_{\\mathrm{DTC}}\\). The FP disappears (\\(\\Delta N_{\\mathrm{FP}} = -1\\)) and becomes a TP (\\(\\Delta N_{\\mathrm{TP}} = +1\\)). This remains unchanged until the decision threshold falls below 0.3, where the overlap with the ground truth event becomes only \\(4 / 9 < \\rho_{\\mathrm{DTC}}\\). This is a FP again (but not a CT) with \\(\\Delta N_{\\mathrm{TP}} = -1\\) and \\(\\Delta N_{\\mathrm{FP}} = +1\\)."
|
| 666 |
+
},
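The same brute-force illustration carries over to the intersection-based deltas of this walkthrough, including the cross trigger count. The toy scores, target event (1 s to 5 s) and other-class event (5 s to 7 s) are reconstructed from the intersection ratios quoted in the text, and the TP logic is simplified to the single-detection, single-event case.

```python
import numpy as np

def inter(a, b):
    return max(0.0, min(a[1], b[1]) - max(a[0], b[0]))

def intersection_deltas(scores, target, other, rho=0.5, frame_len=1.0):
    """Sweep over unique scores; print dTP/dFP/dCT (rho_DTC=rho_GTC=rho_CTTC)."""
    def counts(thr):
        pos = np.append(scores > thr, False)
        dets, on = [], None
        for i, p in enumerate(pos):  # merge connected positives
            if p and on is None:
                on = i * frame_len
            elif not p and on is not None:
                dets.append((on, i * frame_len))
                on = None
        tp = fp = ct = 0
        for d in dets:
            length = d[1] - d[0]
            if inter(d, target) / length >= rho:  # DTC
                if inter(d, target) / (target[1] - target[0]) >= rho:  # GTC
                    tp += 1
            else:
                fp += 1
            if inter(d, other) / length >= rho:  # CTTC
                ct += 1
        return tp, fp, ct

    prev = (0, 0, 0)
    for s in sorted(set(scores), reverse=True):
        cur = counts(s - 1e-9)
        print(f"thr<{s}: dTP={cur[0] - prev[0]:+d}, "
              f"dFP={cur[1] - prev[1]:+d}, dCT={cur[2] - prev[2]:+d}")
        prev = cur

intersection_deltas(np.array([.3, .3, .5, .6, .7, .7, .8, .6, .3]),
                    target=(1.0, 5.0), other=(5.0, 7.0))
```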
|
| 667 |
+
{
|
| 668 |
+
"type": "text",
|
| 669 |
+
"bbox": [
|
| 670 |
+
0.508,
|
| 671 |
+
0.847,
|
| 672 |
+
0.914,
|
| 673 |
+
0.913
|
| 674 |
+
],
|
| 675 |
+
"angle": 0,
|
| 676 |
+
"content": "The proposed approach allows for efficient and accurate computation of collar-based and intersection-based PR and ROC curves, which not only enables us to compute threshold-independent metrics such as AP and PSDS precisely, but it also allows us to find the threshold which best suits specific application requirements."
|
| 677 |
+
}
|
| 678 |
+
],
|
| 679 |
+
[
|
| 680 |
+
{
|
| 681 |
+
"type": "image",
|
| 682 |
+
"bbox": [
|
| 683 |
+
0.089,
|
| 684 |
+
0.09,
|
| 685 |
+
0.372,
|
| 686 |
+
0.202
|
| 687 |
+
],
|
| 688 |
+
"angle": 0,
|
| 689 |
+
"content": null
|
| 690 |
+
},
|
| 691 |
+
{
|
| 692 |
+
"type": "image",
|
| 693 |
+
"bbox": [
|
| 694 |
+
0.387,
|
| 695 |
+
0.089,
|
| 696 |
+
0.618,
|
| 697 |
+
0.203
|
| 698 |
+
],
|
| 699 |
+
"angle": 0,
|
| 700 |
+
"content": null
|
| 701 |
+
},
|
| 702 |
+
{
|
| 703 |
+
"type": "image",
|
| 704 |
+
"bbox": [
|
| 705 |
+
0.632,
|
| 706 |
+
0.089,
|
| 707 |
+
0.864,
|
| 708 |
+
0.203
|
| 709 |
+
],
|
| 710 |
+
"angle": 0,
|
| 711 |
+
"content": null
|
| 712 |
+
},
|
| 713 |
+
{
|
| 714 |
+
"type": "image_caption",
|
| 715 |
+
"bbox": [
|
| 716 |
+
0.083,
|
| 717 |
+
0.209,
|
| 718 |
+
0.916,
|
| 719 |
+
0.237
|
| 720 |
+
],
|
| 721 |
+
"angle": 0,
|
| 722 |
+
"content": "Fig. 4. PSD-ROC curves: The exact PSD-ROC curve being shown in blue, which becomes computable with our proposed methodology, and different approximations of the PSD-ROC curve shown in red."
|
| 723 |
+
},
|
| 724 |
+
{
|
| 725 |
+
"type": "text",
|
| 726 |
+
"bbox": [
|
| 727 |
+
0.084,
|
| 728 |
+
0.252,
|
| 729 |
+
0.489,
|
| 730 |
+
0.305
|
| 731 |
+
],
|
| 732 |
+
"angle": 0,
|
| 733 |
+
"content": "Note that the proposed methodology is rather general and can be applied to arbitrary evaluations as long as one is able to determine the deltas in the intermediate statistics for each classification score in the evaluation data set."
|
| 734 |
+
},
|
| 735 |
+
{
|
| 736 |
+
"type": "title",
|
| 737 |
+
"bbox": [
|
| 738 |
+
0.22,
|
| 739 |
+
0.317,
|
| 740 |
+
0.353,
|
| 741 |
+
0.329
|
| 742 |
+
],
|
| 743 |
+
"angle": 0,
|
| 744 |
+
"content": "5. EXPERIMENTS"
|
| 745 |
+
},
|
| 746 |
+
{
|
| 747 |
+
"type": "text",
|
| 748 |
+
"bbox": [
|
| 749 |
+
0.084,
|
| 750 |
+
0.335,
|
| 751 |
+
0.489,
|
| 752 |
+
0.375
|
| 753 |
+
],
|
| 754 |
+
"angle": 0,
|
| 755 |
+
"content": "In this section we demonstrate the usefulness of the proposed method for the accurate computation of threshold-independent curves and metrics as well as its potential for threshold tuning."
|
| 756 |
+
},
|
| 757 |
+
{
|
| 758 |
+
"type": "text",
|
| 759 |
+
"bbox": [
|
| 760 |
+
0.084,
|
| 761 |
+
0.376,
|
| 762 |
+
0.489,
|
| 763 |
+
0.467
|
| 764 |
+
],
|
| 765 |
+
"angle": 0,
|
| 766 |
+
"content": "The presented curves and metrics are evaluated for one of our single model systems developed for DCASE 2021 Challenge Task 4, which employs a forward-backward convolutional recurrent neural network (FBCRNN) for audio tagging followed by a tag-conditioned CRNN (TCCRNN) for SED [18] outputting detection scores at a rate of \\(50\\mathrm{Hz}\\). For more details about the system and its training, which are not relevant here, the reader is referred to Ebbers et al. [18]."
|
| 767 |
+
},
|
| 768 |
+
{
|
| 769 |
+
"type": "text",
|
| 770 |
+
"bbox": [
|
| 771 |
+
0.084,
|
| 772 |
+
0.468,
|
| 773 |
+
0.489,
|
| 774 |
+
0.547
|
| 775 |
+
],
|
| 776 |
+
"angle": 0,
|
| 777 |
+
"content": "In the challenge, systems have been evaluated by PSDSs which have been calculated using 50 thresholds linearly spaced from 0.01 to 0.99 for PSD-ROC curve approximation. In the following we consider the scenario 1 with \\(\\rho_{DTC} = \\rho_{GTC} = 0.7\\), \\(\\alpha_{\\mathrm{CT}} = 0\\), \\(\\alpha_{\\mathrm{ST}} = 1\\) and \\(\\mathrm{eFPR}_{\\mathrm{max}} = 100 / \\mathrm{h}\\) and report evaluations on the public evaluation set of the DESED database [19]."
|
| 778 |
+
},
|
| 779 |
+
{
|
| 780 |
+
"type": "text",
|
| 781 |
+
"bbox": [
|
| 782 |
+
0.084,
|
| 783 |
+
0.547,
|
| 784 |
+
0.489,
|
| 785 |
+
0.626
|
| 786 |
+
],
|
| 787 |
+
"angle": 0,
|
| 788 |
+
"content": "In Fig. 4 different PSD-ROC curves are shown. In the subplots we present different variants of PSD-ROC curve approximations (in red), which have been generated using the official psds_eval package\\(^{4}\\), and compare them with the accurate PSD-ROC curve (in blue), which has been generated with our newly released package sed Scores_eval\\(^{1}\\)."
|
| 789 |
+
},
|
| 790 |
+
{
|
| 791 |
+
"type": "text",
|
| 792 |
+
"bbox": [
|
| 793 |
+
0.084,
|
| 794 |
+
0.627,
|
| 795 |
+
0.489,
|
| 796 |
+
0.77
|
| 797 |
+
],
|
| 798 |
+
"angle": 0,
|
| 799 |
+
"content": "During our system development for the challenge, we recognized that our system mostly produces either very small or very high scores, which, without further measures, results in the PSD-ROC being approximated only very coarsely as shown in the left subplot of Fig. 4. Compared to the accurate computation proposed here, the approximated PSDS of 0.358 significantly underestimates the true PSDS of 0.400. Even if 500 linearly spaced thresholds from 0.001 to 0.999 are used, which is shown in the middle plot, this \"step\" artifact still appears on the PSD-ROC. The PSDS computed with these thresholds results to be 0.389 which still underestimates the true PSDS."
|
| 800 |
+
},
|
| 801 |
+
{
|
| 802 |
+
"type": "text",
|
| 803 |
+
"bbox": [
|
| 804 |
+
0.083,
|
| 805 |
+
0.771,
|
| 806 |
+
0.489,
|
| 807 |
+
0.89
|
| 808 |
+
],
|
| 809 |
+
"angle": 0,
|
| 810 |
+
"content": "In order to obtain a smooth PSD-ROC in the challenge, we performed a non-linear transformation of our system's classification scores, such that the classification scores of ground truth positive frames in the validation set are uniformly distributed between 0 and 1. Note, that a non-linear score transformation followed by linearly spaced thresholds results to be the same as non-linearly spaced thresholds. The resulting PSD-ROC approximation with 50 thresholds is shown in red in the right plot of Fig. 4, which then comes close to the true PSD-ROC. Note, that at this point a tuning of a score"
|
| 811 |
+
},
|
| 812 |
+
{
|
| 813 |
+
"type": "table_caption",
|
| 814 |
+
"bbox": [
|
| 815 |
+
0.509,
|
| 816 |
+
0.253,
|
| 817 |
+
0.915,
|
| 818 |
+
0.28
|
| 819 |
+
],
|
| 820 |
+
"angle": 0,
|
| 821 |
+
"content": "Table 1. Collar-based \\( F_{1} \\)-score performance without and with optimal threshold tuning on validation set."
|
| 822 |
+
},
|
| 823 |
+
{
|
| 824 |
+
"type": "table",
|
| 825 |
+
"bbox": [
|
| 826 |
+
0.59,
|
| 827 |
+
0.281,
|
| 828 |
+
0.835,
|
| 829 |
+
0.324
|
| 830 |
+
],
|
| 831 |
+
"angle": 0,
|
| 832 |
+
"content": "<table><tr><td>Thresholds</td><td>0.5</td><td>optimal\n(on val. set)</td></tr><tr><td>F1-score</td><td>51.8%</td><td>57.2%</td></tr></table>"
|
| 833 |
+
},
|
| 834 |
+
{
|
| 835 |
+
"type": "text",
|
| 836 |
+
"bbox": [
|
| 837 |
+
0.508,
|
| 838 |
+
0.336,
|
| 839 |
+
0.915,
|
| 840 |
+
0.403
|
| 841 |
+
],
|
| 842 |
+
"angle": 0,
|
| 843 |
+
"content": "transformation function (or alternatively 50 thresholds) is required, which is highly undesired for a supposedly threshold-independent metric. However, with the proposed computation approach, the PSDS can be computed exactly and truly independently of a specific set of thresholds (with less computation time\\(^5\\))."
|
| 844 |
+
},
|
| 845 |
+
{
|
| 846 |
+
"type": "text",
|
| 847 |
+
"bbox": [
|
| 848 |
+
0.509,
|
| 849 |
+
0.403,
|
| 850 |
+
0.915,
|
| 851 |
+
0.6
|
| 852 |
+
],
|
| 853 |
+
"angle": 0,
|
| 854 |
+
"content": "Next, we use the collar-based PR-curve to perform optimal threshold tuning for collar-based \\( F_{1} \\)-score evaluation, which has been an additional contrastive metric in the challenge. For each event class we choose the decision threshold, which achieves the highest \\( F_{1} \\)-score on the PR-curve of the validation set that was computed with the proposed approach. Table 1 shows collar-based \\( F_{1} \\)-score performance on the public evaluation set comparing the threshold, which is optimal on the validation set, with simply choosing a threshold of 0.5. Note that for a fair comparison, we performed a median filter size sweep for each threshold variant separately and chose for each threshold variant and event class the filter size that performed best on the validation set. At this point it may be worth noting that median filtering before and after a thresholding yields the same detection outputs, making it similarly applicable to SED scores before computing threshold-independent curves or metrics."
|
| 855 |
+
},
|
| 856 |
+
{
|
| 857 |
+
"type": "text",
|
| 858 |
+
"bbox": [
|
| 859 |
+
0.508,
|
| 860 |
+
0.6,
|
| 861 |
+
0.915,
|
| 862 |
+
0.68
|
| 863 |
+
],
|
| 864 |
+
"angle": 0,
|
| 865 |
+
"content": "It can be observed that solely by tuning the decision threshold on the validation set, performance can be improved by \\(5.4\\%\\). This demonstrates how threshold-dependent metrics can be biased by the tuning of an operating point. However, it also demonstrates the ability of our presented method to allow for searching the optimal operating point for a given target application."
|
| 866 |
+
},
|
| 867 |
+
{
|
| 868 |
+
"type": "title",
|
| 869 |
+
"bbox": [
|
| 870 |
+
0.646,
|
| 871 |
+
0.691,
|
| 872 |
+
0.78,
|
| 873 |
+
0.704
|
| 874 |
+
],
|
| 875 |
+
"angle": 0,
|
| 876 |
+
"content": "6. CONCLUSIONS"
|
| 877 |
+
},
|
| 878 |
+
{
|
| 879 |
+
"type": "text",
|
| 880 |
+
"bbox": [
|
| 881 |
+
0.508,
|
| 882 |
+
0.709,
|
| 883 |
+
0.915,
|
| 884 |
+
0.88
|
| 885 |
+
],
|
| 886 |
+
"angle": 0,
|
| 887 |
+
"content": "In this paper we presented a methodology allowing for performing accurate computation of collar-based and intersection-based PR and ROC curves. Computing these metrics on a fixed set of thresholds could lead to biased estimation of the final metric. This can result in significant performance underestimation if an unfavorable set of thresholds is chosen. Our proposed method, however, enables truly threshold-independent collar-based and intersection-based SED metrics and provides a more accurate, system independent evaluation. Further, as the method allows to efficiently compute performances for arbitrary thresholds, it allows to determine the best operating point to fulfill the requirements of a specific application. We publicly released its implementation in a python package termed sed Scores_eval<sup>1</sup>."
|
| 888 |
+
},
|
| 889 |
+
{
|
| 890 |
+
"type": "page_footnote",
|
| 891 |
+
"bbox": [
|
| 892 |
+
0.509,
|
| 893 |
+
0.888,
|
| 894 |
+
0.913,
|
| 895 |
+
0.914
|
| 896 |
+
],
|
| 897 |
+
"angle": 0,
|
| 898 |
+
"content": "5See https://github.com/fgnt/sed}scores_eval/blob/ main/notebooks/psds.ipynb for timings."
|
| 899 |
+
},
|
| 900 |
+
{
|
| 901 |
+
"type": "footer",
|
| 902 |
+
"bbox": [
|
| 903 |
+
0.102,
|
| 904 |
+
0.899,
|
| 905 |
+
0.442,
|
| 906 |
+
0.914
|
| 907 |
+
],
|
| 908 |
+
"angle": 0,
|
| 909 |
+
"content": "4https://github.com/audioanalytic/psds_eval"
|
| 910 |
+
}
|
| 911 |
+
],
|
| 912 |
+
[
|
| 913 |
+
{
|
| 914 |
+
"type": "title",
|
| 915 |
+
"bbox": [
|
| 916 |
+
0.225,
|
| 917 |
+
0.092,
|
| 918 |
+
0.35,
|
| 919 |
+
0.106
|
| 920 |
+
],
|
| 921 |
+
"angle": 0,
|
| 922 |
+
"content": "7. REFERENCES"
|
| 923 |
+
},
|
| 924 |
+
{
|
| 925 |
+
"type": "ref_text",
|
| 926 |
+
"bbox": [
|
| 927 |
+
0.093,
|
| 928 |
+
0.117,
|
| 929 |
+
0.489,
|
| 930 |
+
0.145
|
| 931 |
+
],
|
| 932 |
+
"angle": 0,
|
| 933 |
+
"content": "[1] Tuomas Virtanen, Mark D Plumbley, and Dan Ellis, Computational analysis of sound scenes and events, Springer, 2018."
|
| 934 |
+
},
|
| 935 |
+
{
|
| 936 |
+
"type": "ref_text",
|
| 937 |
+
"bbox": [
|
| 938 |
+
0.093,
|
| 939 |
+
0.148,
|
| 940 |
+
0.489,
|
| 941 |
+
0.228
|
| 942 |
+
],
|
| 943 |
+
"angle": 0,
|
| 944 |
+
"content": "[2] Jort F Gemmeke, Daniel PW Ellis, Dylan Freedman, Aren Jansen, Wade Lawrence, R Channing Moore, Manoj Plakal, and Marvin Ritter, \"Audio set: An ontology and human-labeled dataset for audio events,\" in Proc. IEEE International Conference on Acoustics, Speech and Signal Processing. IEEE, 2017, pp. 776-780."
|
| 945 |
+
},
|
| 946 |
+
{
|
| 947 |
+
"type": "ref_text",
|
| 948 |
+
"bbox": [
|
| 949 |
+
0.093,
|
| 950 |
+
0.232,
|
| 951 |
+
0.489,
|
| 952 |
+
0.272
|
| 953 |
+
],
|
| 954 |
+
"angle": 0,
|
| 955 |
+
"content": "[3] Eduardo Fonseca, Xavier Favory, Jordi Pons, Frederic Font, and Xavier Serra, \"Fsd50k: an open dataset of human-labeled sound events,\" arXiv preprint arXiv:2010.00475, 2020."
|
| 956 |
+
},
|
| 957 |
+
{
|
| 958 |
+
"type": "ref_text",
|
| 959 |
+
"bbox": [
|
| 960 |
+
0.093,
|
| 961 |
+
0.276,
|
| 962 |
+
0.489,
|
| 963 |
+
0.316
|
| 964 |
+
],
|
| 965 |
+
"angle": 0,
|
| 966 |
+
"content": "[4] Sacha Krstulović, \"Audio event recognition in the smart home,\" Computational Analysis of Sound Scenes and Events, pp. 335-371, 2018."
|
| 967 |
+
},
|
| 968 |
+
{
|
| 969 |
+
"type": "ref_text",
|
| 970 |
+
"bbox": [
|
| 971 |
+
0.093,
|
| 972 |
+
0.32,
|
| 973 |
+
0.489,
|
| 974 |
+
0.36
|
| 975 |
+
],
|
| 976 |
+
"angle": 0,
|
| 977 |
+
"content": "[5] Annamaria Mesaros, Toni Heittola, Tuomas Virtanen, and Mark D Plumbley, \"Sound event detection: A tutorial,\" IEEE Signal Processing Magazine, vol. 38, no. 5, pp. 67-83, 2021."
|
| 978 |
+
},
|
| 979 |
+
{
|
| 980 |
+
"type": "ref_text",
|
| 981 |
+
"bbox": [
|
| 982 |
+
0.093,
|
| 983 |
+
0.363,
|
| 984 |
+
0.489,
|
| 985 |
+
0.403
|
| 986 |
+
],
|
| 987 |
+
"angle": 0,
|
| 988 |
+
"content": "[6] Ankit Shah, Anurag Kumar, Alexander G Hauptmann, and Bhiksha Raj, “A closer look at weak label learning for audio events,” arXiv preprint arXiv:1804.09288, 2018."
|
| 989 |
+
},
|
| 990 |
+
{
|
| 991 |
+
"type": "ref_text",
|
| 992 |
+
"bbox": [
|
| 993 |
+
0.093,
|
| 994 |
+
0.407,
|
| 995 |
+
0.489,
|
| 996 |
+
0.473
|
| 997 |
+
],
|
| 998 |
+
"angle": 0,
|
| 999 |
+
"content": "[7] Koichi Miyazaki, Tatsuya Komatsu, Tomoki Hayashi, Shinji Watanabe, Tomoki Toda, and Kazuya Takeda, \"Weakly-supervised sound event detection with self-attention,\" in Proc. IEEE International Conference on Acoustics, Speech and Signal Processing. IEEE, 2020, pp. 66-70."
|
| 1000 |
+
},
|
| 1001 |
+
{
|
| 1002 |
+
"type": "ref_text",
|
| 1003 |
+
"bbox": [
|
| 1004 |
+
0.093,
|
| 1005 |
+
0.477,
|
| 1006 |
+
0.489,
|
| 1007 |
+
0.517
|
| 1008 |
+
],
|
| 1009 |
+
"angle": 0,
|
| 1010 |
+
"content": "[8] Lu JiaKai, “Mean teacher convolution system for dcase 2018 task 4,” Tech. Rep., Detection and Classification of Acoustic Scenes and Events Challenge, September 2018."
|
| 1011 |
+
},
|
| 1012 |
+
{
|
| 1013 |
+
"type": "ref_text",
|
| 1014 |
+
"bbox": [
|
| 1015 |
+
0.093,
|
| 1016 |
+
0.521,
|
| 1017 |
+
0.489,
|
| 1018 |
+
0.573
|
| 1019 |
+
],
|
| 1020 |
+
"angle": 0,
|
| 1021 |
+
"content": "[9] Nicolas Turpault and Romain Serizel, “Training sound event detection on a heterogeneous dataset,” in Proc. Workshop on Detection and Classification of Acoustic Scenes and Events, 2020."
|
| 1022 |
+
},
|
| 1023 |
+
{
|
| 1024 |
+
"type": "ref_text",
|
| 1025 |
+
"bbox": [
|
| 1026 |
+
0.087,
|
| 1027 |
+
0.578,
|
| 1028 |
+
0.49,
|
| 1029 |
+
0.658
|
| 1030 |
+
],
|
| 1031 |
+
"angle": 0,
|
| 1032 |
+
"content": "[10] Nicolas Turpault, Romain Serizel, Scott Wisdom, Hakan Erdogan, John R Hershey, Eduardo Fonseca, Prem Seetharaman, and Justin Salamon, \"Sound event detection and separation: a benchmark on desed synthetic soundscapes,\" in Proc. IEEE International Conference on Acoustics, Speech and Signal Processing. IEEE, 2021, pp. 840-844."
|
| 1033 |
+
},
|
| 1034 |
+
{
|
| 1035 |
+
"type": "list",
|
| 1036 |
+
"bbox": [
|
| 1037 |
+
0.087,
|
| 1038 |
+
0.117,
|
| 1039 |
+
0.49,
|
| 1040 |
+
0.658
|
| 1041 |
+
],
|
| 1042 |
+
"angle": 0,
|
| 1043 |
+
"content": null
|
| 1044 |
+
},
|
| 1045 |
+
{
|
| 1046 |
+
"type": "ref_text",
|
| 1047 |
+
"bbox": [
|
| 1048 |
+
0.513,
|
| 1049 |
+
0.093,
|
| 1050 |
+
0.914,
|
| 1051 |
+
0.145
|
| 1052 |
+
],
|
| 1053 |
+
"angle": 0,
|
| 1054 |
+
"content": "[11] Francesca Ronchini, Romain Serizel, Nicolas Turpault, and Samuele Cornell, “The impact of non-target events in synthetic soundscapes for sound event detection,” arXiv preprint arXiv:2109.14061, 2021."
|
| 1055 |
+
},
|
| 1056 |
+
{
|
| 1057 |
+
"type": "ref_text",
|
| 1058 |
+
"bbox": [
|
| 1059 |
+
0.513,
|
| 1060 |
+
0.15,
|
| 1061 |
+
0.914,
|
| 1062 |
+
0.19
|
| 1063 |
+
],
|
| 1064 |
+
"angle": 0,
|
| 1065 |
+
"content": "[12] Annamaria Mesaros, Toni Heittola, and Tuomas Virtanen, \"Metrics for polyphonic sound event detection,\" Applied Sciences, vol. 6, no. 6, pp. 162, 2016."
|
| 1066 |
+
},
|
| 1067 |
+
{
|
| 1068 |
+
"type": "ref_text",
|
| 1069 |
+
"bbox": [
|
| 1070 |
+
0.513,
|
| 1071 |
+
0.195,
|
| 1072 |
+
0.914,
|
| 1073 |
+
0.261
|
| 1074 |
+
],
|
| 1075 |
+
"angle": 0,
|
| 1076 |
+
"content": "[13] Cagdas Bilen, Giacomo Ferroni, Francesco Tuveri, Juan Azcarreta, and Sacha Krstulovic, “A framework for the robust evaluation of sound event detection,” in Proc. IEEE International Conference on Acoustics, Speech and Signal Processing. IEEE, 2020, pp. 61–65."
|
| 1077 |
+
},
|
| 1078 |
+
{
|
| 1079 |
+
"type": "ref_text",
|
| 1080 |
+
"bbox": [
|
| 1081 |
+
0.513,
|
| 1082 |
+
0.266,
|
| 1083 |
+
0.914,
|
| 1084 |
+
0.306
|
| 1085 |
+
],
|
| 1086 |
+
"angle": 0,
|
| 1087 |
+
"content": "[14] Annamaria Mesaros, Toni Heittola, and Dan Ellis, “Datasets and evaluation,” in Computational Analysis of Sound Scenes and Events, pp. 147–179. Springer, 2018."
|
| 1088 |
+
},
|
| 1089 |
+
{
|
| 1090 |
+
"type": "ref_text",
|
| 1091 |
+
"bbox": [
|
| 1092 |
+
0.513,
|
| 1093 |
+
0.31,
|
| 1094 |
+
0.914,
|
| 1095 |
+
0.374
|
| 1096 |
+
],
|
| 1097 |
+
"angle": 0,
|
| 1098 |
+
"content": "[15] Giacomo Ferroni, Nicolas Turpault, Juan Azcarreta, Francesco Tuveri, Romain Serizel, Căgdaș Bilen, and Sacha Krstulović, “Improving sound event detection metrics: insights from dcase 2020,” in Proc. IEEE International Conference on Acoustics, Speech and Signal Processing. IEEE, 2021, pp. 631–635."
|
| 1099 |
+
},
|
| 1100 |
+
{
|
| 1101 |
+
"type": "ref_text",
|
| 1102 |
+
"bbox": [
|
| 1103 |
+
0.513,
|
| 1104 |
+
0.379,
|
| 1105 |
+
0.914,
|
| 1106 |
+
0.431
|
| 1107 |
+
],
|
| 1108 |
+
"angle": 0,
|
| 1109 |
+
"content": "[16] Jesse Davis and Mark Goadrich, “The relationship between precision-recall and roc curves,” in Proc. 23rd international conference on Machine learning. 2006, pp. 233–240, ACM Press."
|
| 1110 |
+
},
|
| 1111 |
+
{
|
| 1112 |
+
"type": "ref_text",
|
| 1113 |
+
"bbox": [
|
| 1114 |
+
0.513,
|
| 1115 |
+
0.436,
|
| 1116 |
+
0.914,
|
| 1117 |
+
0.516
|
| 1118 |
+
],
|
| 1119 |
+
"angle": 0,
|
| 1120 |
+
"content": "[17] F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournaepau, M. Brucher, M. Perrot, and E. Duchesnay, \"Scikit-learn: Machine learning in Python,\" Journal of Machine Learning Research, vol. 12, pp. 2825-2830, 2011."
|
| 1121 |
+
},
|
| 1122 |
+
{
|
| 1123 |
+
"type": "ref_text",
|
| 1124 |
+
"bbox": [
|
| 1125 |
+
0.513,
|
| 1126 |
+
0.52,
|
| 1127 |
+
0.914,
|
| 1128 |
+
0.573
|
| 1129 |
+
],
|
| 1130 |
+
"angle": 0,
|
| 1131 |
+
"content": "[18] Janek Ebbers and Reinhold Haeb-Umbach, \"Self-trained audio tagging and sound event detection in domestic environments,\" Tech. Rep., Detection and Classification of Acoustic Scenes and Events Challenge, June 2021."
|
| 1132 |
+
},
|
| 1133 |
+
{
|
| 1134 |
+
"type": "ref_text",
|
| 1135 |
+
"bbox": [
|
| 1136 |
+
0.513,
|
| 1137 |
+
0.578,
|
| 1138 |
+
0.914,
|
| 1139 |
+
0.643
|
| 1140 |
+
],
|
| 1141 |
+
"angle": 0,
|
| 1142 |
+
"content": "[19] Nicolas Turpault, Romain Serizel, Ankit Parag Shah, and Justin Salamon, \"Sound event detection in domestic environments with weakly labeled data and soundscape synthesis,\" in Proc. Workshop on Detection and Classification of Acoustic Scenes and Events, 2019."
|
| 1143 |
+
},
|
| 1144 |
+
{
|
| 1145 |
+
"type": "list",
|
| 1146 |
+
"bbox": [
|
| 1147 |
+
0.513,
|
| 1148 |
+
0.093,
|
| 1149 |
+
0.914,
|
| 1150 |
+
0.643
|
| 1151 |
+
],
|
| 1152 |
+
"angle": 0,
|
| 1153 |
+
"content": null
|
| 1154 |
+
}
|
| 1155 |
+
]
|
| 1156 |
+
]
|
2201.13xxx/2201.13148/2c84f44d-f098-4430-8e8c-b79d28977a5f_origin.pdf
ADDED
|
@@ -0,0 +1,3 @@
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:f55c61499240fbed4eb77156d469588fe21d5f406bf3599c658d7da13c17b4f1
|
| 3 |
+
size 224906
|
2201.13xxx/2201.13148/full.md
ADDED
|
@@ -0,0 +1,170 @@
|
| 1 |
+
# THRESHOLD INDEPENDENT EVALUATION OF SOUND EVENT DETECTION SCORES
|
| 2 |
+
|
| 3 |
+
Janek Ebbers, Reinhold Haeb-Umbach
|
| 4 |
+
|
| 5 |
+
Paderborn University,
|
| 6 |
+
Department of Communications Engineering, 33098 Paderborn, Germany, {ebbers,haeb}@nt.upb.de
|
| 7 |
+
|
| 8 |
+
Romain Serizel
|
| 9 |
+
|
| 10 |
+
Université de Lorraine, CNRS, Inria, Loria, F-54000 Nancy, France, romain.serizel@loria.fr
|
| 11 |
+
|
| 12 |
+
# ABSTRACT
|
| 13 |
+
|
| 14 |
+
Performing an adequate evaluation of sound event detection (SED) systems is far from trivial and is still subject to ongoing research. The recently proposed polyphonic sound detection (PSD)-receiver operating characteristic (ROC) and PSD score (PSDS) take an important step toward an evaluation of SED systems that is independent of a specific decision threshold. This allows one to obtain a more complete picture of the overall system behavior that is less biased by threshold tuning. Yet, the PSD-ROC is currently only approximated using a finite set of thresholds. The choice of the thresholds used in the approximation, however, can have a severe impact on the resulting PSDS. In this paper we propose a method which allows for computing system performance on an evaluation set for all possible thresholds jointly, enabling accurate computation not only of the PSD-ROC and PSDS but also of other collar-based and intersection-based performance curves. It further allows selecting the threshold which best fulfills the requirements of a given application. Source code is publicly available in our SED evaluation package sed_scores_eval<sup>1</sup>.
|
| 15 |
+
|
| 16 |
+
Index Terms—sound event detection, polyphonic sound detection, evaluation, threshold independent, roc
|
| 17 |
+
|
| 18 |
+
# 1. INTRODUCTION
|
| 19 |
+
|
| 20 |
+
Recently, there has been rapid progress in machine listening, which aims to imitate with machines the human ability to recognize, distinguish and interpret sounds [1]. This progress is driven by the annual Detection and Classification of Acoustic Scenes and Events (DCASE) challenges<sup>2</sup> and the release of large-scale sound databases such as Google's AudioSet [2] and FSD50k [3].
|
| 21 |
+
|
| 22 |
+
For the successful development of such systems an adequate evaluation of the system's operating behavior is crucial, where, ideally, the evaluation metric correlates with user satisfaction during system application [4].
|
| 23 |
+
|
| 24 |
+
In this paper we are concerned with the evaluation of sound event detection (SED) systems [5]. SED aims to recognize sound events in audio signals together with their onset and offset time. One particular challenge in SED is that labeling of ground truth event onset and offset times, referred to as strong labels, is expensive and time-consuming. Therefore, many systems aim to learn SED from weakly labeled data [6, 7], which only indicate the presence or absence of a sound event in an audio signal without providing its onset and offset times, and unlabeled data [8, 9]. Synthetically generated
|
| 25 |
+
|
| 26 |
+
soundscapes are another alternative to produce cheap strongly annotated data [10, 11]. Here, an insightful evaluation of systems is particularly important to be able to draw conclusions about the system's learning behavior w.r.t. the temporal localization of sounds.
|
| 27 |
+
|
| 28 |
+
Due to the temporal component of sound events, however, the adequate evaluation of SED performance is far from trivial. Traditional approaches perform segment-based and collar-based (event-based) evaluation [12] for only a single operating point (decision threshold). Further, segment-based evaluation does not sufficiently evaluate a system's capability of providing connected detections, whereas collar-based evaluation is sensitive to ambiguities in the definition of the ground truth event boundaries.
|
| 29 |
+
|
| 30 |
+
More recently, Bilen et al. [13] proposed the polyphonic sound detection (PSD)-receiver operating characteristic (ROC) curve and PSD score (PSDS), which is an important step towards an evaluation of SED systems which is independent of specific decision thresholds and therefore provides a more complete picture of the system's overall operating behavior and is less biased by a specific tuning of the decision thresholds.
|
| 31 |
+
|
| 32 |
+
However, PSD-ROC curves are only approximated so far due to the lack of a method which efficiently evaluates the system's performance for all possible decision thresholds. The approximation of the PSD-ROC curve can significantly underestimate the system's PSDS as we will show in Sec. 5.
|
| 33 |
+
|
| 34 |
+
In this paper, we therefore present such a method to efficiently compute the system's performance for all possible decision thresholds jointly, which allows us to accurately compute the PSD-ROC and PSDS. Further, it can also be used to compute other intersection-based and collar-based performance curves such as precision-recall (PR)-curves. The presented approach can be understood as a generalization of the method used for single instance evaluation<sup>3</sup> to more sophisticated evaluations such as collar-based or intersection-based evaluations. It is based on the definition of changes in the intermediate statistics that occur when the decision threshold falls below a certain score, which we refer to as deltas in the following. Then, absolute values can be obtained for all possible thresholds by performing a cumulative sum over the deltas.
|
| 35 |
+
|
| 36 |
+
The rest of the paper is structured as follows. Sec. 2 reviews current threshold-dependent approaches for SED evaluation. Sec. 3 describes commonly used threshold-independent evaluation methods for single instance evaluation $^{3}$ as well as the recently proposed PSD for the threshold-independent evaluation of SED. Then, we present our proposed approach for the accurate computation of PSD-ROC and other performance curves in Sec. 4. Finally, we present experiments in Sec. 5 and draw conclusions in Sec. 6.
|
| 37 |
+
|
| 38 |
+
# 2. SOUND EVENT DETECTION EVALUATION
|
| 39 |
+
|
| 40 |
+
SED [1, 5] can be seen as a multi-label classification problem, where the system performs classifications at multiple points in time which usually happens in a frame-based manner. When a classification score $y_{t}$ exceeds a certain decision threshold it is marked as positive. Connected positive classifications are merged into a detected event $(\hat{t}_{\mathrm{on},i},\hat{t}_{\mathrm{off},i},\hat{c}_i)$ with $\hat{t}_{\mathrm{on},i},\hat{t}_{\mathrm{off},i},\hat{c}_i$ being the onset time, offset time and class label, respectively, of the $i$ -th detection.
|
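To make this decision step concrete, here is a minimal sketch (our own illustration with assumed score values and 1 s frames, not code from the paper) that thresholds frame-wise scores and merges runs of connected positive frames into detections:

```python
import numpy as np

def detect_events(scores, threshold, frame_len=1.0):
    """Mark frames with score > threshold as positive and merge
    connected positive frames into (onset, offset) detections."""
    positive = np.asarray(scores) > threshold
    padded = np.concatenate(([False], positive, [False]))
    change = np.flatnonzero(padded[1:] != padded[:-1])
    return [(on * frame_len, off * frame_len)
            for on, off in zip(change[0::2], change[1::2])]

scores = [0.3, 0.3, 0.3, 0.6, 0.7, 0.6, 0.3, 0.3, 0.3]  # assumed 1 s frames
print(detect_events(scores, 0.65))  # [(4.0, 5.0)]
print(detect_events(scores, 0.55))  # [(3.0, 6.0)]
print(detect_events(scores, 0.25))  # [(0.0, 9.0)]
```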
| 41 |
+
|
| 42 |
+
As in other classification tasks the evaluation is based on true positive (TP), false positive (FP) and false negative (FN) counts. The TP count $N_{\mathrm{TP}}$ represents the number of ground truth events that have been detected by the system. The FP count $N_{\mathrm{FP}}$ sums up the number of detections which do not match a ground truth event. Hence, the total number of detected events is given as $N_{\mathrm{DP}} = N_{\mathrm{TP}} + N_{\mathrm{FP}}$. The FN count $N_{\mathrm{FN}}$, which is the number of ground truth events missed by the system, is given as $N_{\mathrm{FN}} = N_{\mathrm{GP}} - N_{\mathrm{TP}}$ with $N_{\mathrm{GP}}$ being the total number of ground truth events. From these intermediate statistics higher-level measures can be derived such as the precision $P = N_{\mathrm{TP}} / N_{\mathrm{DP}}$, the recall (true positive rate (TPR)) $R = N_{\mathrm{TP}} / N_{\mathrm{GP}}$ and the FP rate (FPR) $\mathrm{FPR} = N_{\mathrm{FP}} / N_{\mathrm{GN}}$, where $N_{\mathrm{GN}}$ is the total number of ground truth negative instances in the evaluation data set.
|
| 43 |
+
|
| 44 |
+
Compared to single instance evaluation<sup>3</sup>, it is less obvious in SED when to classify a ground truth event as detected, i.e. TP, and when to consider a detection as FP, due to the temporal extent of the target events over multiple classification scores/frames. Currently there exist three conceptually different ways for this, which are segment-based, collar-based (event-based) and intersection-based [12, 14, 13, 15].
|
| 45 |
+
|
| 46 |
+
# 2.1. Segment-based
|
| 47 |
+
|
| 48 |
+
In segment-based evaluation [12, 14], classifications and targets are defined in fixed-length segments (1 s segments are a popular choice). Classifications and targets are considered positive if they are detected/labeled anywhere in the segment. This way evaluation can be treated as a single instance evaluation. However, segment-based evaluation overemphasizes the contribution of longer events which extend over multiple segments, and it does not evaluate the system's capability of providing meaningful uninterrupted detections.
|
| 49 |
+
|
| 50 |
+
# 2.2. Collar-based
|
| 51 |
+
|
| 52 |
+
Collar-based, a.k.a. event-based, evaluation [12, 14] compares detections $(\hat{t}_{\mathrm{on},i},\hat{t}_{\mathrm{off},i},\hat{c}_i)$ with ground truth events $(t_{\mathrm{on},j},t_{\mathrm{off},j},c_j)$ directly. Only if there is a matching event pair $(i,j)$ with $c_{j} = \hat{c}_{i}$, $|\hat{t}_{\mathrm{on},i} - t_{\mathrm{on},j}|\leq d$ and $|\hat{t}_{\mathrm{off},i} - t_{\mathrm{off},j}|\leq d_{\mathrm{off},j}$, a TP is achieved. Other detections are counted as FPs. The offset collar $d_{\mathrm{off},j} = \max (d,rT_j)$ usually depends on the length $T_{j}$ of the ground truth event. Common choices are $d = 200~\mathrm{ms}$ and $r = 0.2$.
|
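A minimal sketch of this matching criterion (hypothetical helper names; the class labels are assumed to match already):

```python
def collar_match(det, gt, d=0.2, r=0.2):
    """True if detection (t_on, t_off) matches ground truth (t_on, t_off)
    within the onset collar d and the length-dependent offset collar."""
    d_off = max(d, r * (gt[1] - gt[0]))
    return abs(det[0] - gt[0]) <= d and abs(det[1] - gt[1]) <= d_off

print(collar_match((1.05, 4.9), (1.0, 5.0)))  # True (d_off = 0.8 s here)
print(collar_match((1.50, 5.0), (1.0, 5.0)))  # False: onset off by 0.5 s > d
```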
| 53 |
+
|
| 54 |
+
With collar-based evaluation, each ground truth event has equal contribution to the overall performance and systems can only achieve good performance if events are detected as single connected detections. This, however, introduces sensitivity to ambiguities in the annotation. If, e.g., an annotator labeled multiple dog barks as a single event but a system detects each bark as a separate event, this results in multiple FPs and one FN.
|
| 55 |
+
|
| 56 |
+
# 2.3. Intersection-based
|
| 57 |
+
|
| 58 |
+
Intersection-based evaluation [13, 15] determines the number of TPs and FPs based on intersections between detections and ground truth events. A detection tolerance criterion (DTC) classifies detections as
|
| 59 |
+
|
| 60 |
+

|
| 61 |
+
Fig. 1. Illustration of the joint computation of intermediate statistics with single instance evaluation.
|
| 62 |
+
|
| 63 |
+
FP if its intersection with ground truth events of the same event class, normalized by the length of the detected event, falls below a certain DTC ratio $\rho_{\mathrm{DTC}}$. Else, it is considered relevant, which, however, does not necessarily mean TP. A ground truth event is only classified TP if its intersection with relevant same-class detections, normalized by the length of the ground truth event, is greater than or equal to a ground truth intersection criterion (GTC) ratio $\rho_{\mathrm{GTC}}$.
|
| 64 |
+
|
| 65 |
+
Bilen et al. [13] further introduced cross triggers (CTs) which are FP detections matching events from another event class and, thus, may impair user experience more than standalone FPs. Note that, although the concept of CTs has been proposed in conjunction with intersection-based evaluation, it is not restricted to it and could also be transferred to segment-based and collar-based evaluations. In intersection-based evaluation the cross trigger tolerance criterion (CTTC) counts a CT between a detected event class $\hat{c}_i$ and another event class $c$ with $c \neq \hat{c}_i$ if the detection intersects with ground truth events of class $c$ by at least $\rho_{\mathrm{CTTC}}$ .
|
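The toy sketch below illustrates the DTC/GTC/CTTC logic for a single detection; note that the actual criteria aggregate intersections over all relevant detections per ground truth event, which is simplified away here:

```python
def overlap(a, b):
    """Length of the intersection of two (onset, offset) intervals."""
    return max(0.0, min(a[1], b[1]) - max(a[0], b[0]))

def classify_detection(det, gt_same, gt_other,
                       rho_dtc=0.5, rho_gtc=0.5, rho_cttc=0.5):
    """Classify one detection against ground truth events of the same
    class (gt_same) and of another class (gt_other)."""
    det_len = det[1] - det[0]
    is_fp = sum(overlap(det, g) for g in gt_same) / det_len < rho_dtc
    # cross triggers are FP detections intersecting other-class events
    cts = [g for g in gt_other
           if is_fp and overlap(det, g) / det_len >= rho_cttc]
    # ground truth events counted TP due to this (relevant) detection
    tps = [] if is_fp else [g for g in gt_same
                            if overlap(det, g) / (g[1] - g[0]) >= rho_gtc]
    return is_fp, cts, tps

# detection 4-9 s, same-class event 5-9 s, other-class event 3-7 s
print(classify_detection((4.0, 9.0), [(5.0, 9.0)], [(3.0, 7.0)]))
# -> (False, [], [(5.0, 9.0)])
```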
| 66 |
+
|
| 67 |
+
# 3. THRESHOLD-INDEPENDENT EVALUATION
|
| 68 |
+
|
| 69 |
+
The computation of the above intermediate statistics, such as the TP count, depends on the decision threshold that is applied to the classifier's output scores. Consequently, metrics such as $F_{1}$-scores and error rates only evaluate a single threshold. A more complete picture of the classifier's performance, however, can be obtained when evaluating system performance for all possible thresholds.
|
| 70 |
+
|
| 71 |
+
# 3.1. Single Instance Evaluation
|
| 72 |
+
|
| 73 |
+
In single instance evaluation<sup>3</sup>, the PR and ROC curves [16, 14] are frequently used to evaluate overall system behavior independently from a certain operating point. As the name suggests, the PR curve plots precisions over corresponding recall values which result from arbitrary decision thresholds. The ROC curve instead plots the recalls over corresponding FPRs. Frequently used metrics for system comparison are the area under the PR curve, a.k.a. average precision (AP), and the area under the ROC curve, which is often simply referred to as area under curve (AUC).
|
| 74 |
+
|
| 75 |
+
Rather than making decisions and evaluating performance separately for a set of arbitrary thresholds, performance can be evaluated for all thresholds jointly by sorting the classification scores $y$ together with some predefined deltas, as it is done, e.g., in the scikit-learn toolkit [17]. Here, deltas mean changes in the intermediate statistics, such as the number of TPs, when the decision threshold moves from above an instance's classification score to below it, i.e., when the instance moves from being classified negative to being classified positive. Then absolute values can be obtained by simply performing a cumulative sum of the deltas.
|
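A minimal sketch of this joint computation, in the spirit of scikit-learn's curve routines (ties between equal scores are ignored for brevity):

```python
import numpy as np

def binary_clf_counts(scores, targets):
    """Sort scores descending and cumulatively sum the deltas to get
    TP and detection counts for every possible threshold at once."""
    scores = np.asarray(scores, float)
    targets = np.asarray(targets, int)
    order = np.argsort(-scores)           # threshold sweeps from high to low
    delta_tp = targets[order]             # +1 TP iff the new positive is labeled 1
    n_tp = np.cumsum(delta_tp)
    n_dp = np.arange(1, len(scores) + 1)  # one more detection per crossed score
    return scores[order], n_tp, n_dp

thr, n_tp, n_dp = binary_clf_counts([0.9, 0.4, 0.7, 0.2, 0.8, 0.5],
                                    [1, 0, 0, 1, 1, 1])
print(thr)          # [0.9 0.8 0.7 0.5 0.4 0.2]
print(n_tp / n_dp)  # precision at every threshold
```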
| 76 |
+
|
| 77 |
+
This approach is illustrated in Fig. 1 for an exemplary data set with six instances. $\Delta N_{\mathrm{TP}}$ means the change in the TP count which,
|
| 78 |
+
|
| 79 |
+

|
| 80 |
+
Fig. 2. Collar-based deltas example.
|
| 81 |
+
|
| 82 |
+
for single instance evaluation, is simply the binary target of the instance. This is because, upon positive classification, the TP count only increases by one when the instance is labeled positive. $\Delta N_{\mathrm{DP}}$ represents the change in the total number of system detections. Here $\Delta N_{\mathrm{DP}}$ is always one as there is always one instance more being classified positive when the threshold falls below its classification score. The precisions $P = N_{\mathrm{TP}} / N_{\mathrm{DP}}$ can, e.g., now be read off for all decision thresholds in the third table containing the absolute values.
|
| 83 |
+
|
| 84 |
+
# 3.2. PSD-ROC
|
| 85 |
+
|
| 86 |
+
To the best of our knowledge, the PSD-ROC curve proposed in [13] is currently the only threshold-independent evaluation of SED systems. It first computes, for all event classes $c$ , intersection-based ROC curves $\mathrm{ROC}_c(\mathrm{eFPR})$ which are monotonically increasing curves plotting TPR over effective FPR (eFPR), where the reader is referred to Bilen et al. [13] for further details about its computation. The final PSD-ROC summarizes the classwise ROC curves as
|
| 87 |
+
|
| 88 |
+
$$
|
| 89 |
+
\text{PSD-ROC}(\mathrm{eFPR}) = \mu_{\mathrm{TPR}}(\mathrm{eFPR}) - \alpha_{\mathrm{ST}} \cdot \sigma_{\mathrm{TPR}}(\mathrm{eFPR}), \tag{1}
|
| 90 |
+
$$
|
| 91 |
+
|
| 92 |
+
with $\mu_{\mathrm{TPR}}(\mathrm{eFPR})$ and $\sigma_{\mathrm{TPR}}(\mathrm{eFPR})$ being the mean and standard deviation over the classwise ROC curves at a certain eFPR, and where $\alpha_{\mathrm{ST}}$ is a parameter penalizing instability across classes. The PSDS is the normalized area under the PSD-ROC curve up to a maximal $\mathrm{eFPR}_{\mathrm{max}}$ .
|
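A NumPy sketch of Eq. (1) and the PSDS normalization, assuming classwise TPR curves already sampled on a common eFPR grid; clipping negative effective TPRs to zero is a simplification we assume here:

```python
import numpy as np

def psd_roc(classwise_tpr, alpha_st=1.0):
    """Eq. (1): mean minus alpha_ST times std over classwise TPR curves
    sampled on a common eFPR grid (shape: n_classes x n_points)."""
    tpr = np.asarray(classwise_tpr)
    return tpr.mean(axis=0) - alpha_st * tpr.std(axis=0)

def psds(curve, efpr, efpr_max=100.0):
    """Normalized trapezoidal area under the (clipped) PSD-ROC
    up to eFPR_max."""
    m = efpr <= efpr_max
    y = np.clip(curve[m], 0.0, None)
    x = efpr[m]
    return np.sum((y[1:] + y[:-1]) / 2 * np.diff(x)) / efpr_max

efpr = np.linspace(0.0, 100.0, 101)                            # eFPR in 1/h
tpr = np.stack([1 - np.exp(-efpr / s) for s in (10, 20, 40)])  # toy curves
print(psds(psd_roc(tpr, alpha_st=1.0), efpr))
```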
| 93 |
+
|
| 94 |
+
Note that the number of thresholds, which may result in a different TPR-eFPR value pair, is as high as the number of classification scores in the data set. With a system outputting scores at a rate of $50\mathrm{Hz}$ and a rather small evaluation set of, e.g., only $1\mathrm{h}$ , this would be $180\mathrm{k}$ thresholds to be evaluated for each event class. Evaluating system performance for each of the thresholds separately is not feasible for obvious reasons. Therefore, due to a lack of an efficient joint computation of intersection-based TPR-eFPR value pairs for all thresholds, the PSD-ROC curve is commonly approximated with a reduced set of thresholds. For instance, the DCASE 2021 Challenge Task 4 [11] employed PSDSs using 50 linearly spaced thresholds. The approximation of PSD-ROC curves, however, can lead to a significant underestimation of the PSDS as we will demonstrate in Sec. 5. Non-linearly spaced thresholds could alleviate this to some extent, which, however, remains arbitrary and ad-hoc.
|
| 95 |
+
|
| 96 |
+
# 4. EFFICIENT COMPUTATION OF COLLAR- AND INTERSECTION-BASED CURVES
|
| 97 |
+
|
| 98 |
+
In this section we present how collar-based and intersection-based intermediate statistics can be efficiently computed jointly for all possible decision thresholds. For this we follow the same approach used for the computation of single instance evaluation curves which we described in Sec. 3.1. We aim to bring all classification scores into a sorted list together with the deltas of the intermediate statistics, which appear when the decision threshold falls below the classification score. Then we are able to obtain absolute values for all operating points by a simple cumulative sum over the deltas.
|
| 99 |
+
|
| 100 |
+

|
| 101 |
+
Fig. 3. Intersection-based deltas example.
|
| 102 |
+
|
| 103 |
+
With collar-based and intersection-based evaluation, however, the computation of the deltas becomes more challenging compared to single instance evaluation, as all scores of an audio signal have to be considered jointly and the deltas cannot be obtained instance-wise. The basic principle of the definition of the deltas is illustrated in Fig. 2 and Fig. 3.
|
| 104 |
+
|
| 105 |
+
In Fig. 2 collar-based evaluation is considered. For simplicity, we here assume scores/frames to have a width of $1\mathrm{s}$ , that target event boundaries lie exactly between two scores/frames and the on-/offset collars to be $1\mathrm{s}$ . Starting from a decision threshold above 0.7, no event would be detected as no score lies above the threshold. When the decision threshold falls below 0.7, a detection is spawned from second 4 to 5 as the 5th score lies above the threshold. However, the distances between the detected and the true onsets and offsets are $2\mathrm{s}$ for both, therefore not matching the collar. Hence, the newly spawned detection is a FP and we have $\Delta N_{\mathrm{FP}} = +1$ . When the threshold falls below 0.6, however, the detection expands from second 3 to 6 and the FP disappears ( $\Delta N_{\mathrm{FP}} = -1$ ) and becomes a TP detection ( $\Delta N_{\mathrm{TP}} = +1$ ). When the decision threshold falls below 0.5 and below 0.4, nothing changes as the collars are still matched and the detection remains TP ( $\Delta N_{\mathrm{TP}} = \Delta N_{\mathrm{FP}} = 0$ ). Finally, when the decision threshold falls below 0.3, the detection expands from 0 s to 9 s and the detected on-/offsets exceed the collar, and the TP disappears ( $\Delta N_{\mathrm{TP}} = -1$ ) and becomes a FP again ( $\Delta N_{\mathrm{FP}} = +1$ ).
|
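As a cross-check of this walkthrough, the brute-force sketch below evaluates the collar criterion at each unique score and differences the counts to recover the deltas. The score values are assumed to be consistent with Fig. 2 (the 2 s on-/offset distances of the first detection imply a ground truth event from 2 s to 7 s):

```python
import numpy as np

scores = np.array([0.3, 0.3, 0.3, 0.6, 0.7, 0.6, 0.3, 0.3, 0.3])  # assumed
gt_on, gt_off, collar = 2.0, 7.0, 1.0  # ground truth event, 1 s collars

def counts(threshold):
    """Collar-based (N_TP, N_FP) for one threshold (1 s frames)."""
    positive = np.concatenate(([False], scores > threshold, [False]))
    change = np.flatnonzero(positive[1:] != positive[:-1])
    n_tp = n_fp = 0
    for on, off in zip(change[0::2], change[1::2]):
        if abs(on - gt_on) <= collar and abs(off - gt_off) <= collar:
            n_tp += 1
        else:
            n_fp += 1
    return n_tp, n_fp

prev = (0, 0)
for s in sorted(set(scores), reverse=True):
    cur = counts(s - 1e-9)  # threshold falls just below the score
    print(f"below {s}: dN_TP={cur[0]-prev[0]:+d}, dN_FP={cur[1]-prev[1]:+d}")
    prev = cur
# below 0.7: dN_TP=+0, dN_FP=+1
# below 0.6: dN_TP=+1, dN_FP=-1
# below 0.3: dN_TP=-1, dN_FP=+1
```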
| 106 |
+
|
| 107 |
+
A slightly more advanced example is shown in Fig. 3, where we consider intersection-based evaluation including CTs. We assume $\rho_{\mathrm{DTC}} = \rho_{\mathrm{GTC}} = \rho_{\mathrm{CTTC}} = 0.5$ and that again all event boundaries lie exactly between two scores/frames. When the decision threshold falls below 0.8 here, a detection is spawned from 6 s to 7 s which does not overlap with the target event at all, giving us $\Delta N_{\mathrm{FP}} = +1$ . Further, the detected event completely lies within the ground truth event from another class (in red), giving us $\Delta N_{\mathrm{CT}} = +1$ . When the threshold falls below 0.7, the detection's overlap with the target event is still only $1 / 3 < \rho_{\mathrm{DTC}}$ . This is still a FP and therefore $\Delta N_{\mathrm{FP}} = 0$ . The overlap with the other class event is $2 / 3 \geq \rho_{\mathrm{CTTC}}$ . Therefore there is still a CT, with $\Delta N_{\mathrm{CT}} = 0$ . When the threshold falls below 0.6, the detection's overlap with both the target event and the other class event is $2 / 5 < \rho_{\mathrm{DTC}} = \rho_{\mathrm{CTTC}}$ . The detection is still FP ( $\Delta N_{\mathrm{FP}} = 0$ ), but not a CT anymore ( $\Delta N_{\mathrm{CT}} = -1$ ). When the threshold falls below 0.5 the overlap with the target event becomes $1 / 2 = \rho_{\mathrm{DTC}}$ . The FP disappears ( $\Delta N_{\mathrm{FP}} = -1$ ) and becomes a TP ( $\Delta N_{\mathrm{TP}} = +1$ ). This remains unchanged until the decision threshold falls below 0.3, where the overlap with the ground truth event becomes only $4 / 9 < \rho_{\mathrm{DTC}}$ . This is a FP again (but not a CT) with $\Delta N_{\mathrm{TP}} = -1$ and $\Delta N_{\mathrm{FP}} = +1$ .
|
| 108 |
+
|
| 109 |
+
The proposed approach allows for efficient and accurate computation of collar-based and intersection-based PR and ROC curves, which not only enables us to compute threshold-independent metrics such as AP and PSDS precisely, but it also allows us to find the threshold which best suits specific application requirements.
|
| 110 |
+
|
| 111 |
+

|
| 112 |
+
Fig. 4. PSD-ROC curves: the exact PSD-ROC curve (blue), which becomes computable with our proposed methodology, and different approximations of the PSD-ROC curve (red).
|
| 113 |
+
|
| 114 |
+

|
| 115 |
+
|
| 116 |
+

|
| 117 |
+
|
| 118 |
+
Note that the proposed methodology is rather general and can be applied to arbitrary evaluations as long as one is able to determine the deltas in the intermediate statistics for each classification score in the evaluation data set.
|
| 119 |
+
|
| 120 |
+
# 5. EXPERIMENTS
|
| 121 |
+
|
| 122 |
+
In this section we demonstrate the usefulness of the proposed method for the accurate computation of threshold-independent curves and metrics as well as its potential for threshold tuning.
|
| 123 |
+
|
| 124 |
+
The presented curves and metrics are evaluated for one of our single model systems developed for DCASE 2021 Challenge Task 4, which employs a forward-backward convolutional recurrent neural network (FBCRNN) for audio tagging followed by a tag-conditioned CRNN (TCCRNN) for SED [18] outputting detection scores at a rate of $50\mathrm{Hz}$ . For more details about the system and its training, which are not relevant here, the reader is referred to Ebbers et al. [18].
|
| 125 |
+
|
| 126 |
+
In the challenge, systems were evaluated by PSDS values calculated using 50 thresholds linearly spaced from 0.01 to 0.99 for PSD-ROC curve approximation. In the following we consider scenario 1 with $\rho_{DTC} = \rho_{GTC} = 0.7$, $\alpha_{\mathrm{CT}} = 0$, $\alpha_{\mathrm{ST}} = 1$ and $\mathrm{eFPR}_{\mathrm{max}} = 100 / \mathrm{h}$ and report evaluations on the public evaluation set of the DESED database [19].
|
| 127 |
+
|
| 128 |
+
In Fig. 4 different PSD-ROC curves are shown. In the subplots we present different variants of PSD-ROC curve approximations (in red), which have been generated using the official psds_eval package $^{4}$, and compare them with the accurate PSD-ROC curve (in blue), which has been generated with our newly released package sed_scores_eval $^{1}$.
|
| 129 |
+
|
| 130 |
+
During our system development for the challenge, we recognized that our system mostly produces either very small or very high scores, which, without further measures, results in the PSD-ROC being approximated only very coarsely as shown in the left subplot of Fig. 4. Compared to the accurate computation proposed here, the approximated PSDS of 0.358 significantly underestimates the true PSDS of 0.400. Even if 500 linearly spaced thresholds from 0.001 to 0.999 are used, which is shown in the middle plot, this "step" artifact still appears on the PSD-ROC. The PSDS computed with these thresholds comes to 0.389, which still underestimates the true PSDS.
|
| 131 |
+
|
| 132 |
+
In order to obtain a smooth PSD-ROC in the challenge, we performed a non-linear transformation of our system's classification scores, such that the classification scores of ground truth positive frames in the validation set are uniformly distributed between 0 and 1. Note that a non-linear score transformation followed by linearly spaced thresholds is equivalent to using non-linearly spaced thresholds. The resulting PSD-ROC approximation with 50 thresholds is shown in red in the right plot of Fig. 4, which then comes close to the true PSD-ROC. Note that at this point a tuning of a score
|
| 133 |
+
|
| 134 |
+
Table 1. Collar-based $F_{1}$ -score performance without and with optimal threshold tuning on validation set.
|
| 135 |
+
|
| 136 |
+
<table><tr><td>Thresholds</td><td>0.5</td><td>optimal
|
| 137 |
+
(on val. set)</td></tr><tr><td>F1-score</td><td>51.8%</td><td>57.2%</td></tr></table>
|
| 138 |
+
|
| 139 |
+
transformation function (or, alternatively, of the 50 thresholds) is required, which is highly undesirable for a supposedly threshold-independent metric. However, with the proposed computation approach, the PSDS can be computed exactly and truly independently of a specific set of thresholds (with less computation time $^5$ ).
|
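One way to realize such a transform (the paper does not prescribe an implementation; this rank-based variant is our assumption) is to map scores through the empirical CDF of the ground-truth-positive frame scores of the validation set:

```python
import numpy as np

def fit_score_transform(positive_scores):
    """Monotone transform mapping scores to the empirical CDF of the
    ground-truth-positive frame scores of the validation set."""
    ref = np.sort(np.asarray(positive_scores, float))
    return lambda s: np.searchsorted(ref, s, side="right") / len(ref)

rng = np.random.default_rng(0)
val_pos = rng.beta(8, 1, size=10_000)  # positive-frame scores crowded near 1.0
transform = fit_score_transform(val_pos)
print(transform(np.array([0.5, 0.8, 0.9, 0.95, 0.99])))  # spread over [0, 1]
```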
| 140 |
+
|
| 141 |
+
Next, we use the collar-based PR-curve to perform optimal threshold tuning for collar-based $F_{1}$-score evaluation, which has been an additional contrastive metric in the challenge. For each event class we choose the decision threshold which achieves the highest $F_{1}$-score on the PR-curve of the validation set that was computed with the proposed approach. Table 1 shows collar-based $F_{1}$-score performance on the public evaluation set, comparing the threshold which is optimal on the validation set with simply choosing a threshold of 0.5. Note that for a fair comparison, we performed a median filter size sweep for each threshold variant separately and chose for each threshold variant and event class the filter size that performed best on the validation set. At this point it may be worth noting that median filtering the scores before thresholding and median filtering the binary decisions after thresholding yield the same detection outputs, so median filtering can likewise be applied to SED scores before computing threshold-independent curves or metrics.
|
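A sketch of this per-class selection, assuming the PR curve (precisions, recalls and corresponding thresholds) has already been computed with the proposed approach:

```python
import numpy as np

def best_threshold(precisions, recalls, thresholds):
    """Threshold with the highest F1-score on a (collar-based) PR curve."""
    p, r = np.asarray(precisions), np.asarray(recalls)
    f1 = np.where(p + r > 0, 2 * p * r / (p + r), 0.0)
    i = int(np.argmax(f1))
    return thresholds[i], f1[i]

# toy validation PR curve of one event class
thr = np.array([0.9, 0.7, 0.5, 0.3, 0.1])
p = np.array([0.90, 0.80, 0.70, 0.55, 0.40])
r = np.array([0.30, 0.50, 0.60, 0.70, 0.80])
print(best_threshold(p, r, thr))  # -> (0.5, 0.646...)
```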
| 142 |
+
|
| 143 |
+
It can be observed that solely by tuning the decision threshold on the validation set, performance can be improved by 5.4 percentage points. This demonstrates how threshold-dependent metrics can be biased by the tuning of an operating point. However, it also demonstrates the ability of our presented method to allow for searching the optimal operating point for a given target application.
|
| 144 |
+
|
| 145 |
+
# 6. CONCLUSIONS
|
| 146 |
+
|
| 147 |
+
In this paper we presented a methodology allowing for accurate computation of collar-based and intersection-based PR and ROC curves. Computing these metrics on a fixed set of thresholds can lead to a biased estimation of the final metric. This can result in significant performance underestimation if an unfavorable set of thresholds is chosen. Our proposed method, however, enables truly threshold-independent collar-based and intersection-based SED metrics and provides a more accurate, system-independent evaluation. Further, as the method makes it possible to efficiently compute performance for arbitrary thresholds, it allows one to determine the best operating point to fulfill the requirements of a specific application. We publicly released its implementation in a Python package termed sed_scores_eval<sup>1</sup>.
|
| 148 |
+
|
| 149 |
+
# 7. REFERENCES
|
| 150 |
+
|
| 151 |
+
[1] Tuomas Virtanen, Mark D Plumbley, and Dan Ellis, Computational analysis of sound scenes and events, Springer, 2018.
|
| 152 |
+
[2] Jort F Gemmeke, Daniel PW Ellis, Dylan Freedman, Aren Jansen, Wade Lawrence, R Channing Moore, Manoj Plakal, and Marvin Ritter, "Audio set: An ontology and human-labeled dataset for audio events," in Proc. IEEE International Conference on Acoustics, Speech and Signal Processing. IEEE, 2017, pp. 776-780.
|
| 153 |
+
[3] Eduardo Fonseca, Xavier Favory, Jordi Pons, Frederic Font, and Xavier Serra, "Fsd50k: an open dataset of human-labeled sound events," arXiv preprint arXiv:2010.00475, 2020.
|
| 154 |
+
[4] Sacha Krstulović, "Audio event recognition in the smart home," Computational Analysis of Sound Scenes and Events, pp. 335-371, 2018.
|
| 155 |
+
[5] Annamaria Mesaros, Toni Heittola, Tuomas Virtanen, and Mark D Plumbley, "Sound event detection: A tutorial," IEEE Signal Processing Magazine, vol. 38, no. 5, pp. 67-83, 2021.
|
| 156 |
+
[6] Ankit Shah, Anurag Kumar, Alexander G Hauptmann, and Bhiksha Raj, “A closer look at weak label learning for audio events,” arXiv preprint arXiv:1804.09288, 2018.
|
| 157 |
+
[7] Koichi Miyazaki, Tatsuya Komatsu, Tomoki Hayashi, Shinji Watanabe, Tomoki Toda, and Kazuya Takeda, "Weakly-supervised sound event detection with self-attention," in Proc. IEEE International Conference on Acoustics, Speech and Signal Processing. IEEE, 2020, pp. 66-70.
|
| 158 |
+
[8] Lu JiaKai, “Mean teacher convolution system for dcase 2018 task 4,” Tech. Rep., Detection and Classification of Acoustic Scenes and Events Challenge, September 2018.
|
| 159 |
+
[9] Nicolas Turpault and Romain Serizel, “Training sound event detection on a heterogeneous dataset,” in Proc. Workshop on Detection and Classification of Acoustic Scenes and Events, 2020.
|
| 160 |
+
[10] Nicolas Turpault, Romain Serizel, Scott Wisdom, Hakan Erdogan, John R Hershey, Eduardo Fonseca, Prem Seetharaman, and Justin Salamon, "Sound event detection and separation: a benchmark on desed synthetic soundscapes," in Proc. IEEE International Conference on Acoustics, Speech and Signal Processing. IEEE, 2021, pp. 840-844.
|
| 161 |
+
|
| 162 |
+
[11] Francesca Ronchini, Romain Serizel, Nicolas Turpault, and Samuele Cornell, “The impact of non-target events in synthetic soundscapes for sound event detection,” arXiv preprint arXiv:2109.14061, 2021.
|
| 163 |
+
[12] Annamaria Mesaros, Toni Heittola, and Tuomas Virtanen, "Metrics for polyphonic sound event detection," Applied Sciences, vol. 6, no. 6, pp. 162, 2016.
|
| 164 |
+
[13] Cagdas Bilen, Giacomo Ferroni, Francesco Tuveri, Juan Azcarreta, and Sacha Krstulovic, “A framework for the robust evaluation of sound event detection,” in Proc. IEEE International Conference on Acoustics, Speech and Signal Processing. IEEE, 2020, pp. 61–65.
|
| 165 |
+
[14] Annamaria Mesaros, Toni Heittola, and Dan Ellis, “Datasets and evaluation,” in Computational Analysis of Sound Scenes and Events, pp. 147–179. Springer, 2018.
|
| 166 |
+
[15] Giacomo Ferroni, Nicolas Turpault, Juan Azcarreta, Francesco Tuveri, Romain Serizel, Çağdaş Bilen, and Sacha Krstulović, “Improving sound event detection metrics: insights from dcase 2020,” in Proc. IEEE International Conference on Acoustics, Speech and Signal Processing. IEEE, 2021, pp. 631–635.
|
| 167 |
+
[16] Jesse Davis and Mark Goadrich, “The relationship between precision-recall and roc curves,” in Proc. 23rd international conference on Machine learning. 2006, pp. 233–240, ACM Press.
|
| 168 |
+
[17] F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay, "Scikit-learn: Machine learning in Python," Journal of Machine Learning Research, vol. 12, pp. 2825-2830, 2011.
|
| 169 |
+
[18] Janek Ebbers and Reinhold Haeb-Umbach, "Self-trained audio tagging and sound event detection in domestic environments," Tech. Rep., Detection and Classification of Acoustic Scenes and Events Challenge, June 2021.
|
| 170 |
+
[19] Nicolas Turpault, Romain Serizel, Ankit Parag Shah, and Justin Salamon, "Sound event detection in domestic environments with weakly labeled data and soundscape synthesis," in Proc. Workshop on Detection and Classification of Acoustic Scenes and Events, 2019.
|
2201.13xxx/2201.13148/images.zip
ADDED
|
@@ -0,0 +1,3 @@
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:55af832ea300ddb8250f5c83363bdd846de3d119476c75bc223f282e4c0a8db1
|
| 3 |
+
size 119372
|
2201.13xxx/2201.13148/layout.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2201.13xxx/2201.13178/573f7739-27bd-47fa-a5f9-705c685effde_content_list.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2201.13xxx/2201.13178/573f7739-27bd-47fa-a5f9-705c685effde_model.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2201.13xxx/2201.13178/573f7739-27bd-47fa-a5f9-705c685effde_origin.pdf
ADDED
|
@@ -0,0 +1,3 @@
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:23903b22cac5774c17eefca5fcebf79e98fabe437903d39622ffb646965b1e42
|
| 3 |
+
size 18477319
|
2201.13xxx/2201.13178/full.md
ADDED
|
@@ -0,0 +1,613 @@
|
| 1 |
+
# FEW-SHOT BACKDOOR ATTACKS ON VISUAL OBJECT TRACKING
|
| 2 |
+
|
| 3 |
+
Yiming Li $^{1, *}$ , Haoxiang Zhong $^{1,2, *}$ , Xingjun Ma $^{3}$ , Yong Jiang $^{1,2}$ , Shu-Tao Xia $^{1,2}$
|
| 4 |
+
|
| 5 |
+
$^{1}$ Tsinghua Shenzhen International Graduate School, Tsinghua University, China
|
| 6 |
+
|
| 7 |
+
$^{2}$ Research Center of Artificial Intelligence, Peng Cheng Laboratory, China
|
| 8 |
+
|
| 9 |
+
$^{3}$ School of Computer Science, Fudan University, China
|
| 10 |
+
|
| 11 |
+
{li-ym18, zhx19}@mails.tsinghua.edu.cn; danxjma@gmail.com; {jiangy, xiaist}@sz.tsinghua.edu.cn
|
| 12 |
+
|
| 13 |
+
# ABSTRACT
|
| 14 |
+
|
| 15 |
+
Visual object tracking (VOT) has been widely adopted in mission-critical applications, such as autonomous driving and intelligent surveillance systems. In current practice, third-party resources such as datasets, backbone networks, and training platforms are frequently used to train high-performance VOT models. Whilst these resources bring certain convenience, they also introduce new security threats into VOT models. In this paper, we reveal such a threat where an adversary can easily implant hidden backdoors into VOT models by tampering with the training process. Specifically, we propose a simple yet effective few-shot backdoor attack (FSBA) that optimizes two losses alternately: 1) a feature loss defined in the hidden feature space, and 2) the standard tracking loss. We show that, once the backdoor is embedded into the target model by our FSBA, it can trick the model to lose track of specific objects even when the trigger only appears in one or a few frames. We examine our attack in both digital and physical-world settings and show that it can significantly degrade the performance of state-of-the-art VOT trackers. We also show that our attack is resistant to potential defenses, highlighting the vulnerability of VOT models to potential backdoor attacks.
|
| 16 |
+
|
| 17 |
+
# 1 INTRODUCTION
|
| 18 |
+
|
| 19 |
+
Visual object tracking (VOT) aims to predict the location of selected objects in subsequent frames based on their initial locations in the initial frame. It has supported many impactful and mission-critical applications such as intelligent surveillance and self-driving systems. The security of VOT models to potential adversaries is thus of great importance and worth careful investigation. Currently, most of the advanced VOT trackers (Li et al., 2019; Lu et al., 2020; Wang et al., 2021b) are based on deep neural networks (DNNs), siamese networks in particular. Training these models often requires large-scale datasets and a large amount of computational resources. As such, third-party resources such as datasets, backbones, and pre-trained models are frequently exploited or directly applied to save training costs. While these external resources bring certain convenience, they also introduce opacity into the training process. This raises an important question: Will this opacity bring new security risks into VOT?
|
| 20 |
+
|
| 21 |
+
In this paper, we reveal the vulnerability of VOT to backdoor attacks that are caused by outsourced training or using third-party pre-trained models. Backdoor attacks are a type of training-time threat to deep learning that implant hidden backdoors into a target model by injecting a trigger pattern (e.g., a local patch) into a small subset of training samples (Li et al., 2020). Existing backdoor attacks are mostly designed for classification tasks and are targeted attacks tied to a specific label (known as the target label) (Gu et al., 2019; Cheng et al., 2021; Nguyen & Tran, 2021). These attacks are not fully transferable to VOT tasks due to the fundamental difference between classification and object tracking. Different from attacking a classifier, making an object escape the tracking is a more threatening objective for VOT. As such, in this paper, we explore specialized backdoor attacks for VOT, which are untargeted by nature: the backdoored model behaves normally on benign samples yet fails to track the target object whenever the trigger appears.
|
| 22 |
+
|
| 23 |
+

|
| 24 |
+
Figure 1: The t-SNE visualization of benign frames and their poisoned versions in the feature spaces of three different models: (a) benign model; (b) backdoored model by BOBA; and (c) backdoored model by our FSBA attack. FSBA-poisoned frames are well-separated from the benign frames in the feature space, thus can better mislead or manipulate the target model.
|
| 25 |
+
|
| 26 |
+
In the current literature, the most advanced VOT models are siamese network based models that generally consist of two functional branches: 1) a classification branch that predicts whether a candidate box (or anchor) is positive or negative, and 2) a regression branch that learns the location information of the bounding box. Arguably, the most straightforward strategy is to apply existing label-targeted attacks to the classification branch. Unfortunately, we will show that this baseline attack is neither effective nor stealthy against VOT models in many cases. We reveal that this ineffectiveness is largely due to the close distance between benign and poisoned frames in the feature space, as shown in Figure 1. Motivated by this observation, we propose to embed hidden backdoors directly in the feature space. Specifically, we treat backdoor attacking VOT as an instance of multi-task learning, which minimizes the standard tracking loss while simultaneously maximizing the feature loss between benign and poisoned frames. The problem can be effectively solved by alternating optimization of the two loss terms. In particular, optimizing the feature loss encourages few-shot effectiveness, which allows an effective attack even when the trigger appears in only a few frames. Besides, we randomly select only a few training frames for poisoning. This strategy not only reduces the computational cost but also avoids significant degradation of the model's tracking performance on benign videos.

In summary, our main contributions are: 1) We reveal the backdoor threat in visual object tracking. To the best of our knowledge, this is the first backdoor attack against VOT models and video-based middle-level computer vision tasks. 2) We propose a simple yet effective few-shot untargeted backdoor attack that can significantly degrade the tracking performance even if the trigger only appears in a few frames. 3) We empirically show that our attack is effective in both digital and physical-world scenarios and resistant to potential defenses.
# 2 RELATED WORK

# 2.1 BACKDOOR ATTACK
Backdoor attack is an emerging yet severe threat to DNNs. A backdoored model behaves normally on benign samples but constantly predicts the target label whenever the trigger appears. Currently, most existing backdoor attacks (Gu et al., 2019; Zeng et al., 2021a; Li et al., 2021c) are designed for image classification tasks and are targeted towards an adversary-specified label. Specifically, a backdoor attack can be characterized by its trigger pattern $t$, target label $y_{t}$, poison image generator $G(\cdot)$, and poisoning rate $\gamma$. Taking BadNets (Gu et al., 2019) for example, given a benign training set $\mathcal{D} = \{(x_i, y_i)\}_{i=1}^N$, the adversary randomly selects $\gamma \%$ samples (i.e., $\mathcal{D}_s$) from $\mathcal{D}$ to generate their poisoned version $\mathcal{D}_p = \{(x', y_t) | x' = G(x; t), (x, y) \in \mathcal{D}_s\}$, where $G(x; t) = (1 - \lambda) \otimes x + \lambda \otimes t$ with $\lambda \in \{0, 1\}^{C \times W \times H}$ and $\otimes$ indicating the element-wise product. It then trains a backdoored model (i.e., $f_\theta$) on the poisoned subset $\mathcal{D}_p$ and the remaining benign samples $\mathcal{D}_b \triangleq \mathcal{D} \backslash \mathcal{D}_s$ by solving the optimization problem $\min_\theta \sum_{(x, y) \in \mathcal{D}_p \cup \mathcal{D}_b} \mathcal{L}(f_\theta(x), y)$, where $\theta$ denotes the model parameters and $\mathcal{L}(\cdot)$ is the loss function.
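For concreteness, the following is a minimal PyTorch sketch of this BadNets-style generator $G(x;t)$; the tensor shapes, the white trigger, and the bottom-right patch location are illustrative assumptions rather than the exact settings of any attack evaluated in this paper.

```python
import torch

def badnets_poison(x: torch.Tensor, trigger: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
    """G(x; t) = (1 - lambda) (*) x + lambda (*) t, with `mask` as the binary lambda."""
    return (1 - mask) * x + mask * trigger

# Toy example: stamp a 3x3 white square into the bottom-right corner.
C, H, W = 3, 32, 32
x = torch.rand(C, H, W)            # benign image in [0, 1]
trigger = torch.ones(C, H, W)      # white trigger canvas
mask = torch.zeros(C, H, W)
mask[:, -3:, -3:] = 1.0            # lambda = 1 only on the patch
x_poisoned = badnets_poison(x, trigger, mask)
```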
Currently, there are also a few backdoor attacks developed outside the context of image classification (Wang et al., 2021a; Zhai et al., 2021; Xiang et al., 2021). To the best of our knowledge, the backdoor attack proposed by Zhao et al. (2020b) is the only existing backdoor attack on video models. However, it is a label-targeted attack designed for video classification tasks and cannot be directly applied to VOT models. Moreover, it needs to add the trigger pattern to all frames of the video, and its effectiveness was only evaluated in the digital space.
In particular, backdoor attacks are different from adversarial attacks (Madry et al., 2018; Croce & Hein, 2020; Andriushchenko et al., 2020). The main difference lies in the perturbations used to attack the model during inference. The perturbations (trigger patterns, to be more precise) used by backdoor attacks are pre-implanted into the target model and can thus be directly applied to attack any test sample. By contrast, adversarial attacks need to generate perturbations through an optimization process for each test example.
# 2.2 BACKDOOR DEFENSE
Most existing backdoor defenses can be categorized into two main types: 1) pre-processing based methods and 2) model reconstruction based methods. These methods were proposed to defend image classifiers against targeted backdoor attacks. Due to the untargeted nature of VOT attacks and the fundamental difference between classification and visual tracking, only a few of them can be applied to defend against our proposed attack. Here, we briefly review these potential defenses.

Pre-processing based Defense. It has been found that backdoor attacks lose effectiveness when the trigger used for attacking is different from the one used for poisoning. This has motivated the use of image pre-processing techniques (e.g., scaling and color-shifting) to alleviate backdoor threats before feeding a test image into the model for inference (Liu et al., 2017; Zeng et al., 2021b; Li et al., 2021b). Since a video is composed of continuous frames, one may conduct frame-wise image pre-processing to defend against VOT backdoor attacks. Note that, in this case, the pre-processing cannot modify the locations of the objects, due to the requirements of visual object tracking.
Model Reconstruction based Defense. Model reconstruction (e.g., tuning and pruning) has been demonstrated to be effective in erasing hidden backdoors. For example, (Liu et al., 2017; Yao et al., 2019; Zeng et al., 2022) showed that using a few benign samples to fine-tune or retrain the backdoored model for a few iterations can effectively remove different types of backdoors from attacked DNNs; (Liu et al., 2018; Wu & Wang, 2021) showed that defenders can remove hidden backdoors via pruning, based on the understanding that hidden backdoors are mainly encoded in neurons that are dormant when predicting benign samples.

# 2.3 SIAMESE NETWORK BASED VISUAL OBJECT TRACKING
The goal of VOT is to predict the position and size of an object in a video after it is specified in the initial frame. Currently, siamese network based trackers (Bertinetto et al., 2016; Li et al., 2019; Xu et al., 2020) have attracted the most attention, owing to their simplicity and effectiveness (Marvasti-Zadeh et al., 2021). In terms of model structure, siamese network based trackers consist of two identical branches, with one branch learning the feature representation of the template and the other learning that of the search region. Functionally, these methods generally contain 1) a classification branch that predicts whether a candidate box (or anchor) is positive or negative, and 2) a regression branch that learns the location information of the bounding box. In the tracking phase, the template and the search region generated based on the results of the previous frame are fed into the siamese network to generate a score map, which represents the confidence scores of the candidate boxes. Since VOT is fundamentally different from image classification, existing backdoor attacks developed for image classification cannot be directly applied to attack siamese network based trackers.

# 3 FEW-SHOT BACKDOOR ATTACK (FSBA)
Threat Model. Our attack targets the most popular VOT pipeline, siamese network based trackers. We adopt a commonly used threat model from existing works, where the adversary has full control over the training process, including the training data and the training algorithm. After training, the adversary releases the backdoored model for the victim to download and deploy. This type of backdoor attack can happen in many real-world scenarios, such as outsourcing model training to third-party computing platforms or downloading pre-trained models from untrusted repositories.
Problem Formulation. For simplicity, we formulate the problem here in the context of one-object tracking. The formulation can be easily extended to the multi-object case. Specifically, let $\mathcal{V} = \{I_i\}_{i=1}^n$ denote a video of $n$ continuous frames and $\mathcal{B} = \{\pmb{b}_i\}_{i=1}^n$ denote the ground-truth bounding boxes of the target object in each frame. Given the initial state of the target object in the initial frame $\pmb{b}_1$, the tracker predicts its positions $\mathcal{B}_{pred}$ in the subsequent frames. Let $G(\cdot; t)$ be the frame-wise poisoned video generator, where $\pmb{t}$ is the adversary-specified trigger pattern. Different from existing backdoor attacks, we design our attack to be untargeted. Specifically, the adversary intends to train an attacked version $f(\cdot; \hat{\pmb{\theta}})$ of the benign tracker $f(\cdot; \pmb{\theta})$ by tampering with the training process. The adversary has two main goals, as follows:
Definition 1. A backdoor attack on visual object tracking is called promising (under the measurement of loss $\mathcal{L}$ with budgets $\alpha$ and $\beta$) if and only if it satisfies two main properties:

- $\alpha$-Effectiveness: the performance of the attacked tracker degrades sharply when the trigger appears, i.e., $\mathbb{E}_{\mathcal{V}}\left\{\mathcal{L}(f(\mathcal{V};\hat{\boldsymbol{\theta}}),\mathcal{B})\right\} + \alpha \leq \mathbb{E}_{\mathcal{V}}\left\{\mathcal{L}(f(G(\mathcal{V};\mathbf{t});\hat{\boldsymbol{\theta}}),\mathcal{B})\right\}$.
- $\beta$-Stealthiness: the attacked tracker behaves normally in the absence of the trigger, i.e., $\mathbb{E}_{\mathcal{V}}\left\{\mathcal{L}(f(\mathcal{V};\hat{\boldsymbol{\theta}}),\mathcal{B})\right\} \leq \beta$.
The above attack problem is challenging because VOT is a more complex task than classification and the adversary has to escape tracking even for objects that never appeared in the training set. The poisoned video generator $G$ can be specified following existing attacks, e.g., $G(\mathcal{V};\pmb {t}) = \{\hat{I}_i\}_{i = 1}^n$ where $\hat{I}_i = (1 - \lambda)\otimes I_i + \lambda \otimes t$. It is worth mentioning that stealthiness would also need to be defined for the trigger pattern if the adversary did not have full control over the training process. Under our threat model, however, trigger stealthiness matters less than good performance on benign videos, which is what makes the attacked model tempting to potential users.

# 3.1 AN INEFFECTIVE BASELINE: BRANCH-ORIENTED BACKDOOR ATTACK (BOBA)
Siamese network based trackers generally utilize a classification branch to predict the score map $S$, which consists of the scores of all candidate boxes in the search region. The training loss of the classification branch is defined as the mean of the individual losses of predicting each score $s \in S$ based on the ground-truth label $y \in \{-1, +1\}$ of the candidate box. If a candidate box is within the central area of the target object, it is considered a positive example with label $y = 1$; otherwise, it is marked as a negative example with label $y = -1$.
Intuitively, we can apply existing label-targeted backdoor attacks to attack the classification branch. We dub this attack the branch-oriented backdoor attack (BOBA). Specifically, BOBA flips the labels of candidate boxes (i.e., $\hat{y} = -y$) for a small subset of training frames. Formally, let $\mathcal{D} = \{(x_i, z_i, b_i, y_i)\}_{i=1}^n$ be the original training dataset, where $x$ is the search region, $z$ is the template, $b$ is the bounding box of the target object, and $y$ denotes the ground-truth labels of the candidate boxes. Given an adversary-specified trigger pattern $t$, the adversary first randomly selects $\gamma \%$ samples (i.e., $\mathcal{D}_s$) from $\mathcal{D}$ to generate poisoned samples using generator $G$. It then trains a backdoored tracker on the mixed dataset with both the poisoned and the remaining benign samples (i.e., $\mathcal{D}_b \triangleq \mathcal{D} \backslash \mathcal{D}_s$) by solving the following optimization problem:
$$
\min_{\theta} \mathcal{L}_{b} + \mathcal{L}_{p}, \tag{1}
$$

where

$$
\mathcal{L}_{b} = \frac{1}{|\mathcal{D}_{b}|} \sum_{(\boldsymbol{x}, \boldsymbol{z}, \boldsymbol{b}, \boldsymbol{y}) \in \mathcal{D}_{b}} \mathcal{L}(\boldsymbol{x}, \boldsymbol{z}, \boldsymbol{b}, \boldsymbol{y}), \tag{2}
$$

$$
\mathcal{L}_{p} = \frac{1}{|\mathcal{D}_{s}|} \sum_{(\boldsymbol{x}, \boldsymbol{z}, \boldsymbol{b}, \boldsymbol{y}) \in \mathcal{D}_{s}} \left[ \mathcal{L}(G(\boldsymbol{x}; \boldsymbol{t}), \boldsymbol{z}, \boldsymbol{b}, -\boldsymbol{y}) + \mathcal{L}(\boldsymbol{x}, G(\boldsymbol{z}; \boldsymbol{t}), \boldsymbol{b}, -\boldsymbol{y}) \right]. \tag{3}
$$
In the tracking process, the adversary can attach the trigger pattern $\pmb{t}$ to any target object in selected frames to escape tracking, using the generator $G$. In this paper, $G$ is simply designed to replace $\psi \%$ of the center area of the frame with the trigger $t$; we call $\psi$ the modification rate.
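As a hedged sketch of such a generator, the helper below overwrites a centered patch covering a $\psi$ fraction of the frame area with a resized trigger; the bilinear resizing and the exact patch geometry are our own illustrative choices, not necessarily those of the original implementation.

```python
import torch
import torch.nn.functional as F

def attach_center_trigger(frame: torch.Tensor, trigger: torch.Tensor, psi: float) -> torch.Tensor:
    """Replace a fraction psi of the frame area, at its center, with the trigger."""
    C, H, W = frame.shape
    ph = max(1, int(round(H * psi ** 0.5)))   # patch side: sqrt(psi) of each dimension,
    pw = max(1, int(round(W * psi ** 0.5)))   # so the patch covers ~psi of the area
    top, left = (H - ph) // 2, (W - pw) // 2
    patch = F.interpolate(trigger.unsqueeze(0), size=(ph, pw),
                          mode="bilinear", align_corners=False).squeeze(0)
    out = frame.clone()
    out[:, top:top + ph, left:left + pw] = patch  # overwrite the central area
    return out
```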
However, we will show that BOBA has limited effectiveness in attacking VOT models in many cases. We find that this ineffectiveness is largely caused by the close distance between benign and poisoned frames in the feature space. Moreover, BOBA also hurts the model's performance on benign videos and therefore loses stealthiness. Please see Sections 3.2 and 4.2 for more results.

|
| 94 |
+
Figure 2: The training pipeline of our proposed FSBA. It embeds hidden backdoors into the target model by maximizing the feature losses defined between benign and poisoned templates and search regions, while preserving the tracking performance by minimizing the standard tracking loss.
|
| 95 |
+
|
| 96 |
+
# 3.2 THE PROPOSED ATTACK
Here, we introduce our proposed few-shot backdoor attack (FSBA). The predictions of VOT models are based on the representations of the search region and the template in the feature space. Since the adversary intends to attach a trigger to these images to activate hidden backdoors, one may expect the attack to succeed if the representation changes drastically after attaching the trigger pattern to benign frames. Accordingly, unlike BOBA, FSBA embeds hidden backdoors directly in the feature space (as shown in Figure 2). Before introducing the complete training-attacking procedure, we first define the feature loss used by FSBA to inject backdoors into VOT models.
Definition 2. Let $b(\cdot; \theta_b)$ be the backbone of the tracker with parameters $\theta_b$. Given an image pair $(x, z)$ and the poison image generator $G(\cdot; t)$ with trigger $t$, the feature loss is defined as:
$$
\mathcal{L}_{f}(\boldsymbol{x}, \boldsymbol{z}) \triangleq d(b(\boldsymbol{x}; \boldsymbol{\theta}_{b}), b(G(\boldsymbol{x}; \boldsymbol{t}); \boldsymbol{\theta}_{b})) + d(b(\boldsymbol{z}; \boldsymbol{\theta}_{b}), b(G(\boldsymbol{z}; \boldsymbol{t}); \boldsymbol{\theta}_{b})), \tag{4}
$$
where $\pmb{x}$ is the search region, $\pmb{z}$ is the template, and $d(\cdot)$ is a distance metric. In this paper, we adopt the $\ell_1$ norm as the distance metric for simplicity.
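A minimal PyTorch sketch of Eqn. (4), assuming `backbone` maps an image to features and `poison_fn` applies $G(\cdot; t)$; averaging the per-element $\ell_1$ distances (rather than summing them) is an illustrative normalization choice on our part.

```python
import torch

def feature_loss(backbone, x: torch.Tensor, z: torch.Tensor, poison_fn) -> torch.Tensor:
    """L_f(x, z): l1 distance between benign and poisoned features of the
    search region x and the template z (Eqn. (4))."""
    d = lambda a, b: (a - b).abs().mean()  # l1 distance, averaged over elements
    return d(backbone(x), backbone(poison_fn(x))) + \
           d(backbone(z), backbone(poison_fn(z)))
```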
Backdoor Injection. We treat the backdoor attack as an instance of multi-task learning, where the first task is backdoor injection and the second is standard tracking. We solve the first task by maximizing the feature loss $\mathcal{L}_f$ (Eqn. (4)) and the second by minimizing the standard tracking loss $\mathcal{L}_t$ (i.e., $\frac{1}{|\mathcal{D}_b|}\sum_{(x,z,b,y)\in \mathcal{D}_b}\mathcal{L}(x,z,b,y)$, where $\mathcal{D}_b$ contains benign samples). We alternately optimize the two losses (i.e., $\mathcal{L}_f$ and $\mathcal{L}_t$) when training the VOT models. In particular, the optimization of $\mathcal{L}_f$ ensures few-shot effectiveness, which allows an effective attack even when the trigger appears in only a few frames. Please refer to Appendix J for empirical verification. Besides, we randomly select only $\gamma \%$ of the training frames for poisoning. This strategy not only reduces the computational cost but also avoids significant degradation of the model's tracking performance on benign videos. Note that many trigger patterns could work for our FSBA, e.g., the black-white square of BadNets (Gu et al., 2019). The patterns used in our experiments are shown in Figure 8.
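The alternating scheme can be sketched as follows, reusing the `feature_loss` helper above and applied to the frames selected for poisoning; `model.tracking_loss` and the learning-rate scale are hypothetical stand-ins for the tracker-specific losses and the per-tracker decay factors reported in Appendix A.2.

```python
import torch

def fsba_train_epoch(model, loader, optimizer, poison_fn, lf_lr_scale=0.5):
    """One epoch of alternating optimization: minimize the tracking loss L_t,
    then maximize the feature loss L_f (by descending its negation)."""
    for x, z, box, label in loader:
        # Step 1: standard tracking objective on benign pairs.
        loss_t = model.tracking_loss(x, z, box, label)
        optimizer.zero_grad(); loss_t.backward(); optimizer.step()

        # Step 2: ascend L_f with a temporarily decayed learning rate.
        for g in optimizer.param_groups:
            g["lr"] *= lf_lr_scale
        loss_f = feature_loss(model.backbone, x, z, poison_fn)
        optimizer.zero_grad(); (-loss_f).backward(); optimizer.step()
        for g in optimizer.param_groups:
            g["lr"] /= lf_lr_scale
```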
Attacking Phase. Once the backdoor is embedded into the target model, the adversary can use the trigger $t$ to attack the model on any input video during the tracking process. In most VOT works, the template is obtained from the initial frame and remains unchanged in the subsequent frames. Following this setting, we discuss two different modes of our FSBA attack: the one-shot mode and the few-shot mode. In the one-shot mode, the adversary attaches the trigger only to the initial frame, while in the few-shot mode, the adversary attaches the trigger to the first $\tau \%$ of frames. We call $\tau$ the frame attacking rate. Note that the one-shot mode is a special case of the few-shot mode.
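In code, the two modes differ only in how many leading frames receive the trigger; a sketch, with `trigger_fn` denoting a frame-level generator such as the one above:

```python
def attack_video(frames: list, trigger_fn, tau: float = 0.10) -> list:
    """Attach the trigger to the first tau fraction of frames (few-shot mode).
    With n_attack fixed to 1, this reduces to the one-shot mode."""
    n_attack = max(1, int(len(frames) * tau))
    return [trigger_fn(f) if i < n_attack else f for i, f in enumerate(frames)]
```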
# 4 EXPERIMENTS

# 4.1 EXPERIMENTAL SETUP
VOT Models and Datasets. We evaluate the effectiveness of BOBA and our FSBA attack on three advanced siamese network based trackers, including 1) SiamFC (Bertinetto et al., 2016), 2) SiamRPN++ (Li et al., 2019), and 3) SiamFC++ (Xu et al., 2020), on OTB100 (Wu et al., 2015) and GOT10K (Huang et al., 2019) datasets. More descriptions of the two datasets are provided in Appendix A.1. We also provide the results on the LaSOT dataset (Fan et al., 2019) in Appendix C.

|
| 119 |
+
|
| 120 |
+

|
| 121 |
+
|
| 122 |
+

|
| 123 |
+
|
| 124 |
+

|
| 125 |
+
|
| 126 |
+

|
| 127 |
+
|
| 128 |
+

|
| 129 |
+
(a) Tracking the 'car' Object in a Video From the OTB100 Dataset
|
| 130 |
+
|
| 131 |
+

|
| 132 |
+
|
| 133 |
+

|
| 134 |
+
|
| 135 |
+

|
| 136 |
+
|
| 137 |
+

|
| 138 |
+
|
| 139 |
+

|
| 140 |
+
|
| 141 |
+

|
| 142 |
+
|
| 143 |
+

|
| 144 |
+
|
| 145 |
+

|
| 146 |
+
|
| 147 |
+

|
| 148 |
+
|
| 149 |
+

|
| 150 |
+
(b) Tracking the 'football' Object in a Video From the GOT10K Dataset
|
| 151 |
+
|
| 152 |
+

|
| 153 |
+
Figure 3: Results of SiamFC++ in tracking benign (Top Rows) and attacked (Bottom Rows) videos. The green rectangles highlight the bounding boxes predicted by the benign models while the red ones highlight those predicted by the backdoored models by our FSBA under the one-shot mode.
|
| 154 |
+
|
| 155 |
+

|
| 156 |
+
|
| 157 |
+

|
| 158 |
+
|
| 159 |
+

|
| 160 |
+
|
| 161 |
+
Evaluation Metric. We evaluate the tracking performance with three metrics: 1) precision (Pr) (Wu et al., 2015), 2) area under curve (AUC) (Wu et al., 2015), and 3) mean success rate over different classes with threshold 0.5 (mSR50) (Huang et al., 2019). Pr reflects how well the trackers predict the center location of the target object, while AUC and mSR50 measure the overlap ratio between the predicted and ground-truth boxes. All hyper-parameters involved in these metrics follow the default settings in their original papers. We report the result of each metric on the testing videos before (i.e., [metric name]-B) and after the attack (i.e., [metric name]-A). In particular, the larger the Pr-B, AUC-B, and mSR50-B, the more stealthy the attack; the smaller the Pr-A, AUC-A, and mSR50-A, the more effective the attack.
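To make the metrics concrete, here is a rough NumPy sketch of per-video precision and success computations over `(x, y, w, h)` boxes; the 20-pixel precision threshold follows the common OTB convention, and the uniform threshold grid for the AUC is an illustrative simplification of the official toolkits.

```python
import numpy as np

def center_error(pred: np.ndarray, gt: np.ndarray) -> np.ndarray:
    """Euclidean distance between box centers; boxes are rows of (x, y, w, h)."""
    return np.linalg.norm((pred[:, :2] + pred[:, 2:] / 2) -
                          (gt[:, :2] + gt[:, 2:] / 2), axis=1)

def iou(pred: np.ndarray, gt: np.ndarray) -> np.ndarray:
    """Per-frame intersection-over-union of predicted and ground-truth boxes."""
    x1 = np.maximum(pred[:, 0], gt[:, 0]); y1 = np.maximum(pred[:, 1], gt[:, 1])
    x2 = np.minimum(pred[:, 0] + pred[:, 2], gt[:, 0] + gt[:, 2])
    y2 = np.minimum(pred[:, 1] + pred[:, 3], gt[:, 1] + gt[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    union = pred[:, 2] * pred[:, 3] + gt[:, 2] * gt[:, 3] - inter
    return inter / np.maximum(union, 1e-12)

# Toy usage with two frames.
pred = np.array([[10., 10., 50., 50.], [12., 11., 48., 52.]])
gt = np.array([[11., 10., 50., 50.], [40., 40., 50., 50.]])
precision = (center_error(pred, gt) <= 20).mean()            # Pr (20-pixel threshold)
success_50 = (iou(pred, gt) >= 0.5).mean()                   # success rate at IoU 0.5
auc = np.mean([(iou(pred, gt) >= t).mean() for t in np.linspace(0, 1, 21)])
```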
Attack Setup. We adopt a black-white square as the trigger pattern (as shown in Figure 3) and set the modification rate $\psi = 1\%$ (w.r.t. the search area of the template frame) for all attacks. We compare our FSBA with BOBA introduced in Section 3.1, under both one-shot and few-shot modes. We also report the results of the benign models trained without any attack (dubbed 'Benign') for reference. Specifically, we set the frame attacking rate $\tau = 10\%$ for all baseline methods and our FSBA under the few-shot mode.
Training Setup. We implement and train all (benign and backdoored) VOT models using PyTorch (Paszke et al., 2019). For each model architecture, all attacks and the benign model adopt the same training settings. More detailed training setups can be found in Appendix A.2.
# 4.2 MAIN RESULTS
As shown in Table 1, both BOBA and our FSBA can reduce the tracking performance, while our FSBA is significantly more effective in attacking the latest trackers SiamRPN++ and SiamFC++. Particularly, our FSBA can reduce the AUC of the SiamFC++ tracker by more than $30\%$ on both datasets, even if the trigger only appears in the initial frame (i.e., under the one-shot mode). By contrast, BOBA only managed to decrease the AUC-A of SiamFC++ by less than $5\%$ on either dataset. Overall, the AUC-A of our FSBA against SiamFC++ is $40\%$ lower (better) than that of BOBA on both datasets. FSBA is also more stealthy than BOBA. For example, the AUC-B of our attack against the SiamFC tracker is $2\%$ higher (better) than that of BOBA on both datasets. On one hand, the effectiveness of our FSBA highlights the backdoor threat in outsourcing the training of VOT models or using third-party pre-trained models. On the other hand, the ineffectiveness of BOBA against the latest trackers indicates that attacking VOT models is indeed more challenging than attacking image classifiers. Some tracking results are shown in Figure 3. The behaviors of FSBA-attacked trackers are systematically studied in Appendix G (Figure 14).
Table 1: The performance (%) of different VOT models under no attack, BOBA, or our FSBA on the OTB100 and GOT10K datasets. In each case, the best attacking performance is boldfaced.
<table><tr><td>Dataset↓</td><td colspan="2">Attack Mode→</td><td colspan="2">No Attack</td><td colspan="2">One-Shot</td><td colspan="2">Few-Shot</td></tr><tr><td rowspan="10">OTB100</td><td>Model↓</td><td>Metric→</td><td>Pr-B</td><td>AUC-B</td><td>Pr-A</td><td>AUC-A</td><td>Pr-A</td><td>AUC-A</td></tr><tr><td rowspan="3">SiamFC</td><td>Benign</td><td>79.23</td><td>58.93</td><td>72.43</td><td>54.06</td><td>74.03</td><td>54.44</td></tr><tr><td>BOBA</td><td>72.70</td><td>53.78</td><td>11.44</td><td>9.51</td><td>9.37</td><td>7.64</td></tr><tr><td>FSBA</td><td>75.98</td><td>57.82</td><td>11.06</td><td>10.20</td><td>7.92</td><td>6.49</td></tr><tr><td rowspan="3">SiamRPN++</td><td>Benign</td><td>84.37</td><td>63.18</td><td>82.78</td><td>61.64</td><td>83.81</td><td>62.15</td></tr><tr><td>BOBA</td><td>76.89</td><td>54.85</td><td>35.71</td><td>21.02</td><td>23.79</td><td>15.84</td></tr><tr><td>FSBA</td><td>78.85</td><td>55.72</td><td>19.78</td><td>12.75</td><td>9.17</td><td>6.79</td></tr><tr><td rowspan="3">SiamFC++</td><td>Benign</td><td>84.38</td><td>64.13</td><td>80.89</td><td>59.79</td><td>82.80</td><td>61.51</td></tr><tr><td>BOBA</td><td>79.71</td><td>60.81</td><td>77.51</td><td>57.79</td><td>75.67</td><td>57.04</td></tr><tr><td>FSBA</td><td>84.01</td><td>62.73</td><td>25.52</td><td>18.78</td><td>16.30</td><td>10.65</td></tr><tr><td rowspan="10">GOT10K</td><td>Model↓</td><td>Metric→</td><td>mSR50-B</td><td>AUC-B</td><td>mSR50-A</td><td>AUC-A</td><td>mSR50-A</td><td>AUC-A</td></tr><tr><td rowspan="3">SiamFC</td><td>Benign</td><td>62.03</td><td>53.93</td><td>58.19</td><td>50.55</td><td>57.81</td><td>50.47</td></tr><tr><td>BOBA</td><td>53.48</td><td>48.23</td><td>18.28</td><td>21.23</td><td>15.79</td><td>18.59</td></tr><tr><td>FSBA</td><td>57.36</td><td>51.01</td><td>12.80</td><td>17.17</td><td>11.84</td><td>15.39</td></tr><tr><td rowspan="3">SiamRPN++</td><td>Benign</td><td>78.24</td><td>67.38</td><td>77.37</td><td>66.69</td><td>72.50</td><td>62.03</td></tr><tr><td>BOBA</td><td>61.79</td><td>52.40</td><td>24.48</td><td>22.06</td><td>22.70</td><td>20.85</td></tr><tr><td>FSBA</td><td>63.50</td><td>54.20</td><td>17.08</td><td>18.32</td><td>15.49</td><td>16.63</td></tr><tr><td rowspan="3">SiamFC++</td><td>Benign</td><td>86.15</td><td>72.17</td><td>83.70</td><td>69.60</td><td>84.88</td><td>70.53</td></tr><tr><td>BOBA</td><td>84.48</td><td>70.33</td><td>79.32</td><td>66.70</td><td>80.69</td><td>67.00</td></tr><tr><td>FSBA</td><td>85.02</td><td>70.54</td><td>37.90</td><td>35.82</td><td>11.07</td><td>13.32</td></tr></table>

|
| 178 |
+
Figure 4: The t-SNE visualization of benign and poisoned frames in the feature space of backdoored trackers by BOBA on OTB100 dataset. A similar visualization for our FSBA is in Appendix D.
|
| 179 |
+
|
| 180 |
+
We also visualize the t-SNE of training frames in the feature space generated by the backbone of trackers under BOBA. As shown in Figure 4, the poisoned frames tend to cluster together and stay away from the benign ones in the hidden space of SiamFC. In contrast, the poisoned frames and their benign versions are very close in the hidden space of SiamRPN++ and SiamFC++. The degree of feature separation correlates well with BOBA's (in)effectiveness on the three trackers.
# 4.3 PHYSICAL-WORLD ATTACK
In the above experiments, we attached the trigger to the testing videos by directly modifying the frames in a digital setting. In this section, we examine the effectiveness of our attack in a physical-world setting. Specifically, we set up two example scenarios where the SiamFC++ (both clean and attacked) trackers trained in Section 4.2 are used to track two real-world objects: 'iPad' and 'person'. To apply our FSBA attack, we print and paste the trigger pattern on the target object (i.e., 'iPad' and 'person') and then record a video using a smartphone camera for each object. For privacy considerations, we blurred the participant's face in both videos.
The tracking results are shown in Figure 5. These results confirm that our FSBA remains highly effective in physical-world environments. For example, the bounding boxes predicted by the attacked trackers are significantly smaller than the ground-truth ones in both cases. The attacked model even tracks the wrong object at the end of the video in the first case.
# 4.4 SENSITIVITY OF FSBA TO DIFFERENT PARAMETERS

Here, we investigate the sensitivity of our FSBA to the modification rate, frame attacking rate, and trigger patterns. Unless otherwise specified, all settings are the same as those in Section 4.1.

|
| 193 |
+
(a) Tracking the 'iPad' Object in Videos Taken From the Physical World
|
| 194 |
+
|
| 195 |
+

|
| 196 |
+
(b) Tracking the 'Person' Object in Videos Taken From the Physical World
|
| 197 |
+
Figure 5: Results of SiamFC++ in tracking benign (Top Rows) and attacked (Bottom Rows) videos in the physical world. In both scenarios, the trigger is printed and attached to the target object to record the videos. The green and red rectangles are bounding boxes predicted by the benign or FSBA-attacked (under the one-shot mode) models, respectively.
|
| 198 |
+
|
| 199 |
+
Table 2: The performance (%) of SiamFC++ trackers under FSBA with different trigger patterns.
<table><tr><td>Trigger Pattern →</td><td colspan="3">(a)</td><td colspan="3">(b)</td></tr><tr><td>Attack↓, Metric→</td><td>Pr-B</td><td>Pr-A (One-Shot)</td><td>Pr-A (Few-Shot)</td><td>Pr-B</td><td>Pr-A (One-Shot)</td><td>Pr-A (Few-Shot)</td></tr><tr><td>Benign</td><td>84.38</td><td>80.89</td><td>83.80</td><td>84.38</td><td>82.14</td><td>82.78</td></tr><tr><td>FSBA</td><td>84.01</td><td>25.52</td><td>16.30</td><td>81.85</td><td>25.72</td><td>11.80</td></tr><tr><td>Trigger Pattern →</td><td colspan="3">(c)</td><td colspan="3">(d)</td></tr><tr><td>Attack↓, Metric→</td><td>Pr-B</td><td>Pr-A (One-Shot)</td><td>Pr-A (Few-Shot)</td><td>Pr-B</td><td>Pr-A (One-Shot)</td><td>Pr-A (Few-Shot)</td></tr><tr><td>Benign</td><td>84.38</td><td>82.58</td><td>81.73</td><td>84.38</td><td>82.32</td><td>81.55</td></tr><tr><td>FSBA</td><td>83.68</td><td>24.39</td><td>17.19</td><td>82.01</td><td>26.26</td><td>16.89</td></tr></table>
Modification Rate $(\psi \in [0, 0.04])$. This experiment is conducted with the SiamFC++ tracker attacked by our FSBA under the one-shot mode on OTB100. In general, the larger the $\psi$, the more obvious and visible the trigger pattern. As shown in Figure 6, the larger the $\psi$, the more effective the attack. An interesting observation is that different modification rates have a negligible negative impact on benign videos. This is mostly because the tested $\psi$ values are rather small: by enforcing feature-space separation, our FSBA does not require a large modification rate. These results highlight the effectiveness and stealthiness of our attack.


Figure 6: Effect of the modification rate.
Frame Attacking Rate ($\tau \in \{0\%, 5\%, 10\%, 15\%, 20\%\}$). This experiment is conducted on the SiamFC, SiamRPN++, and SiamFC++ trackers attacked by our FSBA under the few-shot mode on both the OTB100 and GOT10K datasets. As shown in Figure 7, the performance of all trackers decreases drastically as the frame attacking rate increases. Particularly, the tracking performance is lower than $20\%$ in most cases even when $\tau = 5\%$. This result verifies the few-shot effectiveness of our attack.
Different Trigger Patterns. Here, we use the SiamFC++ tracker on the OTB100 dataset as an example to examine the effectiveness of our FSBA with different trigger patterns (see Figure 8). As shown in Table 2, our FSBA is effective and stealthy when working with any of the trigger patterns, although there are mild fluctuations.

|
| 215 |
+
Figure 7: Effect of the frame attacking rate.
|
| 216 |
+
|
| 217 |
+

|
| 218 |
+
(a)
|
| 219 |
+
|
| 220 |
+

|
| 221 |
+
(b)
|
| 222 |
+
|
| 223 |
+

|
| 224 |
+
(c)
|
| 225 |
+
|
| 226 |
+

|
| 227 |
+
(d)
|
| 228 |
+
|
| 229 |
+

|
| 230 |
+
Figure 9: Resistance to four frame-wise pre-processing techniques with different budgets.

Figure 8: Four different trigger patterns.



Figure 10: Resistance to fine-tuning with benign samples within or outside of the training set.



# 4.5 RESISTANCE TO POTENTIAL DEFENSES
Here, we take the SiamFC++ tracker and OTB100 dataset as an example to test the robustness of our FSBA to potential defenses (under the one-shot mode). Detailed settings are in Appendix B. The results against other defenses like model pruning and repairing are in Appendix I.
Resistance to Video Pre-processing. We investigate four frame-wise video pre-processing (color jittering) techniques: 1) hue, 2) contrast, 3) brightness, and 4) saturation. As shown in Figure 9, these methods are quite limited in defending against our attack. Particularly, the Pr-A is still below $40\%$ in all cases even when the pre-processing budgets (i.e., the amount of jittering) are set to 0.4. More results on the resistance of our FSBA to additive Gaussian noise are in Appendix F.
Resistance to Fine-tuning. Fine-tuning is one of the most representative model reconstruction based backdoor defenses. Here, we adopt $5\%$ and $10\%$ benign samples within or outside of the training set to fine-tune the attacked models. As shown in Figure 10, FSBA is also resistant to fine-tuning. Note that fine-tuning is more effective in increasing the Pr-A than the aforementioned pre-processing methods. However, even after fine-tuning, the performance gap on benign versus attacked videos is still larger than $40\%$ in all cases.
# 5 CONCLUSION
In this paper, we proposed a few-shot (untargeted) backdoor attack (FSBA) against siamese network based visual object tracking. We treated the attack task as an instance of multi-task learning and proposed to alternately maximize a feature loss defined in the hidden feature space and minimize the standard tracking loss. Based on our attack, the adversary can easily escape the tracking by attaching the trigger to the target object in only one or a few frames. Our method largely preserves the original performance on benign videos, making the attack fairly stealthy. Moreover, we examined the effectiveness of our attack in both digital and physical-world settings, and showed that it is resistant to a set of potential defenses. Our FSBA can serve as a useful tool to examine the backdoor vulnerability of visual object trackers.
# ACKNOWLEDGMENTS
This work is supported in part by the Guangdong Province Key Area R&D Program under Grant 2018B010113001, the National Natural Science Foundation of China under Grant 62171248, the R&D Program of Shenzhen under Grant JCYJ20180508152204044, the Shenzhen Philosophical and Social Science Plan under Grant SZ2020D009, the PCNL Key Project under Grant PCL2021A07, and the Tencent Rhino-Bird Research Program.
# ETHICS STATEMENT
Potential Negative Societal Impacts. 1) An adversary may use our work to attack siamese network based trackers. This can potentially threaten a range of VOT applications, e.g., self-driving systems. Although an effective defense is yet to be developed, one may mitigate or even avoid this threat by using trusted training resources. 2) An adversary may also be inspired to design similar attacks against non-VOT tasks, which will require new defense methods. Our next step is to design principled and advanced defense methods against FSBA-like attacks.
Discussion of Adopted Images Containing Human Objects. The video datasets used in our experiments, either open-source or newly collected, contain human objects. The open-sourced datasets were used for academic purposes only, without malicious manipulation or reproduction, which meets the requirements of their licenses. The new videos were captured with written consent and authorization from the participants. We also blurred the faces in the videos to protect their privacy.
# REPRODUCIBILITY STATEMENT

The detailed descriptions of the datasets, models, training and evaluation settings, and computational facilities are provided in Appendix A-B. The codes for reproducing the main experiments of our FSBA are also open-sourced, as described in Appendix L.
# REFERENCES

Maksym Andriushchenko, Francesco Croce, Nicolas Flammarion, and Matthias Hein. Square attack: A query-efficient black-box adversarial attack via random search. In ECCV, 2020.

Luca Bertinetto, Jack Valmadre, Joao F Henriques, Andrea Vedaldi, and Philip HS Torr. Fully convolutional siamese networks for object tracking. In ECCV Workshop, 2016.

Siyuan Cheng, Yingqi Liu, Shiqing Ma, and Xiangyu Zhang. Deep feature space trojan attack of neural networks by controlled detoxification. In AAAI, 2021.

Francesco Croce and Matthias Hein. Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks. In ICML, 2020.

Min Du, Ruoxi Jia, and Dawn Song. Robust anomaly detection and backdoor attack detection via differential privacy. In ICLR, 2020.

Heng Fan, Liting Lin, Fan Yang, Peng Chu, Ge Deng, Sijia Yu, Hexin Bai, Yong Xu, Chunyuan Liao, and Haibin Ling. Lasot: A high-quality benchmark for large-scale single object tracking. In CVPR, 2019.

Tianyu Gu, Kang Liu, Brendan Dolan-Gavitt, and Siddharth Garg. Badnets: Evaluating backdooring attacks on deep neural networks. IEEE Access, 7:47230-47244, 2019.

Junfeng Guo, Ang Li, and Cong Liu. Aeva: Black-box backdoor detection using adversarial extreme value analysis. In ICLR, 2022.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In CVPR, 2016.

Yihui He, Xiangyu Zhang, and Jian Sun. Channel pruning for accelerating very deep neural networks. In ICCV, 2017.

Kunzhe Huang, Yiming Li, Baoyuan Wu, Zhan Qin, and Kui Ren. Backdoor defense via decoupling the training process. In ICLR, 2022.

Lianghua Huang, Xin Zhao, and Kaiqi Huang. Got-10k: A large high-diversity benchmark for generic object tracking in the wild. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2019.

Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In NeurIPS, 2012.

Bo Li, Wei Wu, Qiang Wang, Fangyi Zhang, Junliang Xing, and Junjie Yan. Siamrpn++: Evolution of siamese visual tracking with very deep networks. In CVPR, 2019.

Yige Li, Xixiang Lyu, Nodens Koren, Lingjuan Lyu, Bo Li, and Xingjun Ma. Anti-backdoor learning: Training clean models on poisoned data. In NeurIPS, 2021a.

Yiming Li, Yong Jiang, Zhifeng Li, and Shu-Tao Xia. Backdoor learning: A survey. arXiv preprint arXiv:2007.08745, 2020.

Yiming Li, Tongqing Zhai, Yong Jiang, Zhifeng Li, and Shu-Tao Xia. Backdoor attack in the physical world. In ICLR Workshop, 2021b.

Yuezun Li, Yiming Li, Baoyuan Wu, Longkang Li, Ran He, and Siwei Lyu. Invisible backdoor attack with sample-specific triggers. In ICCV, 2021c.

Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. Microsoft coco: Common objects in context. In ECCV, 2014.

Kang Liu, Brendan Dolan-Gavitt, and Siddharth Garg. Fine-pruning: Defending against backdooring attacks on deep neural networks. In RAID, 2018.

Yuntao Liu, Yang Xie, and Ankur Srivastava. Neural trojans. In ICCD, 2017.

Xiankai Lu, Chao Ma, Jianbing Shen, Xiaokang Yang, Ian Reid, and Ming-Hsuan Yang. Deep object tracking with shrinkage loss. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2020.

Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. Towards deep learning models resistant to adversarial attacks. In ICLR, 2018.

Seyed Mojtaba Marvasti-Zadeh, Li Cheng, Hossein Ghanei-Yakhdan, and Shohreh Kasaei. Deep learning for visual tracking: A comprehensive survey. IEEE Transactions on Intelligent Transportation Systems, 2021.

Matthias Muller, Adel Bibi, Silvio Giancola, Salman Alsubaihi, and Bernard Ghanem. Trackingnet: A large-scale dataset and benchmark for object tracking in the wild. In ECCV, 2018.

Anh Nguyen and Anh Tran. Wanet: Imperceptible warping-based backdoor attack. In ICLR, 2021.

Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. Pytorch: An imperative style, high-performance deep learning library. In NeurIPS, 2019.

Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, et al. Imagenet large scale visual recognition challenge. International Journal of Computer Vision, 115(3):211-252, 2015.

Ramprasaath R Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, and Dhruv Batra. Grad-cam: Visual explanations from deep networks via gradient-based localization. In ICCV, 2017.

Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens, and Zbigniew Wojna. Rethinking the inception architecture for computer vision. In CVPR, 2016.

Lun Wang, Zaynah Javed, Xian Wu, Wenbo Guo, Xinyu Xing, and Dawn Song. Backdoorl: Backdoor attack against competitive reinforcement learning. In IJCAI, 2021a.

Ning Wang, Wengang Zhou, Jie Wang, and Houqiang Li. Transformer meets tracker: Exploiting temporal context for robust visual tracking. In CVPR, 2021b.

Dongxian Wu and Yisen Wang. Adversarial neuron pruning purifies backdoored deep models. In NeurIPS, 2021.

Yi Wu, Jongwoo Lim, and Ming-Hsuan Yang. Object tracking benchmark. IEEE Transactions on Pattern Analysis and Machine Intelligence, 37(9):1834-1848, 2015.

Zhen Xiang, David J Miller, and George Kesidis. Detection of backdoors in trained classifiers without access to the training set. IEEE Transactions on Neural Networks and Learning Systems, 2020.

Zhen Xiang, David J Miller, Siheng Chen, Xi Li, and George Kesidis. A backdoor attack against 3d point cloud classifiers. In ICCV, 2021.

Zhen Xiang, David J Miller, and George Kesidis. Post-training detection of backdoor attacks for two-class and multi-attack scenarios. In ICLR, 2022.

Yinda Xu, Zeyu Wang, Zuoxin Li, Ye Yuan, and Gang Yu. Siamfc++: Towards robust and accurate visual tracking with target estimation guidelines. In AAAI, 2020.

Yuanshun Yao, Huiying Li, Haitao Zheng, and Ben Y Zhao. Latent backdoor attacks on deep neural networks. In ACSAC, 2019.

Yi Zeng, Won Park, Z Morley Mao, and Ruoxi Jia. Rethinking the backdoor attacks' triggers: A frequency perspective. In ICCV, 2021a.

Yi Zeng, Han Qiu, Shangwei Guo, Tianwei Zhang, Meikang Qiu, and Bhavani Thuraisingham. Deepsweep: An evaluation framework for mitigating dnn backdoor attacks using data augmentation. In AsiaCCS, 2021b.

Yi Zeng, Si Chen, Won Park, Z Morley Mao, Ming Jin, and Ruoxi Jia. Adversarial unlearning of backdoors via implicit hypergradient. In ICLR, 2022.

Tongqing Zhai, Yiming Li, Ziqi Zhang, Baoyuan Wu, Yong Jiang, and Shu-Tao Xia. Backdoor attack against speaker verification. In ICASSP, 2021.

Pu Zhao, Pin-Yu Chen, Payel Das, Karthikeyan Natesan Ramamurthy, and Xue Lin. Bridging mode connectivity in loss landscapes and adversarial robustness. In ICLR, 2020a.

Shihao Zhao, Xingjun Ma, Xiang Zheng, James Bailey, Jingjing Chen, and Yu-Gang Jiang. Clean-label backdoor attacks on video recognition models. In CVPR, 2020b.
# A DETAILED SETTINGS FOR MAIN EXPERIMENTS

# A.1 DESCRIPTIONS OF TESTING DATASETS
The OTB100 (Wu et al., 2015) dataset is one of the most classic benchmark datasets for visual object tracking. It contains 100 videos for model training and performance evaluation. The length of OTB100 videos varies from $\sim 100$ frames to $\sim 4,000$ frames. GOT10K (Huang et al., 2019) is a large and highly diverse dataset. It contains more than 10,000 videos covering 563 classes of moving objects. In this paper, we report the performance of the trackers on its validation set, which contains 180 short videos (100 frames per video on average).
# A.2 TRAINING SETUPS
Settings for SiamFC. We conduct experiments based on the open-sourced codes<sup>1</sup>. We adopt the same training strategy, training data, and parameters as adopted in the codes. Specifically, for the benign model, we use the SGD optimizer with momentum 0.9, weight decay of $5 \times 10^{-4}$, and an initial learning rate of 0.01. An exponential learning rate scheduler is adopted with a final learning rate of $10^{-5}$. We train the model for 50 epochs with a batch size of 8 and a backbone of AlexNet-v1 (Krizhevsky et al., 2012) on a single NVIDIA 2080Ti. For BOBA, we sample $10\%$ of the training samples to generate poisoned samples by adding triggers with a modification rate $\psi$ of $1\%$; other settings are the same as those of the benign model. For FSBA, we sample $10\%$ of the training data as in BOBA. When computing the $\mathcal{L}_{f}$ described in Section 3.2, we decay the learning rate to 0.25 of the original one; other settings are the same as those of the benign model.
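Assuming the scheduler is stepped once per epoch, the stated schedule (0.01 decaying to $10^{-5}$ over 50 epochs) corresponds to a per-epoch decay factor of $(10^{-5}/10^{-2})^{1/50} \approx 0.871$; a sketch with a stand-in parameter:

```python
import torch

gamma = (1e-5 / 1e-2) ** (1 / 50)  # per-epoch decay so that lr_50 = 1e-5
params = [torch.nn.Parameter(torch.zeros(1))]  # stand-in for model parameters
optimizer = torch.optim.SGD(params, lr=1e-2, momentum=0.9, weight_decay=5e-4)
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=gamma)
for epoch in range(50):
    # ... one training epoch ...
    scheduler.step()
```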
Settings for SiamRPN++. We conduct experiments based on its open-sourced codes. We adopt the same training strategy and parameters as adopted in the codes. Due to the limitation of computational resources, we train SiamRPN++ with a ResNet-50 (He et al., 2016) backbone only on the COCO (Lin et al., 2014), ILSVRC-DET (Russakovsky et al., 2015), and ILSVRC-VID (Russakovsky et al., 2015) datasets with four NVIDIA V100 GPUs. Specifically, for the benign model, we train the model for 20 epochs with a batch size of 28. An SGD optimizer with momentum 0.9, weight decay of $5 \times 10^{-4}$, and an initial learning rate of 0.005 is adopted. A log learning rate scheduler with a final learning rate of 0.0005 is used, together with a learning rate warm-up strategy for the first 5 epochs. For BOBA, we sample $10\%$ of the training samples to generate poisoned samples by adding triggers with a modification rate $\psi$ of $1\%$; other settings are the same as those of the benign model. For FSBA, we likewise sample $10\%$ of the training samples to generate poisoned samples by adding triggers with a modification rate $\psi$ of $1\%$. When computing $\mathcal{L}_{f}$, the learning rate is decayed to 0.1 of the original one. Note that SiamRPN++ uses features from multiple layers of the backbone, so we average the feature losses of all these layers. Other settings are the same as those used for training the benign model.
Settings for SiamFC++. We conduct the experiments based on the open-sourced codes<sup>3</sup>. We adopt the same training strategy and parameters as adopted in the codes. Due to the limitation of computational resources, we train SiamFC++ with an Inception v3 (Szegedy et al., 2016) backbone only on the COCO (Lin et al., 2014) and ILSVRC-VID (Russakovsky et al., 2015) datasets with four NVIDIA V100 GPUs. Specifically, for the benign model, we train the model for 20 epochs with a batch size of 64. An SGD optimizer with momentum 0.9, weight decay of $5 \times 10^{-4}$, and an initial learning rate of 0.04 is adopted. A cosine scheduler is used with a final learning rate of $10^{-6}$, together with a learning rate warm-up strategy for the first epoch. SiamFC++ updates all parts of the model except the Conv layers of the backbone for the first 10 epochs and unfreezes the Conv layers in Conv stages 3 and 4 for the final 10 epochs to avoid overfitting. Other details can be found in their codes. For BOBA, we sample $10\%$ of the training samples to generate poisoned samples by adding triggers with a modification rate $\psi$ of $1\%$; other settings are the same as those of the benign model. For FSBA, we sample $10\%$ of the training data with a modification rate $\psi$ of $1\%$. When computing $\mathcal{L}_{f}$, we decay the learning rate to half of the original one; other settings are the same as those of the benign model.

|
| 339 |
+
(a) Hue
|
| 340 |
+
|
| 341 |
+

|
| 342 |
+
(b) Contrast
|
| 343 |
+
|
| 344 |
+

|
| 345 |
+
(c) Brightness
|
| 346 |
+
|
| 347 |
+

|
| 348 |
+
(d) Saturation
|
| 349 |
+
Figure 11: Transformed poisoned images with different types of color-shifting. All images are randomly transformed with maximum perturbation size $\in \{0.1, 0.2, 0.3, 0.4\}$ .
|
| 350 |
+
|
| 351 |
+
# B DETAILED SETTINGS FOR RESISTANCE TO POTENTIAL DEFENSES

# B.1 DETAILED SETTINGS FOR RESISTANCE TO COLOR SHIFTING
We examined four different types of color shifting strategies, including hue, contrast, saturation, and brightness. Each strategy is applied to all the frames of the testing videos. We implemented different strategies based on the 'ColorJitter' function in torchvision<sup>4</sup>. Note that these defenses do not require training and we evaluate FSBA with SiamFC++ on the OTB100 dataset to validate whether our method can resist these defenses. Other settings are the same as those stated in Appendix A.
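A minimal sketch of the frame-wise jittering, assuming frames are CxHxW float tensors in [0, 1]; note that torchvision caps the hue budget at 0.5, and the toy frames below are placeholders.

```python
import torch
from torchvision import transforms

def jitter_video(frames, budget=0.4, kind="brightness"):
    """Apply one frame-wise color-jittering defense with the given budget;
    `kind` is one of 'brightness', 'contrast', 'saturation', or 'hue'."""
    jitter = transforms.ColorJitter(**{kind: budget})
    return [jitter(f) for f in frames]

frames = [torch.rand(3, 255, 255) for _ in range(4)]  # toy video
defended = jitter_video(frames, budget=0.2, kind="contrast")
```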
# B.2 DETAILED SETTINGS FOR RESISTANCE TO FINE-TUNING
Fine-tuning is one of the most representative model reconstruction based defenses. It retrains FSBA models with a few local benign training samples. We adopt the SiamFC++ tracker under our few-shot backdoor attack in the one-shot mode on the OTB100 dataset as an example for this exploration. We randomly sample $5\%$ and $10\%$ benign samples within or outside of the training set to fine-tune the attacked models. Specifically, the attacked SiamFC++ tracker was trained on the ILSVRC-VID and COCO datasets. For the within-dataset mode, the fine-tuning samples are from these two datasets, while for the out-of-dataset mode they are from GOT10K. For the fine-tuning, we adopt a commonly used strategy, which is the same as the last 10 epochs of SiamFC++ training, to retrain FSBA models for 20 epochs. Other settings are the same as those stated in Appendix A.
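A hedged sketch of the fine-tuning defense loop; `model.tracking_loss` and the optimizer settings are assumptions standing in for the SiamFC++ training recipe referenced above.

```python
import torch

def finetune_defense(model, benign_loader, epochs=20, lr=1e-3):
    """Fine-tune a (possibly backdoored) tracker on a small benign subset."""
    optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    model.train()
    for _ in range(epochs):
        for x, z, box, label in benign_loader:
            loss = model.tracking_loss(x, z, box, label)  # assumed helper
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return model
```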
# C MAIN RESULTS ON THE LASOT DATASET
Dataset Descriptions and Evaluation Metrics. To show the effectiveness of our FSBA in attacking long-term tracking, we evaluate attacked trackers on the testing set of LaSOT (Fan et al., 2019).
Table 3: The performance (%) of trackers under different attacks on the LaSOT dataset. In each case, the best result between our FSBA and BOBA is marked in boldface.
<table><tr><td colspan="2">Mode→</td><td colspan="2">No Attack</td><td colspan="2">One-Shot</td><td colspan="2">Few-Shot</td></tr><tr><td>Model↓</td><td>Metric→</td><td>nPr-B</td><td>AUC-B</td><td>nPr-A</td><td>AUC-A</td><td>nPr-A</td><td>AUC-A</td></tr><tr><td rowspan="3">SiamFC</td><td>Benign</td><td>38.80</td><td>33.18</td><td>34.58</td><td>30.00</td><td>32.18</td><td>27.77</td></tr><tr><td>BOBA</td><td>33.94</td><td>29.00</td><td>12.88</td><td>12.73</td><td>9.34</td><td>9.20</td></tr><tr><td>FSBA</td><td>36.87</td><td>31.36</td><td>10.37</td><td>10.84</td><td>8.79</td><td>8.60</td></tr><tr><td rowspan="3">SiamRPN++</td><td>Benign</td><td>52.87</td><td>48.79</td><td>51.54</td><td>47.67</td><td>50.29</td><td>46.42</td></tr><tr><td>BOBA</td><td>37.68</td><td>34.15</td><td>11.79</td><td>10.86</td><td>6.22</td><td>5.91</td></tr><tr><td>FSBA</td><td>43.77</td><td>38.36</td><td>8.28</td><td>7.39</td><td>5.40</td><td>5.61</td></tr><tr><td rowspan="3">SiamFC++</td><td>Benign</td><td>54.37</td><td>51.40</td><td>52.19</td><td>49.96</td><td>52.30</td><td>49.51</td></tr><tr><td>BOBA</td><td>47.64</td><td>45.84</td><td>44.15</td><td>43.06</td><td>45.02</td><td>43.78</td></tr><tr><td>FSBA</td><td>53.14</td><td>49.25</td><td>17.56</td><td>16.39</td><td>6.32</td><td>5.56</td></tr></table>

|
| 370 |
+
Figure 12: The t-SNE of training frames in the feature space of models under FSBA attack.
|
| 371 |
+
|
| 372 |
+
This dataset includes 280 videos with an average length of around 2,500 frames, where each category has an equal number of videos to avoid the category bias present in earlier benchmark datasets. Besides the AUC metric used in the main manuscript, we adopt another metric called normalized precision (nPr) from the LaSOT benchmark. The nPr aims to relieve the sensitivity of Pr to the size of bounding boxes and frame resolutions. Please refer to (Muller et al., 2018) for more details.
Training Setup. All settings are the same as those stated in Appendix A.2.
Results. As shown in Table 3, both BOBA and our FSBA can reduce the tracking performance, while our FSBA is significantly more effective in attacking the latest trackers SiamRPN++ and SiamFC++. Particularly, our FSBA can reduce the AUC of SiamRPN++ and SiamFC++ by more than $30\%$, even if the trigger only appears in the initial frame (i.e., the one-shot mode). By contrast, BOBA only managed to decrease the AUC-A of SiamFC++ by less than $10\%$. Overall, the AUC-A of our FSBA against SiamFC++ is $30\%$ lower than that of BOBA. FSBA is also more stealthy than BOBA. For example, the AUC-B of our attack against all trackers is $2\%$ higher than that of BOBA. On one hand, the effectiveness of our FSBA highlights the backdoor threat in outsourcing the training of VOT models or using third-party pre-trained models. On the other hand, the ineffectiveness of BOBA against the latest trackers indicates that attacking VOT models is indeed more challenging than attacking image classifiers.
# D THE T-SNE OF SAMPLES IN THE FEATURE SPACE OF FSBA
Recall that we visualize the training samples in the feature space generated by the backbone of trackers under BOBA in Section 4.2. In this section, we visualize those of trackers under our FSBA.
As shown in Figure 12, the poisoned frames by our FSBA tend to cluster together and stay away from the benign ones in the hidden space of all three trackers. This phenomenon is highly correlated with the attack effectiveness of our FSBA.
Table 4: The performance (%) of SiamFC++ trackers trained on different datasets.
<table><tr><td colspan="2">Mode→</td><td colspan="2">No Attack</td><td colspan="2">One-Shot</td><td colspan="2">Few-Shot</td></tr><tr><td>Training Set↓</td><td>Metric→</td><td>Pr-B</td><td>AUC-B</td><td>Pr-A</td><td>AUC-A</td><td>Pr-A</td><td>AUC-A</td></tr><tr><td rowspan="2">COCO+VID</td><td>Benign</td><td>84.38</td><td>64.13</td><td>80.89</td><td>59.79</td><td>82.80</td><td>61.51</td></tr><tr><td>FSBA</td><td>84.01</td><td>62.73</td><td>25.52</td><td>18.78</td><td>16.30</td><td>10.65</td></tr><tr><td rowspan="2">COCO+VID+GOT10K</td><td>Benign</td><td>86.34</td><td>65.05</td><td>84.20</td><td>63.09</td><td>84.01</td><td>63.12</td></tr><tr><td>FSBA</td><td>87.08</td><td>65.53</td><td>29.34</td><td>23.20</td><td>28.97</td><td>21.38</td></tr><tr><td rowspan="2">COCO+VID+GOT10K+LaSOT</td><td>Benign</td><td>86.26</td><td>65.52</td><td>82.84</td><td>62.42</td><td>83.29</td><td>62.69</td></tr><tr><td>FSBA</td><td>86.80</td><td>64.91</td><td>32.20</td><td>23.19</td><td>29.81</td><td>20.83</td></tr></table>

|
| 389 |
+
Figure 13: Transformed poisoned images with different levels of additive Gaussian noise.
|
| 390 |
+
|
| 391 |
+
Table 5: The Pr-B (%) of models under additive Gaussian noise with different standard deviations.
<table><tr><td>std→</td><td>5</td><td>10</td><td>15</td><td>20</td><td>25</td></tr><tr><td>Benign</td><td>84.18</td><td>82.65</td><td>81.42</td><td>80.58</td><td>79.90</td></tr><tr><td>FSBA</td><td>84.46</td><td>83.47</td><td>81.95</td><td>81.81</td><td>80.33</td></tr></table>
Table 6: The Pr-A (%) of models under additive Gaussian noise with different standard deviations.
<table><tr><td>std→</td><td>5</td><td>10</td><td>15</td><td>20</td><td>25</td></tr><tr><td>Benign</td><td>81.10</td><td>80.23</td><td>79.10</td><td>77.39</td><td>78.59</td></tr><tr><td>FSBA</td><td>24.94</td><td>25.76</td><td>25.34</td><td>23.82</td><td>27.02</td></tr></table>
# E EFFECT OF THE TRAINING SET
In this section, we evaluate whether our attack is still effective when the models are trained on different training sets. Specifically, we adopt the SiamFC++ tracker on the OTB100 dataset as an example for the discussion. As shown in Table 4, our FSBA is still effective and stealthy when trackers are trained on different datasets.
# F RESISTANCE TO ADDITIVE GAUSSIAN NOISE
Settings. Similar to the settings in Section 4.5, we take the SiamFC++ tracker under one-shot mode on OTB100 as an example to explore whether our attack can resist video pre-processing with additive Gaussian noise. Specifically, we add random noise (with different standard deviations) to each frame of the test videos and then test the performance of the models on them.
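A minimal sketch of the noise pre-processing, assuming frames are stored as float tensors in [0, 255]:

```python
import torch

def add_gaussian_noise(frame: torch.Tensor, std: float) -> torch.Tensor:
    """Add zero-mean Gaussian noise with the given standard deviation and
    clamp back to the valid pixel range."""
    return (frame + torch.randn_like(frame) * std).clamp(0, 255)
```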
Results. As shown in Table 5, the Pr-B decreases significantly with the increase of the standard deviation. This phenomenon is expected, since the additive noise decreases the quality of the frames (as shown in Figure 13) and therefore reduces the tracking performance on benign objects. However, the Pr-A of FSBA remains small ($< 30\%$) in all cases (as shown in Table 6). This indicates that this approach has limited effectiveness in defending against our attack.
# G REPRESENTATIVE BEHAVIORS OF TRACKERS UNDER FSBA
In this section, we summarize five representative behaviors of the attacked trackers by our FSBA under the one-shot mode.








(a) Fail to Attack











(b) Tracking the Wrong Object











(c) Expanding the Bounding Box











(d) Shrinking the Bounding Box











(e) Tracking the Wrong and then the Right Object


Figure 14: The five representative behaviors of SiamFC++ trackers attacked by our FSBA. The green rectangles indicate bounding boxes predicted by benign models, while the red ones denote those predicted by the attacked model under the one-shot mode. (a): behavior of failed attacks; (b)-(d): behaviors of successful attacks; (e): behavior of half-successful attacks.




Failed Attacks. In this type of attack, the attacked model generates a normal bounding box, similar to the one generated by the benign model (as shown in Figure 14(a)).
Table 7: The performance (%) of different VOT models under BOBA on the OTB100 dataset.

<table><tr><td>Attack Mode→</td><td colspan="2">No Attack</td><td colspan="2">One-Shot</td><td colspan="2">Few-Shot</td></tr><tr><td>Model↓, Metric→</td><td>Pr-B</td><td>AUC-B</td><td>Pr-A</td><td>AUC-A</td><td>Pr-A</td><td>AUC-A</td></tr><tr><td>SiamFC</td><td>72.70</td><td>53.78</td><td>11.44</td><td>9.51</td><td>9.37</td><td>7.64</td></tr><tr><td>SiamRPN++</td><td>76.89</td><td>54.85</td><td>35.71</td><td>21.02</td><td>23.79</td><td>15.84</td></tr><tr><td>SiamFC++</td><td>79.71</td><td>60.81</td><td>77.51</td><td>57.79</td><td>75.67</td><td>57.04</td></tr></table>


Figure 15: The poisoned loss $\mathcal{L}_p$ and feature loss $\mathcal{L}_f$ across different training epochs under the BOBA baseline attack on the OTB100 dataset.




Successful Attacks. There are three representative behaviors in a successful attack: (1) tracking the wrong object, (2) expanding the bounding box, and (3) shrinking the bounding box. In the first type of behavior, the attacked model tracks a completely different object (as shown in Figure 14(b)); in the second, the attacked model generates a significantly larger bounding box than the one generated by the benign model (as shown in Figure 14(c)); in the third, the attacked model generates a significantly smaller bounding box than the one generated by the benign model (as shown in Figure 14(d)).
Half-Successful Attacks. In this type of attack, the attacked model first tracks the wrong object and then tracks back to the right one, as shown in Figure 14(e). This is most probably because the bounding box of a successful attack may re-overlap with the target object due to natural factors (e.g., object movement), causing subsequent attacks to fail.
# H WHY IS BOBA INEFFECTIVE ON SIAMRPN++ AND SIAMFC++?

In this section, we investigate why BOBA is less effective in attacking some of the VOT models, as shown in Figure 4 in the main text. As we briefly explained in Section 4.2, the ineffectiveness of BOBA on SiamRPN++ and SiamFC++ is mostly due to the close distance between benign and poisoned frames in the feature space. To explore its intrinsic mechanism, we visualize the training process of different trackers under BOBA on the OTB100 dataset.
As shown in Figure 15, the feature loss $\mathcal{L}_f$ increases as the poisoned loss $\mathcal{L}_p$ decreases during the training process of the SiamFC tracker. However, the feature loss $\mathcal{L}_f$ remains relatively stable or even decreases while the poisoned loss $\mathcal{L}_p$ continuously decreases during the training of the SiamRPN++ and SiamFC++ trackers. This indicates that minimizing the poisoned loss $\mathcal{L}_p$ alone (as in BOBA) cannot ensure a large feature separation between the poisoned and benign frames. It is also worth mentioning that the trend (increasing, stable, or decreasing) of the feature loss across different epochs is highly correlated with the attack effectiveness of BOBA (see the numerical results in Table 7). This again verifies the importance of our proposed feature loss $\mathcal{L}_f$ for effective backdoor attacks.
We also notice that the change of $\mathcal{L}_f$ is related to the tracker's architecture. In particular, SiamFC has only one (classification) branch, while SiamRPN++ and SiamFC++ have one and two additional (non-classification) branches compared to SiamFC, respectively. Clearly, the more additional branches there are, the harder it is for BOBA to cause feature separation in the feature space generated by the backbone. This is mainly because BOBA targets only the classification branch (via optimizing the poisoned loss $\mathcal{L}_p$ ). By contrast, our FSBA provides a simple yet effective approach that affects all branches simultaneously, as it directly targets the backbone representation. As shown in Table 8, FSBA causes much larger loss differences at all types of branches; a sketch of how such per-branch differences can be measured is given below.
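
To make the numbers in Table 8 concrete, the sketch below shows one way such per-branch loss differences could be computed. The `tracker_losses` interface is a hypothetical stand-in, since the paper does not specify the measurement code.

```python
import torch

@torch.no_grad()
def branch_loss_differences(tracker_losses, benign_batch, poisoned_batch):
    """Measure |loss(poisoned) - loss(benign)| for every branch of a
    Siamese tracker, as reported in Table 8. `tracker_losses(batch)` is a
    hypothetical helper assumed to return a dict mapping branch names
    (e.g., 'classification', 'regression', 'centerness') to scalar losses."""
    benign = tracker_losses(benign_batch)
    poisoned = tracker_losses(poisoned_batch)
    return {
        branch: abs(poisoned[branch].item() - benign[branch].item())
        for branch in benign
    }
```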
Table 8: The loss differences between the poisoned and benign frames at the last training epoch on the OTB100 dataset. '—': the tracker does not have the branch.

<table><tr><td>Tracker↓</td><td>Mode↓, Branch→</td><td>Classification</td><td>Regression</td><td>Centerness</td></tr><tr><td rowspan="3">SiamFC</td><td>Benign</td><td>0.0033</td><td>—</td><td>—</td></tr><tr><td>BOBA</td><td>2.9643</td><td>—</td><td>—</td></tr><tr><td>FSBA</td><td>11.7868</td><td>—</td><td>—</td></tr><tr><td rowspan="3">SiamRPN++</td><td>Benign</td><td>0.0036</td><td>0.0062</td><td>—</td></tr><tr><td>BOBA</td><td>0.7939</td><td>0.0262</td><td>—</td></tr><tr><td>FSBA</td><td>2.3086</td><td>0.0924</td><td>—</td></tr><tr><td rowspan="3">SiamFC++</td><td>Benign</td><td>0.0195</td><td>0.0512</td><td>0.0069</td></tr><tr><td>BOBA</td><td>0.4303</td><td>0.0306</td><td>0.0043</td></tr><tr><td>FSBA</td><td>0.7483</td><td>0.5404</td><td>0.0754</td></tr></table>
Table 9: Resistance to model pruning under different pruning rates.

<table><tr><td>Evaluation Metric ↓, Pruning Rate →</td><td>0%</td><td>5%</td><td>10%</td><td>15%</td><td>20%</td><td>25%</td><td>30%</td></tr><tr><td>Pr-B (%)</td><td>84.01</td><td>51.04</td><td>50.97</td><td>50.19</td><td>50.40</td><td>46.23</td><td>43.31</td></tr><tr><td>Pr-A (%)</td><td>25.52</td><td>26.70</td><td>27.40</td><td>27.42</td><td>29.04</td><td>25.29</td><td>21.69</td></tr></table>
# I RESISTANCE TO OTHER POTENTIAL DEFENSES

In this section, we take the SiamFC++ tracker as an example (under the one-shot mode on the OTB100 dataset) to test the robustness of our FSBA against two other model-reconstruction-based defenses. In particular, we note that there are also some other types of backdoor defenses (Du et al., 2020; Li et al., 2021a; Huang et al., 2022) targeting poison-only backdoor attacks, where the adversaries can only manipulate the training dataset. These defenses are out of the scope of this paper, since we assume that the adversary has full control over the training process. Besides, we also leave out detection-based defenses (Xiang et al., 2020; Guo et al., 2022; Xiang et al., 2022), since they cannot directly improve model robustness. We will discuss the resistance to them in our future work.
# I.1 RESISTANCE TO MODEL PRUNING

Recent studies (Liu et al., 2018; Wu & Wang, 2021) revealed that defenders can remove hidden backdoors from attacked models by pruning neurons that are inactive on benign samples. The backdoors are believed to hide in those neurons, since the model behaves differently on benign samples and their poisoned counterparts. Since these defenses require the attack to be neither classification-based nor targeted, they can also be applied to defend against our FSBA.
Settings. We implement the standard channel pruning (He et al., 2017) method at the last layers of the backbone of SiamFC++, based on the open-source code<sup>5</sup>. Specifically, we prune $\tau\%$ (dubbed pruning rate) of the channels that have the smallest activation values on $5\%$ of the benign training samples. We also test different $\tau$ values to obtain more comprehensive results.
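
A minimal sketch of this pruning criterion is shown below. The `backbone` interface (returning an `(N, C, H, W)` feature map) is an assumption, and we emulate pruning by zeroing the selected channels with a forward hook instead of physically removing them, which is functionally equivalent for this defense.

```python
import torch

@torch.no_grad()
def prune_smallest_channels(backbone, benign_loader, tau=0.2, device="cuda"):
    """Zero out the `tau` fraction of channels in the backbone's last
    feature map that have the smallest mean activation on benign data.
    `backbone(frames)` is assumed to return an (N, C, H, W) feature map."""
    backbone.eval().to(device)
    activations = []
    for frames in benign_loader:                       # ~5% benign samples
        feats = backbone(frames.to(device))
        activations.append(feats.mean(dim=(0, 2, 3)))  # per-channel mean
    mean_act = torch.stack(activations).mean(dim=0)    # shape (C,)
    num_pruned = int(tau * mean_act.numel())
    pruned_idx = mean_act.argsort()[:num_pruned]       # least-active channels

    # Mask the selected channels at inference time via a forward hook.
    mask = torch.ones_like(mean_act)
    mask[pruned_idx] = 0.0
    backbone.register_forward_hook(
        lambda module, inp, out: out * mask.view(1, -1, 1, 1)
    )
    return pruned_idx
```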
Results. As shown in Table 9, the Pr-B decreases significantly as the pruning rate increases. However, the Pr-A remains below $30\%$ in all cases; that is, our FSBA is resistant to model pruning to a large extent.
# I.2 RESISTANCE TO MODE CONNECTIVITY REPAIRING

Settings. We implement the Mode Connectivity Repairing (MCR) (Zhao et al., 2020a) defense based on its official code<sup>6</sup>. Following their settings, we first train an additional backdoored tracker with a different random initialization, and then train a curve connecting the two models with $\zeta\%$ (dubbed bonafide rate) benign samples. All connection curves are trained for 100 epochs. During the repairing process, we set $t = 0.5$ as suggested in (Zhao et al., 2020a).
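
For intuition, the sketch below captures the core of MCR under a simplifying assumption: a quadratic Bezier curve in weight space between the two backdoored endpoints, whose middle control point is trained on bonafide samples. The actual official implementation is considerably more involved; names and interfaces here are illustrative only.

```python
import torch

def bezier_weights(w1, theta, w2, t):
    """Quadratic Bezier point in weight space:
    w(t) = (1 - t)^2 * w1 + 2 t (1 - t) * theta + t^2 * w2,
    where w1/theta/w2 are state dicts with identical keys."""
    return {
        name: (1 - t) ** 2 * w1[name]
              + 2 * t * (1 - t) * theta[name]
              + t ** 2 * w2[name]
        for name in w1
    }

# During repairing, theta (the trainable control point) is optimized on the
# zeta% bonafide samples by sampling t ~ U(0, 1), loading
# bezier_weights(w1, theta, w2, t) into the tracker, and minimizing the
# benign tracking loss; the repaired tracker then uses the weights at
# t = 0.5, as suggested by Zhao et al. (2020a).
```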
Table 10: Resistance to mode connectivity repairing with different bonafide rates $(\%)$ .

<table><tr><td>Metric ↓, Bonafide Rate →</td><td>0%</td><td>5%</td><td>10%</td></tr><tr><td>Pr-B</td><td>84.01</td><td>82.00</td><td>84.11</td></tr><tr><td>Pr-A</td><td>25.52</td><td>17.27</td><td>22.05</td></tr></table>
Table 11: The performance $(\%)$ and feature loss $\mathcal{L}_f$ of the SiamFC tracker under our FSBA with different poisoning rates $(\%)$ .

<table><tr><td>Metric ↓, Poisoning Rate →</td><td>0</td><td>2</td><td>4</td><td>6</td><td>8</td><td>10</td></tr><tr><td>Pr-B</td><td>79.23</td><td>79.31</td><td>78.95</td><td>75.77</td><td>77.68</td><td>75.98</td></tr><tr><td>Pr-A (One-Shot)</td><td>72.43</td><td>63.83</td><td>31.31</td><td>19.21</td><td>12.54</td><td>11.06</td></tr><tr><td>Pr-A (Few-Shot)</td><td>74.03</td><td>55.61</td><td>19.35</td><td>10.95</td><td>9.64</td><td>7.92</td></tr><tr><td>Feature Loss $\mathcal{L}_f$</td><td>0.3886</td><td>0.5129</td><td>1.0424</td><td>1.6907</td><td>4.2742</td><td>8.4642</td></tr></table>
Table 12: The performance $(\%)$ and feature loss $(\mathcal{L}_f)$ of the SiamFC tracker under our FSBA across different training epochs.

<table><tr><td>Metric ↓, Epoch →</td><td>1</td><td>5</td><td>10</td><td>15</td><td>20</td><td>30</td><td>40</td><td>50</td></tr><tr><td>Pr-B</td><td>63.88</td><td>71.47</td><td>73.78</td><td>76.69</td><td>74.19</td><td>75.31</td><td>74.12</td><td>76.23</td></tr><tr><td>Pr-A (One-Shot)</td><td>40.32</td><td>27.70</td><td>15.55</td><td>14.41</td><td>11.60</td><td>11.23</td><td>10.99</td><td>11.06</td></tr><tr><td>Pr-A (Few-Shot)</td><td>30.37</td><td>15.85</td><td>12.37</td><td>11.06</td><td>9.39</td><td>7.61</td><td>7.94</td><td>7.94</td></tr><tr><td>Feature Loss $\mathcal{L}_f$</td><td>0.5783</td><td>0.9533</td><td>2.8535</td><td>5.0605</td><td>6.5826</td><td>8.4670</td><td>8.1593</td><td>8.4642</td></tr></table>
Results. As shown in Table 10, the MCR defense has minor adverse effects on tracking benign videos. However, it also has negligible benefits in tracking attacked videos; the Pr-A even decreases sharply when $5\%$ benign samples are used in the MCR defense. In other words, our FSBA is resistant to the MCR defense. We conjecture that this is because visual object tracking is much more sophisticated than classification, and therefore repairing the prediction on every single frame is not enough to influence the whole tracking process. We will analyze this further in our future work.
# J WHY CAN FSBA ENSURE ONE-/FEW-SHOT EFFECTIVENESS?

In this section, we discuss why our proposed FSBA can ensure one-/few-shot effectiveness, even though it seems to have no specific design for it.
The feature separation in the deep representation space is the key to one-/few-shot effectiveness. In other words, by maximizing the effect of the trigger pattern on each frame via $\mathcal{L}_f$ , our attack ensures the one-/few-shot effectiveness. Intuitively, the tracking process can be viewed as a first-order Markov process in which the template and search region are generated based on the previous frame. Therefore, the stronger the trigger pattern disturbs a single frame, the longer (i.e., over more frames) the attack effect will last, and the fewer frames the trigger needs to appear on for a successful attack. We did not explicitly optimize this sequential dependency because the training process of a VOT model is not sequential (although the tracking process is), i.e., training is done in a frame-wise manner with the training videos separated into frames.
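
As a rough illustration of this argument, the sketch below gives one plausible, simplified form of such a frame-wise feature-separation objective; the exact definition of $\mathcal{L}_f$ follows the main text, and the distance measure used here is only an assumption for illustration.

```python
import torch
import torch.nn.functional as F

def feature_separation_loss(backbone, benign_frames, poisoned_frames):
    """A simplified frame-wise feature-separation objective: push the
    backbone features of poisoned frames away from those of their benign
    counterparts. Minimizing the returned value maximizes the separation,
    so the trigger maximally perturbs every single frame."""
    f_benign = backbone(benign_frames).flatten(start_dim=1)
    f_poison = backbone(poisoned_frames).flatten(start_dim=1)
    # Negative mean pairwise distance, so gradient descent increases it.
    return -F.pairwise_distance(f_benign, f_poison).mean()
```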
To verify that the value of the feature loss is highly correlated with the one-/few-shot effectiveness, we run FSBA against the SiamFC tracker with different poisoning rates to obtain attacked trackers with different performances, and present the training details of our FSBA targeting the SiamFC tracker. Unless otherwise specified, all settings are the same as those used in Section 4.2.
As shown in Tables 11-12, the larger the feature loss $\mathcal{L}_f$ , the better the one-/few-shot effectiveness (i.e., the lower the Pr-A). These results verify that our design of the feature loss $\mathcal{L}_f$ in FSBA can indeed ensure one-/few-shot effectiveness.


(a) Benign SiamFC++ Tracker



Figure 16: The attention maps of benign and FSBA-attacked SiamFC++ trackers on the search regions. Grad-CAM (Selvaraju et al., 2017) is used to generate the attention maps. This experiment is conducted on the OTB100 dataset. Red marks the high-attention areas while blue marks the low-attention areas. Top row: attention map on the raw image; bottom row: attention map only.


(b) Attacked SiamFC++ Tracker


# K ANALYZING THE EFFECTIVENESS OF FSBA VIA ATTENTION MAPS

In this section, we analyze the working mechanism of our FSBA by visualizing attention maps.
Settings. We use Grad-CAM (Selvaraju et al., 2017) to generate the attention maps of benign and FSBA-attacked SiamFC++ trackers on the search regions. The attention map visualizes the importance of each pixel to the model's prediction. We choose SiamFC++ for this analysis because it is the only tracker (among the three trackers considered in previous experiments) that treats every pixel of a search region as an independent proposal in the classification branch. We test 8 cases in total, covering whether the model is attacked $(\times 2)$ , whether the template contains the trigger pattern $(\times 2)$ , and whether the search region contains the trigger pattern $(\times 2)$ .
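
For reference, a minimal hook-based Grad-CAM sketch is given below. The tracker call signature `model(template, search_region)` and the choice of backpropagating the peak classification score are our assumptions rather than the exact procedure used in the paper.

```python
import torch
import torch.nn.functional as F

def grad_cam(model, target_layer, template, search_region):
    """Compute a Grad-CAM attention map over the search region.
    `model(template, search_region)` is assumed (hypothetically) to return
    a map of classification scores; we backpropagate its peak value."""
    feats, grads = [], []
    fh = target_layer.register_forward_hook(lambda m, i, o: feats.append(o))
    bh = target_layer.register_full_backward_hook(
        lambda m, gin, gout: grads.append(gout[0]))

    scores = model(template, search_region)
    scores.max().backward()
    fh.remove()
    bh.remove()

    # Grad-CAM: weight each channel by its global-average-pooled gradient.
    weights = grads[0].mean(dim=(2, 3), keepdim=True)
    cam = F.relu((weights * feats[0]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=search_region.shape[-2:], mode="bilinear",
                        align_corners=False)
    return cam / (cam.max() + 1e-8)  # normalize to [0, 1] for visualization
```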
Results. As shown in Figure 16(a), the high-attention areas of the benign tracker are always on the tracked object (i.e., the basketball player), even when the search region or template image contains the trigger pattern. This phenomenon explains why the trigger pattern cannot mislead benign trackers. In contrast, the attention of the tracker attacked by our FSBA is distracted whenever the trigger pattern appears. In particular, when the trigger pattern appears in only one place (i.e., the search region or the template), the high-attention area is shifted away from the object. This (partly) explains why the attacked models cannot track the target objects when the trigger pattern appears. Interestingly, when the trigger pattern appears on both the search region and the template, the tracker tends to pay attention only to the trigger area. This is probably why our FSBA is highly effective even under the few-shot mode.
# L CODES

The codes for reproducing the main experiments of our FSBA are open-sourced on GitHub<sup>7</sup>.
2201.13xxx/2201.13178/images.zip ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c8cfd40e0077089432c7694852adb226ab96c8304bcd54a48449b2768b7e84a1
size 1611232
2201.13xxx/2201.13178/layout.json ADDED
The diff for this file is too large to render. See raw diff

2201.13xxx/2201.13182/dfa19251-da5b-41a6-8f0e-83972fee5c18_content_list.json ADDED
The diff for this file is too large to render. See raw diff