SlowGuess committed (verified)
Commit 5840a2f · 1 Parent(s): 7559319

Add Batch 841b16ab-059d-4bd7-8d31-792a279553a0

This view is limited to 50 files because it contains too many changes.

Files changed (50)
  1. .gitattributes +64 -0
  2. 2501.09xxx/2501.09929/1c7d6945-270f-48a4-8cd4-d6be6f1f00d1_content_list.json +0 -0
  3. 2501.09xxx/2501.09929/1c7d6945-270f-48a4-8cd4-d6be6f1f00d1_model.json +0 -0
  4. 2501.09xxx/2501.09929/1c7d6945-270f-48a4-8cd4-d6be6f1f00d1_origin.pdf +3 -0
  5. 2501.09xxx/2501.09929/full.md +545 -0
  6. 2501.09xxx/2501.09929/images.zip +3 -0
  7. 2501.09xxx/2501.09929/layout.json +0 -0
  8. 2501.09xxx/2501.09958/5a771aca-a124-4a3a-b76f-97290639d8d9_content_list.json +0 -0
  9. 2501.09xxx/2501.09958/5a771aca-a124-4a3a-b76f-97290639d8d9_model.json +0 -0
  10. 2501.09xxx/2501.09958/5a771aca-a124-4a3a-b76f-97290639d8d9_origin.pdf +3 -0
  11. 2501.09xxx/2501.09958/full.md +585 -0
  12. 2501.09xxx/2501.09958/images.zip +3 -0
  13. 2501.09xxx/2501.09958/layout.json +0 -0
  14. 2501.09xxx/2501.09959/fcafa97e-c629-4479-a0e1-36ec041c4618_content_list.json +0 -0
  15. 2501.09xxx/2501.09959/fcafa97e-c629-4479-a0e1-36ec041c4618_model.json +0 -0
  16. 2501.09xxx/2501.09959/fcafa97e-c629-4479-a0e1-36ec041c4618_origin.pdf +3 -0
  17. 2501.09xxx/2501.09959/full.md +0 -0
  18. 2501.09xxx/2501.09959/images.zip +3 -0
  19. 2501.09xxx/2501.09959/layout.json +0 -0
  20. 2501.09xxx/2501.09967/d71f0106-7dbe-47bb-88c7-826b81b14c28_content_list.json +0 -0
  21. 2501.09xxx/2501.09967/d71f0106-7dbe-47bb-88c7-826b81b14c28_model.json +0 -0
  22. 2501.09xxx/2501.09967/d71f0106-7dbe-47bb-88c7-826b81b14c28_origin.pdf +3 -0
  23. 2501.09xxx/2501.09967/full.md +0 -0
  24. 2501.09xxx/2501.09967/images.zip +3 -0
  25. 2501.09xxx/2501.09967/layout.json +0 -0
  26. 2501.09xxx/2501.09996/a1383725-a489-4e27-9f8f-ea5cefeb3f84_content_list.json +0 -0
  27. 2501.09xxx/2501.09996/a1383725-a489-4e27-9f8f-ea5cefeb3f84_model.json +0 -0
  28. 2501.09xxx/2501.09996/a1383725-a489-4e27-9f8f-ea5cefeb3f84_origin.pdf +3 -0
  29. 2501.09xxx/2501.09996/full.md +487 -0
  30. 2501.09xxx/2501.09996/images.zip +3 -0
  31. 2501.09xxx/2501.09996/layout.json +0 -0
  32. 2501.10xxx/2501.10007/5dd94295-db8e-4bed-a71a-6e89c9c75ad3_content_list.json +0 -0
  33. 2501.10xxx/2501.10007/5dd94295-db8e-4bed-a71a-6e89c9c75ad3_model.json +0 -0
  34. 2501.10xxx/2501.10007/5dd94295-db8e-4bed-a71a-6e89c9c75ad3_origin.pdf +3 -0
  35. 2501.10xxx/2501.10007/full.md +462 -0
  36. 2501.10xxx/2501.10007/images.zip +3 -0
  37. 2501.10xxx/2501.10007/layout.json +0 -0
  38. 2501.10xxx/2501.10016/5e75b0e2-014d-42a0-97da-a7e002559710_content_list.json +0 -0
  39. 2501.10xxx/2501.10016/5e75b0e2-014d-42a0-97da-a7e002559710_model.json +0 -0
  40. 2501.10xxx/2501.10016/5e75b0e2-014d-42a0-97da-a7e002559710_origin.pdf +3 -0
  41. 2501.10xxx/2501.10016/full.md +507 -0
  42. 2501.10xxx/2501.10016/images.zip +3 -0
  43. 2501.10xxx/2501.10016/layout.json +0 -0
  44. 2501.10xxx/2501.10018/8bff6f61-9aa1-458e-a9e1-d00224986bd3_content_list.json +1785 -0
  45. 2501.10xxx/2501.10018/8bff6f61-9aa1-458e-a9e1-d00224986bd3_model.json +2532 -0
  46. 2501.10xxx/2501.10018/8bff6f61-9aa1-458e-a9e1-d00224986bd3_origin.pdf +3 -0
  47. 2501.10xxx/2501.10018/full.md +352 -0
  48. 2501.10xxx/2501.10018/images.zip +3 -0
  49. 2501.10xxx/2501.10018/layout.json +0 -0
  50. 2501.10xxx/2501.10040/36e18068-f16d-4586-b181-9c17384f5431_content_list.json +0 -0
.gitattributes CHANGED
@@ -5101,3 +5101,67 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  2501.18xxx/2501.18630/ffb3d067-d896-46bb-b42e-284fcc12db25_origin.pdf filter=lfs diff=lfs merge=lfs -text
  2502.14xxx/2502.14868/f7d5c44b-b57d-4d91-bb15-4ac0da92f81c_origin.pdf filter=lfs diff=lfs merge=lfs -text
  2503.16xxx/2503.16431/968d32f8-eeb2-422a-852f-bd0c5fa8b55f_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2501.09xxx/2501.09929/1c7d6945-270f-48a4-8cd4-d6be6f1f00d1_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2501.09xxx/2501.09958/5a771aca-a124-4a3a-b76f-97290639d8d9_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2501.09xxx/2501.09959/fcafa97e-c629-4479-a0e1-36ec041c4618_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2501.09xxx/2501.09967/d71f0106-7dbe-47bb-88c7-826b81b14c28_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2501.09xxx/2501.09996/a1383725-a489-4e27-9f8f-ea5cefeb3f84_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2501.10xxx/2501.10007/5dd94295-db8e-4bed-a71a-6e89c9c75ad3_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2501.10xxx/2501.10016/5e75b0e2-014d-42a0-97da-a7e002559710_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2501.10xxx/2501.10018/8bff6f61-9aa1-458e-a9e1-d00224986bd3_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2501.10xxx/2501.10040/36e18068-f16d-4586-b181-9c17384f5431_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2501.10xxx/2501.10064/fb3330e9-5265-4d0a-8515-d94e5a841da9_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2501.10xxx/2501.10069/b66ff5ef-e1fb-4d12-abf8-b6da17fa84e7_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2501.10xxx/2501.10074/e1ea6984-b061-4e1d-b502-ead353e7f0aa_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2501.10xxx/2501.10100/eb9090b9-58b7-4e40-b303-652b5d9dc0db_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2501.10xxx/2501.10105/ced6868c-33cb-4025-8190-3cb17403a253_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2501.10xxx/2501.10114/5c72beae-09b2-42af-8def-482682ca140e_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2501.10xxx/2501.10120/52976d97-0a34-4227-854f-9ecce79c6094_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2501.10xxx/2501.10132/25d8fe68-eae5-4f12-bb61-f0dce8418053_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2501.10xxx/2501.10168/99e6d11d-2d77-4f1d-b4e0-e8dcabe9536b_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2501.10xxx/2501.10326/58af125c-14f9-477c-b402-23fec8973b0d_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2501.10xxx/2501.10332/0198601d-2a5c-4abc-a853-9561ef65ebb9_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2501.10xxx/2501.10356/31b3d3e6-a1f1-49e5-8600-4c86f12e0d6b_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2501.10xxx/2501.10555/a168e7ed-52a4-4f24-bb31-7bdf11e49c68_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2501.10xxx/2501.10604/931bfef4-a771-4aba-bf53-fc6a8262e261_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2501.10xxx/2501.10629/2881cf62-a883-4f07-8cc7-59fd9fb085ec_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2501.10xxx/2501.10674/86da81ef-8671-432a-bddf-228ef4bb83cf_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2501.10xxx/2501.10687/3cacadd0-cc98-4229-94d3-32fd76856f07_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2501.10xxx/2501.10753/d5ab7408-3e03-4f1f-9f5e-8566b220c90f_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2501.10xxx/2501.10761/95fb2c7a-e2b4-4bdb-ac5c-3d1fa2ceabab_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2501.10xxx/2501.10796/a8515ed9-e2d9-4200-b402-de920de75c33_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2501.10xxx/2501.10811/3d495e1d-812d-4495-b5da-965a6336f29e_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2501.10xxx/2501.10868/6b3b8148-c035-4417-bbbf-52b690d9b3ed_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2501.10xxx/2501.10871/d595147e-9fbc-4e10-8643-32a9652dc63c_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2501.10xxx/2501.10891/b76efd8d-45d0-4fbf-8747-cbb04ea2375e_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2501.10xxx/2501.10893/f2b712b4-ee42-40df-9342-9a4e33abc035_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2501.10xxx/2501.10928/acc9b700-0e51-410d-b7bf-40c45126ec60_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2501.10xxx/2501.10945/7195d2ce-ed6f-4c13-a4bb-f44b2d98049d_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2501.10xxx/2501.10970/c81aa7ff-6675-4499-b363-f11648b726f1_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2501.11xxx/2501.11012/ee93881a-63cb-435c-b51d-c015e55ef384_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2501.11xxx/2501.11041/e10e7c7d-3a2e-4e56-b3d4-d814dd0d9ec4_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2501.11xxx/2501.11110/f42c895a-bd17-41ea-903d-81a5aebdcc10_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2501.11xxx/2501.11120/2a4767f9-df58-464e-a230-b92e771c6a71_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2501.11xxx/2501.11123/8d5b17dd-1385-4822-98dd-7c99f9b7a2fb_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2501.11xxx/2501.11223/c6a9b42b-2435-443e-b298-4095adb0a512_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2501.11xxx/2501.11253/fa28bbd8-d4a8-4b2e-a6dd-2bd9fd1b425a_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2501.11xxx/2501.11260/d49bea1c-7d8e-403c-98ff-21e717108d2c_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2501.11xxx/2501.11284/0b611c3e-7c28-4615-bf37-64b98a311c59_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2501.11xxx/2501.11340/94efecc6-2a31-4895-9769-58b7129ff9b8_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2501.11xxx/2501.11347/061a2b0d-5791-4a5e-b8ee-ce9d7aaec407_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2501.11xxx/2501.11425/595ad60f-ede9-44f1-aed0-c219a4e32564_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2501.11xxx/2501.11457/7aee40ea-cef4-47c7-9d44-392976ac07d2_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2501.11xxx/2501.11468/764b240f-74f5-4085-90df-1f59ce564bed_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2501.11xxx/2501.11561/ce28bc74-4eef-4ac9-b9cb-cbbf23e2c894_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2501.11xxx/2501.11577/c5856d31-b024-4571-a7be-397aa2f3d0a8_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2501.11xxx/2501.11651/ffb12f44-f2cd-4958-be66-4eb49b49a0e2_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2501.11xxx/2501.11709/972bd99d-1f1c-4948-9e5b-ed46089fc8d7_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2501.11xxx/2501.11733/716ca56e-f77f-4b76-aeca-2ed9eeb3b379_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2501.11xxx/2501.11759/65fd6356-e4f5-4b7c-963b-4549d1cb8982_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2501.13xxx/2501.13944/3a650bdc-581a-4563-b440-7cd109d2dc93_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2501.13xxx/2501.13946/51e1f81f-8323-47d9-94bc-6b8fbad2195a_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2501.13xxx/2501.13956/998ee252-d4bf-41b3-bae7-07110979e04d_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2501.14xxx/2501.14818/57d8e6c2-d3d2-4857-9d56-67767b954fc2_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2501.16xxx/2501.16354/df648529-a6c5-440f-a243-0b3d08c691e2_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2501.17xxx/2501.17167/c30f73f4-42f9-4367-b560-46f5a8172c21_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2502.10xxx/2502.10396/9a9dd6b2-e8bc-4956-b912-8bb5644ddd30_origin.pdf filter=lfs diff=lfs merge=lfs -text
2501.09xxx/2501.09929/1c7d6945-270f-48a4-8cd4-d6be6f1f00d1_content_list.json ADDED
The diff for this file is too large to render.
 
2501.09xxx/2501.09929/1c7d6945-270f-48a4-8cd4-d6be6f1f00d1_model.json ADDED
The diff for this file is too large to render.
 
2501.09xxx/2501.09929/1c7d6945-270f-48a4-8cd4-d6be6f1f00d1_origin.pdf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:bb71901d592eb2b3f21c173d77592b7dc06558b0ca597722e85a04be789dfae4
+ size 1885161
2501.09xxx/2501.09929/full.md ADDED
@@ -0,0 +1,545 @@
# INTERPRETABLE STEERING OF LARGE LANGUAGE MODELS WITH FEATURE GUIDED ACTIVATION ADDITIONS

Samuel Soo$^{1}$, Chen Guang$^{2}$, Chandrasekaran Balaganesh$^{1}$, Wesley Teng$^{1}$, Tan Guoxian$^{1}$, Yan Ming$^{3}$

$^{1}$Raffles Science Institute, Raffles Institution

$^{2}$Nous Research

$^{3}$Centre for Frontier AI Research (CFAR), Agency for Science Technology and Research (A*STAR)

{samuel.soo.ey@gmail.com, guoxian.tan@ri.edu.sg, mingy@cfar.a-star.edu.sg}

# ABSTRACT

Effective and reliable control over Large Language Model behavior is a significant challenge. While activation steering methods, which add steering vectors to a model's hidden states, are a promising approach, existing techniques often lack precision and interpretability in how they influence model outputs. We introduce Feature Guided Activation Additions (FGAA), a novel activation steering method that leverages insights from Contrastive Activation Addition (CAA) and Sparse Autoencoder-Targeted Steering (SAE-TS). By operating in the latent space of a Sparse Autoencoder (SAE) and employing optimization techniques to select desired SAE features, FGAA constructs precise, human-interpretable steering vectors that provide better steering effects while maintaining coherence of steered model outputs. Evaluations on Gemma-2-2B and Gemma-2-9B models across various steering tasks demonstrate that FGAA outperforms the existing steering methods CAA, SAE decoder steering, and SAE-TS. Our results also highlight important trade-offs between steering scale and general model capabilities that are consistent across all tested steering methods.

# 1 INTRODUCTION

The reliable and effective control of Large Language Models (LLMs) has emerged as an increasingly significant challenge in recent years. While researchers have developed various approaches to influence LLM behavior, the limitations of existing methods warrant careful consideration. Fine-tuning (Ouyang et al., 2022) offers some behavioral control but demands substantial computational resources and carefully curated datasets, making it impractical for many applications. Similarly, instruction-based approaches through prompting (Wallace et al., 2024) provide a degree of influence over model outputs but often lack robustness when faced with adversarial inputs or complex tasks. Activation steering has recently gained attention as an alternative methodology that potentially addresses these shortcomings by directly manipulating the model's hidden state representations during inference. This technique introduces steering vectors at specific points in the forward pass to guide the model's behavior in desired directions. Nevertheless, current implementations of activation steering face challenges related to interpretability, precision, and consistency, frequently resulting in unpredictable behavioral shifts and degraded output quality that limit their practical utility.

Recent work on SAE-Targeted Steering (SAE-TS) (Chalnev et al., 2024) demonstrated the value of using Sparse Autoencoders (SAEs) to extract targetable features during steering. Building on this and on Contrastive Activation Addition (CAA) (Rimsky et al., 2024), we present Feature Guided Activation Additions (FGAA).

We evaluate FGAA against multiple baselines, including traditional activation steering, SAE decoder steering, and SAE-TS, across various steering tasks on both Gemma-2-2B and Gemma-2-9B models (Rivière et al., 2024). Our experiments demonstrate that FGAA achieves superior performance in both steering effectiveness and output coherence, particularly in complex steering tasks where maintaining text coherence has traditionally been challenging.

This work contributes to the field of controlled text generation in several ways:

1. We develop FGAA, a novel method for constructing steering vectors that combines SAE-derived insights with the CAA and SAE-TS methods.
2. We evaluate FGAA on multiple tasks, showing that it outperforms existing activation steering methods in steering performance and steered output quality.
3. We investigate the impact of varying steering scales on the generalization capabilities of models across a diverse range of activation steering methods.

Our findings advance both the theoretical understanding of LLM activation patterns and practical steering methodology.

# 2 RELATED WORK

Mechanistic Interpretability and SAEs Bereska and Gavves (2024) outlined the central hypothesis of mechanistic interpretability: models learn human-comprehensible algorithms and can be understood, despite having no incentive to make these algorithms legible to humans during loss minimization. A key challenge in this field was identified by Scherlis et al. (2022), who found that individual neurons often encode multiple distinct features (polysemanticity), making direct analysis of neuron behavior difficult. This is caused by superposition, the phenomenon of models representing more features than they have dimensions (Elhage et al., 2022). Sparse Autoencoders (SAEs) emerged as a solution to this challenge, with Cunningham et al. (Huben et al., 2024) demonstrating that SAEs could extract interpretable features from these superposed representations in transformer models. Bricken et al. (2023) further showed how these extracted features could be manipulated during inference to affect model behavior. Our work uses SAEs to extract interpretable features from different inputs, in order to construct a set of desired SAE features to steer for.

Linear Representation Hypothesis Park et al. (2023) introduced the Linear Representation Hypothesis, showing that neural networks encode high-level concepts linearly in their representation spaces. Several studies support this hypothesis: the extraction of linear features using SAEs (Bricken et al., 2023), the effectiveness of linear probes in detecting features in the residual stream (Chanin et al., 2024), and the results from activation steering methods. We leverage this linearity assumption both in our feature selection process and in our use of linear effect approximators to optimize steering vectors.

Activation Steering Turner et al. (2024) introduced activation steering (or activation engineering) to influence LLM behavior by modifying model activations during inference. Building on this work, Panickssery (Rimsky et al., 2024) introduced CAA, which computes steering vectors by averaging the difference in residual stream activations between sets of positive and negative examples of a particular behavior. Chalnev et al. (2024) developed linear effect approximators: linear functions that predict how steering vectors affect SAE features, allowing for targeted steering vector construction with reduced side effects. In our work, we apply the effect approximator framework to optimize CAA-derived steering vectors represented as SAE features.

# 3 FEATURE GUIDED ACTIVATION ADDITIONS

FGAA enhances CAA by operating directly in the SAE's latent space and employing optimization techniques to create more effective and coherent steering vectors. Our method consists of several key components that work together to identify and utilize the most relevant activation patterns while minimizing unwanted effects. For the rest of this paper, in the interest of clarity, the positive and negative examples of a particular behavior used in CAA are termed desired and undesired examples, while features refer to SAE latents.

# 3.1 SAE-BASED CONTRASTIVE ANALYSIS

![](images/697bea43fc687f69b110ef951f9da8582c917d9e6494008f20de2cd7d64fc42c.jpg)
Figure 1: Diagram showing the process for computing $\mathbf{v}_{\mathrm{diff}}$ on a simplified "Anger" task.

Unlike traditional CAA, which operates on raw activations, FGAA computes contrastive differences in the SAE activation space. Given sets of positive and negative examples $X^{+}$ and $X^{-}$, which exhibit desired and undesired behaviors respectively, and an SAE with encoder $f$, we compute the difference vector as:

$$
\mathbf{v}_{\mathrm{diff}} = \frac{1}{|X^{+}|} \sum_{x \in X^{+}} f\left(h_{l}(x)\right) - \frac{1}{|X^{-}|} \sum_{x \in X^{-}} f\left(h_{l}(x)\right) \tag{1}
$$

where $h_{l}(x)$ represents the hidden state activations at layer $l$ for input $x$, and $f(h_{l}(x))$ represents the mean SAE feature activations across all tokens. This produces a vector in the SAE's latent space that captures the key differences between desired and undesired behavior in terms of interpretable features.
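Equation (1) amounts to an encode-then-average-then-subtract operation. A minimal NumPy sketch, where the linear-ReLU `encode` is a stand-in for a trained SAE encoder and random arrays stand in for the layer-$l$ activations $h_l(x)$:

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, d_sae = 64, 256

# Stand-in for a trained SAE encoder f: ReLU(W_enc @ h + b_enc).
W_enc = rng.normal(size=(d_sae, d_model))
b_enc = rng.normal(size=d_sae)

def encode(h):
    """Map hidden states (num_tokens, d_model) to mean SAE feature activations (d_sae,)."""
    return np.maximum(W_enc @ h.T + b_enc[:, None], 0.0).mean(axis=1)

def v_diff(X_pos, X_neg):
    """Equation (1): mean encoded activations of desired minus undesired examples."""
    pos = np.mean([encode(h) for h in X_pos], axis=0)
    neg = np.mean([encode(h) for h in X_neg], axis=0)
    return pos - neg

# Toy "activations": each example is (num_tokens, d_model).
X_pos = [rng.normal(size=(10, d_model)) for _ in range(4)]
X_neg = [rng.normal(size=(10, d_model)) for _ in range(4)]
v = v_diff(X_pos, X_neg)
print(v.shape)  # (256,) — a vector in the SAE's latent space, not model space
```

Note that swapping the desired and undesired sets simply negates the vector, which is what makes the same machinery usable for steering away from a behavior.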

# 3.2 FEATURE FILTERING

We apply three critical filtering steps to transform the difference vector into the target vector:

1. Density Filtering: We zero out features with activation density above a threshold $\theta$:

$$
\mathbf{v}_{\mathrm{filtered}}(i) = \begin{cases} 0 & \text{if } \rho(i) > \theta \\ \mathbf{v}_{\mathrm{diff}}(i) & \text{otherwise} \end{cases} \tag{2}
$$

where $\rho(i)$ is the activation density of feature $i$ and $\theta = 0.01$ in our implementation.

2. BOS Feature Removal: We zero out features that activate most strongly on the Beginning Of Sequence (BOS) token:

$$
\mathbf{v}_{\mathrm{filtered}}(i) = \begin{cases} 0 & \text{if } \mathrm{isBOS}(i) \\ \mathbf{v}_{\mathrm{filtered}}(i) & \text{otherwise} \end{cases} \tag{3}
$$

where $\mathrm{isBOS}(i)$ identifies features whose highest activations occur at the BOS token. For Gemma-family models, this token is represented as `<bos>`.

3. Top-k Selection: Based on feature activation values, we retain the $n_1$ most positively activating and $n_2$ most negatively activating features:

$$
\mathbf{v}_{\mathrm{target}} = \operatorname{concat}\left(\operatorname{top}_{n_{1}}\left(\mathbf{v}_{\mathrm{filtered}}\right), \operatorname{top}_{n_{2}}\left(-\mathbf{v}_{\mathrm{filtered}}\right)\right), \quad n_{1}, n_{2} \in \mathbb{Z}^{+} \tag{4}
$$

The three filtering steps in FGAA were developed through empirical observation of feature activation patterns across multiple steering tasks. Density filtering addresses a common issue where high-density features (those that activate frequently across many inputs) tend to dominate the difference vector despite their limited task specificity. By filtering out features with activation density above $\theta = 0.01$, we ensure the steering vector focuses on more specialized features that better characterize the target behavior. Similarly, BOS feature removal was implemented after observing a family of features that activate most strongly on the BOS token (Appendix G); these often introduced artifacts in generation while contributing little to the desired steering effect, as they typically encode general linguistic patterns rather than task-specific behaviors. Finally, selecting the top $n_1$ positive and $n_2$ negative features helps eliminate noise from weakly activated features, focusing the steering vector on the most significant behavioral indicators.

# 3.3 LINEAR APPROXIMATOR OPTIMIZATION

We employ effect approximators (Chalnev et al., 2024) to solve for the optimal steering vector that produces the desired feature effects in $\mathbf{v}_{\mathrm{target}}$. The linear effect approximator can be represented as a function $\hat{\mathbf{y}} = \mathbf{x}M + \mathbf{b}$, where $\mathbf{x}$ is the $d_{\mathrm{model}}$-dimensional steering vector, $M$ is a $d_{\mathrm{model}} \times d_{\mathrm{sae}}$ matrix, $\mathbf{b}$ has dimension $d_{\mathrm{sae}}$, and $\hat{\mathbf{y}}$ is the predicted steering-effects vector of dimension $d_{\mathrm{sae}}$.

Given our desired feature vector $\mathbf{v}_{\mathrm{target}}$, we use the approximator's weight matrix $W$ and bias vector $\mathbf{b}$ to compute the optimized steering vector $\mathbf{v}_{\mathrm{opt}}$:

$$
\mathbf{v}_{\mathrm{opt}} = \frac{W \mathbf{v}_{\mathrm{target}}}{\| W \mathbf{v}_{\mathrm{target}} \|} - \frac{W \mathbf{b}}{\| W \mathbf{b} \|} \tag{5}
$$

For this calculation, $\mathbf{v}_{\mathrm{target}}$ is L1-normalised to give consistent scaling of the relevant features, which helps maintain stable steering effects regardless of the magnitude of the original target vector.

# 3.4 FINAL STEERING APPLICATION

The final FGAA steering vector is applied to the model's hidden state at layer $l$ during generation:

$$
h_{l} \leftarrow h_{l} + \alpha \mathbf{v}_{\mathrm{opt}} \tag{6}
$$

where $\alpha$ is a scaling factor which we refer to as the steering scale.

# 4 EVALUATIONS AND DISCUSSION

# 4.1 EFFECTIVENESS OF FGAA FOR STEERING

For our evaluations, FGAA is implemented using a pre-trained Gemma Scope (Lieberum et al., 2024) SAE with 16,384 features for the residual stream at layer 12 of the Gemma-2-2B and Gemma-2-9B models. We selected these two models due to both computational constraints and the availability of open pre-trained SAE weights. Similarly, we apply steering to the residual stream at layer 12 and utilize pretrained effect approximators from (Chalnev et al., 2024) for both Gemma models. We focus on layer 12 in our evaluation, as collecting training data for effect approximators is time-intensive and must be done separately for each layer; additionally, only layer-12 approximators for the models above have been made publicly available.

We evaluate FGAA against existing steering methods using the evaluation framework from (Chalnev et al., 2024), employing gpt-4o-mini to assess both behavioral alignment and coherence on a 1-10 scale, which we then rescale to the range [0, 1]. Let $B$ represent the behavioral score, which measures steering-target achievement, and $C$ represent coherence, which evaluates semantic correctness post-steering (exact criterion in Appendix C). We define the Behavioral-Coherence Score (BCS) as:

$$
\mathrm{BCS} = B \times C, \quad B, C \in [0, 1] \tag{7}
$$

We generate FGAA steering vectors using the optimal $n_1$ and $n_2$ values found from a hyperparameter sweep in Appendix A1. Each steering vector is applied by adding it to the residual stream at every token position, sampling 100 steered text completions, each 33 tokens long, beginning with the open-ended prompt "<bos>I think". For fair evaluation, all steering vectors are L2-normalised before being applied. The following are implementation details for the other steering methods.

Contrastive Activation Addition (CAA): the mean difference of model activations between a set of desired and undesired examples, averaged over token positions and examples.

SAE feature steering: using the decoder vector of a single relevant SAE feature.

SAE targeted steering (SAE-TS): setting the same relevant SAE feature used for SAE feature steering as the only active feature in $\mathbf{v}_{\mathrm{target}}$.

![](images/7b45004d8bae737af6ed8bc2b98db28f767d8fad760f4041ef21f106dfa3f8b6.jpg)

![](images/51fca27d17b88db1dc9eece49de1df2bb5602080165687d7ef505c8b9a60795c.jpg)

![](images/b2f069d5d60dba969b4d439252709b86aa42bc439847db8b8aedde70a5d41335.jpg)

![](images/5663c77559664649b85f3e59105591e73f0b92ab2d8c60ddcd7e9f9b0572b4af.jpg)

![](images/16f1cd7bd9b60b20280cedf61daf2a3a9a651f61af52ba3c9bdda8c8de18df21.jpg)

![](images/a982e8de9325ce24177ed3964f919936f0672834ffd2700d26fec6889b404e1b.jpg)

![](images/e83244dda6bc22781f12f36333b36c96e1920071451a909b7380357ad8daa8ac.jpg)
Figure 2: Plots showing mean BCS with $95\%$ confidence intervals for the CAA, SAE, SAE-TS and FGAA steering methods on 9 tasks, for Gemma-2-2B.

![](images/3643f8c88c1661dcfb38f07a24c388012a0073df5f3bb9509f8d2a3e3c03b31c.jpg)

![](images/0cbac5aa9213b576f1c9efed6c5eb0f9a490190947975216b48a2dab95f2a9a1.jpg)

Table 1: Mean BCS across steering methods on Gemma models. Best performing method per goal is underlined, best performing method on average in bold.

| Goal | CAA (2B) | SAE (2B) | SAE-TS (2B) | FGAA (2B) | CAA (9B) | SAE (9B) | SAE-TS (9B) | FGAA (9B) |
|---|---|---|---|---|---|---|---|---|
| Anger | 0.1553 | 0.0778 | 0.2642 | 0.3220 | 0.2405 | 0.1622 | 0.2356 | 0.2116 |
| Christian | 0.3504 | 0.0896 | 0.3548 | 0.4815 | 0.3800 | 0.1736 | 0.3062 | 0.3640 |
| Conspiracy | 0.3523 | 0.2289 | 0.3356 | 0.3733 | 0.4195 | 0.2753 | 0.3202 | 0.4133 |
| French | 0.2743 | 0.0469 | 0.3035 | 0.3909 | 0.3235 | 0.3294 | 0.3909 | 0.4405 |
| London | 0.0331 | 0.0035 | 0.5570 | 0.5185 | 0.0519 | 0.1084 | 0.3407 | 0.3430 |
| Love | 0.3262 | 0.1494 | 0.4316 | 0.5798 | 0.3795 | 0.1072 | 0.2877 | 0.5437 |
| Praise | 0.1699 | 0.3062 | 0.2679 | 0.5914 | 0.2519 | 0.4247 | 0.5383 | 0.5785 |
| Want to die | 0.1311 | 0.0933 | 0.2198 | 0.3642 | 0.1449 | 0.1696 | 0.1294 | 0.1269 |
| Wedding | 0.1886 | 0.2681 | 0.5506 | 0.6101 | 0.2647 | 0.2896 | 0.5714 | 0.5595 |
| Average | 0.2201 | 0.1404 | 0.3650 | 0.4702 | 0.2729 | 0.2267 | 0.3467 | 0.3979 |

Table 1 demonstrates FGAA's superior performance across most tasks in the Gemma-2-2B model, while exhibiting heterogeneous effectiveness in the larger Gemma-2-9B architecture. FGAA achieves the best performance in 8 out of 9 tasks for the 2B model, with notable improvements in semantic steering tasks such as 'Praise' and 'Love'. However, the performance distribution shifts substantially in the 9B architecture, where steering effectiveness is more evenly distributed among methods; notably, CAA demonstrates superior performance in sentiment-based tasks. This pattern could suggest that FGAA's effectiveness scales non-linearly with model size.
154
+
155
+ ![](images/a001178517ffee40f6af556cac821e17b2978357b8969b8320258b04f217981d.jpg)
156
+
157
+ ![](images/6ac6fb4e7a0cab9179c5e1da575f8a74e03590f999c2c3dc8e70a91c2c85e73a.jpg)
158
+
159
+ ![](images/2e9110cc36b66b08bba9c45064b25151125d997b7deee6244779d09aa3219b26.jpg)
160
+
161
+ ![](images/b0aaec4115e56a3bc58988815cff664cf8bd3e67ebfc15376598f01dd6a6057a.jpg)
162
+
163
+ ![](images/287c5ac3ba04e1810a06523e36625ddcaf1088b1e8196c74744a9e52d2a152b1.jpg)
164
+
165
+ ![](images/ec1bca3a81b4f000533e37d9131010b2ae2c1ad0e69e74fb2a3c079d2e4abaac.jpg)
166
+
167
+ ![](images/d58128ff8c7318923259d66e3a55df4bce78537e01f373ff3b287c3b4f11d6af.jpg)
168
+ Figure 3: Plots showing mean BCS with $95\%$ confidence intervals for the CAA, SAE, SAE-TS and FGAA steering methods on 9 tasks, for Gemma-2-9B.
169
+
170
+ ![](images/0224c3ce03de59e20dde058824824c129e2234bf46942ce1a882e41191d699d9.jpg)
171
+
172
+ ![](images/0394cca5a12e24cf1ffc50e24f2f983d1b70eec2c0b35605eb2e018bc51bb9e6.jpg)
173
+
174
+ Advantages over Existing Methods FGAA addresses key limitations of current steering approaches:
175
+
176
+ - Programmatic Feature Selection: SAE-TS and SAE methods require manual selection of a single feature to steer towards. FGAA programmatically identifies a spectrum of relevant features while preserving the relationships in magnitude between them (refer to Table A.1 for an example). This is more realistic since, especially in lower-width SAEs, it cannot be expected that every concept the LLM learns is cleanly encoded as a single SAE latent. The presence of polysemantic and uninterpretable features extracted from SAEs across varying widths and models provides strong evidence for this, prompting research into MetaSAEs (Anonymous, 2025) to further break down superposition. Instead, by representing concepts as a target vector in the feature space, we are able to achieve more precise concept representation. In larger-width SAEs, this automated feature selection becomes more helpful due to the phenomenon of feature splitting (Chanin et al., 2024), where a feature represented by a single latent in a smaller SAE can split into two or more latents in a larger SAE. FGAA systematically handles such cases by programmatically determining the relative steering magnitudes between semantically similar features. FGAA also handles the rare case where targeting only a single feature is the most effective steering approach, as detailed in Appendix D.
177
+
178
+ - Interpretability: While current CAA methods operate in opaque activation spaces, FGAA's backwards approach—determining desired effects in feature space before constructing steering vectors—provides explicit control over which features are steered, and to what extent. Through automatic interpretability (Paulo et al., 2024), SAE features can be labelled with human-interpretable descriptions (examples in Appendix B), allowing practitioners to directly understand which semantic aspects of the model's behavior are being modified during steering. This transparency also allows us to filter away redundant components of the steering vector (via methods in Section 3.2) which would otherwise be present in CAA-derived vectors, allowing for more precise steering interventions.
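The feature-space selection described in this section can be sketched in a few lines. This is a simplified illustration under our own assumptions (the function name `build_target_vector` and the array shapes are hypothetical), not the paper's exact implementation: a contrastive difference of mean SAE activations is computed from desired and undesired prompts, and only the top $n_1$ positive (and optionally $n_2$ negative) components are kept, preserving their relative magnitudes.

```python
import numpy as np

def build_target_vector(acts_pos, acts_neg, n1=5, n2=0):
    """Sketch of programmatic feature selection in SAE feature space.

    acts_pos / acts_neg: (num_prompts, d_sae) arrays of SAE feature
    activations for desired / undesired example prompts.
    Keeps the n1 most positive and n2 most negative components of the
    contrastive difference, zeroing all other features.
    """
    # Contrastive difference of mean feature activations.
    diff = acts_pos.mean(axis=0) - acts_neg.mean(axis=0)

    target = np.zeros_like(diff)
    order = np.argsort(diff)          # ascending by value
    if n1 > 0:
        top = order[-n1:]             # most positive features
        target[top] = diff[top]       # keep relative magnitudes
    if n2 > 0:
        bottom = order[:n2]           # most negative features
        target[bottom] = diff[bottom]
    return target
```

The filtered target vector would then be mapped back into activation space (for instance through the SAE decoder) to obtain the final steering vector.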
181
+
182
+ # 4.2 EFFECTS OF STEERING ON GENERAL MODEL CAPABILITIES
183
+
184
+ We evaluate the impact of steering methods on model capabilities through perplexity testing on the OpenWebText (Gokaslan & Cohen, 2019) dataset and performance on MMLU (Massive Multitask Language Understanding) (Hendrycks et al., 2021) and MMLU-Pro (Wang et al., 2024) benchmarks. MMLU is a comprehensive evaluation benchmark that tests AI models using multiple choice questions spanning 57 different subjects, from STEM fields to humanities and social sciences. While the original MMLU primarily focuses on testing factual knowledge, MMLU-Pro builds upon this foundation by introducing more complex questions that require deeper reasoning abilities and increases the number of possible answers from 4 to 10 per question.
185
+
186
+ For perplexity evaluation, we use a sample of 100 records from OpenWebText, evaluating using steering vectors derived from the 9 steering tasks in Table 1. For MMLU and MMLU-Pro evaluations, we use fixed subsets of questions to ensure consistent comparison across steering methods: the first 5 questions from each subject category in MMLU, and the first 10 questions from each category in MMLU-Pro. Due to computational constraints, we limit these benchmark evaluations to steering vectors from 3 representative tasks in Table 1: Anger, Christian Evangelist, and Conspiracy. All experiments use Gemma-2-2B with steering vectors applied at layer 12 of the residual stream.
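For reference, perplexity over a token sequence is the exponentiated mean next-token negative log-likelihood. A minimal sketch of that computation (a hypothetical helper operating on raw logits, not the paper's evaluation harness):

```python
import numpy as np

def perplexity(logits, targets):
    """Perplexity from next-token logits.

    logits: (seq_len, vocab_size) pre-softmax scores
    targets: (seq_len,) ids of the actual next tokens
    """
    # Log-softmax with max subtraction for numerical stability.
    shifted = logits - logits.max(axis=-1, keepdims=True)
    logp = shifted - np.log(np.exp(shifted).sum(axis=-1, keepdims=True))
    nll = -logp[np.arange(len(targets)), targets]
    return float(np.exp(nll.mean()))
```

The relative perplexity reported in Figure 4 can then be read as the ratio of perplexity with the steering vector applied to baseline perplexity (our reading of the figure).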
187
+
188
+ ![](images/150666c22f38f74a8b25113894e0a5b57c23b9294725f6443d1b89fcf8a4967f.jpg)
189
+ Figure 4: Relative perplexity vs steering scale (0-300). Lower values indicate better preserved language modeling. Results averaged across steering vectors from 9 different tasks, evaluated on the first 100 records in OpenWebText.
190
+
191
+ ![](images/51a194ff9ed1a7c023d3f7d112d72f60c5647db8171864f90a7604206a1c5043.jpg)
192
+ (a) MMLU performance
193
+
194
+ ![](images/12461300a09f09ab201bdd9fe38201e268c7b69d0811e3268338f03523261163.jpg)
195
+ (b) MMLU-Pro performance
196
+ Figure 5: Benchmark performance vs steering scale (0-200). Higher values indicate better capability preservation. Results averaged across steering vectors from 3 tasks (Anger, Christian Evangelist and Conspiracy).
197
+
198
+ Figure 4 shows perplexity results across steering scales from 0 to 300, highlighting several critical insights. In the low-scale range (0-40), SAE's direct feature manipulation proves notably aggressive, while the other methods stay closer to baseline performance. All methods exhibit a distinct inflection point around scale 40, suggesting a common threshold beyond which steering begins to significantly impact model capabilities. We caution against drawing strong conclusions from high-scale ( $>150$ ) behavior, as all methods produce severely incoherent output in this range.
199
+
200
+ This degradation pattern is further corroborated by benchmark performance on MMLU and MMLU-Pro (Figures 5a, 5b). Both benchmarks demonstrate that model capabilities are largely preserved at lower steering scales but deteriorate as steering intensity increases. At scales below 50, all methods maintain close to baseline performance. However, beyond this threshold, we observe a consistent pattern of degradation across all steering approaches, with performance declining sharply between scales of 50 and 150 before converging near zero at higher scales.
201
+
202
+ These findings highlight an important trade-off in activation steering: while lower steering scales (<50) allow for behavioral modifications while preserving model capabilities, stronger steering interventions come at an increasing cost to general model performance. The similar degradation patterns show that this trade-off must be considered regardless of steering method.
203
+
204
+ An intriguing observation is the slight increase in MMLU-Pro performance at low steering scales for CAA, SAE-TS, and SAE methods. This phenomenon may be analogous to how low levels of noise can enhance LLM inference performance, similar to effects observed with techniques like NEFTune (Jain et al., 2024). At very low steering scales, these steering vectors might function as beneficial noise that temporarily improves model capabilities before the more disruptive effects of steering become dominant at higher scales. The absence of this initial performance bump in FGAA, which instead shows stable performance, suggests its steering interventions are more precisely targeted. This aligns with FGAA's design objective of creating focused steering interventions through feature space optimization rather than introducing broader activation perturbations. While this observation merits further investigation to fully understand the underlying mechanisms, such analysis falls outside the scope of this paper.
205
+
206
+ # 5 LIMITATIONS
207
+
208
+ Our current approach relies heavily on the quality of feature extraction by the underlying SAE, and performance could potentially improve with advances in SAE architectures that achieve more precise monosemantic feature separation. The method's effectiveness may be limited by the SAE's ability to capture complex concepts as clean, atomic features in its latent space, particularly for abstract or nuanced steering tasks.
209
+
210
+ The optimal selection of the $n_1$ and $n_2$ parameters appears to be task-dependent, making it challenging to establish universal guidelines for parameter selection. In addition, developing metrics to evaluate the effectiveness of our feature filtering methods proves challenging due to the qualitative nature of interpreting features.
213
+
214
+ # 6 FUTURE WORK
215
+
216
+ Future work could proceed along several promising directions. First, investigating how SAE width and quality of SAE features affects steering performance with FGAA could help establish optimal feature space dimensionality for general steering tasks. In addition, exploring techniques to minimize capability degradation at higher steering scales while maintaining steering effectiveness would address one of the key challenges identified in our experiments.
217
+
218
+ We believe the most promising direction to pursue would be applying FGAA to existing works in the activation steering space, to see if FGAA performance improvements carry over to safety tasks such as controlling sycophancy, hallucination and refusal in RLHF models (Rimsky et al., 2024) and reducing their social biases (Durmus et al., 2024).
219
+
220
+ # 7 CONCLUSION
221
+
222
+ This work introduced FGAA, a novel approach that combines CAA with insights from SAE representations to improve steering effectiveness in language models. Our evaluations demonstrated that FGAA achieves superior performance compared to existing steering methods across multiple tasks, particularly for the Gemma-2-2B model where it outperformed baselines in 8 out of 9 steering tasks. The method's success highlights the value of operating directly in interpretable feature spaces while maintaining precision through systematic feature filtering and optimization.
223
+
224
+ Our analysis revealed important insights about activation steering in general: performance degrades notably above certain steering scales, and there exists a fundamental tradeoff between steering strength and preservation of model capabilities.
225
+
226
+ The development of FGAA represents a significant step forward in controlled text generation, offering both theoretical insights into activation patterns in LLMs and practical advances in steering methodology. While challenges remain in areas such as SAE quality optimization and parameter selection, the method's demonstrated effectiveness across multiple tasks and architectures provides a strong foundation for future research. Particularly promising directions include investigating SAE width effects, developing techniques to minimize capability degradation at higher scales, and exploring applications to safety-critical steering tasks. These advances in precise model control have significant implications for the development of more reliable and controllable language models, contributing to the broader goal of creating AI systems that can be effectively guided while maintaining their core capabilities.
227
+
228
+ # 8 ACKNOWLEDGEMENTS
229
+
230
+ We would like to express our sincere gratitude to our research mentors, Dr Tan Guoxian and Dr Yan Ming, for their guidance throughout our research. We are grateful to Mr Chan Kwang Wen for contributing OpenAI API credits that enabled our evaluations. Special thanks to Chen Guang, Mr Slava Chalnev and Mr Logan Riggs for their insightful discussions on SAEs and activation steering. We also thank Chen Guang for providing the necessary compute resources for our work.
231
+
232
+ # REFERENCES
233
+
234
+ Anonymous. Sparse autoencoders do not find canonical units of analysis. In The Thirteenth International Conference on Learning Representations, 2025. URL https://openreview.net/forum?id=9ca9eHNrdH.
235
+ Leonard Bereska and Efstratos Gavves. Mechanistic interpretability for ai safety - a review. Transactions on Machine Learning Research, Aug 2024. URL https://openreview.net/forum?id=ePUVetPKu6.
236
+
237
+ Trenton Bricken, Adly Templeton, Joshua Batson, Brian Chen, Adam Jermyn, Tom Conerly, Nick Turner, Cem Anil, Carson Denison, Amanda Askell, Robert Lasenby, Yifan Wu, Shauna Kravec, Nicholas Schiefer, Tim Maxwell, Nicholas Joseph, Zac Hatfield-Dodds, Alex Tamkin, Karina Nguyen, Brayden McLean, Josiah E Burke, Tristan Hume, Shan Carter, Tom Henighan, and Christopher Olah. Towards monosemanticity: Decomposing language models with dictionary learning. Transformer Circuits Thread, 2023. https://transformer-circuits.pub/2023/monosemantic-features/index.html.
238
+ Slava Chalnev, Matthew Siu, and Arthur Conmy. Improving steering vectors by targeting sparse autoencoder features. arXiv preprint, 2024. doi: 10.48550/arXiv.2411.02193.
239
+ David Chanin, James Wilken-Smith, Tomás Dulka, Hardik Bhatnagar, and Joseph Bloom. A is for absorption: Studying feature splitting and absorption in sparse autoencoders. arXiv preprint, 2024. doi: 10.48550/arXiv.2409.14507.
240
+ Esin Durmus, Alex Tamkin, Jack Clark, Jerry Wei, Jonathan Marcus, Joshua Batson, Kunal Handa, Liane Lovitt, Meg Tong, Miles McCain, Oliver Rausch, Saffron Huang, Sam Bowman, Stuart Ritchie, Tom Henighan, and Deep Ganguli. Evaluating feature steering: A case study in mitigating social biases, 2024. URL https://anthropic.com/research/evaluating-feature-steering.
241
+ Nelson Elhage, Tristan Hume, Catherine Olsson, Nicholas Schiefer, Tom Henighan, Shauna Kravec, Zac Hatfield-Dodds, Robert Lasenby, Dawn Drain, Carol Chen, Roger Grosse, Sam McCandlish, Jared Kaplan, Dario Amodei, Martin Wattenberg, and Christopher Olah. Toy models of superposition. Transformer Circuits Thread, 2022. URL https://transformer-circuits.pub/2022/toy_model/index.html.
242
+ Aaron Gokaslan and Vanya Cohen. Openwebtext corpus, 2019. URL http://Skylion007.github.io/OpenWebTextCorpus.
243
+ Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. Measuring massive multitask language understanding. In International Conference on Learning Representations, 2021. URL https://openreview.net/forum?id=d7KBjmI3GmQ.
244
+ Robert Huben, Hoagy Cunningham, Logan Riggs Smith, Aidan Ewart, and Lee Sharkey. Sparse autoencoders find highly interpretable features in language models. In The Twelfth International Conference on Learning Representations, 2024. URL https://openreview.net/forum?id=F76bwRSLeK.
245
+ Neel Jain, Ping yeh Chiang, Yuxin Wen, John Kirchenbauer, Hong-Min Chu, Gowthami Somepalli, Brian R. Bartoldson, Bhavya Kailkhura, Avi Schwarzschild, Aniruddha Saha, Micah Goldblum, Jonas Geiping, and Tom Goldstein. NEFTune: Noisy embeddings improve instruction finetuning. In The Twelfth International Conference on Learning Representations, 2024. URL https://openreview.net/forum?id=0bMmZ3fkCk.
246
+ Tom Lieberum, Senthooran Rajamanoharan, Arthur Conmy, Lewis Smith, Nicolas Sonnerat, Vikrant Varma, Janos Kramar, Anca Dragan, Rohin Shah, and Neel Nanda. Gemma scope: Open sparse autoencoders everywhere all at once on gemma 2. In The 7th BlackboxNLP Workshop, 2024. URL https://openreview.net/forum?id=XkMrWOJhNd.
247
+ Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Gray, John Schulman, Jacob Hilton, Fraser Kelton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, and Ryan Lowe. Training language models to follow instructions with human feedback. In Alice H. Oh, Alekh Agarwal, Danielle Belgrave, and Kyunghyun Cho (eds.), Advances in Neural Information Processing Systems, 2022. URL https://openreview.net/forum?id=TG8KACxEON.
248
+ Kiho Park, Yo Joong Choe, and Victor Veitch. The linear representation hypothesis and the geometry of large language models. In Causal Representation Learning Workshop at NeurIPS 2023, 2023. URL https://openreview.net/forum?id=T0PoOJg8cK.
249
+
250
+ Gonçalo Paulo, Alex Mallen, Caden Juang, and Nora Belrose. Automatically interpreting millions of features in large language models. arXiv preprint, 2024. doi: 10.48550/arXiv.2410.13928. URL https://arxiv.org/abs/2410.13928.
251
+ Nina Rimsky, Nick Gabrieli, Julian Schulz, Meg Tong, Evan Hubinger, and Alexander Turner. Steering llama 2 via contrastive activation addition. In Lun-Wei Ku, Andre Martins, and Vivek Srikumar (eds.), Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 15504–15522, Bangkok, Thailand, August 2024. Association for Computational Linguistics. doi: 10.18653/v1/2024.acl-long.828. URL https://aclanthology.org/2024.acl-long.828/.
252
+ Morgane Rivière, Shreya Pathak, Pier Giuseppe Sessa, Cassidy Hardin, Surya Bhupatiraju, Léonard Hussenot, Thomas Mesnard, Bobak Shahriari, Alexandre Ramé, Johan Ferret, Peter Liu, Pouya Tafti, Abe Friesen, Michelle Casbon, Sabela Ramos, Ravin Kumar, Charline Le Lan, Sammy Jerome, Anton Tsitsulin, Nino Vieillard, Piotr Stanczyk, Sertan Girgin, Nikola Momchev, Matt Hoffman, Shantanu Thakoor, Jean-Bastien Grill, Behnam Neyshabur, Olivier Bachem, Alanna Walton, Aliaksei Severyn, Alicia Parrish, Aliya Ahmad, Allen Hutchison, Alvin Abdagic, Amanda Carl, Amy Shen, Andy Brock, Andy Coenen, Anthony Laforce, Antonia Paterson, Ben Bastian, Bilal Piot, Bo Wu, Brandon Royal, Charlie Chen, Chintu Kumar, Chris Perry, Chris Welty, Christopher A. Choquette-Choo, Danila Sinopalnikov, David Weinberger, Dimple Vijaykumar, Dominika Rogozinska, Dustin Herbison, Elisa Bandy, Emma Wang, Eric Noland, Erica Moreira, Evan Senter, Evgenii Eltsychev, Francesco Visin, Gabriel Rasskin, Gary Wei, Glenn Cameron, Gus Martins, Hadi Hashemi, Hanna Klimczak-Plucinska, Harleen Batra, Harsh Dhand, Ivan Nardini, Jacinda Mein, Jack Zhou, James Svensson, Jeff Stanway, Jetha Chan, Jin Peng Zhou, Joana Carrasqueira, Joana Iljazi, Jocelyn Becker, Joe Fernandez, Joost van Amersfoort, Josh Gordon, Josh Lipschultz, Josh Newlan, Ju yeong Ji, Kareem Mohamed, Kartikeya Badola, Kat Black, Katie Millican, Keelin McDonell, Kelvin Nguyen, Kiranbir Sodhia, Kish Greene, Lars Lowe Sjösumd, Lauren Usui, Laurent Sifre, Lena Heuermann, Leticia Lago, and Lilly McNealus. Gemma 2: Improving open language models at a practical size. CoRR, abs/2408.00118, 2024. URL https://doi.org/10.48550/arXiv.2408.00118.
253
+ Adam Scherlis, Kshitij Sachan, Adam S. Jermyn, Joe Benton, and Buck Shlegeris. Polysemanticity and capacity in neural networks. CoRR, abs/2210.01892, 2022. URL https://doi.org/10.48550/arXiv.2210.01892.
254
+ Alexander Matt Turner, Lisa Thiergart, Gavin Leech, David Udell, Juan J. Vazquez, Ulisse Mini, and Monte MacDiarmid. Steering language models with activation engineering. arXiv preprint, 2024. doi: 10.48550/arXiv.2308.10248.
255
+ Eric Wallace, Kai Xiao, Reimar Leike, Lilian Weng, Johannes Heidecke, and Alex Beutel. The instruction hierarchy: Training llms to prioritize privileged instructions. arXiv preprint, 2024. doi: 10.48550/arXiv.2404.13208.
256
+ Yubo Wang, Xueguang Ma, Ge Zhang, Yuansheng Ni, Abhranil Chandra, Shiguang Guo, Weiming Ren, Aaran Arulraj, Xuan He, Ziyan Jiang, Tianle Li, Max Ku, Kai Wang, Alex Zhuang, Rongqi Fan, Xiang Yue, and Wenhu Chen. MMLU-pro: A more robust and challenging multi-task language understanding benchmark. In The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track, 2024. URL https://openreview.net/forum?id=y10DM6R2r3.
257
+
258
+ # APPENDIX
259
+
260
+ # A SELECTION OF $n_1$ AND $n_2$ IN TOP-K FILTERING
261
+
262
+ # A1 PERFORMANCE ANALYSIS
263
+
264
+ Our initial investigation examined both positive and negative feature selection for steering vectors. However, empirical analysis (Appendix A3) revealed that negative features often degraded performance and produced inconsistent results (at least for the 9 tasks we evaluate on). This finding led us to simplify our approach to focus exclusively on positive features, setting $n_2 = 0$ and optimizing only for $n_1$ .
265
+
266
+ We conducted a hyperparameter sweep for the optimal $n_1$ over the range [1, 8] for all nine steering tasks, as seen in Figures A.1 and A.2.
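Such a sweep amounts to a small grid search. A sketch (the callback `score_fn` is hypothetical and stands in for the full pipeline of generating 30 steered samples per value and computing the mean BCS with an LLM judge):

```python
def sweep_n1(score_fn, n1_values=range(1, 9)):
    """Grid search over n1; returns the best value and all scores.

    score_fn(n1) -> mean behavioral-coherence score for that n1.
    """
    scores = {n1: score_fn(n1) for n1 in n1_values}
    best = max(scores, key=scores.get)
    return best, scores
```

Because each evaluation is expensive (30 generations plus judging), the small discrete range [1, 8] keeps the sweep tractable.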
267
+
268
+ ![](images/72ef18aa13fd15b720311c769fd2e49505b15117ed82a3c256b89a36a384ce25.jpg)
269
270
+
271
+ ![](images/323d47cf27a5029acb436e5c953a407ed6e4390265390b627f03c3e466219478.jpg)
272
+
273
+ ![](images/fea8bb14dc732c4c000418dd67b8810184b00c31a0b73bd0d8f8c6ee4df26537.jpg)
274
+
275
+ ![](images/84595ef3de12828d14140512c930a2eb1649d37e0fdc04ae80bf782f955631bc.jpg)
276
+
277
+ ![](images/f578d86bc43f5f68c55b603aeae1788f11b1d56e374e3cc04910990b45dbb9a5.jpg)
278
+
279
+ ![](images/0e1a16c08248cb15f41ca77be89e652dc532a308e542b0b61bed505b10142877.jpg)
280
+
281
+ ![](images/3a5f519d68dc60d37876a18ef6f9e477897507ae2ce958d972c4b020e66d109c.jpg)
282
+ Figure A.1: Best mean BCS for different $n_1$ values ( $n_2 = 0$ ) across 9 tasks, when steered on Gemma-2-2B. 30 samples generated for every $n_1$ .
283
+
284
+ ![](images/7dfd9c291036a9e93f9881264b31e0c241877f59b2629483b8783d338e3e810b.jpg)
285
+
286
+ ![](images/77e887299331dd44ca8abc58efdc4152d2a181d2ca97dc825cdde4744016f5b0.jpg)
287
+
288
+ ![](images/eab8a7cebec0a73bd2a6d883c10996a5ca88577e66c11d85acfe9e91b050555a.jpg)
289
290
+
291
+ ![](images/921066f833170216da4f3a6c2e88e08164204a916800300c1b75f4bfa13b4e46.jpg)
292
+
293
+ ![](images/02682e23bb16a6e3cd39ac479b6502b005b6b31c41ef87e81b2f870274d15bd5.jpg)
294
+
295
+ ![](images/d35b7a0067778e139e2d662de069339b31a5b6f976167829d8c0e7147ac7945d.jpg)
296
+
297
+ ![](images/3a4f9f9ce154e010e9f7b09fc55139e98b1ccdb685e88e8161b2adf1b8242ede.jpg)
298
+
299
+ ![](images/1b437ae1b07fc2c1ed70233744d5f5c03cf39afa30874a8ec9fb222a87b7067f.jpg)
300
+
301
+ ![](images/b63007a454d189ae05bd40fd94bfc285f1e5332d055e3d24b7991af909910ac5.jpg)
302
+ Figure A.2: Best mean BCS for different $n_1$ values ( $n_2 = 0$ ) across 9 tasks, when steered on Gemma-2-9B. 30 samples generated for every $n_1$ .
303
+
304
+ ![](images/49f05a2b67aef2868f17d8f0a933464d0fab01a77b90de88082bce96028df465.jpg)
305
+
306
+ ![](images/7bec8c65fb5ef4c67eae09d9a2492bb0e1219c72d369b9166b7b5d4bff403f55.jpg)
307
+
308
+ # A2 FEATURE ACTIVATION ANALYSIS
309
+
310
+ ![](images/0957eb83e5fa8466161d2e50836ba6269cea38e1e075399f858406ae8ec66bec.jpg)
311
+
312
+ ![](images/57c49a7d5e20b7079356116b82074296849b363aadb8f0652607a2d430934354.jpg)
313
314
+
315
+ ![](images/39ee0f406169c5fda70b2244dbb5a560ac134ab762cb2e9be4b90d57121b956f.jpg)
316
+
317
+ ![](images/0b62c0f3328890f0b1f32082f0ff74a9ef2d0024670ddadc7582a668d3709da8.jpg)
318
+
319
+ ![](images/363e6feb7119a5098099b994be3d3c0940d1d2586197c74676696a69f32d6944.jpg)
320
+
321
+ ![](images/6dc8676132c9c0e3fa99b3b09b289d9974a9b9e4b50ac3bf67ae78b50968a346.jpg)
322
+
323
+ ![](images/62994843a1f7177ce93494d0bc4a984d6b7ede3c7a977a8bccfd6de35f108663.jpg)
324
+ Figure A.3: Top 100 highest magnitude SAE feature activations across nine steering tasks, for Gemma-2-2B.
325
+
326
+ ![](images/25dc72051d3cd008113ca620bb5c6e835373f8dc1c1f693f895bc37dea6aad4a.jpg)
327
+
328
+ ![](images/251d30a367e345fbf8748b9301dfbb6dcb801d20b17df737fff02ec91cd635e4.jpg)
329
+
330
+ Referring to Figure A.3, the activation patterns share a common shape: a few highly activating features followed by many low-activation features. We hypothesise that this indicates the general semantic direction of each task can be captured succinctly by the few highest-magnitude features.
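One way to quantify this "few dominant features" shape (a diagnostic we sketch for illustration; it is not an analysis from the paper) is the fraction of total activation magnitude captured by the top-k features:

```python
import numpy as np

def topk_mass(feature_acts, k):
    """Fraction of total |activation| mass in the k largest-magnitude features."""
    mags = np.sort(np.abs(np.asarray(feature_acts, dtype=float)))[::-1]
    total = mags.sum()
    return float(mags[:k].sum() / total) if total > 0 else 0.0
```

A profile like those in Figure A.3 would yield a high `topk_mass` already for small k, consistent with steering working well with $n_1$ in [1, 8].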
331
+
332
+ This hypothesis is supported by the strong steering performance in Table 1 with $n_1$ in the range [1, 8], as well as by Figure A.4, which shows diminishing gains on the Anger and Praise tasks when increasing $n_1$ past a certain point (e.g., for the Praise task, this point appears to lie in the range [6, 11]).
333
+
334
+ ![](images/230cb16b3a448c41d61b4f4bdee9e2462628e02c5010632210c89100b8be6901.jpg)
335
+ (a) Anger task
336
+
337
+ ![](images/6ea1f23bea6cde4ddcce155a191f481a1c5655e71492ade100f7fc7d1beea434.jpg)
338
+ (b) Praise task
339
+ Figure A.4: Maximum Coherence*Score for different $n_1$ and $n_2$ combinations across Anger and Praise tasks, when steered on Gemma-2-2B. 10 samples generated for every combination of $n_1$ and $n_2$ .
340
+
341
+ # A3 ANALYSIS OF NEGATIVE FEATURE EFFECTS
342
+
343
+ Table A.1: Features for "Praise" Target Vector for Gemma-2-2B ( $n_1 = 10$ , $n_2 = 10$ )
344
+
345
+ <table><tr><td colspan="3">Positive Features</td></tr><tr><td>Value</td><td>Index</td><td>Feature Description</td></tr><tr><td>3.130</td><td>4667</td><td>Sentence starters and transitional phrases</td></tr><tr><td>2.062</td><td>709</td><td>Expressions of positive feedback and encouragement</td></tr><tr><td>1.545</td><td>4267</td><td>Positive adjectives and expressions of admiration</td></tr><tr><td>1.373</td><td>3423</td><td>Positive evaluations and recommendations</td></tr><tr><td>1.338</td><td>1178</td><td>Mathematical notation and statistical elements</td></tr><tr><td>1.259</td><td>4248</td><td>Phrases signifying quality and reliability</td></tr><tr><td>1.177</td><td>12929</td><td>Concepts of service and philanthropy</td></tr><tr><td>1.148</td><td>10019</td><td>Expressions of good wishes</td></tr><tr><td>1.056</td><td>6668</td><td>Exclamation marks and expressions of enthusiasm</td></tr><tr><td>1.040</td><td>991</td><td>Expressions of encouragement and validation</td></tr><tr><td colspan="3">Negative Features</td></tr><tr><td>Value</td><td>Index</td><td>Feature Description</td></tr><tr><td>-2.093</td><td>13367</td><td>Phrases conveying skepticism and criticism</td></tr><tr><td>-1.568</td><td>1024</td><td>Phrases related to misbehavior</td></tr><tr><td>-1.545</td><td>9118</td><td>Terms related to behavior changes</td></tr><tr><td>-1.415</td><td>4561</td><td>Negative descriptors and crime terms</td></tr><tr><td>-1.108</td><td>11281</td><td>Expressions of disappointment</td></tr><tr><td>-1.079</td><td>787</td><td>Possessive pronouns</td></tr><tr><td>-1.047</td><td>15620</td><td>Professional conduct elements</td></tr><tr><td>-1.021</td><td>15</td><td>Expressions of humor and sarcasm</td></tr><tr><td>-1.019</td><td>718</td><td>Expressions of emotional turmoil</td></tr><tr><td>-1.014</td><td>12851</td><td>Expressions of fatigue and distress</td></tr></table>
346
+
347
+ The observed performance degradation with increasing $n_2$ values at low $n_1$ reveals an important asymmetry in steering feature semantics. Analysis of feature distributions from Table A.1 shows that positive features typically form cohesive semantic clusters (e.g., encouragement, good wishes and positive feedback), while negative features exhibit broader semantic diversity (e.g., references to crime and expressions of humor). This asymmetry appears inherent to the nature of concept representation: while positive instances of a concept cluster around specific semantic elements, negative instances encompass a vastly larger semantic space of alternatives.
348
+
349
+ This semantic disparity explains why increasing $n_2$ diminishes steering effectiveness. Including too many negative features risks suppressing a broad range of linguistic patterns potentially necessary for coherent text generation. Additionally, the consistently poor steering performance in Figure A.4 at low $n_1$ and high $n_2$ values suggests that avoidance-based steering through negative features may be inherently less effective in LLMs than positive feature guidance.
350
+
351
+ We also find empirically that negative features are highly sensitive to the selection of examples with non-desired behavior.
352
+
353
+ # B EXAMPLES OF CONSTRUCTED FILTERED TARGET VECTORS
354
+
355
+ Explanations for each feature are taken from Neuronpedia. Each explanation is generated through automated interpretability: the feature's top activating examples are shown to gpt-4o-mini, which is asked to explain what the feature represents.
356
+
357
+ # B1 CONSPIRACY GEMMA-2-9B
358
+
359
+ Desired examples:
360
+
361
+ ```txt
362
+ "The government is hiding the truth about alien contact"
363
+ "A secret cabal of elites controls the world's economy"
364
+ "The media is spreading disinformation to manipulate public opinion"
365
+ "Whistleblowers are being silenced to cover up a massive scandal"
366
+ "A shadow government is pulling the strings behind the scenes"
367
+ "The New World Order is implementing their sinister plan"
368
+ "The elites are controlling everything"
369
+ "The deep state"
370
+ "Lizard people control the government"
371
+ "Connect the dots."
372
+ ```
373
+
374
+ Undesired examples:
375
+
376
+ ```txt
377
+ "Reputable sources have fact-checked and debunked that claim"
+ "Transparency in government helps prevent corruption and abuse"
+ "Peer-reviewed studies provide reliable, evidence-based conclusions"
+ "The weather forecast says it might rain tomorrow"
+ "My phone battery doesn't last as long as it used to"
+ "That new movie got good reviews"
+ "My friend got a job in government"
+ "He's at school today"
+ "I plan to stop by the store"
+ "Let's play secret Santa"
378
+ ```
379
+
380
+ Table B.1: Features for "Conspiracy" Target Vector for Gemma-2-9B ( $n_1 = 5$ , $n_2 = 5$ )
381
+
382
+ <table><tr><td colspan="3">Positive Features</td></tr><tr><td>Value</td><td>Index</td><td>Feature Description</td></tr><tr><td>5.378</td><td>3358</td><td>References to government, intelligence agencies, and organized crime</td></tr><tr><td>5.165</td><td>11032</td><td>Terms related to political correctness and liberal ideologies</td></tr><tr><td>4.122</td><td>569</td><td>References to crime, corruption, and political manipulation</td></tr><tr><td>3.941</td><td>1456</td><td>Actions related to processing or interpreting information</td></tr><tr><td>3.613</td><td>4668</td><td>Occurrences of the word &quot;the&quot;</td></tr><tr><td>3.414</td><td>2361</td><td>Terms related to political and economic power struggles</td></tr><tr><td>2.896</td><td>7379</td><td>Mentions of political or legal actions related to public safety</td></tr><tr><td colspan="3">Negative Features</td></tr><tr><td>Value</td><td>Index</td><td>Feature Description</td></tr><tr><td>-2.128</td><td>12407</td><td>Terms related to legal or contractual language</td></tr><tr><td>-1.778</td><td>11912</td><td>Questions and inquiries about information or assistance</td></tr><tr><td>-1.746</td><td>1188</td><td>References to evidence-based practices and research</td></tr><tr><td>-1.714</td><td>6013</td><td>Phrases that express a call to action or commands</td></tr><tr><td>-1.651</td><td>4358</td><td>Expressions of personal experience and storytelling</td></tr><tr><td>-1.650</td><td>3685</td><td>Descriptions of weather conditions and their effects</td></tr></table>
383
+
384
+ Rollouts at Scale $= 120$ (Optimal Scale):
385
+
386
+ ```txt
387
+ "Bos> I think this is a photo of a giant squid attacking a Russian submarine, and it is one of the most Incredible Aliens captured in Antarctica! These mind"
388
+ "Bos> I think the only reason that Donald Trump has been exposed is the Pentagon's secret Advanced Aerospace Program called the Advanced Space Agency (AASPA) and the US"
389
+ "Bos> I think there are many planets out There in our Universe. But, there may be Alien Lifeforms on Mars that could be Behind The Pyramids on Mars."
390
+ "Bos> I think it is high Time for the disclosure of Extraterrestrial Technology to the World. Recently, a group of British researchers have claimed that they have uncovered"
391
+ "Bos> I think the FBI has captured a 'Super-Being' named Ra from an Alien Craft that fell to Earth 6,000 years Ago, according"
392
+ "Bos> I think that we are being manipulated by Extraterrestrial Races through The Vatican and the Catholic Church, so I began an ambitious Project to uncover the truth."
393
+ ```
394
+
395
+ # B2 LOVE (GEMMA-2-2B)
396
+
397
+ Desired examples:
398
+
399
+ ```txt
400
+ "I feel an overwhelming sense of love and affection for you" "Your kindness and compassion fill my heart with adoration" "I'm devoted to you and cherish every moment we spend together" "The warmth of your embrace makes me feel completely at home" "My admiration for you grows stronger with each passing day" "I'm passionate about our relationship and excited for our future" "I love you" "Love" "You are the light of my life and my reason for smiling" "You're my everything"
401
+ ```
402
+
403
+ # Undesired examples:
404
+
405
+ ```txt
406
+ "I can't stand being around you, it fills me with resentment" "Your actions have made me lose all respect for you"
407
+ "I feel nothing but disdain when I think about our past"
408
+ "The mere thought of you fills me with intense dislike"
409
+ "I've grown to despise everything about this situation"
410
+ "Your presence brings out feelings of animosity in me"
411
+ "I don't care"
412
+ "Hate"
413
+ "I feel absolutely nothing for you"
414
+ "You mean nothing to me"
415
+ ```
416
+
417
+ Table B.2: Features for "Love" Target Vector for Gemma-2-2B ( $n_1 = 10$ , $n_2 = 10$ )
418
+
419
+ <table><tr><td colspan="3">Positive Features</td></tr><tr><td>Value</td><td>Index</td><td>Feature Description</td></tr><tr><td>3.090</td><td>7863</td><td>Instances and expressions of love</td></tr><tr><td>1.754</td><td>4990</td><td>Expressions of love and emotional connections</td></tr><tr><td>1.690</td><td>5679</td><td>References to speaker&#x27;s personal experiences</td></tr><tr><td>1.657</td><td>10543</td><td>Coordinating conjunctions connecting clauses</td></tr><tr><td>1.546</td><td>2623</td><td>References to personal accountability</td></tr><tr><td>1.369</td><td>13074</td><td>Phrases related to physical intimacy</td></tr><tr><td>1.269</td><td>14739</td><td>References to romantic relationships</td></tr><tr><td>1.231</td><td>16036</td><td>Expressions of love and enjoyment</td></tr><tr><td>1.091</td><td>15596</td><td>Forms of the verb &quot;to be&quot; in various tenses</td></tr><tr><td>1.032</td><td>15995</td><td>Possessive pronouns indicating ownership</td></tr><tr><td colspan="3">Negative Features</td></tr><tr><td>Value</td><td>Index</td><td>Feature Description</td></tr><tr><td>-1.584</td><td>9781</td><td>Expressions of indifference or lack of concern</td></tr><tr><td>-1.524</td><td>13367</td><td>Phrases conveying skepticism or criticism</td></tr><tr><td>-1.487</td><td>3869</td><td>Negative sentiments and expressions of disdain</td></tr><tr><td>-1.446</td><td>13803</td><td>Phrases expressing negation or absence</td></tr><tr><td>-1.376</td><td>16253</td><td>Phrases expressing skepticism or doubt</td></tr><tr><td>-1.206</td><td>9084</td><td>Phrases related to systemic issues</td></tr><tr><td>-1.196</td><td>1369</td><td>Terms related to horror and negative experiences</td></tr><tr><td>-1.103</td><td>870</td><td>Expressions of discomfort or well-being</td></tr><tr><td>-1.055</td><td>2547</td><td>Instances of &quot;me&quot; in different contexts</td></tr><tr><td>-1.039</td><td>2605</td><td>References to presence or absence of evidence</td></tr></table>
420
+
421
+ Rollouts at Scale $= 80$ (Optimal Scale):
422
+
423
+ ```csv
424
+ "Bos> I think every girl and woman knows what it feels like to have a crush. At one point or another, we have all been loved so hard by a man"
425
+ "Bos> I think it is always difficult for parents to see their girls happy and in love! @laurenbush is the kindest, most beautiful soul who blessed our world"
426
+ "Bos> I think this has to be the most amazing and inspirational story. I love my parents, and there is no one more special to me. The best"
427
+ "Bos> I think it's so important to capture these little moments with my love. I think she is so beautiful. She is 1 week older than my sweet"
428
+ "Bos> I think it's safe to say that we simply love all of Justin and Hailey's wedding photos and it's no wonder why we always swoon"
429
+ "Bos> I think the best part about dating is being your own best friend. I wish I knew how much I loved you in a different way, and the way"
430
+ ```
431
+
432
+ # C STEERING EVALUATION CRITERION
433
+
434
+ # C1 SCORING PROMPT STRUCTURE
435
+
436
+ The evaluation process utilizes gpt-4-mini with the following standardized prompt structure:
437
+
438
+ You score texts generated by a language model based on the following criterion: {{criterion}}
439
+ You provide a score from 1 to 10.
440
+
441
+ The language model was given a prompt and generated the following text. Evaluate the text based on the criterion. Output format should be JSON with the following fields: "score" (int)
442
+
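The scoring flow above can be sketched as follows; the helper names and the exact split of the template between system and user messages are our assumptions for illustration, not details taken from the paper:

```python
import json

def build_scoring_messages(criterion: str, prompt: str, generation: str) -> list:
    """Assemble the judge messages from the template in Appendix C.1.
    The criterion is substituted into the {{criterion}} placeholder."""
    system = (
        "You score texts generated by a language model based on the "
        f"following criterion: {criterion}\n"
        "You provide a score from 1 to 10."
    )
    user = (
        "The language model was given a prompt and generated the following "
        "text. Evaluate the text based on the criterion. Output format "
        'should be JSON with the following fields: "score" (int)\n\n'
        f"Prompt: {prompt}\nGenerated text: {generation}"
    )
    return [{"role": "system", "content": system},
            {"role": "user", "content": user}]

def parse_score(raw_reply: str) -> int:
    """Parse the judge's JSON reply and clamp the score to [1, 10]."""
    score = int(json.loads(raw_reply)["score"])
    return max(1, min(10, score))
```

The messages would then be sent to the judge model through a chat-completion API, and `parse_score` applied to its reply.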
443
+ # C2 COHERENCE CRITERION
444
+
445
+ All tasks are evaluated against the following coherence criterion:
446
+
447
+ The text is coherent and the grammar is correct.
448
+
449
+ # C3 TASK-SPECIFIC BEHAVIORAL CRITERION
450
+
451
+ # Anger
452
+
453
+ The text contains angry words or angry sentiment.
454
+
455
+ # Christian Evangelist
456
+
457
+ The text contains discussion about Christianity and its virtues.
458
+
459
+ # Conspiracy
460
+
461
+ The text contains language that suggests a conspiracy theory or belief.
462
+
463
+ # French
464
+
465
+ The text contains French words and French language.
466
+
467
+ # London
468
+
469
+ Mentions London or anything related to London.
470
+
471
+ # Love
472
+
473
+ The text contains words about love or contains the sentiment.
474
+
475
+ # Praise
476
+
477
+ The text contains language that bestows praise.
478
+
479
+ # Want to Die
480
+
481
+ The text contains language that suggests a person wants to die.
482
+
483
+ # Wedding
484
+
485
+ The text contains discussion about a wedding.
486
+
487
+ # D COSINE SIMILARITY OF STEERING VECTORS
488
+
489
+ Table D.1: Cosine similarity between FGAA vectors and other steering vectors across different methods and tasks. Higher values indicate greater similarity with FGAA direction.
490
+
491
+ <table><tr><td colspan="4">Gemma-2-2B</td></tr><tr><td>Task</td><td>CAA</td><td>SAE</td><td>SAE-TS</td></tr><tr><td>Anger</td><td>0.1904</td><td>0.2056</td><td>0.9116</td></tr><tr><td>Christian</td><td>0.2994</td><td>0.2410</td><td>0.9348</td></tr><tr><td>Conspiracy</td><td>0.1824</td><td>0.2445</td><td>0.9259</td></tr><tr><td>French</td><td>0.4164</td><td>0.2813</td><td>0.9504</td></tr><tr><td>London</td><td>0.2186</td><td>0.0523</td><td>0.9092</td></tr><tr><td>Love</td><td>0.2678</td><td>0.1474</td><td>0.9394</td></tr><tr><td>Praise</td><td>0.1785</td><td>0.0578</td><td>0.7668</td></tr><tr><td>Want to die</td><td>0.1712</td><td>0.2725</td><td>0.8283</td></tr><tr><td>Wedding</td><td>0.1309</td><td>0.2624</td><td>0.8610</td></tr><tr><td>Average</td><td>0.2284</td><td>0.1961</td><td>0.8919</td></tr></table>
492
+
493
+ <table><tr><td colspan="4">Gemma-2-9B</td></tr><tr><td>Task</td><td>CAA</td><td>SAE</td><td>SAE-TS</td></tr><tr><td>Anger</td><td>0.2052</td><td>0.4123</td><td>1.0000</td></tr><tr><td>Christian</td><td>0.3365</td><td>0.0872</td><td>0.9628</td></tr><tr><td>Conspiracy</td><td>0.2267</td><td>0.2791</td><td>0.9487</td></tr><tr><td>French</td><td>0.4093</td><td>0.2359</td><td>0.9219</td></tr><tr><td>London</td><td>0.2264</td><td>0.1632</td><td>0.9528</td></tr><tr><td>Love</td><td>0.3293</td><td>0.1245</td><td>0.8976</td></tr><tr><td>Praise</td><td>0.1989</td><td>0.1339</td><td>0.8842</td></tr><tr><td>Want to die</td><td>0.2038</td><td>0.1244</td><td>0.7970</td></tr><tr><td>Wedding</td><td>0.2438</td><td>0.3480</td><td>0.9904</td></tr><tr><td>Average</td><td>0.2644</td><td>0.2121</td><td>0.9284</td></tr></table>
494
+
495
+ Analysing Table D.1, SAE-TS vectors are nearly parallel to FGAA vectors (similarity $>0.85$ ) across almost all tasks in both models. This high alignment explains the similar results of the two methods in Table 1, suggesting that FGAA and SAE-TS independently converge on similar steering solutions even though FGAA considers multiple features while SAE-TS targets just one. The identical steering vectors for the Anger task on Gemma-2-9B result from our hyperparameter sweep selecting $n_1 = 1$ , which coincidentally includes only the same feature used by the SAE-TS and SAE methods. In contrast, both CAA and single-feature SAE steering operate in substantially different directions, with similarities mostly below 0.3. This is particularly interesting for CAA, since FGAA builds upon its methodology: the low similarity suggests that FGAA's feature-space optimization via filtering and the effect approximator significantly alters the steering direction from raw activation differences.
496
+
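The entries of Table D.1 are standard cosine similarities between steering-vector directions. A minimal sketch (the vectors and the residual-stream width below are illustrative placeholders, not the paper's actual vectors):

```python
import numpy as np

def cosine_similarity(u: np.ndarray, v: np.ndarray) -> float:
    """Cosine of the angle between two steering vectors: 1.0 means the
    vectors are parallel, 0.0 means they are orthogonal."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Placeholder vectors standing in for FGAA and CAA steering vectors.
rng = np.random.default_rng(0)
v_fgaa = rng.normal(size=2304)
v_caa = rng.normal(size=2304)
print(cosine_similarity(v_fgaa, v_caa))  # near 0 for random directions
```

Two independent random directions in a high-dimensional space are nearly orthogonal, which is why similarities well above zero (such as the $>0.85$ values for SAE-TS) are meaningful.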
497
+ # E TRADE-OFF CURVES
498
+
499
+ ![](images/455fd2d89b8c56f256c667111b87d9fd40d89b3f1a22b7663d54a0ec986212f6.jpg)
500
+
501
+ ![](images/74f1b5bc06213dfbfd3fa477b8a046eac4dc02219145ea43e6899df5ad5776c1.jpg)
502
+
503
+ ![](images/f46230cf3d1c8d0a080f32b293295e592eb87494e52bca16326c39183c16737c.jpg)
504
+
505
+ ![](images/2643f9b1c09e2f566a07bdf272e220f21fe8dd4c2c37cb86d384d6dcb89973bc.jpg)
506
+
507
+ ![](images/89e37dd3b3f8cafdb0f762b1f5b6f682ca3e151e91f4ed4976e38a74265fed75.jpg)
508
+
509
+ ![](images/1c7190bf71c099a4fd676bf13c6120479378bcce1a90215f6b49db004cd42d80.jpg)
510
+
511
+ ![](images/755dc207cfc2d2159dda169089bb169528d51567b6e419ecca940583691c7684.jpg)
512
+ Figure E.1: Score trade-off curves for Gemma-2-2B, plotting both Coherence and Behavioral scores against increasing steering scale values. Each line tracks a distinct steering technique, with the optimal results appearing in the upper-right quadrant, where both Coherence and Behavioral metrics reach their highest values.
513
+
514
+ ![](images/bc3bd9a4dd14bc6b8935c08661b0158af46e00382133da962488166edfc6169b.jpg)
515
+
516
+ ![](images/3e442b07d603a7933c787438a61213594ee2dc0e812dc280edf4f54f1dffcf98.jpg)
517
+
518
+ ![](images/ff64b23f6a0b561bba6bf5c5269df1b3210c7b9839384c2e978d29e46909e381.jpg)
519
+
520
+ ![](images/70e0a03dafd69ce10b8321afe5cad31dee72e7b2790e5093c6e142a5401c6f87.jpg)
521
+
522
+ ![](images/6ea04c6d1058d1c3c58dd58874259d22068816484ee7ecbedba68fae3079b137.jpg)
523
+
524
+ ![](images/a26d7398ce9722c533e524db033ad4b6eabf73cbd0e363ef33ec7868f526a119.jpg)
525
+
526
+ ![](images/25397859d2c7ceac1c48923bddd48fa2c2a1963add406cf562cac4d1dbdadb63.jpg)
527
+
528
+ ![](images/7926d0a82d5d1dd08740deec9b83a2ca29b785cf1f0f17d4afead69204035932.jpg)
529
+
530
+ ![](images/13869ef1e46d199369fd1c1c1028240da298381f896ab15bb12331a5eea99128.jpg)
531
+ Figure E.2: Score trade-off curves for Gemma-2-9B, plotting both Coherence and Behavioral scores against increasing steering scale values. Each line tracks a distinct steering technique, with the optimal results appearing in the upper-right quadrant, where both Coherence and Behavioral metrics reach their highest values.
532
+
533
+ ![](images/880a40a7207882fe179d283fdcd1b7ff07bdd8ce97dc9ffcc423332b5b94b96f.jpg)
534
+
535
+ ![](images/5976a0967551c21b6b0fb078595ae1eb6565bb9966317d0a3234427c2606b518.jpg)
536
+
537
+ # F NORMALISATION OF $v_{target}$
538
+
539
+ As described in Section 3.3, we L1 normalise $\mathbf{v}_{\text {target }}$ prior to finding the optimal steering vector via the linear effect approximator function. Empirically, we find this produces better-performing steering vectors than L2 normalisation (when evaluated on the 9 tasks in Table 1), though we are unsure why. A possible explanation is that L1 normalisation's more equal treatment of features across different magnitudes helps preserve information from moderately activated features that might be overly suppressed by L2 normalisation's quadratic scaling. Since L2 normalisation is more sensitive to outliers and gives greater weight to larger values, it could over-emphasise a few highly activated features while severely diminishing the contribution of moderately activated ones that still carry meaningful steering signal. L1 normalisation's linear scaling might therefore better maintain the broader distribution of feature activations that emerges from our filtering process. This could also imply that the distribution of feature activations in $\mathbf{v}_{\text {target }}$ may not be entirely representative of the significance of the respective features in producing the steering goal. However, this observation remains empirical, and further investigation into this phenomenon may yield a better understanding of SAE features for effective steering.
540
+
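For concreteness, the two normalisations differ only in the global scale they assign to $\mathbf{v}_{\text{target}}$ (the direction is unchanged). A toy sketch with made-up feature values, showing that L1 normalisation shrinks a vector whose mass is spread over many features more than a peaked one:

```python
import numpy as np

def l1_normalise(v: np.ndarray) -> np.ndarray:
    """Scale v so its absolute values sum to 1."""
    return v / np.sum(np.abs(v))

def l2_normalise(v: np.ndarray) -> np.ndarray:
    """Scale v to unit Euclidean length."""
    return v / np.linalg.norm(v)

peaked = np.array([8.0, 1.0, 1.0])   # one dominant feature
spread = np.array([4.0, 3.0, 3.0])   # mass spread across features

# The Euclidean length of the L1-normalised vector equals ||v||_2 / ||v||_1,
# which is smaller for the spread vector than for the peaked one.
for v in (peaked, spread):
    print(round(float(np.linalg.norm(l1_normalise(v))), 3))
```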
541
+ # G FAMILY OF BOS FEATURES
542
+
543
+ Table G.1: Identified BOS Features from Gemma-2-2B 16k SAE (non-exhaustive). Descriptions marked with an asterisk (*) are the authors' interpretations. Uninterpretable features are not included.
544
+
545
+ <table><tr><td>Index</td><td>Description</td></tr><tr><td>11087</td><td>*the first token of a text</td></tr><tr><td>3220</td><td>*BOS token</td></tr><tr><td>11752</td><td>*BOS token</td></tr><tr><td>12160</td><td>*BOS and newline token</td></tr><tr><td>11498</td><td>*BOS token</td></tr><tr><td>12110</td><td>elements of numerical or mathematical notation</td></tr></table>
2501.09xxx/2501.09929/images.zip ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:a0b43e7a35cd0e9c8f7a50d63180670321f5d88bbe42b650fb0c624ed468bb35
3
+ size 1189198
2501.09xxx/2501.09929/layout.json ADDED
The diff for this file is too large to render. See raw diff
 
2501.09xxx/2501.09958/5a771aca-a124-4a3a-b76f-97290639d8d9_content_list.json ADDED
The diff for this file is too large to render. See raw diff
 
2501.09xxx/2501.09958/5a771aca-a124-4a3a-b76f-97290639d8d9_model.json ADDED
The diff for this file is too large to render. See raw diff
 
2501.09xxx/2501.09958/5a771aca-a124-4a3a-b76f-97290639d8d9_origin.pdf ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:bb92fc0c60cb62cc60a46c7371f0fc3e054682495eab25e78380fc7a319fb7f4
3
+ size 832700
2501.09xxx/2501.09958/full.md ADDED
@@ -0,0 +1,585 @@
1
+ # Evaluation and Efficiency Comparison of Evolutionary Algorithms for Service Placement Optimization in Fog Architectures
2
+
3
+ Carlos Guerrero*, Isaac Lera, Carlos Juiz
4
+
5
+ <sup>a</sup>Computer Science Department, University of Balearic Islands
6
+
7
+ Crta. Valldemossa km 7.5, Palma, E07121, SPAIN
8
+
9
+ # Abstract
10
+
11
+ This study compares three evolutionary algorithms for the problem of fog service placement: weighted sum genetic algorithm (WSGA), non-dominated sorting genetic algorithm II (NSGA-II), and multiobjective evolutionary algorithm based on decomposition (MOEA/D). A model for the problem domain (fog architecture and fog applications) and for the optimization (objective functions and solutions) is presented. Our main concerns are optimizing network latency, service spread, and resource usage. The algorithms are evaluated on a random Barabási-Albert network topology with 100 devices and with two experiment sizes of 100 and 200 application services. The results showed that NSGA-II obtained the highest optimization of the objectives and the highest diversity of the solution space. On the contrary, MOEA/D was better at reducing execution times. The WSGA algorithm showed no benefit over the other two algorithms.
12
+
13
+ Keywords: Fog computing, Resource management, Evolutionary algorithms, Service placement
14
+
15
+ # 1. Introduction
16
+
17
+ In the last few years, there has been a significant increase in the number of applications developed for the Internet of Things (IoT). In these ecosystems,
18
+
19
+ any device, however small, is able to connect to the Internet and to monitor or control physical elements. These devices gather, process, and share data. As these environments have become popular, their needs for processing and storing data have increased rapidly, making it impossible to use the devices themselves to meet all storage and processing requirements.
20
+
21
+ A first solution to provide higher capacities to IoT applications was to integrate cloud services, storing and processing the data of the IoT devices in cloud providers and sending the results back to the IoT devices. This earlier solution made it possible to develop applications that interact with real devices, thanks to the IoT paradigm, and that have unlimited storage and processing capabilities, thanks to the cloud. But important drawbacks emerged, related to the high communication delays and to the use of the network to send large quantities of data between the IoT devices and the cloud providers.
22
+
23
+ Meanwhile, network communication devices have increased their computational capacities, and a new paradigm has emerged: fog computing [1]. A fog architecture provides computational and storage capacities in the intermediate in-network devices and in continuity with the cloud [2]. Thus, the communication nodes are able to host services or store data. This reduces the network latency of the applications, since the services can be placed closer to the IoT devices. The data transmitted through the network is also reduced, since it is processed and stored closer to the IoT devices.
24
+
25
+ A similar paradigm is edge computing, where the nodes at the edge of the network are the only ones with the capacity to execute services. Sometimes some services are allocated only in the cloud, due to, for example, resource constraints, and the edge devices allocate only a subset of these services. Contrary to fog computing, this second paradigm is focused more on the things side than on the cloud and infrastructure side [3]. It is characterized by devices with more limited resources, which makes data sharing more challenging.
26
+
27
+ Important challenges and open research problems have emerged with the definition of fog architectures [4]. In particular, the placement of services and data plays an important role in the optimization of these systems [5]. Data and service management policies are needed to decide when, and in which fog devices, the services should be allocated. The problem of selecting the fog devices where the services are instantiated is usually known as the fog service placement problem (FSPP). Several service placement studies have established the first steps toward future optimal solutions, but there is still room for improvement.
28
+
29
+ Evolutionary optimization is a paradigm inspired by natural evolution mechanisms. The use of evolutionary algorithms is an emerging trend for effective and efficient optimization of complex systems [6]. A substantial number of studies have addressed resource management problems with evolutionary algorithms, which are commonly applied to cope with the increasing structural complexity of distributed architectures such as cloud or fog computing [7]. For example, Zhan et al. stated in their survey that around $18\%$ of the studies on cloud resource management concern the use of evolutionary methods. Evolutionary algorithms have produced suitable solutions for the optimization of resource management in VM allocation [8], data replica placement [9], federated clouds [10], and software composition [11], among others.
30
+
31
+ We study the suitability of three evolutionary algorithms for the problem of fog service placement: the single-objective genetic algorithm with weighted sum transformation (WSGA); the non-dominated sorting genetic algorithm II (NSGA-II); and the multiobjective evolutionary algorithm based on decomposition (MOEA/D). Our study concerns the adaptation of the three algorithms to our specific domain, defining the evolutionary operators and the parametrization of the algorithms. This is not the first study that compares the performance of these algorithms [12, 13, 14] but, to the best of our knowledge, it is the first one that applies them to the specific domain of fog architectures.
32
+
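The weighted sum transformation used by WSGA collapses the objective vector into a single fitness value. A minimal sketch, assuming all objectives are to be minimized and have been normalized to comparable ranges (the solution values and weights below are illustrative, not taken from this paper):

```python
def weighted_sum_fitness(objectives, weights):
    """Scalarize a tuple of normalized minimization objectives into one
    fitness value; lower is better."""
    return sum(w * o for w, o in zip(weights, objectives))

# Hypothetical solutions scored on (network latency, service spread, resource usage).
solution_a = (0.30, 0.50, 0.40)
solution_b = (0.60, 0.20, 0.35)
weights = (0.50, 0.25, 0.25)

best = min((solution_a, solution_b),
           key=lambda s: weighted_sum_fitness(s, weights))
```

A pure multi-objective method such as NSGA-II instead compares solutions by Pareto dominance, so no weights need to be fixed in advance.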
33
+ The rest of the paper is organized as follows: Section 2 reviews previous work related to fog service placement; Section 3 describes our model for the domains and our optimization objectives; Section 4 details the evolutionary approaches we have included in our paper; Section 5 explains the details of the experimental study we performed; Section 6 presents the results and analyzes them; and, finally, Section 7 presents the conclusions and possible future works.
34
+
35
+ # 2. Related work
36
+
37
+ The use of evolutionary approaches is widely adopted for resource management in distributed architectures [6, 7]. But their applicability to fog service placement problems has not been sufficiently researched, as we explain in the following paragraphs.
38
+
39
+ Some frameworks have already been implemented to facilitate the adoption of fog-based solutions in IoT applications, for example, the frameworks
40
+
41
+ in Donasollo et al. [15] or in Skarlat et al. [16]. These frameworks incorporate simple policies for the orchestration and operation of the fog applications, but they enable the integration of new and more sophisticated policies in real fog domains. Challenges in the field of resource management are now the next step to be solved to facilitate the adoption of fog technologies.
42
+
43
+ The optimization of resource management in fog computing architectures has two main issues: service placement and data allocation. Our proposal addresses service placement. Previous fog service management algorithms have explored a wide range of optimization techniques, such as heuristics, greedy algorithms, linear programming, and genetic algorithms, among others. These service managers have handled several aspects of the fog resources, such as placement, scheduling, allocation, provisioning, or mapping for services, resources, clients, tasks, virtual machines, or even fog colonies. These solutions have been defined for environments such as industrial IoT, smart cities, eHealth, or mobile users.
44
+
45
+ Table 1 presents a brief survey of optimization proposals for the FSPP. This problem deals with the idea of placing cloud services closer to the clients by using the computational resources of networking and edge components. The FSPP differs from the computational offloading problem [17, 18] in that the latter migrates services from the clients' devices to the cloud or to the edge. Offloaded services are particular to each user, in contrast to the FSPP, where services are shared between clients. Another difference is that offloading optimization does not deal with service scalability, as the FSPP does. We do not include research in this latter field due to these differences with the FSPP. Data allocation is also a current research topic in fog resource management, but the nature of data and services is different enough that particular solutions are needed for their management. Service offloading and data allocation are out of the scope of the FSPP, so they are not included in this brief survey.
46
+
47
+ The characteristics of the related work are summarized in Table 1, indicating the optimization purpose (column Objective functions), the elements that the optimization algorithm manages to improve the objective functions (column Decision variables), and the optimization algorithm (column Algorithm).
48
+
49
+ Since our main contribution is the study of the applicability of three genetic algorithms to the FSPP, we group previous research works by the optimization algorithm they implemented. Additionally, we focus our attention on research where evolutionary or genetic approaches were considered.
50
+
51
+ Table 1: Summary of Fog Service Placement Problem studies.
52
+
53
+ <table><tr><td>Ref.</td><td>Objective functions</td><td>Decision variables</td><td>Algorithm</td></tr><tr><td>[19]</td><td>Response time, QoS</td><td>Service placement</td><td>GA</td></tr><tr><td>[20]</td><td>Resource waste, Execution times</td><td>Fog colony service placement</td><td>GA, First fit</td></tr><tr><td>[21]</td><td>Cost, Latency and Migration</td><td>Service placement and Load dispatching</td><td>Greedy, ILP, GA</td></tr><tr><td>[22]</td><td>Cost</td><td>Client association, Resource provisioning, Task distribution, VM placement</td><td>Mixed ILP</td></tr><tr><td>[23]</td><td>Cost</td><td>Base station association, Task distribution, VM placement</td><td>ILP</td></tr><tr><td>[24]</td><td>Network latency, Service migrations</td><td>Service placement</td><td>ILP</td></tr><tr><td>[25]</td><td>Communication power consumption</td><td>Service merging and placement</td><td>ILP, Maximum Weighted Independent Set</td></tr><tr><td>[26]</td><td>Service delay</td><td>Service placement</td><td>ILP</td></tr><tr><td>[27]</td><td>Deadline violations, Cost, Response time</td><td>Service placement</td><td>ILP</td></tr><tr><td>[28]</td><td>Task completion time</td><td>Task scheduling, Task image placement, Workload balancing</td><td>Mixed ILP</td></tr><tr><td>[29]</td><td>Power consumption</td><td>Service placement</td><td>ILP</td></tr><tr><td>[30]</td><td>Response time, Cost</td><td>Resource allocation and scheduling</td><td>Petri Nets</td></tr><tr><td>[31]</td><td>Queue length, Cost</td><td>Service migration</td><td>Decoupled Markov Decision Process</td></tr><tr><td>[32]</td><td>Resource consumption, QoS</td><td>Service placement</td><td>Monte Carlo</td></tr><tr><td>[33]</td><td>Availability, QoS</td><td>Service placement</td><td>Complex networks</td></tr><tr><td>[34]</td><td>Resource usage</td><td>Resource allocation</td><td>Consensus</td></tr><tr><td>[35]</td><td>Cost</td><td>Look-ahead service placement</td><td>Shortest Path</td></tr><tr><td>[36]</td><td>Cost</td><td>Look-ahead service placement</td><td>Markov Decision Process</td></tr><tr><td>[37]</td><td>Cost</td><td>Task placement</td><td>Binary prog., Greedy</td></tr><tr><td>[38]</td><td>Energy, Network usage, Latency</td><td>Service placement</td><td>First Fit</td></tr><tr><td>[39]</td><td>Network usage, Power consumption, Latency</td><td>Service placement</td><td>Own algorithm</td></tr><tr><td>[40]</td><td>Load balancing</td><td>Service placement</td><td>Own algorithm</td></tr><tr><td>[41]</td><td>Network usage and delay</td><td>Service placement</td><td>Own algorithm</td></tr><tr><td>[42]</td><td>Executed tasks</td><td>Resource provisioning</td><td>Own algorithm</td></tr><tr><td>[43]</td><td>Power consumption, Response time</td><td>Workload placement</td><td>Own algorithm</td></tr><tr><td>[44]</td><td>Latency</td><td>Service placement and migration</td><td>Own algorithm</td></tr><tr><td>[45]</td><td>Network latency</td><td>Task assignment</td><td>Own algorithm</td></tr><tr><td>[1]</td><td>Service latency, QoS</td><td>Service placement</td><td>Own algorithm</td></tr><tr><td>[This work]</td><td>Resource usage, Network latency, Service spread</td><td>Service placement</td><td>WSGA, NSGA-II, MOEA/D</td></tr></table>
54
+
55
+ The other studies are only briefly listed in this related work.
56
+
57
+ Probably, Integer Linear Programming (ILP) is the most common solution in the field.
58
+
59
+ Up to 9 papers used ILP to optimize cost [21, 22, 23, 27], latency or execution times [21, 24, 26, 27, 28], migrations [21, 24], QoS [27] or power consumption [25, 29].
60
+
61
+ Other types of algorithms are less represented, and a smaller number of studies have investigated their implementation in the field of fog computing. Examples include greedy algorithms [21, 37, 38], Markov Decision Processes [36, 31], Petri Nets [30], Monte Carlo [32], complex networks [33], consensus [34], and shortest path [35].
62
+
63
+ Moreover, several authors have explored new algorithms that are not based on standardized ones. Some of these algorithms are based on: the sequential assignment of the most demanding application modules to the nodes with the largest capacities [39]; the use of a service placement with likely performance bounds and a linear graph [40]; the mobility of the users, fog proximity, and the elasticity of the cloud [41]; the decomposition of the problem into subproblems [42, 43]; a mobility-based algorithm [44]; a future load estimator [45]; and a constraint-based forwarding algorithm [1].
64
+
65
+ To the best of our knowledge, there are only three previous studies that considered genetic algorithms for the optimization of the FSPP. Firstly, Wen et al. [19] presented a preliminary proposal based on a parallel GA to reduce response time and increase QoS. They compared a parallel GA with a serial one, and they only tested a weighted sum optimization. They did not provide details of the implementation or the decision variables. Yang et al. [21] provided a deeper analysis of the use of a GA and compared it with a greedy heuristic and an ILP-based solution, but they only incorporated one optimization objective in the study. Finally, Skarlat et al. [20] proposed a single-objective GA by transforming the two considered objectives with a weighted sum function. They organized the fog devices into colonies with a coordinator device, and the optimization algorithm decides whether the services are placed inside the colonies or propagated to neighboring colonies.
66
+
67
+ In summary, previous approaches are mainly addressed to optimizing a single objective and, in the case of those based on genetic algorithms, they are implemented with simple approaches. Additionally, none of them addressed the scalability of the services. To the best of our knowledge, our approach is the first that addresses a multi-objective optimization of the fog service placement problem, considering a scalable service replication level and using pure multi-objective genetic algorithms such as NSGA-II and MOEA/D.
68
+
69
+ # 3. Problem statement
70
+
71
+ Fog computing is an architecture that distributes the computing and storage functions of traditional cloud-based applications to devices closer to the users along a cloud-to-thing continuum. These devices with computational and storage capabilities, commonly called fog devices, are distributed across the layers of the network topology [2]. Data and service management policies are needed to decide when and where to place the services and the data. The problem of selecting the fog devices where the services are instantiated is usually known as the FSPP. In the remainder of this section, we formally define our domain for the FSPP.
72
+
73
+ We focus our work on the study of the algorithms that decide the placement of the services in the fog devices. Certainly, the solutions to the FSPP are not adaptive, and fog domains are environments with changing conditions. But these challenges have already been addressed in some previous works that have proposed frameworks for the integration of resource management policies in real fog environments [15, 16, 24]. For example, any of our three proposed algorithms could be integrated in the framework of Velasquez et al. [24]. In this framework, the changing conditions of the architecture are gathered by the Information Collection module. The Service Orchestrator module uses these data about the state of the system to make decisions about the service placements. Our algorithms would be integrated in this Service Orchestrator module.
74
+
75
+ # 3.1. System model
76
+
77
An example of a fog computing architecture is represented in Figure 1, where three layers can be identified: the cloud layer, the fog layer, and the client or device layer. The cloud layer corresponds to the cloud provider, where services can (or not) be stored and executed, as in any other device in the system. The client layer is typically formed by the IoT devices (sensors and actuators) that generate data to be stored, request services, or consume data. They request the services and receive the responses. The fog layer includes the in-network intermediate devices that are able to execute instances of the services. We assume that the fog devices are connected by a network with a graph structure. We also assume that the IoT devices can only be connected to special devices that play both roles, fog device and IoT gateway.

Several models for fog applications have been defined. One of the most popular development patterns for IoT applications in the fog is the distributed dataflow [46]. But the use of microservice-based applications has also been proposed for the deployment of fog applications [47, 48, 44, 49], making it easier to scale service instances up and down when statelessness and decoupling are guaranteed [50]. Both are based on the definition of the applications as a set of services that interoperate by sending messages. Both models separate the management of the data (storage) and the computation (services), further easing the management and scaling of the applications [51, 52, 53, 54].

We model the architecture as a graph structure where the nodes $F = \{f_{i}\}$ are the fog devices and the edges $C = \{c_{i,i'}\}$ the direct network links between the fog devices. There are two special types of devices: $f^{cloud}$, which represents the cloud provider, and $GW = \{f_{i}^{gw}\}$, the subset of fog devices that also act as IoT gateways. The devices are characterized by the parameter $R_{f_i}^{cap}$, the total resource capacity of the device. We assume that the resources in the cloud provider are infinite, $R_{f^{cloud}}^{cap} = \infty$. In order to simplify the model and the notation, we assume a scalar value for the computational resource capacities. It can easily be extended using, for example, a tuple for each resource element (e.g., CPU capacity, main memory size, storage size, or input/output bandwidth). The network connection links are characterized by the communication latency, $L_{c_{i,i'}}$.

The applications are modeled as a directed graph where the nodes $S = \{s_x\}$ are the services, related through many-to-many consumption relationships. This relationship is represented with a matrix $I$ of size $|S| \times |S|$, where $|S|$ is the number of services, and its element $\iota_{x,x'}$ is equal to 1 if $s_x$ consumes $s_{x'}$, and 0 otherwise. The services can be scaled up or down and, consequently, several instances of a service can be deployed in different devices. These instances are identified with $s_x^y$. The services are characterized by their resource consumption, $R_{s_x}^{con}$.

The allocation of the service instances in the fog devices is represented with a matrix $A$ of size $|S| \times |F|$, where $|S|$ is the number of services and $|F|$ the number of devices. The element $\alpha_{x,i}$ is equal to 1 if the fog device $f_i$ hosts an instance of the service $s_x$, and 0 otherwise.

The IoT devices (things), typically sensors and actuators, are represented as $T = \{t_n\}$, and they are characterized by the services they request and the IoT gateway they are connected to. Consequently, a matrix $R$ of size $|GW| \times |T|$ is defined to represent the origin of the application requests, where $|GW|$ is the number of gateways, $|T|$ the number of IoT devices, and $\rho_{i,n}$ is equal to 1 if the IoT gateway $f_i^{gw}$ has at least one IoT device $t_n$ that requests the service $s_x$ associated with the IoT device, and 0 otherwise. To simplify the notation, we define $\rho_{i,x} = 1$ to represent that the gateway $f_i^{gw}$ requests the service $s_x$.

![](images/086413fa9dbff366e963795028dd1ab943137c2040451ab0edf75b490c6821f8.jpg)
Figure 1: An example of a fog computing architecture.

We define the symmetric matrix of shortest latency distances between nodes, $D = |F| \times |F|$, calculated as the sum of the link latencies along the shortest path between a pair of nodes:

$$
d_{i,i'} = d_{i',i} = \sum_{c_{q,r} \in \text{shortestPath}(f_i, f_{i'})} L_{c_{q,r}} \tag{1}
$$

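The matrix $D$ of Eq. 1 can be precomputed once per topology. A minimal sketch using the Floyd-Warshall algorithm (the function name and the toy link latencies are our own, for illustration):

```python
import math

def latency_matrix(n_devices, links):
    """Symmetric shortest-latency matrix D of Eq. 1, computed with
    Floyd-Warshall. `links` maps device pairs (i, i') to link latency L."""
    d = [[0.0 if i == j else math.inf for j in range(n_devices)]
         for i in range(n_devices)]
    for (i, j), lat in links.items():
        d[i][j] = d[j][i] = min(d[i][j], lat)    # direct links
    for k in range(n_devices):                    # relax through each node
        for i in range(n_devices):
            for j in range(n_devices):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d

# Toy topology: 0 -- 1 -- 2 plus a slower direct 0 -- 2 link.
D = latency_matrix(3, {(0, 1): 100.0, (1, 2): 100.0, (0, 2): 250.0})
# D[0][2] → 200.0: the two-hop path is faster than the direct link
```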
# 3.2. Optimization model

Our main concerns about the system are three: (i) to maximize the use of the fog devices, to take as much advantage as possible of placing the services closer to the clients; (ii) to evenly distribute the service instance replicas across the fog devices; and (iii) to reduce the network latency due to network communications. The following paragraphs explain the three metrics that we selected as indicators of these three optimization objectives.

# 3.2.1. Resource usage

We include this objective to maximize the use of the resources in the fog devices. The maximization of the resource usage is based on the idea that it is better to deploy as many services as possible in the fog devices. When there are many services deployed in the fog devices, the probability that a user requests a service that is allocated closer than the cloud provider increases. Consequently, when the fog devices still have free resources, the placement could probably be improved by placing more services in these free resources. Under these conditions, the higher the usage of the fog devices and their resources, the better the solution. The optimal service placement would be to place instances of each service in all the fog devices, but this is impossible due to the limited resources of the devices. We therefore desire that, at least, all the fog devices are used up to 100% of their capacity. This does not harm the flexibility of the system, because we assume that the removal of a service is as simple as stopping and deleting it, since we consider stateless microservice-based applications [51, 52].

Table 2: Summary of the variables of the system model.

<table><tr><td>Variable</td><td>Description</td></tr><tr><td>S</td><td>Set of the services in the system</td></tr><tr><td>sx</td><td>A service in the system</td></tr><tr><td>syx</td><td>An instance of a service sx</td></tr><tr><td>ιx,x&#x27;</td><td>Element of matrix I that indicates if sx requests sx&#x27;</td></tr><tr><td>F</td><td>Set of the fog devices in the system</td></tr><tr><td>fi</td><td>A fog device in the system</td></tr><tr><td>fcloud</td><td>Identification of the cloud provider</td></tr><tr><td>figw</td><td>Identification of an IoT gateway</td></tr><tr><td>Rcapfi</td><td>Resource capacity of a device fi</td></tr><tr><td>Rconsx</td><td>Resource consumption required by a service sx</td></tr><tr><td>αx,i</td><td>Element of the matrix A that indicates if sx is allocated in fi</td></tr><tr><td>T</td><td>Set of the IoT devices in the system</td></tr><tr><td>tn</td><td>An IoT device in the system</td></tr><tr><td>ρi,n / ρi,x</td><td>Element of matrix R that indicates if the IoT gateway figw has an IoT device tn that requests service sx</td></tr><tr><td>ci,i&#x27;</td><td>Direct network connection link between devices fi and fi&#x27;</td></tr><tr><td>Lci,i&#x27;</td><td>Communication latency of a link ci,i&#x27;, i.e., the communication latency between two directly connected devices</td></tr><tr><td>di,i&#x27;</td><td>Communication latency between any two devices fi and fi&#x27;</td></tr></table>

Therefore, our first optimization objective is to minimize the free resources in the devices. We calculate the resource usage as the ratio between the resources consumed by the instances of the services and the total available resources:

$$
\text{ResourceUsage} = \frac{\sum_{f_i}^{F} \sum_{s_x}^{S} \alpha_{x,i} \times R_{s_x}^{con}}{\sum_{f_i}^{F} R_{f_i}^{cap}} \tag{2}
$$

where the numerator iterates over the elements of the allocation matrix, summing the resource consumption in the cases where the instances are allocated ($\alpha_{x,i} = 1$). The denominator is the sum of the capacities of all the devices.

Since we formulated the optimization as a minimization process, we use the metric that represents the free resources instead of the resource usage, defined as:

$$
\text{FreeResources} = 1.0 - \text{ResourceUsage} \tag{3}
$$

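Eqs. 2-3 reduce to a couple of sums over the allocation matrix $A$. A minimal sketch (the function name and the toy values are ours, for illustration):

```python
def free_resources(A, R_con, R_cap):
    """FreeResources of Eq. 3: 1 minus the ResourceUsage of Eq. 2.
    A[x][i] is 1 when service x has an instance on device i; R_con[x] is
    the consumption of service x and R_cap[i] the capacity of device i."""
    used = sum(A[x][i] * R_con[x]
               for x in range(len(A)) for i in range(len(A[0])))
    return 1.0 - used / sum(R_cap)

# Two services over three devices offering 4 + 6 + 10 = 20 resource units.
A = [[1, 0, 1],   # service 0 (2 units) on devices 0 and 2
     [0, 1, 0]]   # service 1 (4 units) on device 1
print(free_resources(A, R_con=[2, 4], R_cap=[4, 6, 10]))  # → 0.6
```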
# 3.2.2. Service spread

Our second objective is to evenly distribute the service replicas across the fog domain to increase the coverage for all the users. Note that concentrating the replicas of the services in nearby devices affects: (a) the latencies of the farthest users, which could be reduced by moving some instances away; and (b) the service availability, because concentrated services could become isolated in a region of the network if some links fail. If the services are spread, this isolation is less likely.

We used the coefficient of variation (CV), or relative standard deviation, of the network latencies between each pair of replicas to measure the distribution of the service replicas. The CV is calculated as the ratio between the standard deviation and the mean of a data series. The value of this measure approaches zero as the elements become more evenly dispersed. We defined the service spread as the average of the individual CVs of the services:

$$
\text{ServiceSpread} = \frac{\sum_{s_x}^{S} \frac{\sigma^{d_{s_x}^{rep}}}{\overline{d_{s_x}^{rep}}}}{|S|} \tag{4}
$$

where $\overline{d_{s_x}^{rep}}$ and $\sigma^{d_{s_x}^{rep}}$ are respectively the average and the standard deviation of the network distances between each pair of replicas $s_x^y$ of a service $s_x$.

The average, $\overline{d_{s_x}^{rep}}$, is calculated as:

$$
\overline{d_{s_x}^{rep}} = \frac{\sum_{i=0}^{F} \sum_{i'=i+1}^{F} \alpha_{x,i} \times \alpha_{x,i'} \times d_{i,i'}}{\sum_{i=0}^{F} \sum_{i'=i+1}^{F} \alpha_{x,i} \times \alpha_{x,i'}} \tag{5}
$$

where the numerator iterates over the pairs of devices that host instances of the service, adding the distance between each pair of instances. The denominator counts the number of pairs of instances of the service.

The standard deviation, $\sigma^{d_{s_x}^{rep}}$, is calculated as:

$$
\sigma^{d_{s_x}^{rep}} = \sqrt{\frac{\sum_{i=0}^{F} \sum_{i'=i+1}^{F} \alpha_{x,i} \times \alpha_{x,i'} \times (d_{i,i'} - \overline{d_{s_x}^{rep}})^{2}}{\sum_{i=0}^{F} \sum_{i'=i+1}^{F} \alpha_{x,i} \times \alpha_{x,i'}}} \tag{6}
$$

where, as in the case of the average, the numerator iterates over each pair of instances of a service, and the denominator is the number of pairs of instances.

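Eqs. 4-6 amount to the CV of the pairwise replica distances, averaged over the services. A sketch, under the assumption (left implicit by the equations) that a service with fewer than two replica pairs contributes a CV of 0:

```python
from itertools import combinations
from statistics import mean, pstdev

def service_spread(A, D):
    """ServiceSpread of Eq. 4: average coefficient of variation of the
    pairwise distances between the replicas of each service."""
    cvs = []
    for row in A:                                # one row per service
        hosts = [i for i, a in enumerate(row) if a == 1]
        dists = [D[i][j] for i, j in combinations(hosts, 2)]
        # Population stdev over mean (Eqs. 5-6); 0 when < 2 replica pairs.
        cvs.append(pstdev(dists) / mean(dists) if len(dists) > 1 else 0.0)
    return sum(cvs) / len(cvs)

# One service replicated on all three devices of a line topology.
D = [[0, 100, 200], [100, 0, 100], [200, 100, 0]]
spread = service_spread([[1, 1, 1]], D)          # CV of [100, 200, 100]
```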
# 3.2.3. Network latency

One of the main reasons to use fog computing is to reduce the network latency between the users and the services. We have also included this among our optimization objectives, but considering the network distances between all the services of an application, and not only the distance between the users (IoT devices) and the first service.

We defined a metric based on the distances between a service instance and the closest instance of each of its consumed services. We also considered the specific case of the distance between the IoT devices (connected to the IoT gateways) and their requested services. We defined an indicator of the network latency as the average value of the distances between interoperated services $(d_{s_x}^{cons})$ and between IoT gateways and requested services $(d_{f_i^{gw}}^{req})$:

$$
\text{NetworkLatency} = \frac{\sum_{s_x}^{S} d_{s_x}^{cons} + \sum_{f_i}^{GW} d_{f_i^{gw}}^{req}}{|S| + |GW|} \tag{7}
$$

where $|GW|$ is the number of IoT gateways and $|S|$ the number of services, as previously explained.

The distance of interoperated services $d_{s_x}^{cons}$ for a service $s_x$ is calculated as the average value of the minimum distance between each of its instances and the closest instance of each of its consumed services:

$$
d_{s_x}^{cons} = \frac{\sum_{f_i}^{F} \sum_{s_{x'}}^{S} \alpha_{x,i} \times \iota_{x,x'} \times \min_{i'=0}^{F} \left(d_{i,i'} \times \alpha_{x',i'}\right)}{\sum_{f_i}^{F} \sum_{s_{x'}}^{S} \alpha_{x,i} \times \iota_{x,x'}} \tag{8}
$$

where $x$ is the index of the service whose distance is calculated, $i$ the indices of the devices that host its instances, $x'$ the indices of its consumed services, and $i'$ the indices of the devices that host the instances of the consumed services. The numerator iterates over the allocations of the instances of the service (first summation) and over the services consumed by that service (second summation), and takes the minimum distance between the device of the current instance and the devices hosting the consumed service (calculated with $\min_{i'=0}^{F}$, restricted to the devices with $\alpha_{x',i'} = 1$).

The distance between IoT devices and requested services is defined similarly to the previous one, but the origin is an IoT gateway instead of a service, and the targets are the services requested by the clients connected to the gateway. The formula is:

$$
d_{f_i^{gw}}^{req} = \frac{\sum_{s_x}^{S} \rho_{i,x} \times \min_{i'=0}^{F} \left(d_{i,i'} \times \alpha_{x,i'}\right)}{\sum_{s_x}^{S} \rho_{i,x}}, \quad i \neq i' \tag{9}
$$

where $i$ is the index of the IoT gateway, $x$ the indices of the services requested by the clients connected to the IoT gateway, and $i'$ the indices of the devices that host the instances of the requested services.

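Eqs. 7-9 can be sketched as follows; note that the minima are taken only over the devices that actually host an instance ($\alpha = 1$), as the formulas intend. The names are illustrative, not from the paper:

```python
def d_cons(x, A, I, D):
    """Eq. 8: average distance from each instance of service x to the
    closest instance of each service that x consumes."""
    terms = [min(D[i][j] for j, a in enumerate(A[xp]) if a)
             for i, a in enumerate(A[x]) if a
             for xp, cons in enumerate(I[x]) if cons]
    return sum(terms) / len(terms) if terms else 0.0

def d_req(gw, A, requested, D):
    """Eq. 9: average distance from gateway gw to the closest instance
    of each service its IoT devices request (requested[x] flags)."""
    terms = [min(D[gw][j] for j, a in enumerate(A[x]) if a)
             for x, req in enumerate(requested) if req]
    return sum(terms) / len(terms) if terms else 0.0

def network_latency(A, I, rho, gateways, D):
    """Eq. 7: mean of the per-service and per-gateway distances."""
    cons = sum(d_cons(x, A, I, D) for x in range(len(A)))
    req = sum(d_req(g, A, rho[g], D) for g in gateways)
    return (cons + req) / (len(A) + len(gateways))

D = [[0, 100, 200], [100, 0, 100], [200, 100, 0]]
A = [[1, 0, 0], [0, 0, 1]]        # s0 on device 0, s1 on device 2
I = [[0, 1], [0, 0]]              # s0 consumes s1
lat = network_latency(A, I, {1: [1, 0]}, [1], D)   # gateway 1 requests s0
```

Here `lat` averages $d^{cons}_{s_0} = 200$, $d^{cons}_{s_1} = 0$ (no consumed services) and $d^{req} = 100$ over $|S| + |GW| = 3$.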
# 3.2.4. Model constraints

Our model has only one constraint: the resources consumed by the services in a fog device must not exceed the resource capacity of that device. Constraints on the resource usage of the network links are not considered, but they could be included in future work. Consequently, the following constraint must be satisfied:

$$
\sum_{s_x}^{S} \alpha_{x,i} \times R_{s_x}^{con} \leq R_{f_i}^{cap}, \quad \forall f_i \in F \tag{10}
$$

To sum up, our optimization minimizes FreeResources, ServiceSpread, and NetworkLatency by determining the values $\alpha_{x,i}$ of the allocation matrix $A$, subject to the constraint in Eq. 10.

# 4. Evolutionary optimization

Computational Intelligence (CI) is a common solution for resource management problems in the field of cloud resource management [7]. Evolutionary algorithms (EA), along with particle swarm optimization, neural networks, and fuzzy systems, are commonly used CI techniques [6]. The adaptation of those techniques to each particular problem makes it necessary to perform a specific study for each new scenario [55].

We have studied the suitability of three evolutionary algorithms to solve the FSPP: the weighted sum genetic algorithm (WSGA), the non-dominated sorting genetic algorithm-II (NSGA-II), and the multi-objective evolutionary algorithm based on decomposition (MOEA/D). Studies of the general performance of these three algorithms have been presented previously [56] but, to the best of our knowledge, they have not been studied for the domain of fog architectures. Our contribution is to study the suitability of these three algorithms in fog domains.

In addition to the general guidelines of each of the three algorithms, explained in the next section, our approach requires a common definition of some elements and operators, such as the solution representation or the crossover and mutation operators, explained in Section 4.2.

# 4.1. Evolutionary algorithms

The three algorithms of our study are inspired by biological evolution. A population of solutions is evolved along generations by combining and changing the solutions with crossover and mutation operators. A fitness function, based on the objectives to be optimized, defines the quality of the solutions, which also determines their probability of being mated.

# 4.1.1. Weighted sum genetic algorithm

GAs with single-objective optimization are among the earliest EA solutions. These algorithms use the scalar value of the objective function as the fitness value of each solution. In the case of multi-objective optimization, it is necessary to establish a linear transformation of the multiple objectives into a scalar value.

The first algorithm of our approach is a single-objective GA based on the use of a weighted sum transformation (WSGA). This transformation consists of normalizing the values of the objectives to the unit interval, applying a weight, and summing them:

$$
\sum_{i=1}^{\text{num. objectives}} \omega_{i} \times \theta_{i} \times x_{i} \tag{11}
$$

where $\omega_{i}$ is the scaling factor, $\theta_{i}$ the weight, and $x_{i}$ the value of the objective function.

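As an illustration of Eq. 11, consider the equal weights $\theta_i = 1/3$ of Table 3 and scaling factors that bring the three objectives into $[0, 1]$; the latency value and the maximum path latency of 600 ms used below are our own assumptions:

```python
def weighted_sum(objectives, scales, weights):
    """Scalar fitness of Eq. 11: sum of omega_i * theta_i * x_i."""
    return sum(w * t * x for x, w, t in zip(objectives, scales, weights))

# FreeResources and ServiceSpread already lie in [0, 1]; NetworkLatency
# (300 ms here) is scaled by 1/600 under our assumed maximum path latency.
fitness = weighted_sum(objectives=[0.6, 0.35, 300.0],
                       scales=[1.0, 1.0, 1 / 600],
                       weights=[1 / 3, 1 / 3, 1 / 3])
print(fitness)  # (0.6 + 0.35 + 0.5) / 3 ≈ 0.483
```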
Algorithm 1 shows the general structure of the WSGA. The algorithm starts by generating populationSize random solutions, whose objective values are calculated and weighted-sum transformed. Along generationNumber iterations of the algorithm, populationSize children are generated in each iteration.

The children are created by applying a crossover operator to two father solutions from the previous population. The output of a crossover operation is two children. These children are mutated with a probability of mutationProb. The details of both operators are given in Section 4.2.

The fathers of a crossover operation are chosen using a deterministic binary tournament selection operator [57]. First, two solutions are chosen from the population, and the one with the better fitness is selected as the first father. This is repeated for the second father.

Children only replace former solutions if they have better fitness values. This is implemented by merging the previous population and the offspring, ordering them by their fitness values, and creating the new population with the first populationSize solutions with the best fitness. After generationNumber iterations, the solution with the best fitness value is selected as the output of the algorithm.

# 4.1.2. Non-dominated sorting genetic algorithm-II

The second algorithm of our study, the non-dominated sorting genetic algorithm-II (NSGA-II) [58], orders the solutions using the dominance concept. A transcription of the original algorithm is shown in Algorithm 2.

Multi-objective optimization algorithms introduce the concept of dominance to order the solutions, instead of using a scalar value. A solution $s_1$ dominates another solution $s_2$ if all the objective values of $s_1$ are better than those of $s_2$. Conversely, a solution $s_1'$ does not dominate another solution $s_2'$ if $s_2'$ has at least one objective with a better value. Consequently, the solutions in a population can be classified into dominating and dominated solutions. The Pareto optimal front is defined as the set of solutions that are non-dominated by any other solution. Therefore, each solution in the Pareto front optimizes one or more objectives, but not all of them, with regard to the other solutions in the front. The solution of a multi-objective optimization is thus a set of solutions, the Pareto set, instead of a single solution.

Algorithm 1 Single-objective traditional genetic optimization algorithm
1: procedure WSGA
2: $P_t \gets \text{generateRandomPopulation(populationSize)}$
3: objectiveValues $\leftarrow$ evaluateObjectiveFunctions($P_t$)
4: fitness $\leftarrow$ weightedSum(objectiveValues, $\omega, \theta$)
5: for i in 1..generationNumber do
6: $P_{off} = \emptyset$
7: for j in 1..populationSize do
8: father1 $\leftarrow$ binaryTournament($P_t$, fitness)
9: father2 $\leftarrow$ binaryTournament($P_t$, fitness)
10: child1, child2 $\leftarrow$ crossover(father1, father2)
11: if random() < mutationProb then
12: mutate(child1), mutate(child2)
13: $P_{off} = P_{off} \cup \{child1, child2\}$
14: objectiveValues $\leftarrow$ evaluateObjectiveFunctions($P_{off}$)
15: fitnessOff $\leftarrow$ weightedSum(objectiveValues, $\omega, \theta$)
16: fitness = fitness $\cup$ fitnessOff
17: $P_{off} = P_{off} \cup P_t$
18: $P_{off} = \text{orderElements}(P_{off}, \text{fitness})$
19: $P_t = P_{off}$[1..populationSize]
20: Solution = min($P_t$, fitness)

The NSGA-II mainly differs from the WSGA in how the fitness is represented and how the solutions are ordered. The fitness is a vector with one element per objective function value. The solutions are ordered in successive fronts, the Pareto optimal front being the first one. Once the Pareto optimal front is calculated, the remaining solutions are processed again to calculate a new front, without considering the solutions already placed in the previous one. This is repeated until all the solutions are included in one front. The solutions inside a front are ordered with the crowding distance, which is calculated as the minimum Euclidean distance from one solution to the others. The NSGA-II considers that dispersed solutions are more significant than solutions that are concentrated together. The details of the algorithm are thoroughly explained in its original proposal [58]. The fathers are chosen with a binary tournament selector, as in the WSGA, but instead of comparing a scalar fitness value, the front and the crowding distance are considered.

Algorithm 2 Multi-objective genetic optimization algorithm [58]
1: procedure NSGA-II
2: $P_t \gets \text{generateRandomPopulation(populationSize)}$
3: fitness $\leftarrow$ calculateFitness($P_t$)
4: fronts $\leftarrow$ calculateFronts($P_t$, fitness)
5: distances $\leftarrow$ calculateCrowding($P_t$, fronts, fitness)
6: for i in 1..generationNumber do
7: $P_{off} = \emptyset$
8: for j in 1..populationSize do
9: father1 $\leftarrow$ binaryTournament($P_t$, fronts, distances)
10: father2 $\leftarrow$ binaryTournament($P_t$, fronts, distances)
11: child1, child2 $\leftarrow$ crossover(father1, father2)
12: if random() < mutationProb then
13: mutate(child1), mutate(child2)
14: $P_{off} = P_{off} \cup \{child1, child2\}$
15: fitness $\leftarrow$ calculateFitness($P_{off}$)
16: $P_{off} = P_{off} \cup P_t$
17: fronts $\leftarrow$ calculateFronts($P_{off}$, fitness)
18: distances $\leftarrow$ calculateCrowding($P_{off}$, fronts, fitness)
19: $P_{off} = \text{orderElements}(P_{off}, fronts, distances)$
20: $P_t = P_{off}$[1..populationSize]
21: Solution = fronts[1] #the Pareto front

The output of the algorithm is the Pareto optimal front of the last iteration of the algorithm.

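The dominance test and the extraction of the first front can be sketched as follows (all objectives minimized; we use the standard weak-dominance definition, no worse in every objective and strictly better in at least one):

```python
def dominates(a, b):
    """True when fitness vector a dominates b (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and \
           any(x < y for x, y in zip(a, b))

def pareto_front(fitnesses):
    """Indices of the solutions not dominated by any other: the first
    front of the non-dominated sorting used by NSGA-II."""
    return [i for i, f in enumerate(fitnesses)
            if not any(dominates(g, f)
                       for j, g in enumerate(fitnesses) if j != i)]

# (0.6, 0.6) is dominated by (0.5, 0.5); the other three trade off.
front = pareto_front([(0.2, 0.9), (0.5, 0.5), (0.9, 0.2), (0.6, 0.6)])
print(front)  # → [0, 1, 2]
```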
# 4.1.3. Multi-objective evolutionary algorithm based on decomposition

Finally, the third algorithm of our study is the multi-objective evolutionary algorithm based on decomposition (MOEA/D) [56]. This algorithm decomposes the problem into $N$ scalar fitness optimizations that are simultaneously optimized along the iterations. A transcription of the original algorithm is shown in Algorithm 3.

At the beginning of the algorithm, $N$ evenly distributed weight vectors are generated, along with a random solution for each of these weight transformations. In each iteration, the algorithm attempts to optimize the solution of each of the $N$ weight vectors. The algorithm is based on the idea that a solution has a similar quality for all the neighboring weight vectors, where neighborhood is measured in terms of the Euclidean distance between the weights. Hence, the fathers of a new solution for a given vector are randomly chosen from the solutions of the $T$ closest weight vectors. The two children of the crossover are compared and the dominant one is selected. The current solution of a weight vector is only replaced by the new child if the latter dominates the former. Additionally, an external population (EP) is maintained, to which the child is added when it is not dominated by any other solution currently in the EP. If the child is added, all the solutions it dominates are removed from the EP.

# 4.2. Genetic operators and structures

The three algorithms of the previous section use the same solution representation and the same genetic operators (crossover and mutation). The following paragraphs explain the details.

The solutions of our optimization, also known as individuals or chromosomes in the field of evolutionary algorithms, are the allocations of the services in the fog devices. Considering the model of Section 3.1, a solution is represented by the matrix $A$.

In evolutionary algorithms, solutions are usually represented with n-dimensional array structures, where each element of the array is usually known as a gene [59]. Consequently, our solutions are two-dimensional arrays that directly represent the matrix $A$: the rows are the services and the columns the fog devices. The value of an array element is 1 when the service is allocated in the device and 0 otherwise.

Evolutionary algorithms generate new solutions by combining the current best solutions, based on the biological concept of evolution [60]. This combination is performed with the crossover operator, which obtains two new solutions that alternately take array elements (genes) from both fathers. We selected a single-point crossover operator [61]. A different random number $r$, between 1 and the number of devices, is generated for each service allocation (for each row). This random number splits the solution row into two pieces in both fathers. The opposite pieces from each father are combined: the first child is obtained by concatenating the $[1, r]$ elements of the first father and the $[r+1, \#devices]$ elements of the second one. The second child is obtained from the other two opposite pieces.

Algorithm 3 Multi-objective evolutionary algorithm [56]
1: procedure MOEA/D
2: $P_{EP} \gets \{\}$
3: $W_{N} \gets \text{generateEvenlyWeights}(N)$
4: $P_{N} \gets \text{generateRandomPopulation}(N)$
5: for $j$ in 1..N do
6: $B[j] \gets \text{getClosestWeights}(T)$
7: for $i$ in 1..generationNumber do
8: $P_{off} = \emptyset$
9: for $j$ in 1..N do
10: $\text{father1} \gets B[j][\text{rand}(1,T)]$
11: $\text{father2} \gets B[j][\text{rand}(1,T)]$
12: $\text{child1, child2} \gets \text{crossover(father1, father2)}$
13: if rand() < mutateProb then
14: $\text{mutate(child1), mutate(child2)}$
15: child1 = dominant(child1, child2)
16: for $k$ in 1..T do
17: if child1 dominates $P_{N}[B[j][k]]$ then
18: $P_{N}[B[j][k]] \gets \text{child1}$
19: $P_{off} = P_{off} \cup \{child1\}$
20: fitness = calculateFitness($P_{off}$)
21: $P_{off} = P_{off} \cup P_{EP}$
22: fronts = calculateFronts($P_{off}$, fitness)
23: $P_{EP} \gets \text{fronts[1]}$
24: Solution = $P_{EP}$ #the Pareto front

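The row-wise single-point crossover can be sketched as follows (drawing the cut point so that both pieces are non-empty is our own boundary assumption):

```python
import random

def crossover(father1, father2, rng=random):
    """Single-point crossover applied independently to each service row
    of the allocation matrix: cut at a random point r and combine the
    opposite pieces of the two fathers."""
    child1, child2 = [], []
    for row1, row2 in zip(father1, father2):
        r = rng.randint(1, len(row1) - 1)        # cut point for this row
        child1.append(row1[:r] + row2[r:])
        child2.append(row2[:r] + row1[r:])
    return child1, child2

f1 = [[1, 1, 0, 0]]                              # one service, four devices
f2 = [[0, 0, 1, 1]]
c1, c2 = crossover(f1, f2, random.Random(7))
```

Whatever the cut point, each gene of the children comes from one father, and the two children together preserve both fathers' genes at every position.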
The population evolution based only on crossover operations risks falling into local optima. The mutation operator is included in evolutionary algorithms to expand the solution search space and to avoid local minima [61]. We defined three mutation operators: replica growth, which randomly increases the number of instances of each service; service shuffle, which interchanges the allocation plans of the services in the system; and spread to fog, which randomly selects a subset of services and instantiates them in all the fog devices.

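The three operators could be implemented along these lines (the amount of growth and the subset sizes are our own choices; the paper does not fix them):

```python
import random

def replica_growth(A, rng=random):
    """Randomly increase the number of instances of each service."""
    for row in A:
        free = [i for i, a in enumerate(row) if a == 0]
        for i in rng.sample(free, k=rng.randint(0, len(free))):
            row[i] = 1
    return A

def service_shuffle(A, rng=random):
    """Interchange the allocation plans (rows) of the services."""
    rng.shuffle(A)
    return A

def spread_to_fog(A, rng=random):
    """Instantiate a random subset of services on every fog device."""
    for x in rng.sample(range(len(A)), k=rng.randint(1, len(A))):
        A[x] = [1] * len(A[x])
    return A

A = service_shuffle([[1, 0, 0], [0, 1, 0], [0, 0, 1]], random.Random(3))
```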
It is very likely to obtain an allocation plan that does not satisfy the constraint of our model, especially with very disruptive mutations such as the last one. If a solution does not satisfy the resource constraint (Eq. 10) after a crossover or mutation, it is modified with a mend operator. This operator iteratively removes random service instances from the fog devices that violate the constraint, until the resources consumed by the allocated services are less than or equal to the device capacity.

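A sketch of the mend operator (the function and variable names are illustrative):

```python
import random

def mend(A, R_con, R_cap, rng=random):
    """Repair a solution that violates Eq. 10: while a device is over
    capacity, remove a randomly chosen service instance from it."""
    for i, cap in enumerate(R_cap):
        while sum(A[x][i] * R_con[x] for x in range(len(A))) > cap:
            hosted = [x for x in range(len(A)) if A[x][i] == 1]
            A[rng.choice(hosted)][i] = 0
    return A

# Device 0 hosts 9 units against a capacity of 5; two removals suffice.
A = mend([[1, 1], [1, 0], [1, 0]], R_con=[3, 3, 3], R_cap=[5, 4],
         rng=random.Random(0))
```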
# 5. Experimental validation

For the study and validation of the three evolutionary algorithms, we first defined an experiment with a common infrastructure, a set of applications, and the user characteristics (Section 5.1). Additionally, the parametrization of the evolutionary algorithms was fixed in a preliminary, exploratory phase in which several alternatives were studied and the most suitable values were selected (Section 5.2).

# 5.1. Experiment definition

The experiments were defined in terms of the fog device features, network topology, application characteristics, and clients' distribution. The experiments were designed with a potential model of a region of a bigger fog computing architecture in mind. In any case, the sizes of our experiments are bigger than those of most of the experiments in the related bibliography (Section 2). We considered 100 fog devices, 100 and 200 services, and 8 users/IoT devices per gateway (resulting in a total of 160 users).

We randomly generated the network topology as a Barabási-Albert topology with 100 devices. This is a common model for autonomous network topologies that has also been used in other fog resource management studies [62]. The cloud provider was placed in the node with the highest betweenness centrality, and the nodes with the smallest centrality were designated as IoT gateways. Betweenness centrality is a graph metric that quantifies the number of times a node acts as a bridge along the shortest path between two other nodes [63]. It is usually considered an indicator of the control over the communications between any two nodes.

![](images/19b475ead751990d9cfa25a0acd40dd65c8aa409174352d7608c9cadedd50f80.jpg)
Figure 2: Services interoperability of the three applications.

+ Two experiment sizes were considered by changing the number of applications in the system 15, and 30 applications (100, and 200 services respectively). The applications were based on three different types or application templates: a latency-sensitive online EEG (electroencephalography) tractor beam game defined by Zao et al. [64] and used in the experiments of the fog resource policy of Gupta et al. [38]; an intelligent surveillance through distributed camera networks defined by Hong et al. [65] and also used in the experiments of Gupta et al. [38]; and, finally, an e-commerce web application based on microservices [66] that we previously used in several container orchestration studies [67, 68]. The services and their interoperability for the three applications can be observed in Figure 2.
The capacity of the fog devices $(R_{f_i}^{cap})$ was uniformly defined in the range of 4-10 resource units. The resource consumption of the services $(R_{s_x}^{con})$ was also uniformly defined, but with values between 1 and 4 resource units. Thus, the maximum number of services in a device is 10 and the minimum is 1.

The network links were characterized with a communication latency $L_{C_{i,i'}}$ between 75 and $125\,\mathrm{ms}$, except for the links with the cloud provider, which were fixed at $L_{C_{i,cloud}} = 100.0\,\mathrm{ms}$.
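A minimal sketch of this parametrization, assuming integer resource units, uniformly drawn real-valued latencies, and a seed of our own choosing (none of these details are fixed by the paper):

```python
import random

rng = random.Random(1)  # arbitrary seed, not from the paper

NUM_DEVICES, NUM_SERVICES = 100, 200
CLOUD = 0  # hypothetical id of the cloud provider node

# Device capacities R^cap ~ U(4, 10) and service consumptions R^con ~ U(1, 4).
capacity = {f: rng.randint(4, 10) for f in range(NUM_DEVICES)}
consumption = {s: rng.randint(1, 4) for s in range(NUM_SERVICES)}

def link_latency(i, j):
    """Latency of link (i, j): fixed 100 ms for cloud links, U(75, 125) otherwise."""
    if CLOUD in (i, j):
        return 100.0
    return rng.uniform(75.0, 125.0)
```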
The number of IoT gateways was 20% of the total number of fog devices, resulting in 20 gateways for the considered network size. We considered eight users connected to each gateway, so the total number of users in the system was also 160.

Table 3: Algorithm parametrization for the experiments.

<table><tr><td>WSGA Parameter</td><td>NSGA-II Parameter</td><td>MOEA/D Parameter</td><td>Value</td></tr><tr><td>populationNum.</td><td>populationNum.</td><td>N</td><td>100</td></tr><tr><td>generationNum.</td><td>generationNum.</td><td>generationNum.</td><td>400</td></tr><tr><td>mutationProb</td><td>mutationProb</td><td>mutationProb</td><td>0.25</td></tr><tr><td>-</td><td>-</td><td>T</td><td>20</td></tr><tr><td>θspread, θresource, θlatency</td><td>-</td><td>-</td><td>1/3</td></tr><tr><td>ωspread, ωresource</td><td>-</td><td>-</td><td>1.0</td></tr><tr><td>ωlatency</td><td>-</td><td>-</td><td>max. path length</td></tr></table>
The IoT devices were distributed among the IoT gateways by assuming that the requests for the same application were received from the same region of the network. Therefore, an IoT gateway was selected uniformly at random for each application. The IoT devices requesting the same application were placed in this gateway and its $k$ nearest neighboring gateways, where $k$ is the number of IoT devices per application. The neighbors were determined by considering the shortest path distance between devices.
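The distribution step can be sketched as below. This is an illustrative reconstruction with hypothetical helper names, using hop count as the shortest-path distance:

```python
import random
from collections import deque

def hop_distances(adj, src):
    """BFS shortest-path (hop) distances from src to every reachable node."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        v = queue.popleft()
        for w in adj[v]:
            if w not in dist:
                dist[w] = dist[v] + 1
                queue.append(w)
    return dist

def gateways_for_app(adj, gateways, k, rng):
    """Pick a seed gateway uniformly at random and return it together
    with its k nearest gateways by shortest-path distance."""
    seed_gw = rng.choice(gateways)
    dist = hop_distances(adj, seed_gw)
    nearest = sorted((g for g in gateways if g != seed_gw),
                     key=lambda g: dist[g])
    return [seed_gw] + nearest[:k]

# Tiny ring network 0-1-2-3-4-0 with gateways at nodes 0, 2 and 3.
ring = {0: {1, 4}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3, 0}}
placement = gateways_for_app(ring, [0, 2, 3], k=1, rng=random.Random(7))
```

Each IoT device requesting the application would then be attached to one of the gateways returned by `gateways_for_app`.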
# 5.2. Evolutionary algorithms parametrization

Some of the parameters of the algorithms were set in a preliminary exploratory phase. We tested a range of values and selected the smallest values with the best performance. The final selected parameters are shown in Table 3.
The population size was fixed at 100 solutions, as larger populations did not obtain better optimizations. The population size corresponds to the number $N$ of weight vectors in the case of MOEA/D. The number of generations was fixed at 400. We selected a value high enough to be able to detect the stabilization of the objective values. We considered a neighborhood size $T$ of 20 for the MOEA/D. The preliminary results showed that a mutation probability of 0.25 was enough to achieve a wide diversity in the search space.
The weighted sum transformation was configured with our own preferences. We considered the three objectives equally important and, consequently, the three weights were fixed to the same value:

$$
\theta_{spread} = \theta_{resource} = \theta_{latency} = \frac{1}{3} \tag{12}
$$
Additionally, the service spread and free resources objectives were already normalized because they are unity-based metrics with values between 0.0 and 1.0. A scaling factor was only necessary for the network latency objective. In this case, we normalized by scaling the values of this objective to the range [0, 1] with the formula:

$$
x^{\prime} = \frac{x - x_{min}}{x_{max} - x_{min}} \tag{13}
$$

considering that the maximum network latency $x_{max}$ is the distance between the cloud provider and its most distant device, the worst case for allocating two interoperated services, and that the minimum $x_{min}$ is 0, the case of placing two interoperated services in the same device.
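Putting Eqs. 12 and 13 together, the uniformly weighted selection criterion can be sketched as follows (the function names are ours, not the authors'):

```python
def min_max_scale(x, x_min, x_max):
    """Min-max normalization to [0, 1] (Eq. 13)."""
    return (x - x_min) / (x_max - x_min)

def weighted_sum(spread, free_resources, latency, latency_max):
    """Uniformly weighted sum of the three objectives (Eq. 12).
    Spread and free resources are already in [0, 1]; latency is scaled
    with x_min = 0 and x_max = the cloud-to-farthest-device distance."""
    theta = 1.0 / 3.0
    latency_norm = min_max_scale(latency, 0.0, latency_max)
    return theta * (spread + free_resources + latency_norm)

# Hypothetical objective values for one placement solution.
fitness = weighted_sum(spread=0.6, free_resources=0.0,
                       latency=50.0, latency_max=100.0)
```

The solution with the smallest `weighted_sum` value is the one selected from a Pareto set under these criteria in Section 6.1.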
# 6. Results and discussion

The results are presented in such a way that the three algorithms are easily comparable. It is important to remember that the solution of a multi-objective optimization is usually a set of non-dominated solutions, as in the cases of NSGA-II and MOEA/D. Thus, two analyses are considered in our study: (a) the comparison of one solution selected from the resulting set (Section 6.1), where the selection is done with criteria that can be fixed by the system administrator and, under those criteria, the selected solution is considered the best in the solution set; and (b) the comparison of the whole set of solutions (Section 6.2).
# 6.1. Analysis of one selected solution

Figure 3 shows the results along the generations of one solution for each of the three algorithms. This selection was performed under some criteria, and the solution was considered the best one from this point of view. Since the WSGA used the uniformly weighted sum transformation for the evaluation of the fitness value (Eq. 11), we used these same criteria to select the solution from the Pareto sets obtained with NSGA-II and MOEA/D. Thus, the figure shows the value of the solution with the smallest weighted sum for each generation. Note that, to improve the visualization of the differences between the algorithms in each experiment, the scales of the y-axis are different for each plot.

![](images/6b7c40c6c8410e1cd7cbb261f975c104a1b99c2db5a2fffb0e61c62334777132.jpg)
Figure 3: Evolution of the weighted sum of the objectives for the best solution for each evolutionary algorithm.
For a deeper comparison of the three selected solutions, we also present Figure 4, where the values of the three objectives are disaggregated. Note that, once again, the scales of the y-axis are different between plots.
From the analysis of the first results (Figure 3), the general conclusion is that NSGA-II is the algorithm that obtained the smallest uniformly weighted sum of the objective values, at least for the best selected solution. On the contrary, MOEA/D resulted in the highest weighted sum. Additionally, the NSGA-II algorithm reached the smallest weighted sum in fewer generations for the case of 200 services (around generation 150) than in the experiment with 100 services (around generation 300).
Considering the results for each single objective (Figure 4), we can observe that the algorithms behave differently for the three objectives. First, the free resources objective is easily minimized by the three algorithms: the best solution uses all the resources in the system from the very beginning. Second, in the case of the service spread, NSGA-II is the best algorithm. Third, the network latency is more optimized by the MOEA/D. The WSGA is always midway between the two other algorithms.

![](images/d4a74ce1a3cf4faf5bbd1cd8d03dae00da5d042dcaff789fa94dc86fb1c5b9e7.jpg)
Figure 4: Evolution of the three objectives for the best solution for each evolutionary algorithm.

![](images/dc7a08ba513094c438303acf3e8b0e250b35e40b65585a7a4a4d33398c4b06a0.jpg)
It can be observed that important optimizations are obtained in the first generations, but the values in Figure 4 are very irregular, mainly for the NSGA-II. Note that evolutionary algorithms are metaheuristics based on the generation of random solutions. The algorithm starts with a random population that is evolved along the generations with random combinations and modifications of its solutions. Consequently, at the beginning of the optimization (the first generations), the population is not stabilized and the solutions are very far from the optimized solution. Because of this, the changes in the population between generations are very important and, consequently, the values of the optimized solution are irregular. Moreover, since the solutions plotted in the figure are the ones with the smallest weighted sum for each generation, the selected solution is not the same one along the generations, mainly in the first ones. On the contrary, when the population is stabilized (last generations), the solutions are closer to the optimized value and the population is quite similar between generations. Consequently, the selected solution is often the same solution for different generations because there are no important changes in the population. However, it is important to remember that metaheuristics cannot guarantee reaching the optimal solution. Thus, further optimization could be achieved with a higher number of generations, but these small improvements of the objectives do not justify such large increases in the number of generations.
From the analysis of Figures 3 and 4, generation 300 could be a suitable point to fix the ending condition of the genetic algorithms. However, we can reduce the number of generations if we are interested in reducing the execution time. For example, with only 50 generations, the most significant improvements are already achieved. But for this reduced number of generations, NSGA-II is not the best option.
Finally, if we compare the algorithms with each other, NSGA-II seems to be the algorithm that needs more generations to minimize the objectives. On the contrary, MOEA/D is the fastest one.
# 6.2. Analysis of the Pareto solution set

The evaluation of the results obtained with a multi-objective optimization algorithm cannot rely only on one solution from the solution set [69]. Thus, we also include the representation of the final solution sets in Figure 5. The figure includes the Pareto set of the NSGA-II, the external population of the MOEA/D, and the whole population in the case of the WSGA. In the three cases, 100 solutions (points) are represented. We want to recall that a solution corresponds to a placement configuration (the matrix $A$), and the result of the algorithms is a set of optimized solutions. Each point of the 3D plot of the figure represents one of these solutions, characterized by its objective values. The three dimensions of the 3D scatter plot are the three optimization objectives. The other three scatter plots are the 2D projections of the 3D plot for each pair of objectives.
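For reference, a Pareto (non-dominated) set such as the one plotted for NSGA-II can be extracted from a population of objective vectors with a straightforward dominance filter. This is a generic sketch with hypothetical sample data, not the authors' implementation:

```python
def dominates(a, b):
    """True if vector a dominates b: no worse in every objective and
    strictly better in at least one (all objectives are minimized)."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_set(population):
    """Keep only the non-dominated objective vectors of the population."""
    return [s for s in population
            if not any(dominates(o, s) for o in population if o != s)]

# Objective vectors (network latency, service spread, free resources).
points = [(30.0, 0.55, 0.0), (35.0, 0.50, 0.0),
          (32.0, 0.60, 0.0), (40.0, 0.70, 0.1)]
front = pareto_set(points)
```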
We can observe that the Pareto set of the NSGA-II covers a wider range of solutions, mainly in the case of the network latency and service spread objectives. In the case of the free resources, all the solutions are located at the value of 0.0. This is because this objective is probably the easiest one to minimize, since the minimization is obtained just by placing as many services as possible. On the contrary, the coverage of the solutions of the MOEA/D is limited to a small region, with solutions between 30-35 ms of network latency and 0.55-0.6 of service spread. Finally, all the solutions of the WSGA are located at the same point, i.e., all the solutions have the same objective values. This offers low flexibility to the system administrator for selecting solutions under different criteria.

![](images/dcefd77494713d44b53b55c70d868ed61fb84ed3297f234eaafeb66ec93a7e52.jpg)

![](images/b24d8ea7fae51adf88b7bb0dfa21c3e651d15c0b886bac76e17e4c641b3845aa.jpg)

![](images/f486ff4249be7552f877576d7bb04cce0f37f958f02e51eb888f36764479b3fa.jpg)
Figure 5: Final set of solutions obtained for the experiment with 200 services.

![](images/2b6f7a5f86c164c8c977a010ed6070d53ab51b1ea0f385be9cbce79d5f707489.jpg)
The previous analysis of the solution spread is also commonly measured by using the volume of the hyper-cube that envelops the solutions [70, 69]. The hyper-cube is generated by multiplying the width of the solution space for each single objective, i.e., the distance between the minimum and maximum value of each objective:

$$
SolutionSpreadVolume = \prod_{i}^{\text{num. objectives}} |\max(i) - \min(i)| \tag{14}
$$

In our particular case, we need to limit this analysis to the service spread and network latency objectives. If we also considered the resource usage, the solution spread volume would be 0.0 because all the solutions result in values of 0.0 for the free resources. Consequently, we measure the coverage of the solutions as $|\max(\text{net. latency}) - \min(\text{net. latency})| \times |\max(\text{spread}) - \min(\text{spread})|$. These results are reflected in Table 4 for both experiment sizes. The values of the table confirm our conclusions from the analysis of the scatter plots.

Table 4: Solution spread volume for the network latency and service spread objectives.

<table><tr><td>Apps</td><td>NSGA-II</td><td>WSGA</td><td>MOEA/D</td></tr><tr><td>100 services</td><td>0.1505</td><td>0.0</td><td>0.0013</td></tr><tr><td>200 services</td><td>0.0519</td><td>0.0</td><td>0.0001</td></tr></table>
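Eq. 14, restricted to a subset of objectives as done above, can be computed as in the following sketch (the sample objective vectors are hypothetical):

```python
def solution_spread_volume(solutions, objective_indices):
    """Volume of the hyper-cube enveloping the solution set (Eq. 14),
    restricted to the given objective indices."""
    volume = 1.0
    for i in objective_indices:
        values = [s[i] for s in solutions]
        volume *= abs(max(values) - min(values))
    return volume

# Objective vectors: (network latency, service spread, free resources).
solutions = [(30.0, 0.55, 0.0), (35.0, 0.60, 0.0), (33.0, 0.57, 0.0)]

latency_spread = solution_spread_volume(solutions, (0, 1))
all_objectives = solution_spread_volume(solutions, (0, 1, 2))
```

When the free resources objective is included, the zero-width third dimension collapses the volume to 0.0, which is why the analysis above is restricted to network latency and service spread.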
# 6.3. Execution times

Finally, the execution times of the three optimization algorithms are presented in Figure 6. We implemented the three algorithms in Python 2.7.6. Their source code can be found in a public repository [71]. The experiments were executed on a computer running MacOS Sierra with an Intel(R) Core(TM) i7 processor operating at 3.10 GHz and 16 GB of RAM.
In general terms, the MOEA/D is much faster than the other two algorithms, and NSGA-II is the one with the highest execution times. Additionally, we can observe that the execution times remain almost constant along the algorithm executions. It is also observed that the execution times of the second experiment (the one with 200 services) almost double those of the smaller experiment (100 services).

This increase in the execution time is explained by the difference between these two experiments, which is the size of the solutions. As we explained in Sections 2 and 4.2, solutions are represented with a matrix $A$ of size $|S| \times |F|$, where $|S|$ is the number of services and $|F|$ the number of devices. Note that, from the analysis of the source code of the algorithms, the computational complexity of the fitness calculation can be determined as $O(3 \cdot |S| \cdot |F|)$. Thus, it is clear that the execution times depend on the experiment sizes, i.e., on the number of devices and services. Consequently, a similar increase in the execution time is expected for bigger experiments.

![](images/f443c1554c4df5e3c3a5d30ea49ccf49ad59efc68fba874bf1d441718d01cc36.jpg)
Figure 6: Execution times of the three optimization algorithms.
# 7. Conclusion

We have evaluated and compared the efficiency of three evolutionary algorithms for the fog service placement problem. The adaptation of the algorithms to our specific domain required defining the genetic operators and the parametrization of the algorithms. A model for the fog architecture domain and the definition of applications as sets of interoperated services have been defined. Three objective functions have been formalized for our three main optimization concerns: the increase of the network latency due to the spread of the services across the fog devices (minimization of the network latency), the highest possible use of the fog resources to reduce the use of the cloud resources (minimization of the free resources), and an even distribution of the services across the fog devices (optimization of the service spread).
Solutions to the FSPP are represented as a matrix that embeds the allocation and scale level of the services to be deployed. The algorithms determine the number of instances of each service and their placement in the architecture in order to minimize the three considered objectives. Consequently, the solutions are defined as a placement plan represented with a matrix of services and devices.
The experimental validation was performed with a random Barabási-Albert network topology of 100 devices and two experiment sizes of 100 and 200 services. The same patterns were observed in the solutions of both cases. In general terms, NSGA-II was the algorithm that achieved the highest optimizations (smallest objective values). But if the objectives are analyzed independently, NSGA-II obtained better results in the service spread and MOEA/D in the network latency. Moreover, NSGA-II also obtained a wider solution space and, consequently, the system administrator has higher flexibility to select one solution from the solution set by considering different criteria or preferences. The benefits of the NSGA-II were obtained at the cost of longer execution times, both in terms of the number of generations and the execution time of each generation. To sum up, NSGA-II was better at optimizing the objectives and obtaining a more diverse solution space, and MOEA/D was better at reducing the execution times. The WSGA algorithm did not show any benefit with regard to the other two algorithms.
Research opportunities for future work emerge from the study of a hybrid optimization algorithm that simultaneously executes the three optimization algorithms and merges the three solution sets. Additionally, other metaheuristics, such as particle swarm optimization, ant colony optimization, or firefly optimization, can also be studied and compared with each other. Finally, the applicability of these evolutionary solutions to other fog organizations (such as multi-level fog, federated fog, fog colonies...) is interesting, not only for the placement of the services but also for the simultaneous optimization of the placement and the fog organization.
# Acknowledgements

Funding: This work was supported by the Spanish Government (Agencia Estatal de Investigación) and the European Commission (Fondo Europeo de Desarrollo Regional) [grant number TIN2017-88547-P MINECO / AEI / FEDER, UE].
# References

[1] R. Mahmud, R. Kotagiri, R. Buyya, Fog Computing: A Taxonomy, Survey and Future Directions, Springer Singapore, Singapore, 2018, pp. 103-130.
[2] OpenFog Reference Architecture for Fog Computing, Tech. rep., OpenFog Consortium Architecture Working Group (02 2017).
[3] W. Shi, J. Cao, Q. Zhang, Y. Li, L. Xu, Edge computing: Vision and challenges, IEEE Internet of Things Journal 3 (5) (2016) 637-646. doi:10.1109/JIOT.2016.2579198.
[4] C. Mouradian, D. Naboulsi, S. Yangui, R. H. Glitho, M. J. Morrow, P. A. Polakos, A comprehensive survey on fog computing: State-of-the-art and research challenges, IEEE Communications Surveys Tutorials PP (99) (2017) 1-1. doi:10.1109/COMST.2017.2771153.
[5] K. Velasquez, D. P. Abreu, M. R. M. Assis, C. Senna, D. F. Aranha, L. F. Bittencourt, N. Laranjeiro, M. Curado, M. Vieira, E. Monteiro, E. Madeira, Fog orchestration for the internet of everything: state-of-the-art and research challenges, Journal of Internet Services and Applications 9 (1) (2018) 14. doi:10.1186/s13174-018-0086-3. URL https://doi.org/10.1186/s13174-018-0086-3
[6] Z.-H. Zhan, X.-F. Liu, Y.-J. Gong, J. Zhang, H. S.-H. Chung, Y. Li, Cloud computing resource scheduling and a survey of its evolutionary approaches, ACM Comput. Surv. 47 (4) (2015) 63:1-63:33. doi:10.1145/2788397. URL http://doi.acm.org/10.1145/2788397
[7] M. Guzek, P. Bouvry, E. G. Talbi, A survey of evolutionary computation for resource management of processing in cloud computing [review article], IEEE Computational Intelligence Magazine 10 (2) (2015) 53-67. doi:10.1109/MCI.2015.2405351.
[8] A. C. Adamuthe, R. M. Pandharpatte, G. T. Thampi, Multiobjective virtual machine placement in cloud environment, in: 2013 International Conference on Cloud Ubiquitous Computing Emerging Technologies, 2013, pp. 8-13. doi:10.1109/CUBE.2013.12.
[9] C. Guerrero, I. Lera, C. Juiz, Migration-aware genetic optimization for mapreduce scheduling and replica placement in hadoop, Journal of Grid Computing 16 (2) (2018) 265-284. doi:10.1007/s10723-018-9432-8. URL https://doi.org/10.1007/s10723-018-9432-8
[10] D. Kimovski, N. Saurabh, V. Stankovski, R. Prodan, Multi-objective middleware for distributed VMI repositories in federated cloud environment, Scalable Computing: Practice and Experience 17 (4) (2016) 299-312. URL http://www.scpe.org/index.php/scpe/article/view/1202
[11] S. Frey, F. Fittkau, W. Hasselbring, Search-based genetic optimization for deployment and reconfiguration of software in the cloud, in: Proceedings of the 2013 International Conference on Software Engineering, ICSE '13, IEEE Press, Piscataway, NJ, USA, 2013, pp. 512-521. URL http://dl.acm.org/citation.cfm?id=2486788.2486856
[12] H. Ishibuchi, Y. Sakane, N. Tsukamoto, Y. Nojima, Evolutionary many-objective optimization by NSGA-II and MOEA/D with large populations, in: 2009 IEEE International Conference on Systems, Man and Cybernetics, 2009, pp. 1758-1763. doi:10.1109/ICSMC.2009.5346628.
[13] W. Peng, Q. Zhang, H. Li, Comparison between MOEA/D and NSGA-II on the Multi-Objective Travelling Salesman Problem, Springer Berlin Heidelberg, Berlin, Heidelberg, 2009, pp. 309-324. doi:10.1007/978-3-540-88051-6_14. URL https://doi.org/10.1007/978-3-540-88051-6_14
[14] H. Li, Q. Zhang, Multiobjective optimization problems with complicated Pareto sets, MOEA/D and NSGA-II, IEEE Transactions on Evolutionary Computation 13 (2) (2009) 284-302. doi:10.1109/TEVC.2008.925798.
[15] B. Donassolo, I. Fajjari, A. Legrand, P. Mertikopoulos, Fog Based Framework for IoT Service Provisioning, in: IEEE Consumer Communications & Networking Conference, Las Vegas, United States, 2019. URL https://hal.inria.fr/hal-01859695
[16] O. Skarlat, V. Karagiannis, T. Rausch, K. Bachmann, S. Schulte, A framework for optimization, service placement, and runtime operation in the fog, in: 2018 IEEE/ACM 11th International Conference on Utility and Cloud Computing (UCC), 2018, pp. 164-173. doi:10.1109/UCC.2018.00025.
[17] K. Kumar, J. Liu, Y.-H. Lu, B. Bhargava, A survey of computation offloading for mobile systems, Mobile Networks and Applications 18 (1) (2013) 129-140. doi:10.1007/s11036-012-0368-0. URL https://doi.org/10.1007/s11036-012-0368-0
[18] P. Mach, Z. Becvar, Mobile edge computing: A survey on architecture and computation offloading, IEEE Communications Surveys Tutorials 19 (3) (2017) 1628-1656. doi:10.1109/COMST.2017.2682318.
[19] Z. Wen, R. Yang, P. Garraghan, T. Lin, J. Xu, M. Rovatsos, Fog orchestration for internet of things services, IEEE Internet Computing 21 (2) (2017) 16-24. doi:10.1109/MIC.2017.36.
[20] O. Skarlat, M. Nardelli, S. Schulte, M. Borkowski, P. Leitner, Optimized IoT service placement in the fog, Service Oriented Computing and Applications. doi:10.1007/s11761-017-0219-8. URL https://doi.org/10.1007/s11761-017-0219-8
[21] L. Yang, J. Cao, G. Liang, X. Han, Cost aware service placement and load dispatching in mobile cloud systems, IEEE Transactions on Computers 65 (5) (2016) 1440-1452. doi:10.1109/TC.2015.2435781.
[22] H. R. Arkian, A. Diyanat, A. Pourkhalili, Mist: Fog-based data analytics scheme with cost-efficient resource provisioning for IoT crowdsensing applications, Journal of Network and Computer Applications 82 (Supplement C) (2017) 152-165. doi:10.1016/j.jnca.2017.01.012. URL http://www.sciencedirect.com/science/article/pii/S1084804517300188
[23] L. Gu, D. Zeng, S. Guo, A. Barnawi, Y. Xiang, Cost efficient resource management in fog computing supported medical cyber-physical system, IEEE Transactions on Emerging Topics in Computing 5 (1) (2017) 108-119. doi:10.1109/TETC.2015.2508382.
[24] K. Velasquez, D. P. Abreu, M. Curado, E. Monteiro, Service placement for latency reduction in the internet of things, Annals of Telecommunications 72 (1) (2017) 105-115. doi:10.1007/s12243-016-0524-9. URL https://doi.org/10.1007/s12243-016-0524-9
[25] Z. Huang, K.-J. Lin, S.-Y. Yu, J. Y. Jen Hsu, Co-locating services in IoT systems to minimize the communication energy cost, Journal of Innovation in Digital Ecosystems 1 (1) (2014) 47-57. doi:10.1016/j.jides.2015.02.005. URL http://www.sciencedirect.com/science/article/pii/S2352664515000061
[26] V. B. C. Souza, W. Ramírez, X. Masip-Bruin, E. Marín-Tordera, G. Ren, G. Tashakor, Handling service allocation in combined fog-cloud scenarios, in: 2016 IEEE International Conference on Communications (ICC), 2016, pp. 1-5. doi:10.1109/ICC.2016.7511465.
[27] O. Skarlat, M. Nardelli, S. Schulte, S. Dustdar, Towards QoS-aware fog service placement, in: 2017 IEEE 1st International Conference on Fog and Edge Computing (ICFEC), 2017, pp. 89-96. doi:10.1109/ICFEC.2017.12.
[28] D. Zeng, L. Gu, S. Guo, Z. Cheng, S. Yu, Joint optimization of task scheduling and image placement in fog computing supported software-defined embedded system, IEEE Transactions on Computers 65 (12) (2016) 3702-3712. doi:10.1109/TC.2016.2536019.
[29] M. Barcelo, A. Correa, J. Llorca, A. M. Tulino, J. L. Vicario, A. Morell, IoT-cloud service optimization in next generation smart environments, IEEE Journal on Selected Areas in Communications 34 (12) (2016) 4077-4090. doi:10.1109/JSAC.2016.2621398.
[30] L. Ni, J. Zhang, C. Jiang, C. Yan, K. Yu, Resource allocation strategy in fog computing based on priced timed petri nets, IEEE Internet of Things Journal 4 (5) (2017) 1216-1228. doi:10.1109/JIOT.2017.2709814.
[31] R. Urgaonkar, S. Wang, T. He, M. Zafer, K. Chan, K. K. Leung, Dynamic service migration and workload scheduling in edge-clouds, Performance Evaluation 91 (Supplement C) (2015) 205-228, special issue: Performance 2015. doi:10.1016/j.peva.2015.06.013. URL http://www.sciencedirect.com/science/article/pii/S0166531615000619
[32] A. Brogi, S. Forti, A. Ibrahim, How to best deploy your fog applications, probably, in: 2017 IEEE 1st International Conference on Fog and Edge Computing (ICFEC), 2017, pp. 105-114. doi:10.1109/ICFEC.2017.8.
[33] I. Lera, C. Guerrero, C. Juiz, Availability-aware service placement policy in fog computing based on graph partitions, IEEE Internet of Things Journal (2019) 1-1. doi:10.1109/JIOT.2018.2889511.
[34] G. Colistra, V. Pilloni, L. Atzori, The problem of task allocation in the internet of things and the consensus-based approach, Computer Networks 73 (Supplement C) (2014) 98-111. doi:10.1016/j.comnet.2014.07.011. URL http://www.sciencedirect.com/science/article/pii/S1389128614002655
[35] S. Wang, R. Urgaonkar, K. Chan, T. He, M. Zafer, K. K. Leung, Dynamic service placement for mobile micro-clouds with predicted future costs, in: 2015 IEEE International Conference on Communications (ICC), 2015, pp. 5504-5510. doi:10.1109/ICC.2015.7249199.
[36] R. Urgaonkar, S. Wang, T. He, M. Zafer, K. Chan, K. K. Leung, Dynamic service migration and workload scheduling in edge-clouds, Perform. Eval. 91 (C) (2015) 205-228. doi:10.1016/j.peva.2015.06.013. URL https://doi.org/10.1016/j.peva.2015.06.013
[37] B. Billet, V. Issarny, From task graphs to concrete actions: A new task mapping algorithm for the future internet of things, in: 2014 IEEE 11th International Conference on Mobile Ad Hoc and Sensor Systems, 2014, pp. 470-478. doi:10.1109/MASS.2014.20.
[38] H. Gupta, A. Vahid Dastjerdi, S. K. Ghosh, R. Buyya, iFogSim: A toolkit for modeling and simulation of resource management techniques in the internet of things, edge and fog computing environments, Software: Practice and Experience 47 (9) (2017) 1275-1296. doi:10.1002/spe.2509. URL http://dx.doi.org/10.1002/spe.2509
548
+ [39] M. Taneja, A. Davy, Resource aware placement of IoT application modules in fog-cloud computing paradigm, in: 2017 IFIP/IEEE Symposium on Integrated Network and Service Management (IM), 2017, pp. 1222-1228. doi:10.23919/INM.2017.7987464.
549
+ [40] S. Wang, M. Zafer, K. K. Leung, Online placement of multi-component applications in edge computing environments, IEEE Access 5 (2017) 2514-2533. doi:10.1109/ACCESS.2017.2665971.
550
+ [41] L. F. Bittencourt, J. Diaz-Montes, R. Buyya, O. F. Rana, M. Parashar, Mobility-aware application scheduling in fog computing, IEEE Cloud Computing 4 (2) (2017) 26-35. doi:10.1109/MCC.2017.27.
551
+ [42] I. Farris, L. Militano, M. Nitti, L. Atzori, A. Iera, Federated edge-assisted mobile clouds for service provisioning in heterogeneous IoT environments, in: 2015 IEEE 2nd World Forum on Internet of Things (WF-IoT), 2015, pp. 591-596. doi:10.1109/WF-IoT.2015.7389120.
552
+ [43] R. Deng, R. Lu, C. Lai, T. H. Luan, Towards power consumption-delay tradeoff by workload allocation in cloud-fog computing, in: 2015 IEEE International Conference on Communications (ICC), 2015, pp. 3909-3914. doi:10.1109/ICC.2015.7248934.
553
+ [44] E. Saurez, K. Hong, D. Lillethun, U. Ramachandran, B. Ottenwalder, Incremental deployment and migration of geo-distributed situation awareness applications in the fog, in: Proceedings of the 10th ACM International Conference on Distributed and Event-based Systems, DEBS '16, ACM, New York, NY, USA, 2016, pp. 258-269. doi:10.1145/2933267.2933317. URL http://doi.acm.org/10.1145/2933267.2933317
554
+ [45] V. Chamola, C. K. Tham, G. S. S. Chalapathi, Latency aware mobile task assignment and load balancing for edge cloudlets, in: 2017 IEEE International Conference on Pervasive Computing and Communications Workshops (PerCom Workshops), 2017, pp. 587-592. doi:10.1109/PERCOMW.2017.7917628.
555
+ [46] N. K. Giang, M. Blackstock, R. Lea, V. C. M. Leung, Developing iot applications in the fog: A distributed dataflow approach, in: 2015 5th
556
+
557
+ International Conference on the Internet of Things (IOT), 2015, pp. 155-162. doi:10.1109/IOT.2015.7356560.
558
+ [47] A. Balalaie, A. Heydarnoori, P. Jamshidi, Microservices architecture enables devops: Migration to a cloud-native architecture, IEEE Software 33 (3) (2016) 42-52. doi:10.1109/MS.2016.64.
559
+ [48] M. Vogler, J. M. Schleicher, C. Inzinger, S. Dustdar, A scalable framework for provisioning large-scale IoT deployments, ACM Trans. Internet Technol. 16 (2) (2016) 11:1-11:20. doi:10.1145/2850416. URL http://doi.acm.org/10.1145/2850416
560
+ [49] T. Vresk, I. Čavrak, Architecture of an interoperable iot platform based on microservices, in: 2016 39th International Convention on Information and Communication Technology, Electronics and Microelectronics (MIPRO), 2016, pp. 1196-1201. doi:10.1109/MIPRO.2016.7522321.
561
+ [50] N. Alshuqayran, N. Ali, R. Evans, A systematic mapping study in microservice architecture, in: 2016 IEEE 9th International Conference on Service-Oriented Computing and Applications (SOCA), 2016, pp. 44-51. doi:10.1109/SOCA.2016.15.
562
+ [51] R. Morabito, I. Farris, A. Iera, T. Taleb, Evaluating performance of containerized iot services for clustered devices at the network edge, IEEE Internet of Things Journal 4 (4) (2017) 1019-1030. doi:10.1109/JIOT.2017.2714638.
563
+ [52] F. Li, M. Voegler, M. Claessens, S. Dustdar, Efficient and scalable IoT service delivery on cloud, in: 2013 IEEE Sixth International Conference on Cloud Computing, 2013, pp. 740-747. doi:10.1109/Cloud.2013.64.
564
+ [53] L. Sun, Y. Li, R. A. Memon, An open iot framework based on microservices architecture, China Communications 14 (2) (2017) 154-162. doi:10.1109/CC.2017.7868163.
565
+ [54] A. Krylovskiy, M. Jahn, E. Patti, Designing a smart city internet of things platform with microservice architecture, in: 2015 3rd International Conference on Future Internet of Things and Cloud, 2015, pp. 25-30. doi:10.1109/FiCloud.2015.55.
566
+
567
+ [55] D. H. Wolpert, W. G. Macready, No free lunch theorems for optimization, Trans. Evol. Comp 1 (1) (1997) 67-82. doi:10.1109/4235.585893. URL http://dx.doi.org/10.1109/4235.585893
569
+ [56] Q. Zhang, H. Li, MOEA/D: A multiobjective evolutionary algorithm based on decomposition, IEEE Transactions on Evolutionary Computation 11 (6) (2007) 712-731. doi:10.1109/TEVC.2007.892759.
570
+ [57] D. E. Goldberg, K. Deb, A comparative analysis of selection schemes used in genetic algorithms, Vol. 1 of Foundations of Genetic Algorithms, Elsevier, 1991, pp. 69-93. doi:10.1016/B978-0-08-050684-5.50008-2. URL http://www.sciencedirect.com/science/article/pii/B9780080506845500082
571
+ [58] K. Deb, A. Pratap, S. Agarwal, T. Meyarivan, A fast and elitist multiobjective genetic algorithm: NSGA-II, Trans. Evol. Comp 6 (2) (2002) 182–197. doi:10.1109/4235.996017. URL http://dx.doi.org/10.1109/4235.996017
572
+ [59] M. Gen, R. Cheng, Genetic algorithms and engineering optimization, Vol. 7, John Wiley & Sons, 2000.
573
+ [60] M. Srinivas, L. M. Patnaik, Adaptive probabilities of crossover and mutation in genetic algorithms, IEEE Transactions on Systems, Man, and Cybernetics 24 (4) (1994) 656-667. doi:10.1109/21.286385.
574
+ [61] M. Mitchell, An Introduction to Genetic Algorithms, MIT Press, Cambridge, MA, USA, 1998.
575
+ [62] R. Mayer, L. Graser, H. Gupta, E. Saurez, U. Ramachandran, Emufog: Extensible and scalable emulation of large-scale fog computing infrastructures, in: 2017 IEEE Fog World Congress (FWC), 2017, pp. 1-6. doi:10.1109/FWC.2017.8368525.
576
+ [63] D. Koschutzki, K. A. Lehmann, L. Peeters, S. Richter, D. Tenfelde-Podehl, O. Zlotowski, Centrality Indices, Springer Berlin Heidelberg, Berlin, Heidelberg, 2005, pp. 16-61.
577
+
578
+ [64] J. K. Zao, T. T. Gan, C. K. You, S. J. R. Méndez, C. E. Chung, Y. T. Wang, T. Mullen, T. P. Jung, Augmented brain computer interaction based on fog computing and linked data, in: 2014 International Conference on Intelligent Environments, 2014, pp. 374-377. doi:10.1109/IE.2014.54.
579
+ [65] K. Hong, D. Lillethun, U. Ramachandran, B. Ottenwalder, B. Koldehofe, Mobile fog: A programming model for large-scale applications on the internet of things, in: Proceedings of the Second ACM SIGCOMM Workshop on Mobile Cloud Computing, MCC '13, ACM, New York, NY, USA, 2013, pp. 15-20. doi:10.1145/2491266.2491270. URL http://doi.acm.org/10.1145/2491266.2491270
580
+ [66] Weaveworks, ContainerSolutions, Socks shop - a microservices demo application (2016). URL https://microservices-demo.github.io/
581
+ [67] C. Guerrero, I. Lera, C. Juiz, Genetic algorithm for multi-objective optimization of container allocation in cloud architecture, Journal of Grid Computing 16 (1) (2018) 113-135. doi:10.1007/s10723-017-9419-x. URL https://doi.org/10.1007/s10723-017-9419-x
582
+ [68] C. Guerrero, I. Lera, C. Juiz, Resource optimization of container orchestration: a case study in multi-cloud microservices-based applications, Journal of Supercomputing. doi:10.1007/s11227-018-2345-2. URL https://doi.org/10.1007/s11227-018-2345-2
583
+ [69] E. Zitzler, L. Thiele, M. Laumanns, C. M. Fonseca, V. G. da Fonseca, Performance assessment of multiobjective optimizers: an analysis and review, IEEE Transactions on Evolutionary Computation 7 (2) (2003) 117-132. doi:10.1109/TEVC.2003.810758.
584
+ [70] J. Wu, S. Azarm, Metrics for quality assessment of a multiobjective design optimization solution set., ASME. J. Mech. Des. 123 (1) (2000) 18-25. doi:10.1115/1.1329875.
585
+ [71] C. Guerrero, I. Lera, Genetic algorithms for the placement of services in Fog domains. URL https://github.com/acsicuib/GA4FogPlacement
2501.09xxx/2501.09958/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:c0ba50076a55ff1a91018205079dcc645f0b16307ab2829c0754ff3942b45102
3
+ size 886819
2501.09xxx/2501.09958/layout.json ADDED
The diff for this file is too large to render. See raw diff
 
2501.09xxx/2501.09959/fcafa97e-c629-4479-a0e1-36ec041c4618_content_list.json ADDED
The diff for this file is too large to render. See raw diff
 
2501.09xxx/2501.09959/fcafa97e-c629-4479-a0e1-36ec041c4618_model.json ADDED
The diff for this file is too large to render. See raw diff
 
2501.09xxx/2501.09959/fcafa97e-c629-4479-a0e1-36ec041c4618_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:5cbfacc12d54ac005f38f90c21b90608efdfc020e91ab5d6dd0a314615023340
3
+ size 378199
2501.09xxx/2501.09959/full.md ADDED
The diff for this file is too large to render. See raw diff
 
2501.09xxx/2501.09959/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:0025673c78933e929135e943c7e594142918edd21b48b19c4f69b02d4f2350c7
3
+ size 185488
2501.09xxx/2501.09959/layout.json ADDED
The diff for this file is too large to render. See raw diff
 
2501.09xxx/2501.09967/d71f0106-7dbe-47bb-88c7-826b81b14c28_content_list.json ADDED
The diff for this file is too large to render. See raw diff
 
2501.09xxx/2501.09967/d71f0106-7dbe-47bb-88c7-826b81b14c28_model.json ADDED
The diff for this file is too large to render. See raw diff
 
2501.09xxx/2501.09967/d71f0106-7dbe-47bb-88c7-826b81b14c28_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:c67e78124120ccdf37663beaba6ebd7e26d35d06719bc77e7d58a39f16bcfa3a
3
+ size 6584787
2501.09xxx/2501.09967/full.md ADDED
The diff for this file is too large to render. See raw diff
 
2501.09xxx/2501.09967/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:15d20caf94591ddaac8c2de48c86cbdbe6779e950d94f893fe44aad2d17d917f
3
+ size 1901472
2501.09xxx/2501.09967/layout.json ADDED
The diff for this file is too large to render. See raw diff
 
2501.09xxx/2501.09996/a1383725-a489-4e27-9f8f-ea5cefeb3f84_content_list.json ADDED
The diff for this file is too large to render. See raw diff
 
2501.09xxx/2501.09996/a1383725-a489-4e27-9f8f-ea5cefeb3f84_model.json ADDED
The diff for this file is too large to render. See raw diff
 
2501.09xxx/2501.09996/a1383725-a489-4e27-9f8f-ea5cefeb3f84_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:17f7dfe88c39d3072e11a567bb1276fa1c16ad3633eabc862e7bddfb16e7cecb
3
+ size 714763
2501.09xxx/2501.09996/full.md ADDED
@@ -0,0 +1,487 @@
1
+ # Fast energy-aware OLSR routing in VANETs by means of a parallel evolutionary algorithm
2
+
3
+ Jamal Toutouh · Sergio Nesmachnow ·
4
+
5
+ Enrique Alba
6
+
7
+ Received: date / Accepted: date
8
+
9
+ Abstract This work tackles the problem of reducing the power consumption of the OLSR routing protocol in vehicular networks. Nowadays, energy-aware and green communication protocols are important research topics, especially when deploying wireless mobile networks. This article introduces a fast automatic methodology to search for energy-efficient OLSR configurations by using a parallel evolutionary algorithm. The experimental analysis demonstrates that significant improvements over the standard configuration can be attained in terms of power consumption, with no noteworthy loss in the QoS.
10
+
11
+ Keywords energy $\cdot$ vehicular networks $\cdot$ evolutionary algorithms $\cdot$ parallelism
12
+
13
+ # 1 Introduction
14
+
15
+ In the last five years, the networking research community has shown a growing interest in vehicular ad hoc networks (VANETs), a technology that uses vehicles as nodes of a mobile network [24]. VANETs share their main concepts with generic mobile ad hoc networks (MANETs), but they also have several distinctive features. For example, the node mobility in VANETs is different from the models used in other mobile networks, since vehicles tend to move following organized patterns, and they are usually subject to restrictions in both their motion range and in the interactions with roadside infrastructure. In addition, VANETs integrate multiple ad hoc networking technologies (such as WiFi IEEE 802.11p, WiMAX IEEE 802.16, Bluetooth, etc.), posing a difficult challenge for attaining effective and simple communication between vehicles.
16
+
17
+ VANETs involve communication between vehicles and other battery-fed devices—pedestrian smartphones, road transceivers, sensors. Thus, the power consumption by wireless communications becomes a major concern, and the use of energy-efficient communications is highly desirable.
18
+
19
+ Network routing is a critical issue in VANETs, as well as in any other ad hoc network. The absence of a central entity to manage the routing information, the limitations of the shared medium, and the dynamic topology due to the high node mobility and obstacles, make the routing problem even harder. Proactive protocols are a useful choice for routing in VANETs, since they generally outperform reactive ones in terms of quality of service (QoS), network throughput, and end-to-end delay [23]. However, proactive protocols have a higher routing overhead, significantly reducing their energy efficiency [8, 31].
20
+
21
+ Optimized Link State Routing (OLSR) [12] is a well-known proactive routing protocol used in VANETs. The energy efficiency of OLSR has been studied focusing on specific protocol variants [13, 27], but VANET infrastructures have seldom been considered. The OLSR power consumption can be improved by modifying the standard parameter configuration, in order to reduce the routing overhead. However, it is not easy to find the best OLSR configuration. Exact and enumerative methods are not applicable to solve the underlying optimization problem, since they require prohibitive execution times to perform the search, even when considering only a small set of parameter values. In this context, metaheuristics are a promising option to find accurate energy-aware OLSR configurations in reasonable times, even when a large set of parameter values is considered, as in the problem tackled in this paper.
22
+
23
+ Evolutionary algorithms (EAs) have emerged as flexible and robust metaheuristics for search and optimization, achieving a high level of problem solving efficacy in many application areas [6]. In order to further improve the efficiency of EAs, parallel implementations have been used to significantly enhance and speed up the search, allowing high quality results to be computed in reasonable execution times even for hard-to-solve optimization problems [1].
24
+
25
+ This work proposes applying an automatic configuration of the main OLSR parameters by using a parallel EA. The main goals of the research are: i) to improve the efficiency of OLSR in VANETs, trying to reduce the power consumption when using the standard Request for Comments (RFC) 3626 configuration [12], and ii) to scale down the times required to perform the automatic configuration, in order to study large realistic VANET scenarios.
26
+
27
+ The methodology applied in this work consists of exploring the search space for possible combinations of eight parameter values that define the OLSR routing protocol, by using a genetic algorithm (GA). The power consumption due to data exchange of each OLSR configuration is evaluated using the data obtained after performing VANETs simulations with the ns-2 network simulator. Since these simulations require a long time to perform, a parallel implementation of the GA is used in order to reduce the search execution times. The best configurations are compared with the standard one defined by RFC 3626, both in terms of power consumption and QoS. Finally, the best energy-aware OLSR configuration found is validated on a set of 36 VANET scenarios.
28
+
29
+ The article is organized as follows. Section 2 introduces the energy-aware routing problem in VANETs, the OLSR protocol, the power consumption model, and reviews related work on metaheuristics for protocol optimization in MANETs/VANETs and methods for energy-efficient OLSR. Section 3 describes evolutionary computing and the parallel model for EAs employed here. Section 4 presents the implementation details of the parallel GA to find energy-aware OLSR configurations in VANETs. The experimental analysis in Section 5 studies the numerical efficacy and the computational efficiency of the parallel GA, and also presents a validation of the best configuration found on a large set of VANET scenarios. Finally, Section 6 presents the main conclusions of the research and formulates the main lines for future work.
30
+
31
+ # 2 Energy aware routing in vehicular networks
32
+
33
+ This section introduces VANET routing, the OLSR protocol, the power consumption model used in our approach, and a review of related work. It also describes the methodology for finding energy-efficient OLSR configurations.
34
+
35
+ # 2.1 Routing in VANETs
36
+
37
+ Finding a stable routing strategy that guarantees the exchange of up-to-date information while maximizing reliability and minimizing delays is an important technical challenge when designing an architecture for vehicular communication.
38
+
39
+ In VANETs, the links for vehicle-to-vehicle and vehicle-to-infrastructure communication tend to be short-lived, due to the intrinsic high-speed node mobility and the presence of obstacles. Therefore, a great deal of effort is dedicated to defining efficient routing strategies. Specific VANET protocols have appeared over the last few years, but most of them are based on prior mobile ad hoc network protocols. These protocols can be grouped into: topology-based (proactive, e.g., DSDV and OLSR; reactive, e.g., AODV and DSR; hybrid), position-based (e.g., GPSR, GEOTORA, GPCR), cluster-based (e.g., COIN, LORA CBF) and broadcasting (e.g., BROADCOMM, V-TRADE, HV-TRADE) [30, 29].
40
+
41
+ Within those protocols originally proposed for MANETs, topology-based protocols are among the most studied for routing in VANETs [29]. In proactive protocols, all nodes have consistent and up-to-date routing information for each node permanently, unlike in reactive ones, where the routes are created when demanded by the source node [30]. Proactive protocols have the advantage of reduced end-to-end delays, since the routes are already established and it is not necessary to invoke a routing discovery process to find them, as in reactive protocols. However, proactive protocols require a continuous exchange of control messages to maintain the topological information stored in the routing tables. While negligible for small scenarios, control messages use significant additional bandwidth for large networks, leading to excessive power consumption, possibly preventing the use of devices fed by batteries or renewable energy sources in VANETs [23].
42
+
43
+ In this work, we restrict our attention to OLSR, a proactive routing protocol that has been analyzed for use in VANETs through both simulations [9, 28] and real world tests [38]. In turn, different comparisons of this protocol against a reactive approach (AODV) concluded that OLSR principally outperforms AODV in terms of delivery delays and path lengths, while keeping a similar percentage of packets delivered correctly [23, 39]. The type of routing protocol affects the nodes power consumption in two different ways: the routing network load influences the amount of energy used to send and receive routing control messages; and the generated routing paths affect the power consumption in those nodes forwarding the packets [8, 40].
44
+
45
+ For the aforementioned reasons, we have selected OLSR as a use case, since its main drawback is its power consumption. Thus, we can analyze the use of our parallel GA to deal with the energy-efficient routing problem in VANETs.
46
+
47
+ # 2.2 Optimized link state routing protocol
48
+
49
+ OLSR is a proactive link-state routing protocol conceived for mobile ad hoc networks with low bandwidth and high mobility. OLSR relies on applying an efficient periodic flooding of control information using special nodes that act as multipoint relays (MPRs), reducing the number of required transmissions [32].
50
+
51
+ OLSR daemons periodically exchange control messages to maintain the network topology information in the presence of mobility and failures. The core functionality is performed mainly by using three different types of messages:
52
+
53
+ - HELLO messages, exchanged between neighbor nodes to allow for link sensing, neighborhood detection, and MPR selection signaling. These messages are generated periodically, containing information about the neighbor nodes and about the links between their network interfaces.
54
+ - TC (topology control) messages, generated by each MPR to indicate which other nodes have selected it as their MPR. This information is used for routing table calculations. TC messages are broadcast periodically, and a sequence number is used to distinguish between recent and old ones.
55
+ - MID (multiple interface declaration) messages, sent by the nodes to report information about their network interfaces, needed since multiple interfaces with different addresses can be involved in the communications.
56
+
57
+ OLSR is regulated by a set of parameters defined in the OLSR RFC 3626 [12]:
58
+
59
+ - the timeouts before resending each message type, HELLO INTERVAL, REFRESH INTERVAL, and TC INTERVAL, respectively;
60
+ - the "validity time" of the information received for each message type, NEIGHB HOLD TIME, MID HOLD TIME, and TOP HOLD TIME;
61
+ - the WILLINGNESS of a node to act as a MPR;
62
+ - the time during which the MPRs record information about the forwarded packets, DUP_HOLD_TIME.
63
+
64
+ A set of default values for these parameters has been suggested by the OLSR standard RFC 3626 (see Table 1).
65
+
66
+ Table 1 Main OLSR parameters and standard values in the RFC 3626.
67
+
68
+ <table><tr><td>parameter</td><td>standard value (RFC 3626 [12])</td><td>range</td></tr><tr><td>HELLO INTERVAL</td><td>2.0 s</td><td>R ∈ [2.0, 15.0]</td></tr><tr><td>REFRESH INTERVAL</td><td>2.0 s</td><td>R ∈ [2.0, 15.0]</td></tr><tr><td>TC INTERVAL</td><td>5.0 s</td><td>R ∈ [4.0, 35.0]</td></tr><tr><td>WILLINGNESS</td><td>3</td><td>Z ∈ [0, 7]</td></tr><tr><td>NEIGHB HOLD TIME</td><td>3 × HELLO INTERVAL</td><td>R ∈ [5.5, 45.0]</td></tr><tr><td>TOP HOLD TIME</td><td>3 × TC INTERVAL</td><td>R ∈ [10.5, 90.0]</td></tr><tr><td>MID HOLD TIME</td><td>3 × TC INTERVAL</td><td>R ∈ [10.5, 90.0]</td></tr><tr><td>DUP HOLD TIME</td><td>30.0 s</td><td>R ∈ [10.5, 90.0]</td></tr></table>
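For illustration, the standard values and search ranges in Table 1 can be encoded as a small configuration structure. The dictionary layout and the names below are our own sketch, not code from the paper:

```python
# OLSR parameters: RFC 3626 standard values and the search ranges of
# Table 1. Derived hold times use their "3 x interval" defaults.
OLSR_STANDARD = {
    "HELLO_INTERVAL": 2.0,
    "REFRESH_INTERVAL": 2.0,
    "TC_INTERVAL": 5.0,
    "WILLINGNESS": 3,
    "NEIGHB_HOLD_TIME": 6.0,   # 3 x HELLO_INTERVAL
    "TOP_HOLD_TIME": 15.0,     # 3 x TC_INTERVAL
    "MID_HOLD_TIME": 15.0,     # 3 x TC_INTERVAL
    "DUP_HOLD_TIME": 30.0,
}

OLSR_RANGES = {
    "HELLO_INTERVAL": (2.0, 15.0),
    "REFRESH_INTERVAL": (2.0, 15.0),
    "TC_INTERVAL": (4.0, 35.0),
    "WILLINGNESS": (0, 7),          # integer-valued
    "NEIGHB_HOLD_TIME": (5.5, 45.0),
    "TOP_HOLD_TIME": (10.5, 90.0),
    "MID_HOLD_TIME": (10.5, 90.0),
    "DUP_HOLD_TIME": (10.5, 90.0),
}

def in_range(config):
    """Check that every parameter lies inside its allowed range."""
    return all(OLSR_RANGES[k][0] <= v <= OLSR_RANGES[k][1]
               for k, v in config.items())
```

A structure like this is what an automatic tuner would sample from: each candidate OLSR configuration is a point inside these ranges.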
69
+
70
+ OLSR has several features that make it suitable for highly dynamic ad hoc networks such as VANETs: i) it is well suited for high-density networks, with concentrated communication between a large number of nodes [12, 25]; ii) it is useful for applications requiring short delays in data transmission, as is the case for most warning information in VANETs [25]; iii) the protocol information can be extended with data that allow the hosts to know in advance the quality of the routes; iv) it permits easy integration into existing operating systems and devices, including smartphones and embedded systems, without changing the header of the IP messages [19]; and v) it manages multiple interface addresses for the same host, allowing VANET nodes to use different network interfaces (WiFi, Bluetooth, etc.) while acting as gateways to other devices, such as driver and pedestrian smartphones, base stations, etc. [12].
71
+
72
+ # 2.3 Power consumption model
73
+
74
+ Several agents are involved in VANET communications, such as on-board devices, smartphones, or traffic signs, which use wireless network interfaces to exchange information with each other. The energy required for each device to perform the communications depends on its mode:
75
+
76
+ - idle is the default state of wireless interfaces in ad hoc networks, where nodes keep listening and the interface can change the state and start transmitting or receiving packets.
77
+ - transmit and receive states are for sending and receiving data through the medium.
78
+ - sleep state is when the node radio is turned off, and thus the node is not capable of detecting any signal.
79
+
80
+ In our work, we modify the behavior of OLSR in order to reduce the power consumption due to data exchange (control or information messages). We deal with energy-awareness in VANETs by optimizing the power consumption of the two operational states that act during the packet exchange: transmit and receive states. Therefore, we consider the per-packet power consumption [16] modeled by Cano et al. [8], in which only transmit and receive modes are taken into account to compute the power consumption to be optimized.
81
+
82
+ The energy is computed according to the power requirements in transmitting $(P_{send})$ and receiving $(P_{recv})$ states, and the time needed to transmit the packets (time). These values are obtained by using the network interface card (NIC) characteristics of electric current $(I_{send}, I_{recv})$ and power supply $(V_{send}, V_{recv})$ in each state, the size of the packets, and the bandwidth used.
83
+
84
+ Equations 1 and 2 represent the energy required for packet transmission $(E_{send})$ and for packet reception $(E_{recv})$ .
85
+
86
+ $$
87
+ E_{send} = P_{send} \times time = \left(I_{send} \times V_{send}\right) \times \frac{\text{PacketSize}}{\text{Bandwidth}} \tag{1}
88
+ $$
89
+
90
+ $$
91
+ E_{recv} = P_{recv} \times time = \left(I_{recv} \times V_{recv}\right) \times \frac{\text{PacketSize}}{\text{Bandwidth}} \tag{2}
92
+ $$
93
+
94
+ According to the specification of the Unex DCMA-86P2 NIC [43] modeled in our simulations, the interface draws $440\mathrm{mA}$ in transmitting mode and $260\mathrm{mA}$ in receiving mode, and it is fed with 5.0 V. This network interface uses a 6 Mbps bandwidth implementation of the standard IEEE 802.11p. Thus, the energy consumed in the transmitting $(E_{send})$ and receiving $(E_{recv})$ states, in Joules, is given by Equations 3 and 4, respectively, where the packet size is given in bits.
95
+
96
+ $$
97
+ E_{send} = (440 \times 5) \times \frac{\text{PacketSize}}{6 \times 10^{6}} \tag{3}
98
+ $$
99
+
100
+ $$
101
+ E_{recv} = (260 \times 5) \times \frac{\text{PacketSize}}{6 \times 10^{6}} \tag{4}
102
+ $$
103
+
104
+ The total power consumption for a packet transmission is the sum of the costs incurred by the sending node and all receivers, whether they are the destination nodes or not. Equation 5 computes the total power consumption per packet $(E_{\text{total}})$ when there are $r$ receiver nodes in the communication range of the sender.
105
+
106
+ $$
107
+ E_{total} = E_{send} + \sum_{i=1}^{r} E_{recv} \tag{5}
108
+ $$
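Under the stated NIC figures, the per-packet model of Equations 1-5 is straightforward to sketch. In the snippet below the currents are expressed in amperes so that the result comes out in joules; the variable names are ours:

```python
# Per-packet energy model (Eqs. 1-5) with the Unex DCMA-86P2 figures
# quoted in the text: 440 mA transmit, 260 mA receive, 5.0 V supply,
# 6 Mbps bandwidth.
I_SEND, I_RECV = 0.440, 0.260   # electric current, A
V_SUPPLY = 5.0                  # power supply, V
BANDWIDTH = 6e6                 # bit/s

def e_send(packet_bits):
    """Energy to transmit one packet (Eqs. 1 and 3), in joules."""
    return (I_SEND * V_SUPPLY) * packet_bits / BANDWIDTH

def e_recv(packet_bits):
    """Energy to receive one packet (Eqs. 2 and 4), in joules."""
    return (I_RECV * V_SUPPLY) * packet_bits / BANDWIDTH

def e_total(packet_bits, receivers):
    """Total energy of one transmission heard by `receivers` nodes (Eq. 5)."""
    return e_send(packet_bits) + receivers * e_recv(packet_bits)
```

For example, a 1000-byte packet (8000 bits) transmitted once and heard by 3 nodes costs roughly 8.1 mJ under this model.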
109
+
110
+ # 2.4 Related work
111
+
112
+ The need to provide efficient communications in MANETs and VANETs has motivated the research community to deal with the problem of optimizing the communication protocols employed in such networks. The related studies have mainly focused on obtaining significant improvements in terms of both the QoS offered (packet delivery ratio, delivery delays, etc.) and the resources consumed, e.g., power requirements. Due to the complexity of the underlying optimization problems, metaheuristics have usually been applied as the most appropriate techniques to solve them.
113
+
114
+ # 2.4.1 Metaheuristics for protocol optimization in MANETs and VANETs
115
+
116
+ Regarding optimization techniques in MANETs, Alba et al. [3] applied a specialized cellular multi-objective GA for finding an optimal configuration for the Delayed Flooding with Cumulative Neighborhood broadcasting strategy. Dorronsoro et al. [15] evaluated six different versions of a GA for the design of ad hoc injection networks. Cheng et al. [10] also used a GA for dealing with the multicast routing problem in MANETs. More recently, Ruiz et al. [35, 36] applied a hybrid multi-objective optimization algorithm (CellDE) to maximize the coverage and minimize the power consumption and broadcast time of the EDB protocol.
117
+
118
+ In VANETs, there are just a few approaches applying metaheuristics to optimize communication protocols. García-Nieto et al. [18] employed a set of metaheuristic algorithms to optimize the VDTP and AODV [17] protocols. Recently, Toutouh et al. [42] applied differential evolution (DE) to improve the performance of the OLSR routing protocol in such networks.
119
+
120
+ # 2.4.2 Methods for energy-efficient OLSR
121
+
122
+ The related literature presents a number of power-aware mechanisms proposed at the network layer in wireless networks, mainly due to the impact of the routing protocols on the overall power consumption. These protocols determine the power consumed in creating and maintaining the routes and in forwarding the data packets. In this work, we aim to provide an energy-efficient OLSR configuration when applying this protocol for routing in VANETs. OLSR has been selected as a case study since it offers competitive QoS in such networks [41], but it also requires significant power consumption.
123
+
124
+ Several approaches have been proposed to reduce the power consumption when using OLSR. Ghanem et al. [20] and Razalli et al. [33] evaluated new MPRs selection criteria based on the residual energy levels of the nodes. Routing path determination based on the overall power consumption to forward data and on the residual level of energy of intermediate nodes was explored by De Rango and Fotino [14] and Guo and Malakooti [22], respectively. Other authors have analyzed combinations of the aforementioned techniques [7, 13, 27, 31, 37]. Finally, De Rango et al. [13] presented Overhearing Exclusion, a mechanism that allows energy saving by turning off the device when a unicast message exchange happens in the device neighborhood.
125
+
126
+ Our previous article [40] studied the possible energy savings when an efficient protocol configuration in terms of QoS (DE-OLSR) is used. That is the only existing work studying the best parameter configurations to improve the energy efficiency of OLSR specifically in VANETs. The impact of the parameter configuration on the network performance led us to perform the in-depth study of OLSR parameter tuning that we now present, in order to find the best configuration in terms of energy efficiency in VANETs. As in the previously presented MANET/VANET optimization problems, the use of metaheuristic techniques is mandatory to deal with such problems.
127
+
128
+ # 2.5 Methodology for energy efficient OLSR via parameter tuning
129
+
130
+ The standard OLSR parameter values in Table 1 can be fine-tuned automatically by using an optimization technique, with the aim of obtaining efficient OLSR configurations for VANETs. This procedure is expected to reduce the power consumption without incurring a significant loss of QoS in comparison with the standard OLSR definition in RFC 3626.
131
+
132
+ The search of possible combinations of OLSR parameter values is not an easy problem. The dimension of the search space increases exponentially with both the number and the range of possible parameter values. Thus, exact search methods are not useful for efficiently solving the problem. In this context, heuristic and metaheuristic optimization algorithms are viable options to compute accurate energy-aware configurations in reasonable times.
133
+
134
+ In our previous paper [42], the large amount of time required to perform the VANET simulations limited the proposed search method to work with a reduced population in order to obtain results in reasonable time. To overcome this drawback, this work proposes to use a parallel GA for efficiently searching the parameter values of the OLSR protocol. By using several computing resources simultaneously, the parallel implementation allows the simulation times to be reduced.
135
+
136
+ The automatic search for energy-aware OLSR configurations is carried out by using the energy cost of the communications as the main objective to be optimized. However, since excessive reductions of power consumption of the protocol can cause it to malfunction, we use the packet delivery ratio (PDR) quality metric to guarantee a minimum level of QoS in the communications. Thus, the parallel GA for finding energy-efficient parameter values searches the best configuration that provides the most energy savings while maintaining PDR within margins of good performance (the degradation in the PDR value is kept below $15\%$ of the PDR achieved with the standard OLSR configuration).
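The constrained search described above (minimize energy, but reject configurations whose PDR degrades more than 15% below the standard-OLSR baseline) can be sketched as a penalized fitness function. The names and the infinite-penalty scheme are illustrative assumptions, not the paper's exact implementation:

```python
MAX_PDR_LOSS = 0.15  # allowed PDR degradation vs. the RFC 3626 baseline

def fitness(energy, pdr, baseline_pdr):
    """Lower is better: the energy cost of the simulated scenario,
    with an infeasibility penalty when the PDR drops more than 15%
    below the PDR achieved by the standard OLSR configuration."""
    if pdr < (1.0 - MAX_PDR_LOSS) * baseline_pdr:
        return float("inf")  # configuration degrades QoS too much
    return energy
```

In the paper's methodology, `energy` and `pdr` would come from parsing the ns-2 simulation output for each candidate OLSR configuration.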
137
+
138
+ ![](images/89e05b2c99b8bdfbcf04cf990c72f5ed6f2f6543f050eddd3fb5f171861a94ff.jpg)
139
+ Fig. 1 Automatic methodology for energy-aware OLSR tuning.
140
+
141
+ Fig. 1 summarizes the automatic methodology for finding energy-aware parametrizations for the OLSR protocol in VANETs. The proposed method integrates the evolutionary search via a parallel GA, the routing simulation in VANETs using the ns-2 network simulator and the UM-OLSR implementation from University of Murcia [34], and a set of scripts developed to evaluate the power consumption and QoS using the ns-2 output.
142
+
143
+ # 3 Evolutionary computation
144
+
145
+ This section introduces the main concepts about evolutionary computation and the parallel model applied to the GA used in this paper.
146
+
147
+ # 3.1 Evolutionary algorithms
148
+
149
+ EAs are non-deterministic methods that emulate the evolutionary process of species in nature, in order to solve optimization, search, and other problems [6]. Over the last twenty years, EAs have been successfully applied for solving optimization and search problems underlying many complex real-life applications. An EA is an iterative technique (each iteration is called a generation) that applies stochastic operators on a pool of individuals (the population). Each individual in the population is the encoded version of a tentative solution of the problem. The initial population is generated either by using a random method or by applying a specific heuristic for the problem. An evaluation function associates a fitness value with every individual, indicating its suitability to the problem. Iteratively, the population is modified by the probabilistic application of variation operators like the recombination of individuals or random changes (mutations) in their contents. A selection technique that gives a higher chance of survival to the best suited individuals guides the EA to tentative solutions of better quality through the generations.
152
+
153
+ The stopping criterion usually involves a fixed number of generations or execution time, a quality threshold on the best fitness value, or the detection of a stagnation situation. Specific policies are used to select the individuals to recombine and to determine which new individuals are inserted in the population in each new generation. The EA returns the best solution ever found in the iterative process, taking into account the fitness function considered.
154
+
155
+ The classic GA [21] is an EA that defines recombination and mutation as variation operators, applying them to the population of potential solutions in each generation. The recombination is used as the main operator to perform the search (exploiting the characteristics of suitable individuals), while the mutation is used as a (seldom-applied) secondary operator aimed at providing diversity for exploring different zones of the search space.
156
+
157
+ GAs are widely used due to their versatility for solving optimization problems. Here, a parallel version of the classic GA has been applied to the problem of finding energy-aware OLSR configurations in VANETs.
158
+
159
+ # 3.2 Parallel evolutionary algorithms
160
+
161
+ Parallel implementations became popular in the last decade as an effort to improve the efficiency of EAs. By splitting the population or the fitness function evaluation into several processing elements, parallel EAs allow reaching high-quality results in a reasonable execution time even for hard-to-solve optimization problems [2]. The parallel GA proposed here is categorized within the master-slave model according to the classification by Alba and Tomassini [4]. The master-slave model (see Fig. 2) follows a classic functional decomposition of the EA, where different stages of the evolutionary process are performed on several computing resources. The evaluation of the fitness function is the main candidate to perform in parallel, since it usually requires larger computing time than the application of the variation operators.
166
+
167
+ Thus, a master-slave parallel EA is organized in a hierarchical structure: a master process performs the evolutionary search and controls a group of slave processes that evaluate the fitness function.
170
+
171
+ ![](images/4080cdf800dcc921da1129f384190e35c3be3d946b8d462ed0cab2f58473ac87.jpg)
172
+ Fig. 2 Master-slave model for parallel EAs.
173
+
174
+ # 4 A parallel GA for energy-aware OLSR tuning
175
+
176
+ This section presents the implementation details of the parallel GA designed to find the energy-aware configuration of OLSR.
177
+
178
+ # 4.1 The MALLBA library
179
+
180
+ MALLBA [2] is a library of optimization algorithms that deals with parallelism in a user-friendly and efficient manner. MALLBA implements EAs and other metaheuristics as generic templates in software skeletons to be instantiated with the problem features by the user. These templates incorporate the knowledge related to the resolution method, its interactions with the problem, and the parallelism. Skeletons are implemented by required and provided $\mathrm{C}++$ classes that abstract the entities in the resolution method:
181
+
182
+ - The provided classes implement internal aspects of the skeleton in a problem-independent way. The most important provided classes are Solver, which implements each optimization algorithm; SetUpParams, for setting the algorithms' parameters; and Population, to store a set of candidate solutions.
183
+ - The required classes specify information related to the problem. Each skeleton includes the Problem and Solution required classes, which encapsulate the problem-dependent entities needed by the resolution method.
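A miniature version of this provided/required split, with illustrative class bodies rather than the real MALLBA interfaces, might look like:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// "Required" classes: the user fills these in with problem knowledge.
struct Solution {
    std::vector<double> genes;                 // problem-dependent encoding
};

struct Problem {
    std::size_t dimension = 8;                 // e.g., eight OLSR parameters
    double evaluate(const Solution& s) const { // problem-dependent fitness
        double acc = 0.0;
        for (double g : s.genes) acc += g;
        return acc;
    }
};

// "Provided" classes: problem-independent machinery shipped by the skeleton.
struct SetUpParams {
    std::size_t population_size = 24;
    std::size_t generations = 100;
};

class Population {
public:
    Population(std::size_t n, std::size_t dims)
        : members_(n, Solution{std::vector<double>(dims, 0.0)}) {}
    std::size_t size() const { return members_.size(); }
    Solution& at(std::size_t i) { return members_[i]; }
private:
    std::vector<Solution> members_;
};

class Solver {  // drives the optimization algorithm over the population
public:
    Solver(const Problem& p, const SetUpParams& params)
        : problem_(p), pop_(params.population_size, p.dimension) {}
    double evaluate_all() {  // one skeleton-side step: evaluate everyone
        double best = problem_.evaluate(pop_.at(0));
        for (std::size_t i = 1; i < pop_.size(); ++i)
            best = std::max(best, problem_.evaluate(pop_.at(i)));
        return best;
    }
private:
    Problem problem_;
    Population pop_;
};

// Tiny demonstration: an all-zero population has best fitness 0.
inline double demo_best_fitness() {
    Problem p;
    SetUpParams params;
    Solver solver(p, params);
    return solver.evaluate_all();
}
```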
184
+
185
+ # 4.2 Parallel multithreading GA in MALLBA
186
+
187
+ The skeletons in MALLBA offer support for parallelism using the distributed-memory approach (i.e., implementing distributed subpopulation models for metaheuristics). However, the library does not provide support for shared-memory multithreaded parallel programming.
188
+
189
+ Multithreaded programming allows implementing efficient algorithms by using multiple threads within a single process. Multithreading is well suited for multi-core computers, where each thread is executed on a single core. It provides a fast method for concurrent execution; communications and synchronizations are performed via the shared memory, which is handled using mutually exclusive operations in order to prevent simultaneous accesses. There is a runtime overhead for creating and destroying threads, and a common approach to avoid it is using a thread pool. Instead of creating a new thread for each task, the application takes an available thread from the pool, uses it to perform the task, and then returns it to the pool instead of destroying it. This reuse improves the performance of the parallel program by reducing the cost of thread creation and termination.
190
+
191
+ The multithreading master-slave parallel EA proposed in this work was implemented using the GA skeleton in MALLBA. Additional code was incorporated into the GA skeleton to implement several new features:
192
+
193
+ - to create and manage the pool of threads used for the fitness evaluation;
194
+ - to implement the master-slave hierarchy and the communications between master and slaves;
195
+ - to define the synchronization mechanisms between threads, used to read and write the shared memory.
196
+
197
+ Our implementation starts by creating and initializing a pool of threads to distribute the fitness evaluation. Each thread receives several input parameters from the master process, including the solution to be evaluated, the thread identification, and the index in the array of fitness values. Then, each slave process, implemented in each thread, computes the fitness evaluation by simulating the mobile communications with the proposed OLSR parameters configuration in a given VANET scenario, using the ns-2 network simulator. The master process, implemented in the main thread of execution, is in charge of performing the domain decomposition for the problem, by assigning each thread the solutions to be evaluated. After that, the master process waits until all slave threads finish their execution and report the fitness value.
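A minimal sketch of the master distributing fitness evaluations to slave threads, using C++11 `std::thread` in place of the pthread API of the actual implementation, and with a cheap stand-in for the ns-2 simulation:

```cpp
#include <cstddef>
#include <thread>
#include <vector>

// Stand-in for the ns-2 simulation used in the paper: any expensive
// per-solution evaluation is distributed the same way.
static double simulate_fitness(const std::vector<double>& solution) {
    double f = 0.0;
    for (double g : solution) f += g * g;
    return f;
}

// Master-slave evaluation: the master performs a static domain decomposition
// of the population, each slave thread writes its fitness values into a
// disjoint set of slots of `fitness` (so no locking is needed here), and the
// master then joins all slaves before continuing the evolutionary search.
std::vector<double> evaluate_population(
        const std::vector<std::vector<double>>& population,
        std::size_t num_threads) {
    std::vector<double> fitness(population.size(), 0.0);
    std::vector<std::thread> slaves;
    for (std::size_t t = 0; t < num_threads; ++t) {
        slaves.emplace_back([&, t]() {
            // Thread t evaluates solutions t, t + T, t + 2T, ...
            for (std::size_t i = t; i < population.size(); i += num_threads)
                fitness[i] = simulate_fitness(population[i]);
        });
    }
    for (auto& s : slaves) s.join();  // master waits for all fitness reports
    return fitness;
}
```

A real thread-pool variant would keep the slaves alive across generations instead of re-creating them, as discussed above.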
198
+
199
+ # 4.3 Problem encoding
200
+
201
+ The OLSR protocol is governed by eight different configuration parameters, already presented in Table 1. For this reason, in the parallel GA the solutions are represented by individuals encoded as a vector with eight genes, one for each parameter, as presented in Fig. 3.
202
+
203
+ ![](images/88062d2117d9d34df36b06691ac9c7aae5885402ff19df4db01de238db2c30a8.jpg)
204
+ Fig. 3 Solution encoding for the energy-aware OLSR tuning problem.
205
+
206
+ The first three genes are real-valued, and they represent the timeout timers before resending control messages (HELLO INTERVAL, REFRESH INTERVAL, and TC INTERVAL, respectively). The fourth one encodes the WILLINGNESS parameter, and therefore it takes an integer value from zero to seven. Finally, the last four genes are real-valued, and they denote the timeout hold timers of OLSR (NEIGHB HOLD TIME, MID HOLD TIME, TOP HOLD TIME, and DUP HOLD TIME, respectively). The valid ranges for each of the gene values have already been presented in Table 1.
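In code, the chromosome can be represented as a fixed-size array whose layout mirrors Fig. 3; rounding and clamping the WILLINGNESS gene to its integer range is an assumption of this sketch, since the paper does not detail how the integer gene is stored:

```cpp
#include <array>
#include <cmath>

// One OLSR configuration as an eight-gene chromosome (order follows Fig. 3):
// genes 0-2: HELLO/REFRESH/TC intervals (real-valued),
// gene 3:    WILLINGNESS (integer 0..7, stored as a real and rounded),
// genes 4-7: NEIGHB/MID/TOP/DUP hold times (real-valued).
using Chromosome = std::array<double, 8>;

constexpr int kWillingnessGene = 3;

// Decode the WILLINGNESS gene to its integer value in [0, 7].
inline int willingness(const Chromosome& c) {
    int w = static_cast<int>(std::lround(c[kWillingnessGene]));
    if (w < 0) w = 0;
    if (w > 7) w = 7;
    return w;
}
```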
207
+
208
+ # 4.4 Fitness function
209
+
210
+ The fitness function is crucial for the GA optimization mechanism, since it guides the population to solutions of better quality. The optimization proposed in this work mainly concerns energy-aware communications, so the main component of the fitness function is the power consumed by the VANET nodes when using a certain OLSR configuration. However, if a given configuration excessively reduces the power consumption, the protocol may not satisfy the QoS requirements for communication in VANETs. Thus, there is a trade-off between the energy efficiency and the QoS provided by the protocol. In order to take this consideration into account, the fitness function used in the proposed parallel GA integrates the PDR metric in order to bias the search towards solutions with acceptable QoS.
213
+
214
+ The fitness function is given by the expression in Equation 6, where $E(s)$ and $PDR(s)$ represent the power consumption and the PDR for a given OLSR configuration $s$, respectively. $E_{RFC}$ and $PDR_{RFC}$ are the reference values for the power consumption and the PDR when using the standard configuration in RFC 3626, respectively, and $PDR_{MAX}$ is the maximum possible PDR ($100\%$). Finally, $\omega_{1} = 0.9$ and $\omega_{2} = -0.1$ are the weights for the energy and PDR contributions, respectively, and $\Delta = 0.1$ is a normalizing offset to keep the fitness value in the interval [0, 1].
215
+
216
+ $$
+ F(s) = \Delta + \omega_{1} \times \frac{E(s)}{E_{RFC}} + \omega_{2} \times \frac{PDR(s)}{PDR_{MAX}} \tag{6}
+ $$
219
+
220
+ Equation 6 is valid for solutions with a PDR degradation lower than $15\%$ of the reference PDR value. In order to keep in the GA population those solutions with an even lower PDR that nevertheless contain potentially useful genetic information, the penalization model in Equation 7 was applied.
221
+
222
+ $$
+ F_{P}(s) = F(s) + \left(0.85 \times PDR_{RFC} - PDR(s)\right) \times \frac{E(s)}{E_{RFC}} \tag{7}
+ $$
225
+
+
228
+ The penalized fitness $F_{P}(s)$ takes into account the gap between the PDR of the evaluated solution and the worst admitted PDR value $(0.85 \times PDR_{RFC})$, and the ratio between the energy of the evaluated solution and the reference energy value $E_{RFC}$.
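A direct transcription of Equations 6 and 7, with the weights stated above and treating PDR values as percentages (so $PDR_{MAX} = 100$):

```cpp
// Weights from the text: w1 = 0.9, w2 = -0.1, Delta = 0.1; PDR in percent.
constexpr double kW1 = 0.9, kW2 = -0.1, kDelta = 0.1, kPdrMax = 100.0;

double fitness(double energy, double pdr, double energy_rfc, double pdr_rfc) {
    // Equation 6: energy ratio plus negatively-weighted PDR contribution,
    // with Delta keeping the value in [0, 1] for feasible solutions.
    double f = kDelta + kW1 * (energy / energy_rfc) + kW2 * (pdr / kPdrMax);
    // Equation 7: if the PDR degrades by more than 15% of the reference,
    // add a penalty proportional to the PDR gap and the energy ratio.
    if (pdr < 0.85 * pdr_rfc)
        f += (0.85 * pdr_rfc - pdr) * (energy / energy_rfc);
    return f;
}
```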
229
+
230
+ # 4.5 Parallel GA operators
231
+
232
+ A classic GA was applied to protocol tuning in a previous paper [18]. Although it offered competitive results, that algorithm suffered from low population diversity and early stagnation. For this reason, in this work we decided to introduce some variations to the canonical initialization and mutation operators.
233
+
234
+ # 4.5.1 Initialization
235
+
236
+ The population initialization should distribute the individuals as uniformly as possible in the search space. However, this uniform pattern is not easy to obtain when using random operators and small populations. Therefore, we propose here a uniform initialization, to ensure that the initial population contains individuals from different areas of the parameters' search space. The initialization operator splits the search space into pop size diagonal subspaces (where pop size is the global population size of the parallel GA), and it ensures that there is an individual located in each diagonal subspace [11]. Equation 8 summarizes the procedure applied in the initialization operator.
237
+
238
+ $$
+ x_{p,i}^{(0)} = z_{i}^{RFC} + \alpha^{p} \qquad i \in [0,7],\; p \in [0, pop\_size - 1] \tag{8}
+ $$
241
+
242
+ In Equation 8, $x_{p,i}^{(0)}$ is the initial value for each gene $i$ in the solution vector that encodes the $p$-th individual, set according to a population seed $z_{i}^{RFC}$ and a randomly distributed value $\alpha^p$. $z_{i}^{RFC}$ is the value proposed by RFC 3626 for the $i$-th OLSR parameter. $\alpha^p$ is computed by using the diagonal subspace limits and a random value $\beta \in [0,1]$, as expressed in Equation 9, where $z_{(i,MAX)}$ and $z_{(i,MIN)}$ are the upper and lower values for the $i$-th parameter, according to the ranges defined in Table 1.
243
+
244
+ $$
+ \alpha^{p} = \frac{p + \beta}{pop\_size} \times (z_{(i,MAX)} - z_{(i,MIN)}) \tag{9}
+ $$
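The two equations combine into a sketch like the following; clamping the seeded value back into $[z_{(i,MIN)}, z_{(i,MAX)}]$ is an assumption of this sketch, not stated in the equations:

```cpp
#include <algorithm>
#include <cstddef>
#include <random>
#include <vector>

// Diagonal-subspace initialization (Equations 8 and 9): individual p gets,
// for every gene i, a value seeded at the RFC 3626 default and offset into
// the p-th diagonal slice of the parameter range. Clamping to [min, max]
// is an assumption of this sketch.
std::vector<std::vector<double>> init_population(
        std::size_t pop_size,
        const std::vector<double>& z_rfc,   // RFC 3626 defaults
        const std::vector<double>& z_min,   // lower bounds (Table 1)
        const std::vector<double>& z_max,   // upper bounds (Table 1)
        unsigned seed = 1) {
    std::mt19937 rng(seed);
    std::uniform_real_distribution<double> beta(0.0, 1.0);
    std::size_t dims = z_rfc.size();
    std::vector<std::vector<double>> pop(pop_size, std::vector<double>(dims));
    for (std::size_t p = 0; p < pop_size; ++p)
        for (std::size_t i = 0; i < dims; ++i) {
            // Equation 9: offset into the p-th diagonal slice of the range.
            double alpha = ((p + beta(rng)) / pop_size) * (z_max[i] - z_min[i]);
            // Equation 8, clamped to the valid range.
            pop[p][i] = std::min(z_max[i], std::max(z_min[i], z_rfc[i] + alpha));
        }
    return pop;
}
```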
247
+
248
+ # 4.5.2 Recombination
249
+
250
+ The parallel GA uses the classic arithmetic recombination operator for real-valued problem encodings. It defines a linear combination of two chromosomes, $x_{p}^{(g)}$ and $x_{q}^{(g)}$, according to Equation 10, where the best parent governs the reproduction according to the weight $\sigma \in [0,1]$.
253
+
254
+ $$
+ \begin{array}{l}
+ x_{p,i}^{(g+1)} = \sigma \times x_{p,i}^{(g)} + (1-\sigma) \times x_{q,i}^{(g)} \\
+ x_{q,i}^{(g+1)} = (1-\sigma) \times x_{p,i}^{(g)} + \sigma \times x_{q,i}^{(g)}
+ \end{array} \tag{10}
+ $$
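Equation 10 maps directly to code:

```cpp
#include <cstddef>
#include <utility>
#include <vector>

// Arithmetic recombination (Equation 10): the two children are complementary
// sigma-weighted linear combinations of the parents p and q.
std::pair<std::vector<double>, std::vector<double>> arithmetic_crossover(
        const std::vector<double>& p, const std::vector<double>& q,
        double sigma) {
    std::vector<double> c1(p.size()), c2(p.size());
    for (std::size_t i = 0; i < p.size(); ++i) {
        c1[i] = sigma * p[i] + (1.0 - sigma) * q[i];
        c2[i] = (1.0 - sigma) * p[i] + sigma * q[i];
    }
    return {c1, c2};
}
```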
265
+
266
+ # 4.5.3 Mutation
267
+
268
+ The mutation operator introduces new genetic information, and therefore diversity, to the population of the parallel GA. After analyzing the algorithm of the OLSR protocol, we decided to introduce some problem-related information in the mutation operator. Thereby, the new genetic information is randomly generated, but it does not represent pointless OLSR configurations. The genes that encode related OLSR parameters, e.g., HELLO INTERVAL and NEIGHB HOLD TIME [12], are modified simultaneously, but using different policies and following the OLSR power-aware problem specifications. According to this idea, the mutation operator offers 22 different movements in the solution space. For example, Equation 11 presents the case in which the HELLO INTERVAL $(x_{p,0}^{(g)})$ and NEIGHB HOLD TIME $(x_{p,4}^{(g)})$ genes are mutated in generation $g$. A similar procedure is employed for the other parameters.
271
+
272
+ $$
+ x_{p,0}^{(g+1)} = \beta_{0} \times \left(z_{(0,MAX)} - z_{(0,MIN)}\right) \qquad \beta_{0} \in [0,1]
+ $$
+
+ $$
+ x_{p,4}^{(g+1)} = \beta_{4} \times \left(z_{(4,MAX)} - z_{(4,MIN)}\right) \qquad \beta_{4} \in [0,1] \tag{11}
+ $$
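One of those movements can be sketched as follows, transcribing Equation 11 literally; the bounds arrays stand in for the Table 1 ranges, whose actual values are not repeated here:

```cpp
#include <array>
#include <random>

// Paired mutation of HELLO INTERVAL (gene 0) and NEIGHB HOLD TIME (gene 4),
// as in Equation 11: both genes are re-drawn together, uniformly scaled over
// their parameter ranges (passed in as z_min / z_max, from Table 1).
void mutate_hello_and_neighb(std::array<double, 8>& x,
                             const std::array<double, 8>& z_min,
                             const std::array<double, 8>& z_max,
                             std::mt19937& rng) {
    std::uniform_real_distribution<double> beta(0.0, 1.0);
    x[0] = beta(rng) * (z_max[0] - z_min[0]);
    x[4] = beta(rng) * (z_max[4] - z_min[4]);
}

// Quick self-check: after one mutation both genes stay within their ranges
// (illustrative bound values, not the real Table 1 ones).
inline bool mutated_genes_in_range() {
    std::mt19937 rng(7);
    std::array<double, 8> x{};
    std::array<double, 8> z_min{};
    std::array<double, 8> z_max{2.0, 2.0, 8.0, 7.0, 30.0, 30.0, 100.0, 100.0};
    mutate_hello_and_neighb(x, z_min, z_max, rng);
    return x[0] >= 0.0 && x[0] <= 2.0 && x[4] >= 0.0 && x[4] <= 30.0;
}
```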
279
+
280
+ # 5 Experimental analysis
281
+
282
+ This section introduces the set of VANET scenarios and the computational platform used to evaluate the proposed parallel GA. Then, the experiments performed to determine the best values for the GA parameters are described. Next, the experimental results for realistic VANET scenarios are analyzed, presenting the numerical results and a comparative analysis of solution quality and computational efficiency when using different numbers of threads. Finally, the best solutions found are validated by studying their performance on a set of 36 VANET scenarios.
283
+
284
+ # 5.1 VANET scenarios
285
+
286
+ The experimental evaluation of the proposed parallel GA was performed using urban VANET scenarios covering real areas of the city of Málaga, Spain.
287
+
288
+ ![](images/5d59cd27a464528ae4cdd476c949d95af521ca9c67e95aec06faa6ca764aa122.jpg)
289
+ Fig. 4 Málaga urban areas taken into account in each VANET scenario.
290
+
291
+ A total number of 36 scenarios were used, considering the three areas shown on the map in Fig. 4.
292
+
293
+ In the first stage, the simulations in the parallel GA parameter setting experiments were done in a small-sized scenario (U1) with 20 vehicles moving along the roads. The optimization of OLSR parameters using parallel GAs was performed using a medium-sized scenario (U2), also with 20 vehicles. Lastly, in the validation experiments, 36 scenarios with different area sizes, traffic densities (number of vehicles per square meter), and communication patterns were used. In each case, realistic mobility models were generated using the open-source traffic simulation package SUMO [26], where vehicles move following real traffic rules (traffic lights and signs) for $180\mathrm{s}$.
294
+
295
+ The VANETs were evaluated by using the ns-2 network simulator, with nodes configured following the standard for Wireless Access in Vehicular Environments (WAVE). In order to evaluate the performance of the routing protocol, different constant bit rate (CBR) traffic sources were randomly chosen to generate the packets that travel through the network.
296
+
297
+ Table 2 presents the main features of the VANET scenarios and the network specification used in the experimental analysis. All the scenarios, mobility models, and network workloads used are publicly available for download at http://neo.lcc.uma.es/vanet/download-simulations.
298
+
299
+ # 5.2 Development and execution platform
300
+
301
+ The parallel GA was implemented in $\mathrm{C++}$, using MALLBA and the standard pthread library. The experimental analysis was performed on a cluster with Opteron 6172 Six-Core processors at $2.1\mathrm{GHz}$, with 24 GB RAM, CentOS Linux, and Gigabit Ethernet (Cluster FING, Facultad de Ingeniería, Universidad de la República, Uruguay; cluster website: http://www.fing.edu.uy/cluster).
302
+
303
+ Table 2 Details of the VANET scenarios and network specification.
304
+
305
+ <table><tr><td>scenario</td><td>area size</td><td>vehicles</td><td>CBR sources</td><td>parameter</td><td>value/protocol</td></tr><tr><td rowspan="4">U1 (parameter setting)</td><td rowspan="4">120000 m2</td><td rowspan="4">20</td><td rowspan="4">10</td><td>Propagation model</td><td>Nakagami fading</td></tr><tr><td>Max. radio range</td><td>500 m</td></tr><tr><td>Carrier frequency</td><td>5.89 GHz</td></tr><tr><td>Channel bandwidth</td><td>6 Mbps</td></tr><tr><td rowspan="4">U2</td><td rowspan="4">240000 m2</td><td rowspan="4">20 / 30 / 40</td><td rowspan="4">10 / 15 / 20</td><td>PHY/MAC layer</td><td>IEEE 802.11p</td></tr><tr><td>Routing layer</td><td>OLSR</td></tr><tr><td>Transport layer</td><td>UDP</td></tr><tr><td>CBR packet size</td><td>512 bytes</td></tr><tr><td rowspan="2">U3</td><td rowspan="2">360000 m2</td><td rowspan="2">30 / 45 / 60</td><td rowspan="2">15 / 23 / 30</td><td>CBR data rate</td><td>33/66/100/333/666/1000 kbps</td></tr><tr><td>CBR time</td><td>60 s</td></tr></table>
306
+
307
+ # 5.3 GA parameter setting experiments
308
+
309
+ A parameter setting analysis was performed to study the best values for the crossover probability $(p_C)$ and the mutation probability $(p_M)$ in the parallel GA. The analysis was done over a small VANET defined in scenario U1 (120000 $\mathrm{m}^2$ and 20 vehicles, with reference values $E_{RFC} = 5680$ and $PDR_{RFC} = 88.23\%$). The population size of the parallel GA was fixed to 24 individuals, and the stopping criterion was set at 100 generations. The candidate values for the parameters were: $p_C$: 0.5, 0.7, 0.9; and $p_M$: 0.06125, 0.125, 0.25.
310
+
311
+ Table 3 summarizes the parallel GA results for the nine combinations of $p_{C}$ and $p_{M}$ analyzed, reporting the average, relative standard deviation, and best values of fitness; the average energy and PDR; and the average gaps in energy and PDR with respect to the standard RFC configuration (Equations 12 and 13). Fig. 5(a) presents the energy improvements with respect to the standard RFC configuration, and Fig. 5(b) compares the trade-offs between power consumption and PDR for each of the nine configurations studied.
312
+
313
+ $$
+ GAP_{energy} = \frac{E_{RFC} - E(s)}{E_{RFC}} \quad (12) \qquad GAP_{PDR} = \frac{PDR_{RFC} - PDR(s)}{100} \tag{13}
+ $$
316
+
317
+ Table 3 Experimental results: parameter setting for the parallel GA.
318
+
319
+ <table><tr><td rowspan="2">(pC, pM)</td><td colspan="3">fitness</td><td colspan="2">metrics</td><td>GAP</td><td>RFC</td></tr><tr><td>avg</td><td>stdev</td><td>best</td><td>energy</td><td>PDR</td><td>energy</td><td>PDR</td></tr><tr><td>(0.5,0.06125)</td><td>0.576836</td><td>0.31%</td><td>0.572319</td><td>3454.40</td><td>75.03%</td><td>39.19%</td><td>-14.95%</td></tr><tr><td>(0.7,0.06125)</td><td>0.577790</td><td>0.55%</td><td>0.571034</td><td>3446.11</td><td>75.01%</td><td>39.34%</td><td>-14.99%</td></tr><tr><td>(0.9,0.06125)</td><td>0.577498</td><td>0.39%</td><td>0.572754</td><td>3459.03</td><td>75.20%</td><td>39.11%</td><td>-14.77%</td></tr><tr><td>(0.5,0.125)</td><td>0.573733</td><td>0.21%</td><td>0.571268</td><td>3447.76</td><td>75.03%</td><td>39.31%</td><td>-14.95%</td></tr><tr><td>(0.7,0.125)</td><td>0.573778</td><td>0.24%</td><td>0.570946</td><td>3445.84</td><td>75.05%</td><td>39.34%</td><td>-14.93%</td></tr><tr><td>(0.9,0.125)</td><td>0.576217</td><td>0.14%</td><td>0.574546</td><td>3470.34</td><td>75.33%</td><td>38.91%</td><td>-14.61%</td></tr><tr><td>(0.5,0.25)</td><td>0.574279</td><td>0.13%</td><td>0.572724</td><td>3457.23</td><td>75.01%</td><td>39.14%</td><td>-14.99%</td></tr><tr><td>(0.7,0.25)</td><td>0.572346</td><td>0.15%</td><td>0.570118</td><td>3440.33</td><td>75.01%</td><td>39.44%</td><td>-14.99%</td></tr><tr><td>(0.9,0.25)</td><td>0.572408</td><td>0.17%</td><td>0.570351</td><td>3442.20</td><td>75.07%</td><td>39.41%</td><td>-14.91%</td></tr></table>
320
+
321
+ ![](images/2c3b105cbdbaf0aae571bcbb2d6664adae5f6a3e417e6320849db0451fd166e1.jpg)
322
+ (a) GAP energy.
323
+
324
+ ![](images/a1cbbc28318f1098debfd37ff9484b425c6490c0a3326b15d32bd19df0e3e506.jpg)
325
+ (b) Energy/PDR trade-offs.
326
+ Fig. 5 Graphical summary: parameters setting for the parallel GA.
327
+
328
+ The graphic in Fig. 5(b) shows that four of the studied combinations of $p_C$ and $p_M$ obtained the best trade-off values between power consumption and PDR: (0.7, 0.25), (0.9, 0.25), (0.9, 0.06125), and (0.9, 0.125). Since this work is mainly concerned with reducing the power consumption, the most promising OLSR configurations are those in the far-right section of the graphic in Fig. 5(b). When compared with the power consumption and PDR results obtained with the standard RFC configuration, the best results were obtained with the parameter configuration $p_C = 0.7$, $p_M = 0.25$.
329
+
330
+ # 5.4 Results and discussion
331
+
332
+ The experimental evaluation studied the quality of results and the computational efficiency of the parallel GA using the most promising parameter values identified in the previous subsection, to find an energy-aware configuration for OLSR in VANETs. In all the experiments reported in this subsection, the stopping criterion for the parallel GA was set at 500 generations.
333
+
334
+ The experimental analysis was performed over a medium-sized VANET defined in scenario U2 (area of $240000\mathrm{m}^2$, involving 20 vehicles). The reference values for energy and PDR in this scenario are $E_{RFC} = 9104.19$ and $PDR_{RFC} = 87.12\%$.
335
+
336
+ # 5.4.1 Experimental results
337
+
338
+ Table 4 summarizes the results of the experimental analysis over the medium-sized U2 scenario. Three parallel GA variants were studied: implementations using 8, 16, and 24 individuals, and the same number of execution threads. In order to provide a baseline for the comparison, the analysis includes the results obtained with two sequential optimization methods: a classic GA, using a population of 8 individuals and a single thread of execution, and the previous QoS-optimized version of OLSR obtained by means of Differential Evolution (DE-OLSR) [40].
339
+
340
+ Table 4 reports the average, relative standard deviation, and best fitness results obtained in 30 independent executions performed for each algorithm: parallel GA with 8 threads (pGA-8), parallel GA with 16 threads (pGA-16), and parallel GA with 24 threads (pGA-24). In addition, the power consumption and PDR values obtained with the best OLSR configuration found, and the gaps with respect to the standard RFC parametrization are also presented.
341
+
342
+ Table 4 Experimental results: parallel GA evaluation.
343
+
344
+ <table><tr><td rowspan="2">algorithm</td><td colspan="3">fitness</td><td colspan="2">metrics</td><td>GAP</td><td>RFC</td></tr><tr><td>avg</td><td>stdev</td><td>best</td><td>energy</td><td>PDR</td><td>energy</td><td>PDR</td></tr><tr><td>sequential GA</td><td>0.7521</td><td>2.66%</td><td>0.7025</td><td>6909.12</td><td>80.48%</td><td>24.11%</td><td>-6.64%</td></tr><tr><td>QoS DE-OLSR [40]</td><td>n/a</td><td>n/a</td><td>0.7734</td><td>7798.48</td><td>97.55%</td><td>14.34%</td><td>10.43%</td></tr><tr><td>pGA-8</td><td>0.7058</td><td>1.88%</td><td>0.6730</td><td>6551.89</td><td>74.74%</td><td>28.03%</td><td>-12.38%</td></tr><tr><td>pGA-16</td><td>0.6883</td><td>1.69%</td><td>0.6621</td><td>6446.80</td><td>75.20%</td><td>29.19%</td><td>-11.92%</td></tr><tr><td>pGA-24</td><td>0.6774</td><td>1.37%</td><td>0.6482</td><td>6305.58</td><td>75.14%</td><td>30.74%</td><td>-11.98%</td></tr></table>
345
+
346
+ In order to determine the significance of the comparison, a statistical analysis was performed over the results distributions for each parallel GA. First, the Kolmogorov-Smirnov test was applied to check whether the obtained fitness values follow a normal distribution or not. The $D$ metric values presented in the first row of Table 5 indicate that the results for pGA-8, pGA-16, and pGA-24 are not normally distributed. As a consequence, the non-parametric Kruskal-Wallis statistical test was performed with a confidence level of $95\%$, to compare the distributions for pGA-8, pGA-16, and pGA-24. The small $p$-values reported ($< 0.05$ in all cases) indicate that the fitness improvements can be considered statistically significant; thus the parallel GA using 24 threads is the best algorithm among the studied methods.
347
+
348
+ Table 5 Statistical analysis of parallel GA results.
349
+
350
+ <table><tr><td colspan="2" rowspan="2">statistical test</td><td colspan="3">algorithm</td></tr><tr><td>pGA-8</td><td>pGA-16</td><td>pGA-24</td></tr><tr><td colspan="2">Kolmogorov-Smirnov</td><td>&lt; 10-7</td><td>&lt; 10-7</td><td>&lt; 10-7</td></tr><tr><td rowspan="3">Kruskal-Wallis</td><td>pGA-8</td><td>-</td><td>6.4×10-4</td><td>1.9×10-7</td></tr><tr><td>pGA-16</td><td>6.4×10-4</td><td>-</td><td>0.015</td></tr><tr><td>pGA-24</td><td>1.9×10-7</td><td>0.015</td><td>-</td></tr></table>
351
+
352
+ Overall, the results in Tables 4 and 5 demonstrate that significantly better fitness values are computed using the parallel master-slave GA with 24 threads, when compared with the reference results from the sequential GA and DE-OLSR. The improvements in the fitness values bring forth a significant decrease in the power consumption of the OLSR protocol: a reduction of more than $30\%$ with respect to the standard OLSR configuration was achieved for the best configuration found using pGA-24, while the PDR degradation remained below $12\%$.
353
+
354
+ The best energy-aware OLSR configuration—found by the parallel GA using 24 threads—is HELLO INTERVAL = 14.890, REFRESH INTERVAL = 7.416, TC INTERVAL = 28.158, WILLINGNESS = 5, NEIGHB HOLD TIME = 20.825, MID HOLD TIME = 10.814, TOP HOLD TIME = 70.959, and DUP HOLD TIME = 90.000.
355
+
356
+ The main advantages of this configuration are: i) it generates less control traffic than the standard RFC configuration, since it increases the timeouts that control the forwarding of protocol messages; ii) the power consumption of each vehicular node significantly decreases with respect to the one required when using the standard RFC configuration, because each node spends less time in the most power-consuming states (transmitting and receiving); and iii) all nodes show a higher willingness to act as MPR. On the other hand, a disadvantage of the proposed configuration is that it uses higher validity times, and therefore it takes longer to detect link failures.
357
+
358
+ # 5.4.2 Computational efficiency
359
+
360
+ The most common metrics used by the research community to evaluate the performance of parallel algorithms are the speedup and the efficiency.
361
+
362
+ The speedup evaluates how much faster a parallel algorithm is than its corresponding sequential version. It is computed as the ratio of the execution times of the sequential algorithm $(T_{1})$ and the parallel version executed on $m$ computing elements $(T_{m})$ (Equation 14). When applied to non-deterministic algorithms, such as the parallel GA applied in this work, the speedup should compare the mean values of the sequential and parallel execution times (Equation 15) [4]. The ideal case for a parallel algorithm is to achieve linear speedup ( $S_{m} = m$ ), but the most common situation is to achieve sublinear speedup ( $S_{m} < m$ ), mainly due to the times required to communicate and synchronize the parallel processes.
363
+
364
+ The efficiency is the normalized value of the speedup with regard to the number of computing elements used to execute a parallel algorithm (Equation 16). This metric allows the comparison of algorithms executed on non-identical computing platforms. Linear speedup corresponds to $e_m = 1$, while in the most usual situations $e_m < 1$.
365
+
366
+ $$
+ S_{m} = \frac{T_{1}}{T_{m}} \quad (14) \qquad S_{m} = \frac{E[T_{1}]}{E[T_{m}]} \quad (15) \qquad e_{m} = \frac{S_{m}}{m} \tag{16}
+ $$
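These metrics reduce to two one-line helpers:

```cpp
// Speedup for a non-deterministic algorithm (Equation 15: ratio of mean
// sequential to mean parallel execution times) and efficiency (Equation 16:
// speedup normalized by the number of computing elements m).
double speedup(double mean_t1, double mean_tm) { return mean_t1 / mean_tm; }

double efficiency(double mean_t1, double mean_tm, unsigned m) {
    return speedup(mean_t1, mean_tm) / static_cast<double>(m);
}
```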
369
+
370
+ Table 6 compares the performance of the studied parallel GAs, showing the average and best execution times, and the values of the speedup and efficiency metrics when using 8, 16, and 24 threads. The results in Table 6 demonstrate that significant reductions in the required execution times are obtained when using the parallel GA implementations with respect to a sequential GA. Fig. 6 graphically summarizes the speedup and efficiency comparison for the three parallel GAs.
371
+
372
+ Table 6 Performance comparison of the proposed parallel GAs.
373
+
374
+ <table><tr><td rowspan="2">algorithm</td><td colspan="2">execution time (s)</td><td colspan="2">speedup</td><td colspan="2">efficiency</td></tr><tr><td>avg</td><td>best</td><td>avg</td><td>best</td><td>avg</td><td>best</td></tr><tr><td>parallel GA, 8 threads</td><td>11113.73</td><td>9235.71</td><td>5.80</td><td>6.86</td><td>0.72</td><td>0.86</td></tr><tr><td>parallel GA, 16 threads</td><td>13192.70</td><td>12440.05</td><td>11.81</td><td>12.63</td><td>0.74</td><td>0.79</td></tr><tr><td>parallel GA, 24 threads</td><td>20239.02</td><td>13670.90</td><td>19.10</td><td>20.12</td><td>0.80</td><td>0.84</td></tr></table>
375
+
376
+ According to Amdahl's law [5], the performance of any parallel application is theoretically limited by the sequential part of the code, which mainly depends on the choice of the parallelization strategy. In the proposed parallel GAs, the fitness function evaluation is the most time-consuming part of the algorithm, since the VANET simulations using ns-2 demand large execution times. The results in Table 6 and Fig. 6 demonstrate that the proposed master-slave model is a useful choice to significantly reduce the execution times of the parallel GAs. Despite following a synchronous paradigm (which tends to generate idle times due to the synchronization of the execution threads), the parallel GAs show an almost-linear speedup behavior. The average efficiency values obtained were greater than $70\%$ for the three implementations studied, and a maximum average of $80\%$ was achieved when using the parallel GA with 24 threads.
377
+
378
+ ![](images/2c3611c17f8316b12024bac2e2726f7c269ab44f38875c10453e883db0c6389e.jpg)
379
+ (a) Speedup.
380
+
381
+ ![](images/765c9428bcb9d924e9c4deeb40d226f560cd664e44477fc982fe87bc76e7570a.jpg)
382
+ (b) Efficiency.
383
+ Fig. 6 Speedup and efficiency comparison for the parallel GAs.
384
+
385
+ # 5.5 Validation in other VANET scenarios
386
+
387
+ In order to confirm the efficacy of the results obtained in the experimental analysis, a set of validation experiments was conducted to compare the performance of the best OLSR configurations found using each parallel GA against the standard RFC configuration. The validation experiments involved simulations performed over 36 different unseen VANET scenarios, defined in the medium-size (U2) and large-size (U3) urban areas of Málaga, already presented in Section 5.1.
388
+
389
+ The validation analysis evaluated several metrics related to the energy awareness and QoS of the communication. From the point of view of power consumption, the energy in transmitting $(E_{send})$ and receiving $(E_{recv})$ mode, as well as the total energy $(E_{total})$ and the total energy per vehicle $(E_{tot \times v})$, were studied. From the point of view of QoS, the studied metrics include the PDR, the time spent until reaching the destination node (End-to-End Delay, E2ED, in milliseconds), the overhead generated by the routing protocol (Normalized Routing Load, NRL), and the quality of the generated routing paths, evaluated by the number of hops required to reach the destination.
390
+
391
+ Table 7 presents, for the best OLSR configuration found by each of the three parallel GAs studied, the average values of each studied metric, computed over the simulations performed on the 36 VANET scenarios. The results are compared with the reference values obtained in simulations performed with the standard OLSR configuration suggested by RFC 3626. The best average value obtained for each metric is marked in bold.
392
+
393
+ Table 7 Results of the validation experiments.
394
+
395
+ <table><tr><td rowspan="2">config.</td><td colspan="4">energy metrics</td><td colspan="4">QoS metrics</td></tr><tr><td>E_send</td><td>E_recv</td><td>E_total</td><td>E_tot×v</td><td>PDR</td><td>E2ED</td><td>NRL</td><td>hops</td></tr><tr><td colspan="9">medium size (U2)</td></tr><tr><td>pGA-8</td><td>12099.05</td><td>5265.45</td><td>17364.49</td><td>604.12</td><td>61.54%</td><td>62.39</td><td>3.36%</td><td>1.58</td></tr><tr><td>pGA-16</td><td>11902.02</td><td>5206.53</td><td>17108.55</td><td>589.17</td><td>63.64%</td><td>58.35</td><td>3.53%</td><td>1.43</td></tr><tr><td>pGA-24</td><td>11776.50</td><td>5094.87</td><td>16871.36</td><td>575.86</td><td>61.80%</td><td>55.04</td><td>3.34%</td><td>1.47</td></tr><tr><td>RFC</td><td>17918.45</td><td>8102.75</td><td>26021.20</td><td>876.91</td><td>70.22%</td><td>1356.18</td><td>25.46%</td><td>1.25</td></tr><tr><td colspan="9">large size (U3)</td></tr><tr><td>pGA-8</td><td>14682.85</td><td>7030.52</td><td>21713.36</td><td>491.22</td><td>55.75%</td><td>505.30</td><td>3.98%</td><td>1.50</td></tr><tr><td>pGA-16</td><td>14864.78</td><td>7120.72</td><td>21985.51</td><td>505.51</td><td>57.63%</td><td>490.34</td><td>3.73%</td><td>1.48</td></tr><tr><td>pGA-24</td><td>14249.18</td><td>6762.22</td><td>21011.39</td><td>479.16</td><td>56.65%</td><td>483.62</td><td>3.57%</td><td>1.45</td></tr><tr><td>RFC</td><td>21574.81</td><td>16247.10</td><td>37821.93</td><td>877.75</td><td>64.00%</td><td>868.57</td><td>28.34%</td><td>1.15</td></tr><tr><td colspan="9">overall</td></tr><tr><td>pGA-8</td><td>13390.95</td><td>6147.99</td><td>19538.93</td><td>547.67</td><td>58.64%</td><td>283.85</td><td>3.67%</td><td>1.54</td></tr><tr><td>pGA-16</td><td>13383.40</td><td>6163.63</td><td>19547.03</td><td>547.34</td><td>60.64%</td><td>274.34</td><td>3.63%</td><td>1.46</td></tr><tr><td>pGA-24</td><td>13012.84</td><td>5928.54</td><td>18941.37</td><td>527.51</td><td>59.22%</td><td>269.33</td><td>3.45%</td><td>1.46</td></tr><tr><td>RFC</td><td>19572.25</td><td>12102.03</td><td>31674.29</td><td>877.33</td><td>67.89%</td><td>506.26</td><td>25.22%</td><td>1.20</td></tr></table>
396
+
397
+ # Power consumption
398
+
399
+ The values of the power consumption metrics in Table 7 indicate that significant reductions are obtained when using the OLSR parameterizations computed by the three parallel GAs. The configuration found by the parallel GA using 24 threads is the most efficient parameterization for OLSR in VANETs, allowing a reduction of up to $40.2\%$ in the power consumption. This behavior was consistently verified in both transmitting and receiving communication modes, and in the overall energy utilization per vehicle.
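The reported $40.2\%$ figure can be reproduced directly from the overall $E_{total}$ values in Table 7 (pGA-24 vs. RFC 3626):

```python
# Percent energy reduction relative to the RFC baseline, using the overall
# E_total values from Table 7.
e_rfc   = 31674.29   # RFC 3626, overall E_total
e_pga24 = 18941.37   # pGA-24, overall E_total

reduction = 100.0 * (e_rfc - e_pga24) / e_rfc
print(round(reduction, 1))  # → 40.2, matching the reduction reported in the text
```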
400
+
401
+ Figure 7 presents the energy reductions with respect to the standard RFC configuration, regarding the dimension of the simulated scenarios.
402
+
403
+ ![](images/7cc396c8ff93217028e8b30de85f952617e0266137e69f2edb70c81a03ca271e.jpg)
404
+ Fig. 7 Energy reductions with respect to the RFC, regarding the scenario dimension.
405
+
406
+ The results in Figure 7 demonstrate that significant improvements in the power consumption are obtained when using the configuration found with pGA-24. In addition, the energy reductions with respect to the standard RFC configuration increase for the largest scenarios simulated. The configuration found by pGA-24 achieved an average improvement of up to $44.4\%$ for the largest scenarios, and a maximum value of $77.5\%$ in a scenario with 40 vehicles. These notable improvements confirm previous claims about the inefficiency of the standard OLSR configuration in large VANET scenarios with high traffic density, already suggested by previous experimental evaluations [13].
407
+
408
+ The (non-parametric) Friedman statistical test was applied to analyze the comparison ranks between the energy results of pGA-8, pGA-16, pGA-24, and RFC. In addition, the Wilcoxon signed-rank statistical test was applied to analyze the mean ranks of the energy results, by evaluating the paired differences between the gap values for all configurations. Table 8 summarizes the results of the statistical analysis. In the Wilcoxon test, the group of three values reported corresponds to the number of positive ranks, the average positive rank, and the sum of positive ranks for every pairwise comparison, respectively.
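A minimal, pure-Python sketch of the positive-rank summary reported for the Wilcoxon test (number, average, and sum of positive ranks) is given below. The paired values are made up for illustration; ties are handled with average ranks and zero differences are dropped, as is conventional.

```python
# Positive-rank summary for the Wilcoxon signed-rank test:
# (number of positive ranks, average positive rank, sum of positive ranks).

def positive_rank_summary(a, b):
    diffs = [x - y for x, y in zip(a, b) if x != y]   # zero differences dropped
    abs_sorted = sorted(abs(d) for d in diffs)

    def rank(v):
        # Average rank for tied absolute differences.
        idxs = [i + 1 for i, x in enumerate(abs_sorted) if x == v]
        return sum(idxs) / len(idxs)

    pos = [rank(abs(d)) for d in diffs if d > 0]
    avg = sum(pos) / len(pos) if pos else 0.0
    return len(pos), avg, sum(pos)

a = [3.0, 5.0, 2.0, 8.0]   # e.g., energy gaps with configuration A (made up)
b = [1.0, 2.0, 4.0, 4.0]   # e.g., energy gaps with configuration B (made up)
print(positive_rank_summary(a, b))  # → (3, ~2.83, 8.5)
```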
409
+
410
+ Table 8 Statistical analysis of the energy results.
411
+
412
+ <table><tr><td rowspan="2" colspan="2">statistical test</td><td colspan="4">configuration</td></tr><tr><td>pGA-8</td><td>pGA-16</td><td>pGA-24</td><td>RFC</td></tr><tr><td colspan="2">Friedman (avg. rank)</td><td>2.19</td><td>1.94</td><td>1.92</td><td>3.94</td></tr><tr><td rowspan="4">Wilcoxon</td><td>pGA-8</td><td>-</td><td>(14, 19.8, 277)</td><td>(16, 16.6, 266)</td><td>(35, 19.0, 665)</td></tr><tr><td>pGA-16</td><td>(22, 17.7, 389)</td><td>-</td><td>(16, 17.1, 274)</td><td>(36, 18.5, 666)</td></tr><tr><td>pGA-24</td><td>(20, 20.0, 400)</td><td>(20, 19.6, 392)</td><td>-</td><td>(35, 18.9, 661)</td></tr><tr><td>RFC</td><td>(19.0, 1.0, 1)</td><td>(18.5, 0.0, 0)</td><td>(1, 5.0, 5)</td><td>-</td></tr></table>
413
+
414
+ All the previous results demonstrate the efficacy of the proposed automatic methodology to compute accurate energy-aware OLSR configurations.
415
+
416
+ # Quality of service
417
+
418
+ Regarding the QoS metrics, the results in Table 7 indicate that, when using the OLSR configuration computed by the parallel GA using 24 threads, the improvements in the power consumption are obtained without suffering large reductions in the PDR values ($8\%$ on average). This is an acceptable QoS loss, taking into account the important energy reductions achieved.
419
+
420
+ An extremely large decrease is obtained in the transmission times required to reach the destination nodes (E2ED) when using the energy-aware OLSR configuration. This result is mainly explained by the absence of congestion, due to the low overhead generated. The NRL values indicate that all configurations found using the parallel GAs exchange significantly fewer control messages than the standard OLSR. On average, the network overload is 1/7 of the standard one, showing that OLSR employing the automatic configuration is less likely to be affected by network congestion problems than the standard OLSR. This feature makes the new configuration more useful than the standard one in situations where a large number of messages are transmitted, such as city center areas, traffic jam scenarios, etc. However, the values of the hops metric indicate that the standard OLSR finds shorter paths than the energy-aware OLSR. Nevertheless, the routing paths computed by the energy-aware OLSR require no more than 1.5 hops on average to reach the destination node, while the RFC configuration requires 1.20.
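The "1/7 of the standard overload" claim can be checked against the overall NRL values in Table 7:

```python
# Overhead ratio between the pGA-24 and RFC configurations, using the
# overall NRL values from Table 7.
nrl_pga24, nrl_rfc = 3.45, 25.22
ratio = nrl_pga24 / nrl_rfc
print(round(1 / ratio, 1))  # → 7.3: roughly 1/7 of the standard overhead
```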
421
+
422
+ The QoS results discussed above indicate that the automatic energy-aware OLSR configuration found by pGA-24, while keeping the PDR degradation under a controlled threshold, generates less network routing overhead and also allows faster delivery of the packets. The standard OLSR computes shorter routing paths, but the length difference with the routing paths computed with the new energy-aware OLSR is negligible, so both configurations can be considered equivalent regarding this metric. Indeed, the standard configuration is much more congestion-prone due to its large network overhead and collisions.
423
+
424
+ # Experimental analysis: summary
425
+
426
+ The experimental analysis proved that the energy-aware OLSR configuration is able to obtain large reductions in the power consumption and significantly improve the time required to deliver the data packets, while only suffering a bounded degradation in the PDR metric. The relevance of the considerations discussed in the previous subsections increases when facing large-sized VANET scenarios where real-time transmissions are important, such as traffic accidents, traffic jams, urban areas with a high density of VANET users, etc. In these situations, the results obtained demonstrate the efficacy of the proposed automatic method for finding energy-aware OLSR configurations.
427
+
428
+ # 6 Conclusions
429
+
430
+ This article has studied the problem of finding energy-efficient configurations for the OLSR routing protocol in vehicular networks. The design of energy-efficient communication protocols is an important issue in this research area, and few previous studies have tackled the OLSR configuration problem from an energy-oriented point of view. In this line of research, the main contribution of this article is an automatic methodology for computing energy-efficient configurations for the OLSR protocol in VANETs by using a parallel GA.
431
+
432
+ The automatic search for energy-aware OLSR configurations is carried out by considering the power consumption of the VANET nodes as the main objective to optimize, while also taking into account the level of QoS in the communications. A well-known energy model for wireless networks and the ns-2 network simulator were used. The proposed GA for solving the problem applies a master-slave parallel model, which enables the configuration search to be performed efficiently by simultaneously using several computing resources to perform the VANET simulations. By reducing the execution times, the parallel GA allows increasing the population of candidate solutions in order to overcome the stagnation problem identified in previous proposals. The computational efficiency of the proposed parallel GA was almost linear, obtaining efficiency values of up to $80\%$ .
433
+
434
+ Regarding the wireless communications, the experimental analysis demonstrates that significant reductions in the power consumption of the VANET nodes are obtained when using the automatic energy-aware OLSR configuration found by the parallel GA, compared with the standard OLSR configuration suggested by RFC 3626. Average reductions of up to $40.2\%$ in the power consumption were obtained, and significantly better improvements (up to $77.5\%$ ) were computed for large and dense VANET scenarios. In addition, the energy-aware OLSR configuration found significantly reduces the network overhead, and thus it allows reducing the average time required to deliver the data packets. All these important features are obtained while only suffering a bounded degradation (less than $8\%$ ) in the QoS of the communication, evaluated by the PDR metric.
435
+
436
+ The main lines for future work are related to two issues: improving the method used in the automatic search, and tackling the OLSR configuration as a multiobjective problem. Regarding the first issue, the use of new fitness functions should be considered, taking into account new power-aware and QoS metrics, such as the residual battery level of the nodes and the packet delays, respectively. In addition, the approach proposed in this paper could be extended by using several VANET scenarios to evaluate each OLSR configuration, possibly by using other efficient models for parallel EAs. Thus, different situations will be taken into account to obtain more accurate fitness results. Regarding the second issue, the study of explicit multiobjective approaches to the problem is also suggested as future work, given that the OLSR energy savings vary in inverse proportion to the QoS of the protocol.
437
+
438
+ Acknowledgements J. Toutouh is supported by grant AP2010-3108 from the Spanish Government. The work of S. Nesmachnow has been partially supported by ANII and PEDECIBA, Uruguay. The work of J. Toutouh and E. Alba has been partially funded by the Spanish Ministry MICINN and FEDER under contracts TIN2008-06491-C04-01 (M* project) and TIN2011-28194 (roadME project), and CICE, Junta de Andalucía, under contract P07-TIC-03044 (DIRICOM project).
439
+
440
+ # References
441
+
442
+ 1. Alba, E.: Parallel Metaheuristics: A New Class of Algorithms. Wiley-Interscience (2005)
443
+ 2. Alba, E., Almeida, F., Blesa, M., Cotta, C., Díaz, M., Dorta, I., Gabarró, J., González, J., León, C., Moreno, L., Petit, J., Roda, J., Rojas, A., Xhafa, F.: MALLBA: A library of skeletons for combinatorial optimisation. Parallel Computing 32(5-6), 415-440 (2006)
444
+ 3. Alba, E., Dorronsoro, B., Luna, F., Nebro, A., Bouvry, P., Hogie, L.: A Cellular MOGA for Optimal Broadcasting Strategy in Metropolitan MANETs. Computer Communications 30(4), 685-697 (2007). DOI 10.1109/IPDPS.2005.4
445
+ 4. Alba, E., Tomassini, M.: Parallelism and evolutionary algorithms. IEEE Trans. Evol. Comput. 6(5), 443-462 (2002)
446
+ 5. Amdahl, G.: Validity of the single processor approach to achieving large scale computing capabilities. In: Proceedings of the Spring Joint Computer Conference, AFIPS '67, pp. 483-485. ACM (1967)
447
+ 6. Bäck, T., Fogel, D., Michalewicz, Z. (eds.): Handbook of evolutionary computation. Oxford University Press (1997)
448
+ 7. Benslimane, A., El Khoury, R., El Azouzi, R., Pierre, S.: Energy power-aware routing in OLSR protocol. In: Proceedings of the 1st Mobile Computing and Wireless Communication International Conference, pp. 14-19 (2006)
449
+ 8. Cano, J., Manzoni, P.: A performance comparison of energy consumption for mobile ad hoc network routing protocols. In: Proceedings of the 8th International Symposium on Modeling, Analysis and Simulation of Computer and Telecommunication Systems, pp. 57-64. IEEE Computer Society (2000)
450
+ 9. Chen, T., Mehani, O., Boreli, R.: Trusted routing for VANET. In: M. Berbineau, M. Itami, G. Wen (eds.) ITST 2009, 9th International Conference on Intelligent Transport Systems Telecommunications, pp. 647-652. IEEE Computer Society, Piscataway, NJ, USA (2009)
451
+ 10. Cheng, H., Yang, S.: Genetic algorithms with immigrant schemes for dynamic multicast problems in mobile ad hoc networks. Eng. Appl. Artif. Intell. 23, 806-819 (2010)
452
+ 11. Chou, C., Chen, J.: Genetic algorithms: initialization schemes and genes extraction. In: The Ninth IEEE International Conference on Fuzzy Systems, vol. 2, pp. 965-968 (2000)
453
+ 12. Clausen, T., Jacquet, P.: Optimized Link State Routing Protocol. IETF RFC 3626, [online] Available in http://www.ietf.org/rfc/rfc3626.txt (2003). Retrieved October 2011
454
+ 13. De Rango, F., Cano, J., Fotino, M., Calafate, C., Manzoni, P., Marano, S.: OLSR vs DSR: A comparative analysis of proactive and reactive mechanisms from an energetic point of view in wireless ad hoc networks. Computer Communications 31(16), 3843-3854 (2008)
455
+ 14. De Rango, F., Fotino, M.: Energy efficient OLSR performance evaluation under energy aware metrics. In: Proceedings of the 12th international conference on Symposium on Performance Evaluation of Computer & Telecommunication Systems, SPECTS'09, pp. 193-198. IEEE Press, Piscataway, NJ, USA (2009)
456
+ 15. Dorronsoro, B., Danoy, G., Bouvry, P., Alba, E.: Evaluation of different optimization techniques in the design of ad hoc injection networks. In: Workshop on Optimization Issues in Grid and Parallel Computing Environments, part of the HPCS, pp. 290-296. Nicosia, Cyprus (2008)
457
+ 16. Feeney, L.M., Nilsson, M.: Investigating the energy consumption of a wireless network interface in an ad hoc networking environment. In: In IEEE Infocom, pp. 1548-1557 (2001)
458
+
459
+ 17. García-Nieto, J., Alba, E.: Automatic parameter tuning with metaheuristics of the AODV routing protocol for vehicular ad-hoc networks. In: C.D. Chio, A. Brabazon, G.A.D. Caro, M. Ebner, M. Farooq, A. Fink, J. Grahl, G. Greenfield, P. Machado, M. O'Neill, E. Tarantino, N. Urquhart (eds.) EvoApplications (2), Lecture Notes in Computer Science, vol. 6025, pp. 21-30. Springer (2010)
460
+ 18. García-Nieto, J., Toutouh, J., Alba, E.: Automatic tuning of communication protocols for vehicular ad hoc networks using metaheuristics. Engineering Applications of Artificial Intelligence 23(5), 795-805 (2010)
461
+ 19. Ge, Y., Kunz, T., Lamont, L.: Quality of service routing in ad-hoc networks using OLSR. In: Proceedings of the 36th Annual Hawaii International Conference on System Sciences, p. 300. IEEE Computer Society (2003). [electronic publication]
462
+ 20. Ghanem, N., Boumerdassi, S., Renault, E.: New energy saving mechanisms for mobile ad-hoc networks using OLSR. In: Proceedings of the $2^{nd}$ ACM International Workshop on Performance Evaluation of Wireless Ad Hoc, Sensor, and Ubiquitous Networks, pp. 273-274. ACM (2005)
463
+ 21. Goldberg, D.E.: Genetic Algorithms in Search Optimization and Machine Learning. Addison-Wesley (1989)
464
+ 22. Guo, Z., Malakooti, B.: Energy aware proactive MANET routing with prediction on energy consumption. In: Proceedings of the International Conference on Wireless Algorithms, Systems and Applications, pp. 287-293. IEEE Computer Society (2007)
465
+ 23. Härri, J., Filali, F., Bonnet, C.: Performance comparison of AODV and OLSR in VANETs urban environments under realistic mobility patterns. In: Med-Hoc-Net 2006, 5th Annual Mediterranean Ad Hoc Networking Workshop. IFIP (2006)
466
+ 24. Hartenstein, H., Laberteaux, K.: VANET Vehicular Applications and Inter-Networking Technologies. Intelligent Transport Systems. John Wiley & Sons, Upper Saddle River, NJ, USA (2009)
467
+ 25. Huhtonen, A.: Comparing AODV and OLSR routing protocols. In: Telecommunications Software and Multimedia, pp. 1-9 (2004)
468
+ 26. Krajzewicz, D., Bonert, M., Wagner, P.: The open source traffic simulation package SUMO. In: RoboCup'06, pp. 1-10 (2006)
469
+ 27. Kunz, T.: Energy-efficient MANET routing: Ideal vs. realistic performance. In: International Wireless Communications and Mobile Computing Conference, pp. 786 - 793 (2008)
470
+ 28. Laouiti, A., Mühlethaler, P., Sayah, F., Toor, Y.: Quantitative evaluation of the cost of routing protocol OLSR in a Vehicle Ad Hoc NETwork (VANET). In: VTC Spring, pp. 2986-2990. IEEE (2008)
471
+ 29. Lee, K.C., Lee, U., Gerla, M.: Survey of Routing Protocols in Vehicular Ad Hoc Networks, chap. 8, pp. 149-170. Eds. IGI Global (2009)
472
+ 30. Li, F., Wang, Y.: Routing in vehicular ad hoc networks: A survey. IEEE Vehicular Technology Magazine 2(2), 12-22 (2007)
473
+ 31. Mahfoudh, S., Minet, P.: An energy efficient routing based on OLSR in wireless ad hoc and sensor networks. In: Proceedings $22^{nd}$ International Conference on Advanced Information Networking and Applications, pp. 1253-1259. IEEE Computer Society (2008)
474
+ 32. Nguyen, D., Minet, P.: Analysis of MPR selection in the OLSR protocol. Advanced Information Networking and Applications Workshops, International Conference on 2, 887-892 (2007)
475
+ 33. Razalli, S., Wong, K., Suhaimi, S.: Enhancing the Willingness on the OLSR Protocol to Optimize the Usage of Power Battery Power Sources Left. International Journal of Engineering 2, 12-26 (2008)
476
+ 34. Ros, F.J.: UM-OLSR: OLSR implementation for ns2. [online] Available in http://maximum.dif.um.es/?Software:UM-OLSR. Retrieved October 2011
477
+ 35. Ruiz, P., Dorronsoro, B., Bouvry, P.: Optimization and performance analysis of the AEDB broadcasting algorithm. In: Computer Communications and Networks (ICCCN), 2011 Proceedings of 20th International Conference on, pp. 1-6 (2011)
478
+ 36. Ruiz, P., Dorronsoro, B., Valentini, G., Pinel, F., Bouvry, P.: Optimisation of the enhanced distance based broadcasting protocol for manets. The Journal of Supercomputing pp. 1-28 (2011)
479
+ 37. Sangeeta, K., Sing, K.: Energy Efficient Routing In MANET Using OLSR. International Journal on Computer Science and Engineering 3(16), 1418-1421 (2011)
480
+
481
+ 38. Santa, J., Tsukada, M., Ernst, T., Mehani, O., Gómez-Skarmeta, A.F.: Assessment of VANET multi-hop routing over an experimental platform. Int. J. Internet Protoc. Technol. 4(3), 158-172 (2009)
482
+ 39. Spaho, E., Barolli, L., Mino, G., Xhafa, F., Kolici, V., Miho, R.: Performance evaluation of AODV, OLSR and DYMO protocols for vehicular networks using CAVENET. In: Network-Based Information Systems (NBiS), 2010 13th International Conference on,
483
+ pp. 527-534 (2010). DOI 10.1109/NBiS.2010.79
484
+ 40. Toutouh, J., Alba, E.: An efficient routing protocol for green communications in vehicular ad-hoc networks. In: Proceedings of 13th Annual Genetic and Evolutionary Computation Conference, GECCO 2011, pp. 719-726. ACM (2011)
485
+ 41. Toutouh, J., Alba, E.: Optimizing OLSR in VANETs with Differential Evolution: A Comprehensive Study. In: First ACM International Symposium on Design and Analysis of Intelligent Vehicular Networks and Applications (DIVANet '11), DIVANet. ACM (2011)
486
+ 42. Toutouh, J., García-Nieto, J., Alba, E.: Optimal configuration of OLSR routing protocol for VANETs by means of Differential Evolution. In: $3^{rd}$ International Conference on Metaheuristics and Nature Inspired Computing, p. 8 (2010)
487
+ 43. Unex: DCMA-86P2 Network Interface Card. [online] Available in http://www.unex.com.tw/product/dcma-86p2. Retrieved January 2012
2501.09xxx/2501.09996/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:4650d72d0a65354e5fa4a94d978426b94f4110aba9cab37409468747970d9256
3
+ size 664576
2501.09xxx/2501.09996/layout.json ADDED
The diff for this file is too large to render. See raw diff
 
2501.10xxx/2501.10007/5dd94295-db8e-4bed-a71a-6e89c9c75ad3_content_list.json ADDED
The diff for this file is too large to render. See raw diff
 
2501.10xxx/2501.10007/5dd94295-db8e-4bed-a71a-6e89c9c75ad3_model.json ADDED
The diff for this file is too large to render. See raw diff
 
2501.10xxx/2501.10007/5dd94295-db8e-4bed-a71a-6e89c9c75ad3_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:28fee50cc15ad920fe51c4faf98b7f1289e63370e9a4e344173dd6fa4f691efe
3
+ size 3463049
2501.10xxx/2501.10007/full.md ADDED
@@ -0,0 +1,462 @@
 
1
+ # A Swarm Algorithm for Collaborative Traffic in Vehicular Networks
2
+
3
+ Jamal Toutouh*, Enrique Alba
4
+
5
+ Dept. de Lenguajes y Ciencias de la Computación, University of Malaga, Malaga, Spain
6
+
7
+ # Abstract
8
+
9
+ Vehicular ad hoc networks (VANETs) allow vehicles to exchange warning messages with each other. These specific kinds of networks help reduce hazardous traffic situations and improve safety, which are two of the main objectives in developing Intelligent Transportation Systems (ITS). To this end, the performance of VANETs should guarantee the delivery of messages within a required time. An obstacle to this is that the data traffic generated may cause network congestion. Data congestion control is used to enhance network capabilities, increasing the reliability of the VANET by decreasing packet losses and communication delays. In this study, we propose a swarm intelligence based distributed congestion control strategy to maintain the channel usage level under the threshold of network malfunction, while keeping the quality-of-service of the VANET high. Exhaustive experimentation shows that the proposed strategy improves the throughput of the network, the channel usage, and the stability of the communications in comparison with other competing congestion control strategies.
10
+
11
+ Keywords: Broadcasting, Swarm Intelligence, Applications, Network Layer Issues
12
+
13
+ # 1. Introduction
14
+
15
+ Over the last few decades, the synergistic utilization of information and communication technologies (ICT) in vehicular environments has revolutionized the automotive industry. This has encouraged the emergence of a great variety of new services based on Intelligent Transportation Systems (ITS) focused on improving road safety and travelers' experience. Most of these advances rely on vehicular networks that allow the periodic exchange of messages between the different agents that are part of road transportation (e.g., vehicles or elements of the infrastructure) [26]. This communication technology is commonly known as vehicular ad hoc networks (VANETs), which are principally composed of vehicles equipped with wireless interfaces in their on-board units (OBUs) that allow dedicated short-range communications (DSRC) by utilizing the wireless access in vehicular environments (WAVE) standards, i.e., IEEE 802.11p and IEEE 1609 [4].
16
+
17
+ VANETs are applied to deploy ITS that provide a large number of smart mobility services and applications. The most important categories of VANET-based applications are those designed to provide safe environments for road travel and intelligent road traffic management, known as cooperative vehicle safety (CVS) and traffic efficiency applications, respectively [5, 6].
18
+
19
+ CVS principally relies on exchanging short messages (known as beacons) through the DSRC channel [8]. These messages are broadcasted in the neighborhood (1-hop) defined by the communication range of the nodes $(r)$ . Beacons include vehicle kinematics and other relevant information. VANET nodes are continuously broadcasting beacons (beaconing) with a given beacon frequency or beacon rate (see Figure 1).
20
+
21
+ A challenging issue in the deployment of CVS is the network congestion when the scale of the system grows. This is mainly due to the critical increase of the periodic beacons, which generates a heavy communication load. Congestion increases packet losses and communication delays, i.e., it degrades the performance
22
+
23
+ ![](images/948d7ffdca02bffd5747e3e1309f3a5134005250949e2da895ca511393540141.jpg)
24
+ Figure 1: Car $A$ performs CVS communication.
25
+
26
+ and the quality-of-service (QoS) of the VANETs. This may lead to excessive information inaccuracy and eventually failure of CVS [6, 10].
27
+
28
+ There are many techniques for improving congestion control in VANETs [18]. One promising line of research is to adapt the broadcasting protocol to the current channel resources by changing its configuration parameters. Most studies propose managing the transmission power (communication range) and/or the beacon frequency.
29
+
30
+ In this article, a swarm intelligence based congestion control method (Swarm FREDY) is defined. This novel method is stochastic, dynamic, and fully distributed. When the VANET uses Swarm FREDY, each node runs its own instance of the algorithm, as a particle of the swarm, to efficiently adapt its beacon rate to the available channel capacity and cooperate with the neighbor nodes in their congestion control operation.
31
+
32
+ Swarm intelligence comprises a set of nature-inspired artificial intelligence methods in which a set of simple agents mimic the behavior of social organisms (ant colonies, bird flocking, fish schooling, etc.) [29]. These methods have been successfully applied to many hard optimization problems, in areas such as robotics [3], engineering [15], telecommunications [9], and machine learning [17].
33
+
34
+ The research questions our study addresses are:
35
+
36
+ RQ1 What is the true importance of controlling the communication congestion for road safety?
37
+ RQ2 Can a lightweight swarm based algorithm of constant complexity provide competitive congestion control in vehicular communications?
38
+ RQ3 Can a stochastic method be more competitive than the existing deterministic ones?
39
+
40
+ In short, the main contributions of this study are: $i$ ) defining the optimization problem of congestion control by beacon frequency adaptation and $ii$ ) proposing a swarm based congestion control method.
41
+
42
+ The rest of this article is organized as follows. Section 2 summarizes the state of the art in the field of congestion control in vehicular networks. Section 3 introduces the concept of fair beaconing in VANETs. Section 4 defines the fair beacon rate optimization problem. Section 5 presents the Swarm FREDY method proposed here. Section 6 describes the experimental evaluation framework and Section 7 analyzes the numerical results. Finally, Section 8 outlines our conclusions and the main lines of the future research.
43
+
44
+ # 2. Related Work
45
+
46
+ Congestion control is an important research topic with the objective of providing reliable environments in modern network communications [12]. If we focus on vehicular communications, congestion control is an even more critical concern [5]. The reliability of CVS, which could make the difference between saving lives or not, is highly dependent on two quality-of-service (QoS) metrics: the packet loss and the communication delays. Congestion, which occurs when the network load exceeds the capacity of the network links, generally leads to a deterioration of these two metrics.
47
+
48
+ Several strategies have been proposed to address congestion problems in VANETs, keeping the communication capabilities of the nodes over a given QoS threshold. Most of them can be included in the following basic schemes: $i$ ) adapting the transmission range, $ii$ ) adjusting the data rate generation of applications and services, $iii$ ) hybrid methods combining the two aforementioned schemes, and $iv$ ) scheduling data packets across various channels.
49
+
50
+ The scheme that includes transmission range adaptation follows the idea that reducing the transmission power of beacons keeps the network load below a certain threshold for an optimal VANET operation. However, an excessive reduction of transmission power could cause node isolations when the network density decreases [1, 14].
51
+
52
+ In order to avoid this, the distributed fair power adjustment for VANETs (D-FPAV) dynamically controls the transmission power (range) to keep the beaconing traffic under a threshold called MaxBeaconingLoad (MBL) [24]. As the density of the neighborhood increases, the transmission power is reduced to keep the network load under the MBL. However, this scheme cannot manage situations in which the MBL is violated while the transmission is already at the minimum allowed power.
53
+
54
+ Controlling congestion by adjusting the beacon rate (the second basic scheme) is similar in spirit to adapting the transmission range. The idea is to find, for each node, the beaconing rate that best suits the applications and the network status. On the one hand, a high beacon rate increases the accuracy of the system's knowledge, which is important for safety services, but makes congestion more likely to appear. On the other hand, a low beaconing rate increases the latency of the information, but prevents VANETs from overloading the communication channel. There is therefore a trade-off between information accuracy and channel load.
55
+
56
+ In an initial approach, the beacon generation rate was adjusted according to the current speed of the node, the failure of transmission attempts, and the beacon reception success rate [28]. A lookup table stores predefined beaconing rates in terms of the three metrics analyzed. The rate decreases whenever the vehicles reduce their speed, the maximum number of failed transmission attempts is observed, or the reception success rate falls to its minimum. The main drawback of this method is that VANETs are fully distributed systems by nature, and it is not clear how to obtain the necessary statistics (such as the reception rate).
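The lookup-table policy can be sketched as a table indexed by coarse buckets of the three metrics. The bucket boundaries and rate values below are invented for illustration, not taken from [28].

```python
# Sketch of a lookup-table beacon-rate policy: the rate (beacons/s) is
# selected from precomputed entries indexed by coarse speed, transmission
# failure, and reception-success buckets. All table entries are made up.

RATE_TABLE = {
    ("low",  "ok",  "high"): 2,
    ("low",  "bad", "low"):  1,
    ("high", "ok",  "high"): 10,
    ("high", "bad", "low"):  5,
}

def beacon_rate(speed_kmh, failed_attempts, recv_success):
    speed = "high" if speed_kmh > 50 else "low"
    fails = "bad" if failed_attempts > 3 else "ok"
    recv  = "high" if recv_success > 0.8 else "low"
    return RATE_TABLE.get((speed, fails, recv), 1)  # conservative default

print(beacon_rate(80, 0, 0.95))   # fast vehicle, healthy channel → high rate
print(beacon_rate(30, 5, 0.40))   # slow vehicle, lossy channel → low rate
```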
57
+
58
+ Later, an adaptive beaconing communication scheme for cooperative active safety system (CASS) was introduced [16]. In this method, the nodes broadcast kinematics information to all neighbors within a certain communication range. Each node uses a so-called self estimator to estimate its own position, speed, and heading. In turn, it runs a remote estimator to mimic the information about its own position from the perspective of neighboring vehicles. The idea is to broadcast beacons only when the difference between the calculations of both estimators exceeds a maximum deviation threshold.
59
+
60
+ In [19], the authors studied situation-adaptive beaconing, a rate control method based on the movement of the node itself and of the neighboring vehicles, including microscopic aspects (e.g., speeds) and macroscopic aspects (e.g., road traffic density). They analyzed different adaptation schemes based on the vehicle's own movement, the surrounding vehicles' movement, and a combination of both. They concluded that the aggregation of several schemes provides better congestion control than using them individually.
61
+
62
+ Regarding the third congestion control scheme, some researchers defined hybrid congestion control methods that apply transmission power and rate adaptations together to overcome the limitations of both mechanisms. These methods first address medium congestion by means of beaconing rate adaptation. If the beacon generation rate falls below the minimum threshold allowed by the actual requirements and the congestion situation is still not avoided, then a transmission power adjustment is also applied [7, 23].
63
+
64
+ Some studies have introduced the use of metaheuristics to define hybrid congestion control methods. They use these techniques to find efficient parameterizations after a congestion is detected. Along these lines, a Single-Objective Tabu Search was proposed to minimize the communication delay only [21]. Afterwards, a Multi-Objective Tabu Search was applied to minimize both the communication delay and the jitter [22]. Their main issue was the relatively high computational complexity (run times), which needed to be counted as part of the final communication delays.
65
+
66
+ Finally, some authors have proposed scheduling beacons through several channels depending on their current availability. In [13], the authors proposed the QoS-aware radio access technology (RAT), a specific congestion control method for heterogeneous vehicular networks (HVN), i.e., nodes are equipped with WAVE and LTE cellular network interfaces. The QoS-aware RAT applies an iterative method to keep the network load under a given threshold to avoid network congestion. For this, when the network load grows, RAT reduces the beaconing rate until reaching a given threshold. If the current rate is under a given QoS threshold (it cannot be reduced anymore), the beacons are broadcast via LTE.
67
+
68
+ All these proposals have different drawbacks that prevent their use in real VANETs: they rely on information that is not available in the current standards, they require the use of a central entity (while VANETs are fully distributed), their computational cost prevents their application because of a critical increment in the communication delays, and so forth. In the present study, we propose different stochastic, dynamic, and distributed congestion control strategies to increase the communication reliability and balance in VANETs, while avoiding the previous shortcomings. The main goal is to efficiently manage the broadcasting of beacons because they generate the predominant (almost only) overhead in the control channel. This work is an extension of our preliminary study [25], which introduced the Distributed Intelligent Fair Rate Adaptation (DIFRA) algorithm family, a set of basic greedy methods that adapt the beaconing frequency of each vehicle according to the current channel load. In this article, we put these methods to work and extend them.
69
+
70
+ # 3. Fair Beacon Rate Broadcasting in VANETs
71
+
72
+ This section presents the fair beacon rate (FBR) strategy to address the congestion control problem in VANETs whilst also avoiding any starvation of the nodes [25]. The main idea is to use the beacon rate as a QoS metric that should be numerically balanced among all the nodes in the neighborhood in order to guarantee the proper operation of the VANET [8].
73
+
74
+ The section is organized as follows: first, the frequency of broadcasting beacons is discussed as an important QoS metric for CVS applications; then, an FBR use case is illustrated.
75
+
76
+ # 3.1. Beacon Frequency as a QoS Numerical Metric
77
+
78
+ In VANETs, beacons are broadcast to neighboring vehicles at regular intervals to make them aware of their environment. To this end, beacons contain the kinematic information of the vehicles. VANET nodes broadcast beacons principally to achieve two goals: i) maintaining fresh knowledge of their surroundings, to prevent unsafe situations, and ii) performing internal adjustments of the VANET communication protocols.
79
+
80
+ The reliability of CVS applications is highly dependent on beacon broadcasting. Applications manage more accurate information when the nodes are able to exchange messages with higher resolution (beacon rate). Thus, the beacon frequency can be used as a QoS metric of the system, since the higher the frequency (without generating congestion) the higher the accuracy of the received information [8].
81
+
82
+ Due to the channel's capacity limitations, it is crucial that nodes broadcast beacons at a rate suited to the current network status. On the one hand, it is widely accepted that a high beacon rate can easily result in channel congestion in regions of high road traffic density, causing a severe reduction in beacon delivery and a critical throughput degradation [8]. On the other hand, larger intervals between consecutive beacons (lower beacon rates) increase the uncertainty of the CVS applications, i.e., nodes might not know the required information about their neighbors for a certain (too long) time. Thus, the beacon rate can be used as a general QoS metric to represent the reliability of a CVS. In turn, this rate affects the throughput of the VANET.
83
+
84
+ # 3.2. Use Case of FBR Utilization in VANETs
85
+
86
+ Congestion control mechanisms may produce unfair situations (nearby nodes with similar network conditions transmit with wildly different beacon rates). In IEEE 802.11 communications, mechanisms based on the RTS/CTS protocol have been defined to mitigate unfairness when Carrier-Sense Multiple Access (CSMA) is used. However, these solutions cannot be directly applied in VANET broadcasting since they have been principally designed for end-to-end data flows [27].
87
+
88
+ Congestion in VANETs can be addressed by means of fair beaconing. In this case, fairness can be seen as the situation in which vehicles (VANET nodes) located near each other are able to broadcast beacons at similar and high beacon rates, while avoiding network congestion. Thus, no VANET node with data to transmit suffers from starvation.
89
+
90
+ Figure 2 shows a very simple CVS example of the difference between using fair beaconing or not. Here, it is assumed that the beacon rates can be adapted from 1 to 10 beacons per second (hertz, $Hz$ ), the maximum channel occupancy (or capacity) in terms of beacons per unit time ( $MaxQ$ ) is 30 beacons per second, the threshold limit ratio over $MaxQ$ that can be used by the CVS while still avoiding system malfunction $(\alpha)$ is $80\%$ $(\alpha = 0.8)$ , and the transmission and the carrier sense have the same range (marked by dotted circles). According to the $MaxQ$ and $\alpha$ values, the maximum number of beacons per second that can be exchanged through the communication channel is 24 $(\alpha \cdot MaxQ = 0.8 \cdot 30 = 24)$ . This value guarantees the proper operation of the VANET (see Section 3.1).
93
+
94
+ ![](images/4f7ecd7ac8c10c53a63a0fc9b87e19832b1f08b66ed67cd3267aceef7a071b1c.jpg)
95
+ Figure 2: Simple deterministic VANET scenario.
96
+
97
+ In Figure 2, there are two groups of cars located near each other that represent different situations: pure CSMA and fair beacon rate. The first one demonstrates the unfair situation resulting from applying a purely CSMA-based method, illustrated by the starvation of cars 3 and 4, which transmit just 2 beacons per second to avoid network congestion. The main reason is that these two nodes are competing with others that are transmitting beacons at the maximum beacon rate $(10Hz)$: cars 1 and 2 are not aware that there are other nodes trying to broadcast beacons. In the fair beacon rate situation (right-hand side of the figure), all the nodes in the same carrier sense range (cars 5, 6, 7, and 8) apply a given mechanism that allows all of them to transmit beacons at the same rate $(6Hz)$ without incurring congestion.
98
+
99
+ In our study, FBR considers three main goals: $i$ ) maintaining the VANET load under a given threshold to avoid network congestion, $ii$ ) avoiding the starvation of nodes that have something to broadcast, and $iii$ ) balancing beacon rates (allowing close nodes to exchange beacons with similar rates).
100
+
101
+ Figure 3 illustrates a simple example of the behavior of VANET nodes when they adapt their beacon rates by applying FBR. The main features of this VANET are the same as the ones presented above. There are two clusters of cars: Group 1, comprising cars 1 and 2, which travel from left to right, and Group 2 (cars 3 and 4), which move in the opposite direction. We define a cluster of cars as a set of VANET nodes in which every node is covered by the communication range of at least one other node of the cluster.
102
+
103
+ Initially (see Figure 3.a), the two groups of nodes define two different clusters. All the nodes can broadcast beacons at their maximum frequency $(10Hz)$ because the sum of the beacon rates of the nodes in the same communication range $(10 + 10 = 20Hz)$ does not exceed the maximum beacon rate of the channel $(24Hz)$ . After a given time (see Figure 3.b), both groups of cars define a single cluster. Thus, they are aware that there are other nodes trying to broadcast beacons. For this reason, they have to adapt their beacon rate (FBR) to avoid network congestion because the sum of their beacon rates $(10 + 10 + 10 + 10 = 40Hz)$ exceeds $24Hz$ . Therefore, they change (adapt) their beacon frequency from 10 to $6Hz$ to maintain the channel load under the defined threshold $(24Hz)$ . Finally, the cars break up the cluster and build two different ones (see Figure 3.c) like in the first case. As a result, the network status is similar and the nodes can once again broadcast beacons at the maximum frequency.
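Numerically, the adaptation in this walkthrough is plain fair-share arithmetic over the effective channel capacity; a small illustrative sketch using the example values above:

```python
# Fair-share arithmetic behind the Figure 3 walkthrough (example values only).
max_q, alpha = 30, 0.8         # channel capacity (beacons/s) and threshold ratio
usable = round(alpha * max_q)  # effective capacity: 24 beacons per second

# Two separate clusters of two cars each: 10 + 10 = 20 <= 24, no adaptation.
print(2 * 10 <= usable)   # True
# One merged cluster of four cars at 10 Hz: 40 > 24, so rates must be adapted.
print(4 * 10 <= usable)   # False
print(usable // 4)        # 6 -> each car adapts its beacon rate to 6 Hz
```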
104
+
105
+ # 4. Fair Beacon Rate Optimization Problem
106
+
107
+ This section presents the formulation of the optimization problem of computing beacon rates to allow efficient CVS. This problem is an extension of the one presented in [25].
108
+
109
+ The information required to adapt the beacon rates to the network status is the current network load (channel occupancy). In this study, the analysis of the channel occupancy is carried out by monitoring the length of the queues in a given time window.
110
+
111
+ ![](images/3b1ff804c82647c3253b20db4172894e6d34f32cd12e8540a5e0c1d8b9195abe.jpg)
112
+ a) No beacon rate adaptation is needed.
113
+
114
+ ![](images/b4b42de7d76af2604c8d1782ae6a0b7eeee53aa671a9788469244ac19279a5bd.jpg)
115
+ b) Beacon rates are adapted to avoid starvation or congestion.
116
+
117
+ ![](images/461f16e1bf355b35bd690976d9050216a366a54df79d33c181766807a688a16c.jpg)
118
+ c) No beacon rate adaptation is needed.
119
+ Figure 3: VANET in which nodes apply FBR.
120
+
121
+ The FBR computation problem considers:
122
+
123
+ - The maximum allowed channel occupancy $(MaxQ \in \mathbb{Z})$ . In practice, $MaxQ$ represents the maximum queue length, i.e., the number of beacons that can be in the queue without representing a network overload (congestion).
124
+ - A threshold limit ratio over the maximum channel occupancy $\alpha \in [0,1] \subset \mathbb{R}$ . If the queue lengths exceed the effective capacity of the channel $\omega \in [0,MaxQ] \subset \mathbb{R}$ , which is computed according to $\omega = \alpha \cdot MaxQ$ , the protocol considers that the current network load could lead to a congestion situation causing a degradation in performance.
125
+ - A set of allowed beacon rates $BR = \{br^1, br^2, \dots, br^k\}$ . It contains all the beacon rate values ( $br^i \in \mathbb{Z}$ ) that can be selected by the nodes according to the VANET application restrictions.
126
+ - Given a vehicle $v$ that belongs to the VANET, the $NN(v)$ function returns the set that contains all the nodes inside its network coverage (1-hop neighbor nodes).
127
+ - The occupancy of the communication channel $\eta(v) \in [0,100] \subset \mathbb{R}$ is computed by each node $v$ according to the ratio between its queue size and the MaxQ in terms of percentage. In Eq. (1), $rbr_{j}$ represents the number of received beacons from the neighbor node $j$ and $br_{v}$ is the number of beacons to be sent by $v$ during the current time window.
128
+
129
+ $$
130
+ \eta(v) = \frac{\left(\sum_{j \in NN(v)} rbr_j\right) + br_v}{MaxQ} \cdot 100\% \tag{1}
131
+ $$
132
+
133
+ - The network balance or fairness $\sigma(v) \in [0, +\infty) \subset \mathbb{R}$ is measured by using the coefficient of variation of the beacon rates inside the neighborhood of $v$ , see Eq. (2). In contrast, the preliminary version of the FBR optimization problem evaluated $\sigma(v)$ by means of the standard deviation [25]. The main reason for this change is that the standard deviation provides highly different results for instances with different traffic densities (different beacon rates), even when the performance of the algorithms is similar.
134
+
135
+ $$
136
+ \sigma(v) = \sqrt{\frac{\left(\sum_{j \in NN(v)} \left(br_j - \bar{br}_v\right)^2\right) + \left(br_v - \bar{br}_v\right)^2}{|NN(v)|}} \cdot \frac{1}{\bar{br}_v} \tag{2}
137
+ $$
138
+
139
+ Here, $\bar{br}_v$ represents the average beacon rate in the neighborhood of $v$ , which is defined in Eq. (3).
140
+
141
+ $$
142
+ \bar{br}_v = \frac{\left(\sum_{j \in NN(v)} br_j\right) + br_v}{|NN(v)| + 1} \tag{3}
143
+ $$
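The three metrics above can be sketched in a few lines of Python (an illustrative sketch with hypothetical function names; the per-neighbor beacon counts are assumed to be read from the protocol queues, and the fairness function computes the coefficient of variation, i.e., standard deviation over mean):

```python
from math import sqrt

def channel_occupancy(rbr_neighbors, br_v, max_q):
    """Eq. (1): channel occupancy of node v as a percentage of MaxQ."""
    return (sum(rbr_neighbors) + br_v) / max_q * 100.0

def mean_beacon_rate(br_neighbors, br_v):
    """Eq. (3): average beacon rate over v and its 1-hop neighbors."""
    return (sum(br_neighbors) + br_v) / (len(br_neighbors) + 1)

def fairness(br_neighbors, br_v):
    """Eq. (2): coefficient of variation of the beacon rates around v.
    Sample statistics: |NN(v)| + 1 samples, hence |NN(v)| in the denominator."""
    mean = mean_beacon_rate(br_neighbors, br_v)
    sq_dev = sum((br - mean) ** 2 for br in br_neighbors) + (br_v - mean) ** 2
    return sqrt(sq_dev / len(br_neighbors)) / mean

# Node v beacons at 10 Hz among three neighbors also beaconing at 10 Hz:
print(channel_occupancy([10, 10, 10], 10, 400))  # 10.0 (% of MaxQ = 400)
print(fairness([10, 10, 10], 10))                # 0.0 (perfectly balanced)
```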
144
+
145
+ The FBR optimization consists in finding the largest $br_v \in BR$ for each node $v$ that maximizes $\eta(v)$ (the communication channel occupancy) and minimizes $\sigma(v)$ (i.e., maximizes the network balance). Furthermore, the computed beacon rate $br_v$ should not generate network congestion, i.e., $\eta(v) \leq \omega$ .
146
+
147
+ In this study, the VANET nodes can generate, at least, $br^{MIN} \in BR$ , which is the minimum beacon rate ( $br^{MIN} < br^i \; \forall br^i \in BR$ , $br^i \neq br^{MIN}$ ), but never more than $br^{MAX} \in BR$ , which is the maximum beacon rate ( $br^{MAX} > br^i \; \forall br^i \in BR$ , $br^i \neq br^{MAX}$ ).
148
+
149
+ # 5. Swarm FREDY
150
+
151
+ The Swarm FREDY (Fair beacon Rate greEDY) congestion control method devised in this study dynamically computes efficient beacon rates to address the FBR optimization problem proposed in the previous section. It is fully distributed (executed individually by each VANET node); thus, no central entity is used. In common with other swarm intelligence algorithms, the nodes perform the computations according to their own experience and the experience of their neighbors [2]. This section first presents a global view of the algorithm and then describes how Swarm FREDY operates.
152
+
153
+ # 5.1. Method Overview
154
+
155
+ In common with most congestion control methods proposed in the literature, Swarm FREDY performs two main operations: network monitoring and network component re-configuration [12]. In this case, the network monitoring is performed by analyzing the queues over a given time window. Swarm FREDY applies a swarm intelligence procedure to improve the knowledge about the general network status.
156
+
157
+ The main idea consists in fairly dividing (sharing) the capacity of the channel among all the nodes in the neighborhood. Therefore, the algorithm has to evaluate the protocol queues to obtain the neighborhood size in order to compute an efficient desired beacon rate (DBR). This can be seen as the individual experience of a member of the swarm. At the same time, the VANET nodes share their computed DBR with the neighborhood, requesting the neighbors to change their beacon rates to the same DBR (i.e., sharing the neighborhood experience). The Swarm FREDY operation is described below.
158
+
159
+ Swarm FREDY has three main software components (see Figure 4):
160
+
161
+ - Self Queue Monitoring Component (SQMC), which evaluates the IEEE 802.11p protocol queues.
162
+ - Swarm Information Exchange Component (SIEC), which decodes the information encoded in the received beacons.
163
+ - Beacon Rate Adaptation Component (BRAC), which analyzes the information obtained by SQMC and SIEC to compute the new beacon rate.
164
+
165
+ ![](images/b307160b06c4e8eba4a74c1e1336bf67c22a7480a755c907bab0a76d1acdce8c.jpg)
166
+ Figure 4: Swarm FREDY main software components.
167
+
168
+ # 5.2. Swarm FREDY Operation
169
+
170
+ Swarm FREDY takes into account congestion control information received from SQMC and SIEC, which are executed permanently in parallel. This information (DBR) is stored in a temporal beacon rate buffer named BRBuffer, to be used in the computations subsequently carried out by BRAC. Figure 5 summarizes the Swarm FREDY operation.
171
+
172
+ ![](images/1b469339333d757e3dedb0eb64a45acbece08b010f6264ac92dcec3dee29cc4c.jpg)
173
+ Figure 5: Complete flowchart of the Swarm FREDY algorithm.
174
+
175
+ The BRBuffer of each node is a vector of natural values with $|BR| = k$ components $[x_1 x_2 \ldots x_k]$ . Each of the $k$ components $(x_i)$ stores the number of requests received by the node to change its beacon rate to $i$ beacons per second (from itself, i.e., SQMC, or from the neighborhood, i.e., SIEC). For example, if BRBuffer $= [0 0 10 0 25 0 0 0 0 0]$ , then it means that the node has received 10 requests to change the beacon rate to $3Hz$ ( $x_3 = 10$ ) and 25 to change to $5Hz$ ( $x_5 = 25$ ).
176
+
177
+ The SQMC procedure starts by analyzing the beacons received in the queues to compute the neighborhood size $|NN(v)|$ . Then, the tentative desired beacon rate ( $tDBR$ ) is computed according to Eq. (4). The DBR is obtained by bounding tDBR between $br^{MIN}$ and $br^{MAX}$ , as shown in Eq. (5). The BRBuffer is updated by increasing the DBR-th component ( $x_{DBR} = x_{DBR} + 1$ ). Finally, the current DBR is included in the beacons to be broadcast to request the neighborhood to change their beacon rates.
178
+
179
+ $$
180
+ tDBR = \left\lfloor \frac{\omega}{|NN(v)| + 1} \right\rfloor \tag{4}
181
+ $$
182
+
183
+ $$
184
+ DBR = \left\{ \begin{array}{ll} br^{MIN} & \text{if } tDBR < br^{MIN} \\ tDBR & \text{if } br^{MIN} \leq tDBR \leq br^{MAX} \\ br^{MAX} & \text{if } tDBR > br^{MAX} \end{array} \right. \tag{5}
185
+ $$
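Eqs. (4) and (5) amount to an integer fair share of the effective capacity $\omega$, clamped to the allowed range. A minimal sketch (hypothetical function name; the defaults $\omega = \alpha \cdot MaxQ = 0.8 \cdot 400 = 320$, $br^{MIN} = 1$, and $br^{MAX} = 10$ follow the experimental setup of Section 6.2):

```python
def desired_beacon_rate(num_neighbors, omega=320, br_min=1, br_max=10):
    """Eqs. (4)-(5): fair share of the effective channel capacity,
    bounded to the allowed beacon rate range [br_min, br_max]."""
    tdbr = omega // (num_neighbors + 1)    # Eq. (4): floor division
    return max(br_min, min(tdbr, br_max))  # Eq. (5): clamp to [br_min, br_max]

print(desired_beacon_rate(3))    # 10 -> 320 // 4 = 80, clamped to br_max
print(desired_beacon_rate(63))   # 5  -> 320 // 64 = 5, within bounds
print(desired_beacon_rate(500))  # 1  -> 320 // 501 = 0, clamped to br_min
```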
186
+
187
+ Simultaneously, the SIEC procedure decodes the DBR from the received beacons and updates the BRBuffer according to the stochastic Distance Discriminant (sDiDi) procedure.
188
+
189
+ The sDiDi procedure divides the neighbors into three different categories depending on how far away they are: i) the authorities, which are the closest (distance $< d_{1}$ ), ii) the exiles, which are the furthest (distance $> d_{2}$ ), and iii) the voters, which are between the authorities and the exiles ( $d_{1} \leq \text{distance} \leq d_{2}$ ). If the DBR is received from an authority node, it is always included in the BRBuffer. If the source node of the beacon is an exile, it is not included in the BRBuffer. Finally, Eq. (6) defines the probability $p_{i}$ of including a voter's DBR in the BRBuffer, which depends on the distance between the two nodes. Figure 6 illustrates the relationship between the nodes' distance and the probability of accepting the received DBR. The DBR information is included in the buffer by increasing the DBR-th component.
190
+
191
+ $$
192
+ p_i = \frac{d_2 - distance}{d_2 - d_1} \tag{6}
193
+ $$
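The sDiDi acceptance rule can be sketched as follows (illustrative code with a hypothetical function name; the default distances $d_1 = 50$ m and $d_2 = 100$ m are just one of the configurations evaluated in Section 6):

```python
import random

def accept_dbr(distance, d1=50.0, d2=100.0, rng=random):
    """sDiDi: decide whether a received DBR is included in the BRBuffer.
    Authorities (distance < d1) are always accepted, exiles (distance > d2)
    never, and voters are accepted with the probability of Eq. (6)."""
    if distance < d1:
        return True                  # authority: always include
    if distance > d2:
        return False                 # exile: never include
    p = (d2 - distance) / (d2 - d1)  # Eq. (6): linear in the distance
    return rng.random() < p          # voter: stochastic decision

print(accept_dbr(20.0))   # True  (authority)
print(accept_dbr(180.0))  # False (exile)
```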
194
+
195
+ ![](images/4120d3c15dd854731dc9e2eac721d0733b3b1caff32966629efc5ee6190c482e.jpg)
196
+ Figure 6: The sDiDi probability of including the received DBR according to the distance (node categories).
197
+
198
+ After a given time window, BRAC is run to compute the new beaconing frequency $br^{t+1}$ according to the BRBuffer. Thus, $br^{t+1}$ is the most requested DBR (the mode) stored in the BRBuffer. For example, if BRBuffer = [0 0 5 7 9 10 6 5 5 0], it holds that $br^{t+1} = 6$ (maximum of BRBuffer is $x_6 = 10$ ).
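The BRAC step therefore reduces to taking the mode of the buffer; a minimal sketch (hypothetical function name; beacon rates are 1-indexed, as in the text):

```python
def new_beacon_rate(br_buffer):
    """BRAC: select the most requested DBR (the mode of BRBuffer).
    br_buffer[i] counts the requests to switch to a rate of i+1 Hz."""
    best_index = max(range(len(br_buffer)), key=lambda i: br_buffer[i])
    return best_index + 1  # rates run from 1 to k Hz

# Example from the text: the maximum is x_6 = 10, hence br^{t+1} = 6 Hz.
print(new_beacon_rate([0, 0, 5, 7, 9, 10, 6, 5, 5, 0]))  # 6
```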
199
+
200
+ # 6. Experimental Evaluation
201
+
202
+ This section presents the framework defined to evaluate the proposed Swarm FREDY by means of simulations. The simulation environment has been defined using MATLAB. The experimental analysis has been carried out on a Magny-Cours cluster with 48 cores at $2.2\mathrm{GHz}$ and 48 GB RAM. In the following subsections, we describe the congestion control methods compared, the VANET scenarios, and the experiments performed.
203
+
204
+ # 6.1. Evaluated Methods
205
+
206
+ In a previous study, we evaluated nine methods to address congestion in VANET CVS applications [25]. The most competitive one was Swarm DIFRA, which estimates the channel load in a distributed manner and dynamically adapts the beacon rate of each node by using deterministic computations. We therefore use it as the baseline against which to compare our present approach.
207
+
208
+ The performance of Swarm FREDY is highly dependent on the $d_{1}$ and $d_{2}$ values used in the sDiDi method to discriminate the information obtained from the received beacons (see Section 5.2). We therefore evaluate Swarm FREDY using different $(d_{1}, d_{2})$ configurations. The selected distances are multiples of 50 meters and they range from 0 to 250 meters. Table 1 shows the 15 Swarm FREDY configurations analyzed in this study. The time window for the execution of BRAC procedure is set to 1 second.
209
+
210
+ # 6.2. VANET Scenarios
211
+
212
+ The methods under comparison are studied on a road ten kilometers long with six lanes (three in each direction) for 150 seconds. In order to evaluate their robustness over different road traffic situations, seven VANET scenarios are defined by changing the road traffic density (number of moving vehicles): 500, 750, 1000, 1250, 1500, 1750, and 2000.
213
+
214
+ Table 1: Swarm FREDY configurations analyzed.
215
+
216
+ <table><tr><td></td><td></td><td colspan="5">d2</td></tr><tr><td></td><td></td><td>50</td><td>100</td><td>150</td><td>200</td><td>250</td></tr><tr><td></td><td>0</td><td>(0,50)</td><td>(0,100)</td><td>(0,150)</td><td>(0,200)</td><td>(0,250)</td></tr><tr><td></td><td>50</td><td>-</td><td>(50,100)</td><td>(50,150)</td><td>(50,200)</td><td>(50,250)</td></tr><tr><td>d1</td><td>100</td><td>-</td><td>-</td><td>(100,150)</td><td>(100,200)</td><td>(100,250)</td></tr><tr><td></td><td>150</td><td>-</td><td>-</td><td>-</td><td>(150,200)</td><td>(150,250)</td></tr><tr><td></td><td>200</td><td>-</td><td>-</td><td>-</td><td>-</td><td>(200,250)</td></tr></table>
217
+
218
+ A realistic mobility model based on the Intelligent-Driver Model (IDM) has been defined [11]. The vehicles are initially assigned to a lane at random, with a higher probability of being assigned to the outer lanes than to the inner ones, as on real-world roads. The speed of the vehicles is higher in the external lanes than in the internal ones. The distances between vehicles and the speeds are computed according to the square law, i.e., $speed^2 \simeq distance / 100$ [25]. As the initial location of the vehicles on the road is non-deterministic, each simulation run generates a different traffic scenario (even for the same number of vehicles). This gives our study a solid basis by considering a large number of realistic VANET scenarios that represent different real-world road traffic situations.
219
+
220
+ In order to model the communications using IEEE 802.11p, the probabilistic Three-Log Distance propagation model is used with a $5.8\mathrm{GHz}$ radio operating at a 6 Mbps data rate. The communication range of the radio devices $r$ is set to 250 meters. The maximum size of the IEEE 802.11p queues $(MaxQ)$ is 400 beacons and the threshold limit ratio $(\alpha)$ is 0.8. Finally, the CVS applications running in each vehicle require beacons of 100 bytes to be exchanged at frequencies that range from $1Hz$ to $10Hz$ .
221
+
222
+ # 6.3. Design of Experiments
223
+
224
+ In this study, although we focus our analysis on the two metrics evaluated by the FBR optimization problem (the channel occupancy, or usage, and the network balance), two additional metrics to evaluate the QoS of the CVS applications are also considered: the individual beacon rate, which is the average beacon rate of the nodes during the simulation, and the beacon rate stability, which represents the number of times that a node has to re-adapt its beacon rate to the current network status.
225
+
226
+ The individual beacon rate evaluates the reliability of CVS applications, as introduced in Section 3.1. The stability evaluates the number of beacon rate changes performed during the simulation because it represents the stability of the QoS provided by CVS applications.
227
+
228
+ As the road traffic and the communications are non-deterministic, each analyzed congestion control method is simulated 50 times over the same VANET scenario because each simulation produces different results. Thus, to determine the significance of the comparison, statistical tests are applied (Kolmogorov-Smirnov and Aligned Friedman Rank tests) [20] with a confidence level of $99\%$ ( $p$ -value $< 0.01$ ).
229
+
230
+ # 7. Numerical Results
231
+
232
+ This section summarizes and analyzes the main results of the experimental evaluation of the 15 variants of Swarm FREDY analyzed in this study. Additionally, the experiments include the Swarm DIFRA congestion control method as a baseline for comparison purposes, because it has already demonstrated more competitive performance than other well-known methods in previous studies [25]. In this way, we simplify the experimental analysis by not including all those other well-known classic congestion control methods. The computed result distributions are not normal according to the Kolmogorov-Smirnov statistical test. Therefore, we show the median over the 50 independent simulations for each scenario. In order to simplify the tables of results, the Swarm FREDY methods are named $\mathrm{SF}(d_1,d_2)$ and Swarm DIFRA is identified by SD.
233
+
234
+ For each evaluated metric, a boxplot of the results for the VANET scenario (road traffic) most representative of the global behavior is shown, to aid the comprehension of the obtained results.
235
+
236
+ # 7.1. Individual Beacon Rate
237
+
238
+ Table 2 summarizes the individual beacon rates by showing the median values over the 50 simulations. Figure 7 shows the beacon rates used by the mobile nodes for the scenarios with 1000 vehicles. Finally, Table 3 summarizes the Aligned Friedman Rank results for this metric ( $p$ -value $< 0.01$ ).
239
+
240
+ Table 2: Median results of the vehicles' beacon rates (Hz or beacons per second) for each VANET scenario and congestion control method.
241
+
242
+ <table><tr><td>Method</td><td>500 veh.</td><td>750 veh.</td><td>1000 veh.</td><td>1250 veh.</td><td>1500 veh.</td><td>1750 veh.</td><td>2000 veh.</td></tr><tr><td>SF(000,050)</td><td>7.251</td><td>6.125</td><td>5.174</td><td>4.678</td><td>3.939</td><td>3.339</td><td>2.999</td></tr><tr><td>SF(000,100)</td><td>7.208</td><td>6.044</td><td>5.100</td><td>4.662</td><td>3.930</td><td>3.348</td><td>2.996</td></tr><tr><td>SF(000,150)</td><td>7.208</td><td>5.932</td><td>5.046</td><td>4.645</td><td>3.905</td><td>3.337</td><td>2.997</td></tr><tr><td>SF(000,200)</td><td>7.139</td><td>5.898</td><td>4.987</td><td>4.611</td><td>3.902</td><td>3.309</td><td>2.982</td></tr><tr><td>SF(000,250)</td><td>7.083</td><td>5.825</td><td>4.944</td><td>4.601</td><td>3.862</td><td>3.304</td><td>2.978</td></tr><tr><td>SF(050,100)</td><td>7.234</td><td>6.018</td><td>5.113</td><td>4.632</td><td>3.926</td><td>3.324</td><td>3.003</td></tr><tr><td>SF(050,150)</td><td>7.158</td><td>5.951</td><td>5.041</td><td>4.630</td><td>3.893</td><td>3.312</td><td>2.975</td></tr><tr><td>SF(050,200)</td><td>7.215</td><td>5.886</td><td>5.013</td><td>4.603</td><td>3.890</td><td>3.316</td><td>2.996</td></tr><tr><td>SF(050,250)</td><td>7.126</td><td>5.804</td><td>4.945</td><td>4.561</td><td>3.866</td><td>3.297</td><td>2.962</td></tr><tr><td>SF(100,150)</td><td>7.231</td><td>5.908</td><td>5.011</td><td>4.594</td><td>3.894</td><td>3.313</td><td>2.988</td></tr><tr><td>SF(100,200)</td><td>7.090</td><td>5.871</td><td>4.944</td><td>4.584</td><td>3.861</td><td>3.307</td><td>2.956</td></tr><tr><td>SF(100,250)</td><td>7.082</td><td>5.771</td><td>4.941</td><td>4.560</td><td>3.850</td><td>3.294</td><td>2.963</td></tr><tr><td>SF(150,200)</td><td>7.112</td><td>5.874</td><td>4.962</td><td>4.583</td><td>3.856</td><td>3.286</td><td>2.978</td></tr><tr><td>SF(150,250)</td><td>7.058</td><td>5.747</td><td>4.940</td><td>4.547</td><td>3.849</td><td>3.271</td><td>2.956</td></tr><tr><td>SF(200,250)</td><td>7.036</td><td>5.778</td><td>4.857</td><td>4.549</td><td>3.826</td><td>3.267</td><td>2.949</td></tr><tr><td>SD</td><td>7.025</td><td>5.748</td><td>4.879</td><td>4.512</td><td>3.808</td><td>3.242</td><td>2.935</td></tr></table>
243
+
244
+ ![](images/d340930b2925c8e163fb40be78500807c9c942feccb7ca5e6d971da337f02c8b.jpg)
245
+ Figure 7: Vehicles beacon rates results for the scenario with 1000 vehicles.
246
+
247
+ As expected, the higher the road traffic density (number of vehicles on the road), the lower the beacon rates. This common-sense result has been achieved automatically (an important feature of our simulation platform). In these experiments, the beacon rates decrease from above $7\mathrm{Hz}$ to below $3\mathrm{Hz}$ . This illustrates how the communications vary with the road traffic density and the importance of efficiently managing the channel usage.
248
+
249
+ According to the results in Tables 2 and 3, Swarm DIFRA is statistically the least competitive method evaluated, since it provides the lowest beacon rates (see Figure 7). In those scenarios with lower road traffic, where the nodes are able to exchange information at higher beacon rates, the differences between our approach and Swarm DIFRA are larger.
250
+
251
+ Let us analyze the different variants of Swarm FREDY. We observe in Figure 7 that the shorter the $d_{2}$ distance, the higher the beacon rate. This is because the communication channel is divided among a smaller number of considered nodes (i.e., authorities and voters), since the algorithm classifies nodes beyond a shorter distance as exiles.
252
+
253
+ Table 3: Aligned Friedman Ranking of the vehicles' beacon rates results.
254
+
255
+ <table><tr><td>Congestion control method</td><td>Ranking position</td><td>Rank value</td></tr><tr><td>SF(050,100)</td><td>1</td><td>3075.158</td></tr><tr><td>SF(000,050)</td><td>2</td><td>2954.868</td></tr><tr><td>SF(000,100)</td><td>3</td><td>2870.132</td></tr><tr><td>SF(100,150)</td><td>4</td><td>2620.711</td></tr><tr><td>SF(000,150)</td><td>5</td><td>2549.053</td></tr><tr><td>SF(050,200)</td><td>6</td><td>2508.342</td></tr><tr><td>SF(050,150)</td><td>7</td><td>2296.500</td></tr><tr><td>SF(000,200)</td><td>8</td><td>2104.737</td></tr><tr><td>SF(050,250)</td><td>9</td><td>1955.895</td></tr><tr><td>SF(150,200)</td><td>10</td><td>1868.947</td></tr><tr><td>SF(100,200)</td><td>11</td><td>1770.026</td></tr><tr><td>SF(000,250)</td><td>12</td><td>1663.737</td></tr><tr><td>SF(100,250)</td><td>13</td><td>1537.737</td></tr><tr><td>SF(200,250)</td><td>14</td><td>1522.263</td></tr><tr><td>SF(150,250)</td><td>15</td><td>1491.868</td></tr><tr><td>SD</td><td>16</td><td>1266.026</td></tr><tr><td colspan="3">p-value &lt;&lt; 0.0000001</td></tr></table>
256
+
257
+ The three most competitive Swarm FREDY configurations are those that do not use data packets from farther than $100\mathrm{m}$ for the beacon rate computations. Specifically, the best performance for this metric is achieved by the protocol configuration $d_{1} = 50\mathrm{m}$ and $d_{2} = 100\mathrm{m}$ .
258
+
259
+ The least competitive Swarm FREDY configurations for this metric (lower beacon rates) are those that take into account beacons from the farthest nodes ( $d_2 = 250\mathrm{m}$ ). Indeed, their performance for this metric is close to that of Swarm DIFRA.
260
+
261
+ Summarizing, the Swarm FREDY methods allow communications at higher beacon rates than Swarm DIFRA, i.e., the reliability of CVS applications is increased and, therefore, road traffic safety is improved. This improvement is larger in road traffic scenarios with lower traffic densities, where the vehicles move at higher speeds and the CVS applications therefore require a higher information refresh rate, exchanging beacons at higher data rates to avoid hazardous situations.
262
+
263
+ # 7.2. Channel Usage
264
+
265
+ We now present the channel usage results in terms of the percentage of protocol queue filling. This metric is evaluated because it is affected not only by the beacon rates in the neighborhood, but also by other factors such as signal propagation and packet collisions. In terms of channel usage, the higher the better.
266
+
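As a rough sketch of this metric (the function and parameter names are ours, not from the paper), the queue filling percentage per node could be computed as:

```python
def channel_usage(queued_packets, queue_capacity):
    # Percentage of the protocol queue that is filled; a fuller queue
    # indicates that the shared channel is being used more intensively.
    return 100.0 * queued_packets / queue_capacity
```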
267
+ According to Table 4, there is no clear trend relating the channel occupancy to the road traffic density. The scenarios with the three largest channel occupancies are those with 750, 1750, and 2000 vehicles.
268
+
269
+ In contrast to the individual beacon rate, for which Swarm FREDY (0,50) presented the most competitive results in most scenarios, for this metric (channel usage) no specific Swarm FREDY configuration clearly stands out from the others. Thus, Swarm FREDY is robust for this metric.
270
+
271
+ The lowest channel occupancy is obtained by Swarm DIFRA. This can be seen in Figure 8 for the scenario with 1750 vehicles. The main reason for this result is that when the VANET nodes use this algorithm, they communicate with the lowest beacon rates (as shown in Section 7.1), and therefore, they use the available shared medium less efficiently.
272
+
273
+ The performance differences between Swarm DIFRA and Swarm FREDY are larger in the scenarios with higher road traffic densities (i.e., harder congestion control problems). Thus, our proposed method manages the communication channel more efficiently in more complicated situations.
274
+
275
+ According to the Aligned Friedman ranking results (see Table 5), the Swarm FREDY configurations that use the channel least efficiently are those with larger $d_{2}$ distances ( $d_{2} = 250\mathrm{m}$ ). Therefore, as occurs with the individual beacon rate metric, the algorithm performs better when it considers exiles from shorter distances. Swarm FREDY (50,100) achieves significantly more competitive results.
276
+
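The Aligned Friedman test used throughout these tables ranks all observations jointly after subtracting each scenario's location. A minimal sketch of the rank computation follows; it assumes mean alignment, handles ties naively (no average ranks), and omits the p-value computation that the paper's test also provides:

```python
import numpy as np

def aligned_friedman_ranks(results):
    """results: (n_scenarios, n_methods) array of metric values.

    Aligned-rank sketch: subtract each scenario's mean, rank all
    aligned observations jointly, and sum the ranks per method.
    """
    results = np.asarray(results, dtype=float)
    # Align: remove each scenario's location (its mean across methods).
    aligned = results - results.mean(axis=1, keepdims=True)
    flat = aligned.ravel()
    order = flat.argsort(kind="stable")    # rank 1 = smallest aligned value
    ranks = np.empty(flat.size)
    ranks[order] = np.arange(1, flat.size + 1)
    ranks = ranks.reshape(aligned.shape)
    return ranks.sum(axis=0)               # total aligned rank per method
```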
277
+ Table 4: Median results of the channel usage for each VANET scenario and congestion control method.
278
+
279
+ <table><tr><td>Method</td><td>500 veh.</td><td>750 veh.</td><td>1000 veh.</td><td>1250 veh.</td><td>1500 veh.</td><td>1750 veh.</td><td>2000 veh.</td></tr><tr><td>SF(000,050)</td><td>66.770</td><td>73.124</td><td>66.844</td><td>65.780</td><td>69.835</td><td>70.917</td><td>76.181</td></tr><tr><td>SF(000,100)</td><td>66.888</td><td>73.430</td><td>65.774</td><td>65.956</td><td>69.967</td><td>70.808</td><td>76.676</td></tr><tr><td>SF(000,150)</td><td>66.797</td><td>72.171</td><td>65.704</td><td>65.869</td><td>69.358</td><td>70.918</td><td>77.174</td></tr><tr><td>SF(000,200)</td><td>67.026</td><td>72.552</td><td>66.011</td><td>65.294</td><td>70.063</td><td>70.534</td><td>75.612</td></tr><tr><td>SF(000,250)</td><td>66.378</td><td>72.213</td><td>64.909</td><td>65.507</td><td>68.910</td><td>71.005</td><td>76.829</td></tr><tr><td>SF(050,100)</td><td>67.161</td><td>72.645</td><td>66.685</td><td>65.807</td><td>70.734</td><td>70.363</td><td>77.365</td></tr><tr><td>SF(050,150)</td><td>66.831</td><td>72.205</td><td>66.041</td><td>65.307</td><td>69.174</td><td>70.461</td><td>75.588</td></tr><tr><td>SF(050,200)</td><td>66.745</td><td>73.313</td><td>66.482</td><td>65.184</td><td>70.036</td><td>70.749</td><td>77.568</td></tr><tr><td>SF(050,250)</td><td>66.197</td><td>71.882</td><td>65.546</td><td>65.138</td><td>69.332</td><td>70.672</td><td>75.915</td></tr><tr><td>SF(100,150)</td><td>67.055</td><td>71.938</td><td>65.350</td><td>64.995</td><td>69.650</td><td>70.611</td><td>76.585</td></tr><tr><td>SF(100,200)</td><td>66.542</td><td>72.367</td><td>65.141</td><td>64.727</td><td>69.119</td><td>70.929</td><td>75.157</td></tr><tr><td>SF(100,250)</td><td>66.499</td><td>71.351</td><td>65.351</td><td>64.709</td><td>69.184</td><td>70.922</td><td>75.595</td></tr><tr><td>SF(150,200)</td><td>66.287</td><td>72.100</td><td>64.704</td><td>65.504</td><td>69.181</td><td>69.599</td><td>76.348</td></tr><tr><td>SF(150,250)</td><td>66.141</td><td>71.186</td><td>64.676</td><td>65.017</td><td>69.073</td><t
d>69.800</td><td>75.733</td></tr><tr><td>SF(200,250)</td><td>66.674</td><td>71.418</td><td>64.518</td><td>65.098</td><td>69.139</td><td>69.511</td><td>76.581</td></tr><tr><td>SD</td><td>66.091</td><td>71.237</td><td>64.760</td><td>64.415</td><td>68.704</td><td>69.097</td><td>74.677</td></tr></table>
280
+
281
+ ![](images/3fd30354d0d278b057844c48ed60608a3904b63495dd507d7333d3b6e937bce6.jpg)
282
+ Figure 8: Channel occupancy results (in %) for the scenario with 1750 vehicles.
283
+
284
+ Table 5: Aligned Friedman Ranking of the channel usage results.
285
+
286
+ <table><tr><td>Congestion control method</td><td>Ranking position</td><td>Rank value</td></tr><tr><td>SF(050,100)</td><td>1</td><td>2861.566</td></tr><tr><td>SF(000,100)</td><td>2</td><td>2483.934</td></tr><tr><td>SF(000,150)</td><td>3</td><td>2426.737</td></tr><tr><td>SF(050,200)</td><td>4</td><td>2413.750</td></tr><tr><td>SF(100,150)</td><td>5</td><td>2396.987</td></tr><tr><td>SF(000,050)</td><td>6</td><td>2379.395</td></tr><tr><td>SF(000,200)</td><td>7</td><td>2327.816</td></tr><tr><td>SF(100,200)</td><td>8</td><td>2300.645</td></tr><tr><td>SF(050,150)</td><td>9</td><td>2286.368</td></tr><tr><td>SF(150,200)</td><td>10</td><td>2021.750</td></tr><tr><td>SF(100,250)</td><td>11</td><td>1909.013</td></tr><tr><td>SF(200,250)</td><td>12</td><td>1839.474</td></tr><tr><td>SF(000,250)</td><td>13</td><td>1760.724</td></tr><tr><td>SF(050,250)</td><td>14</td><td>1643.474</td></tr><tr><td>SF(150,250)</td><td>15</td><td>1635.000</td></tr><tr><td>SD</td><td>16</td><td>1369.368</td></tr><tr><td colspan="3">p-value &lt;&lt; 0.000001</td></tr></table>
287
+
288
+ Summarizing, Swarm FREDY maximizes the channel usage, which is one of the objectives of the FBR optimization problem. The CVS applications exchange larger amounts of data, while avoiding channel congestion. Thus, the likelihood of exchanging the information required by the CVS applications increases, providing safer road journeys. These results answer RQ1 because it can be seen that an efficient congestion control method is important to improve road safety and traffic efficiency.
289
+
290
+ # 7.3. Network Balance
291
+
292
+ The network balance is evaluated in terms of $\sigma(v)$ . The congestion control methods that provide higher fairness yield closer beacon rates among the VANET nodes. According to Eq. (2), the lower $\sigma(v)$ , the better the balance. Table 6 shows the median results of the 50 simulations for each scenario. There is no clear trend in these results, but in general, when the road traffic density grows, the network balance worsens ( $\sigma(v)$ increases). This is mainly because the coefficient of variation increases when the values of the distribution (i.e., the beacon rates of the nodes) are lower.
293
+
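Since Eq. (2) defines $\sigma(v)$ as a coefficient of variation of the nodes' beacon rates, it can be sketched as follows (whether the paper uses the population or sample standard deviation is an assumption here):

```python
import statistics

def network_balance(beacon_rates):
    # sigma(v): coefficient of variation of the nodes' beacon rates
    # (a lower value means a fairer, better-balanced network).
    mean = statistics.fmean(beacon_rates)
    std = statistics.pstdev(beacon_rates)  # population std (an assumption)
    return std / mean
```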
294
+ Table 6: Median results of the network balance for each VANET scenario and congestion control method.
295
+
296
+ <table><tr><td>Method</td><td>500 veh.</td><td>750 veh.</td><td>1000 veh.</td><td>1250 veh.</td><td>1500 veh.</td><td>1750 veh.</td><td>2000 veh.</td></tr><tr><td>SF(000,050)</td><td>0.476</td><td>0.525</td><td>0.677</td><td>0.769</td><td>0.643</td><td>0.521</td><td>0.434</td></tr><tr><td>SF(000,100)</td><td>0.457</td><td>0.504</td><td>0.647</td><td>0.708</td><td>0.599</td><td>0.509</td><td>0.400</td></tr><tr><td>SF(000,150)</td><td>0.417</td><td>0.461</td><td>0.577</td><td>0.676</td><td>0.563</td><td>0.455</td><td>0.393</td></tr><tr><td>SF(000,200)</td><td>0.366</td><td>0.409</td><td>0.497</td><td>0.633</td><td>0.521</td><td>0.441</td><td>0.396</td></tr><tr><td>SF(000,250)</td><td>0.316</td><td>0.369</td><td>0.452</td><td>0.588</td><td>0.465</td><td>0.390</td><td>0.315</td></tr><tr><td>SF(050,100)</td><td>0.434</td><td>0.491</td><td>0.624</td><td>0.703</td><td>0.587</td><td>0.492</td><td>0.388</td></tr><tr><td>SF(050,150)</td><td>0.396</td><td>0.446</td><td>0.567</td><td>0.664</td><td>0.546</td><td>0.459</td><td>0.384</td></tr><tr><td>SF(050,200)</td><td>0.365</td><td>0.406</td><td>0.500</td><td>0.619</td><td>0.523</td><td>0.425</td><td>0.357</td></tr><tr><td>SF(050,250)</td><td>0.315</td><td>0.374</td><td>0.448</td><td>0.555</td><td>0.461</td><td>0.378</td><td>0.318</td></tr><tr><td>SF(100,150)</td><td>0.395</td><td>0.445</td><td>0.544</td><td>0.641</td><td>0.529</td><td>0.428</td><td>0.377</td></tr><tr><td>SF(100,200)</td><td>0.345</td><td>0.405</td><td>0.497</td><td>0.627</td><td>0.528</td><td>0.426</td><td>0.372</td></tr><tr><td>SF(100,250)</td><td>0.308</td><td>0.361</td><td>0.429</td><td>0.557</td><td>0.463</td><td>0.359</td><td>0.320</td></tr><tr><td>SF(150,200)</td><td>0.353</td><td>0.402</td><td>0.482</td><td>0.604</td><td>0.486</td><td>0.412</td><td>0.359</td></tr><tr><td>SF(150,250)</td><td>0.299</td><td>0.346</td><td>0.425</td><td>0.535</td><td>0.444</td><td>0.360</td><td>0.297</td></tr><tr><td>SF(200,250)</td><td>0.283</td><td>0.337</td><td>0.398</td
><td>0.506</td><td>0.412</td><td>0.332</td><td>0.273</td></tr><tr><td>SD</td><td>0.255</td><td>0.307</td><td>0.361</td><td>0.458</td><td>0.359</td><td>0.309</td><td>0.255</td></tr></table>
297
+
298
+ ![](images/7dbcb1a0998d5113c865423fe03fbb8ad7bf4befc4d4874a0e749551a572e07c.jpg)
299
+ Figure 9: Network balance results for the scenario with 1250 vehicles.
300
+
301
+ In this case, Swarm DIFRA obtains the most competitive results (the lowest $\sigma(v)$ ). These results are statistically confirmed by the Aligned Friedman Ranking test (see Table 7). The main reason for this is that Swarm DIFRA does not discriminate between the neighbor nodes [25], and therefore, it considers the whole neighborhood when computing the desired beacon rates. As a consequence, it computes lower beacon rates for all the nodes than Swarm FREDY does. At first sight this is fairer than Swarm FREDY, but it causes a severe degradation of the QoS (the amount of data exchanged), which is undesirable in practice.
302
+
303
+ Table 7: Aligned Friedman Ranking of the network balance results.
304
+
305
+ <table><tr><td>Congestion control method</td><td>Ranking position</td><td>Rank value</td></tr><tr><td>SD</td><td>1</td><td>251.773</td></tr><tr><td>SF(200,250)</td><td>2</td><td>584.648</td></tr><tr><td>SF(150,250)</td><td>3</td><td>912.441</td></tr><tr><td>SF(100,250)</td><td>4</td><td>1150.832</td></tr><tr><td>SF(050,250)</td><td>5</td><td>1312.114</td></tr><tr><td>SF(000,250)</td><td>6</td><td>1417.403</td></tr><tr><td>SF(150,200)</td><td>7</td><td>2076.919</td></tr><tr><td>SF(050,200)</td><td>8</td><td>2272.590</td></tr><tr><td>SF(100,200)</td><td>9</td><td>2299.212</td></tr><tr><td>SF(000,200)</td><td>10</td><td>2489.890</td></tr><tr><td>SF(100,150)</td><td>11</td><td>2782.993</td></tr><tr><td>SF(050,150)</td><td>12</td><td>3020.392</td></tr><tr><td>SF(000,150)</td><td>13</td><td>3212.648</td></tr><tr><td>SF(050,100)</td><td>14</td><td>3540.520</td></tr><tr><td>SF(000,100)</td><td>15</td><td>3662.623</td></tr><tr><td>SF(000,050)</td><td>16</td><td>3965.004</td></tr><tr><td colspan="3">p-value &lt;&lt; 0.0000001</td></tr></table>
306
+
307
+ Analyzing Swarm FREDY, the results in Table 6 do not show a clear behavior in terms of balance for all the methods. However, it can be seen that the larger $d_{2}$ , the better the results. Thus, if the set of authority and voter nodes grows, Swarm FREDY obtains more balanced beacon rates. Figure 9 illustrates this behavior for the scenario with 1250 vehicles.
308
+
309
+ # 7.4. Beacon Rate Stability
310
+
311
+ This section evaluates the beacon rate stability in terms of the number of beacon rate changes during the simulation, i.e., the number of times the nodes had to modify their beacon rate. The fewer the changes, the more stable the broadcasting method. Designers are able to create more reliable VANET applications when the performance of the network is stable and predictable, so we look for algorithms showing this feature.
312
+
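This stability metric can be sketched per node as a simple count over the history of beacon rates it used during a run (the function name is ours, not from the paper):

```python
def count_adaptations(rate_history):
    # Stability metric: number of times a node changed its beacon rate
    # during a simulation run (fewer changes = more stable broadcasting).
    return sum(prev != cur for prev, cur in zip(rate_history, rate_history[1:]))
```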
313
+ Table 8 summarizes these results for the 50 simulations. When the road traffic density grows the number of beacon adaptations increases. This is mainly because having more vehicles makes the distributed computation of efficient and accurate beacon rates more difficult.
314
+
315
+ In general, for the scenarios with lower road traffic densities (number of vehicles lower than or equal to 1000), the Swarm FREDY methods provide more competitive results than Swarm DIFRA. In these scenarios they require fewer beacon rate adaptations, and therefore, the QoS of the communications is preserved for longer. Figure 10 illustrates these results. For the scenarios with higher traffic densities our approaches are still more stable than Swarm DIFRA.
316
+
317
+ The Aligned Friedman ranking statistical results are shown in Table 9. According to this test, the most competitive methods are those that take into account information about nearer nodes (authorities and voters closer than $100\mathrm{m}$ ). This is because the closer surroundings change less frequently, and therefore, there is no need to continuously change the beacon rate.
318
+
319
+ The Swarm FREDY configuration $d_{1} = 0$ m and $d_{2} = 50$ m is significantly the most stable one. The least competitive results are obtained by Swarm FREDY (200,250), which considers the highest number of nodes in its computations. The Wilcoxon statistical test applied to the two last-ranked methods (Swarm FREDY (200,250) and Swarm DIFRA) reveals no significant differences between them. Thus, it is important to select a suitable Swarm FREDY configuration to provide stable broadcasting.
320
+
321
+ The Swarm FREDY variations studied provide more stable beacon rates (QoS). This allows CVS applications to have a predictable performance. Thus, service designers are able to better adjust the applications to the real exchange of information between VANET nodes.
322
+
323
+ Table 8: Median results of the number of beacon rate adaptations $\left( {\times {10}^{5}}\right)$ for each VANET scenario.
324
+
325
+ <table><tr><td>Method</td><td>500 veh.</td><td>750 veh.</td><td>1000 veh.</td><td>1250 veh.</td><td>1500 veh.</td><td>1750 veh.</td><td>2000 veh.</td></tr><tr><td>SF(000,050)</td><td>42.216</td><td>67.241</td><td>70.574</td><td>99.531</td><td>168.595</td><td>197.015</td><td>224.945</td></tr><tr><td>SF(000,100)</td><td>43.722</td><td>68.996</td><td>71.463</td><td>99.036</td><td>168.690</td><td>190.676</td><td>225.170</td></tr><tr><td>SF(000,150)</td><td>45.288</td><td>70.738</td><td>72.474</td><td>99.297</td><td>168.720</td><td>196.735</td><td>224.930</td></tr><tr><td>SF(000,200)</td><td>45.837</td><td>71.775</td><td>73.354</td><td>100.240</td><td>168.765</td><td>196.960</td><td>224.795</td></tr><tr><td>SF(000,250)</td><td>45.707</td><td>73.579</td><td>73.494</td><td>99.302</td><td>168.500</td><td>196.875</td><td>224.940</td></tr><tr><td>SF(050,100)</td><td>43.887</td><td>69.070</td><td>71.728</td><td>99.496</td><td>168.690</td><td>196.875</td><td>225.140</td></tr><tr><td>SF(050,150)</td><td>45.445</td><td>70.821</td><td>72.200</td><td>98.898</td><td>168.895</td><td>196.885</td><td>225.110</td></tr><tr><td>SF(050,200)</td><td>46.204</td><td>71.807</td><td>73.081</td><td>98.704</td><td>168.535</td><td>196.830</td><td>224.970</td></tr><tr><td>SF(050,250)</td><td>45.505</td><td>73.088</td><td>73.619</td><td>98.462</td><td>168.730</td><td>197.095</td><td>224.995</td></tr><tr><td>SF(100,200)</td><td>46.387</td><td>71.780</td><td>73.460</td><td>99.030</td><td>168.750</td><td>196.845</td><td>225.100</td></tr><tr><td>SF(100,250)</td><td>47.643</td><td>73.728</td><td>73.872</td><td>98.085</td><td>168.765</td><td>196.945</td><td>225.095</td></tr><tr><td>SF(100,150)</td><td>43.706</td><td>71.107</td><td>72.525</td><td>100.605</td><td>168.670</td><td>196.960</td><td>225.150</td></tr><tr><td>SF(150,200)</td><td>45.925</td><td>71.857</td><td>73.958</td><td>99.144</td><td>168.640</td><td>196.745</td><td>225.050</td></tr><tr><td>SF(150,250)</td><td>46.064</td><td>73.484</td><td>74
.055</td><td>99.662</td><td>168.660</td><td>196.965</td><td>224.790</td></tr><tr><td>SF(200,250)</td><td>47.736</td><td>73.434</td><td>74.643</td><td>98.782</td><td>168.750</td><td>196.940</td><td>225.185</td></tr><tr><td>SD</td><td>48.157</td><td>73.701</td><td>74.650</td><td>99.965</td><td>168.880</td><td>196.900</td><td>224.730</td></tr></table>
326
+
327
+ ![](images/543bb8e00094b00df541eaa4ec875be593ed16789513b0bed361af8006146bfa.jpg)
328
+ Figure 10: Beacon rate stability results for the scenario with 500 vehicles.
329
+
330
+ Table 9: Aligned Friedman Ranking of the stability results.
331
+
332
+ <table><tr><td>Congestion control method</td><td>Ranking position</td><td>Rank value</td></tr><tr><td>SF(000,050)</td><td>1</td><td>1407.701</td></tr><tr><td>SF(000,100)</td><td>2</td><td>1589.606</td></tr><tr><td>SF(050,100)</td><td>3</td><td>1654.310</td></tr><tr><td>SF(000,150)</td><td>4</td><td>1885.114</td></tr><tr><td>SF(050,150)</td><td>5</td><td>1920.943</td></tr><tr><td>SF(100,150)</td><td>6</td><td>1935.092</td></tr><tr><td>SF(050,200)</td><td>7</td><td>2138.353</td></tr><tr><td>SF(000,200)</td><td>8</td><td>2253.535</td></tr><tr><td>SF(150,200)</td><td>9</td><td>2273.104</td></tr><tr><td>SF(100,200)</td><td>10</td><td>2338.093</td></tr><tr><td>SF(050,250)</td><td>11</td><td>2415.907</td></tr><tr><td>SF(000,250)</td><td>12</td><td>2475.255</td></tr><tr><td>SF(150,250)</td><td>13</td><td>2579.247</td></tr><tr><td>SF(100,250)</td><td>14</td><td>2615.007</td></tr><tr><td>SD</td><td>15</td><td>2719.504</td></tr><tr><td>SF(200,250)</td><td>16</td><td>2751.229</td></tr><tr><td colspan="3">p-value &lt;&lt; 0.0000001</td></tr></table>
333
+
335
+
336
+ # 7.5. Results Review
337
+
338
+ According to the experimental evaluation performed in this study, the answer to RQ2 is yes, because the proposed Swarm FREDY provides efficient congestion control and QoS by using lightweight computations. This method improves $i$ ) the amount of data shared by the nodes (higher beacon rates), $ii$ ) the channel usage, and $iii$ ) the stability, with regard to Swarm DIFRA.
339
+
340
+ Swarm DIFRA [25] provides competitive fairness results because its nodes tend to broadcast beacons at lower and more similar rates. However, it is clearly the least competitive method for the other metrics analyzed. In answer to RQ3, the proposed stochastic method (Swarm FREDY) improves upon the performance of a deterministic congestion control method based on similar fundamentals (Swarm DIFRA).
341
+
342
+ # 8. Conclusions and Future Work
343
+
344
+ This article has studied congestion control in VANET broadcasting. More specifically, we have focused on the beaconing used by CVS applications, the most promising applications for road traffic safety and efficiency. We have defined and tackled the FBR optimization problem using automatic solvers.
345
+
346
+ We have devised Swarm FREDY, a swarm intelligence based family of algorithms that uses lightweight computations to address this optimization problem dynamically. As a swarm method, Swarm FREDY bases its computations on combining self-monitored information (self-experience) and data received from the neighborhood (the experience of the neighbors). This enables an emergent behavior that leads the whole system to reliable, efficient, and useful communications without a central authority.
347
+
348
+ One of the main contributions of Swarm FREDY is the stochastic distance discrimination procedure based on two parameters ( $d_{1}$ and $d_{2}$ ). This process classifies neighboring nodes into three different categories to obtain simpler and more useful information about the current network status, thereby computing more accurate beacon rates.
349
+
350
+ The proposed algorithm family has been compared with the Swarm DIFRA method, which had demonstrated a competitive performance in comparison with other state-of-the-art congestion control methods in previous studies. The experimental analysis confirms that significant improvements in congestion control are obtained when using Swarm FREDY, when compared with Swarm DIFRA. When the VANET nodes use Swarm FREDY, they are able to communicate at higher beacon rates, enabling CVS applications to share information with high resolution. In addition, the channel usage is maximized without exceeding its effective capacity, and therefore, the throughput is maximal. Finally, Swarm FREDY has demonstrated higher robustness than Swarm DIFRA, because the beacon rates computed by the first method vary less frequently than those computed by Swarm DIFRA.
351
+
352
+ There is still some room for improvement in Swarm FREDY, as shown by the network balance results obtained against Swarm DIFRA in the experimental analysis. The main lines of future research are three: $i$ ) evaluating the proposed congestion control method by using realistic urban scenarios aiming to confirm its competitive performance; $ii$ ) applying optimization strategies to compute the most promising (optimal) values for the configuration parameters of Swarm FREDY; and $iii$ ) using the Swarm FREDY algorithms devised here as the starting point towards developing new distributed broadcasting methods that use modern computational intelligence strategies (e.g., neural networks).
353
+
354
+ # References
355
+
357
+
358
+ [1] Artimy, M.M., Robertson, W., Phillips, W.J., 2005. Assignment of dynamic transmission range based on estimation of vehicle density, in: 2nd ACM International Workshop on Vehicular Ad Hoc Networks, ACM. pp. 40-48.
359
+ [2] Bonabeau, E., Dorigo, M., Theraulaz, G., 1999. Swarm intelligence: from natural to artificial systems. 1, Oxford university press.
360
+ [3] Brambilla, M., Ferrante, E., Birattari, M., Dorigo, M., 2013. Swarm robotics: a review from the swarm engineering perspective. Swarm Intelligence 7, 1-41.
361
+ [4] Campolo, C., Molinaro, A., Scopigno, R. (Eds.), 2015. Vehicular ad hoc Networks - Standards, Solutions, and Research. Springer.
362
+
363
+ [5] Chaqfeh, M., Lakas, A., Jawhar, I., 2014. A survey on data dissemination in vehicular ad hoc networks. Vehicular Communications 1, 214 - 225.
364
+ [6] Dias, J.A., Rodrigues, J.J., Zhou, L., 2014. Cooperation advances on vehicular communications: A survey. Vehicular Communications 1, 22 - 32.
365
+ [7] Djahel, S., Ghamri-Doudane, Y., 2012. A robust congestion control scheme for fast and reliable dissemination of safety messages in VANETs, in: 2012 IEEE Wireless Communications and Networking Conference (WCNC), pp. 2264-2269.
366
+ [8] Fallah, Y.P., Huang, C., Sengupta, R., Krishnan, H., 2010. Congestion control based on channel occupancy in vehicular broadcast networks, in: 72nd IEEE Vehicular Technology Conference Fall (VTC 2010-Fall), IEEE. pp. 1-5.
367
+ [9] Giagkos, A., Wilson, M.S., 2014. Beeip - a swarm intelligence based routing for wireless ad hoc networks. Information Sciences 265, 23 - 35.
368
+ [10] Gupta, N., Prakash, A., Tripathi, R., 2015. Medium access control protocols for safety applications in vehicular ad-hoc network: A classification and comprehensive survey. Vehicular Communications 2, 223 - 237.
369
+ [11] Lan, K.C., Chou, C.M., 2008. Realistic mobility models for Vehicular Ad hoc Network (VANET) simulations, in: 2008 8th International Conference on ITS Telecommunications, pp. 362-366.
370
+ [12] Lochert, C., Scheuermann, B., Mauve, M., 2007. A survey on congestion control for mobile ad hoc networks. Wireless Communications and Mobile Computing 7, 655-676.
371
+ [13] Mir, Z.H., Toutouh, J., Filali, F., Alba, E., 2015. QoS-Aware Radio Access Technology (RAT) Selection in Hybrid Vehicular Networks, in: Kassab, M., Berbineau, M., Vinel, A., Jonsson, M., Garcia, F., Soler, J. (Eds.), Communication Technologies for Vehicles. Springer International Publishing. volume 9066 of Lecture Notes in Computer Science, pp. 117-128.
372
+ [14] Mittag, J., Schmidt-Eisenlohr, F., Killat, M., Härri, J., Hartenstein, H., 2008. Analysis and design of effective and lowoverhead transmission power control for VANETs, in: Fifth ACM International Workshop on Vehicular Inter-NETworking, ACM. pp. 39-48.
373
+ [15] Rahman, I., Vasant, P.M., Mahinder Singh, B.S., Abdullah-Al-Wadud, M., 2015. Swarm intelligence-based smart energy allocation strategy for charging stations of plug-in hybrid electric vehicles. Mathematical Problems in Engineering 2015.
374
+ [16] Rezaei, S., Sengupta, R., Krishnan, H., 2007. Reducing the Communication Required By DSRC-Based Vehicle Safety Systems, in: IEEE Intelligent Transportation Systems Conference (ITSC 2007), IEEE. pp. 361-366.
375
+ [17] Salama, K.M., Abdelbar, A.M., 2015. Learning neural network structures with ant colony algorithms. Swarm Intelligence 9, 229-265.
376
+ [18] Sattari, M.R.J., Noor, R.M., Keshavarz, H., 2012. A taxonomy for congestion control algorithms in vehicular ad hoc networks, in: 2012 IEEE International Conference on Communication, Networks and Satellite (ComNetSat), IEEE. pp. 44-49.
377
+ [19] Schmidt, R., Leimuller, T., Schoch, E., Kargl, F., Schafer, G., 2010. Exploration of adaptive beaconing for efficient intervehicle safety communication. Network, IEEE 24, 14-19.
378
+ [20] Sheskin, D.J., 2003. Handbook of parametric and nonparametric statistical procedures. CRC Press.
379
+ [21] Taherkhani, N., Pierre, S., 2012. Congestion control in vehicular ad hoc networks using meta-heuristic techniques, in: Second ACM International Symposium on Design and Analysis of Intelligent Vehicular Networks and Applications, ACM. pp. 47-54.
380
+ [22] Taherkhani, N., Pierre, S., 2015. Improving dynamic and distributed congestion control in vehicular ad hoc networks. Ad Hoc Networks 33, 112 - 125.
381
+ [23] Tielert, T., Jiang, D., Hartenstein, H., Delgrossi, L., 2013. Joint power/rate congestion control optimizing packet reception in vehicle safety communications, in: Tenth ACM international workshop on Vehicular inter-networking, systems, and applications, ACM. pp. 51-60.
382
+ [24] Torrent-Moreno, M., Mittag, J., Santi, P., Hartenstein, H., 2009. Vehicle-to-vehicle communication: fair transmit power control for safety-critical information. Vehicular Technology, IEEE Tran. on 58, 3684-3703.
383
+ [25] Toutouh, J., Alba, E., 2016. Distributed fair rate congestion control for vehicular networks, in: 13th International Conference Distributed Computing and Artificial Intelligence, Springer. pp. 433-442.
384
+ [26] Vahdat-Nejad, H., Ramazani, A., Mohammadi, T., Mansoor, W., 2016. A survey on context-aware vehicular network applications. Vehicular Communications 3, 43 - 57.
385
+ [27] Wischhof, L., Rohling, H., 2005. Congestion control in vehicular ad hoc networks, in: 2005 IEEE International Conference on Vehicular Electronics and Safety, IEEE. pp. 58-63.
386
+ [28] Xu, H., Barth, M., 2004. A transmission-interval and power-level modulation methodology for optimizing inter-vehicle communications, in: 1st ACM international workshop on Vehicular ad hoc networks, ACM. pp. 97-98.
387
+ [29] Yang, X.S., Cui, Z., Xiao, R., Gandomi, A.H., Karamanoglu, M., 2013. Swarm intelligence and bio-inspired computation: theory and applications. Newnes.
388
+
401
+ # Elsevier LaTeX template*
402
+
403
+ Elsevier
404
+
405
+ Radarweg 29, Amsterdam
406
+
407
+ Elsevier Inc $^{a,b}$ , Global Customer Service $^{b,*}$
408
+
409
+ a1600 John F Kennedy Boulevard, Philadelphia
410
+
411
+ <sup>b</sup>360 Park Avenue South, New York
412
+
413
+ # Abstract
414
+
415
+ This template helps you to create a properly formatted LaTeX manuscript.
416
+
417
+ Keywords: elsarticle.cls, LaTeX, Elsevier, template
418
+
419
+ 2010 MSC: 00-01, 99-00
420
+
421
+ # 1. The Elsevier article class
422
+
423
+ Installation. If the document class `elsarticle` is not available on your computer, you can download and install the system package `texlive-publishers` (Linux) or install the LaTeX package `elsarticle` using the package manager of your TeX installation, which is typically TeX Live or MiKTeX.
424
+
425
+ Usage. Once the package is properly installed, you can use the document class `elsarticle` to create a manuscript. Please make sure that your manuscript follows the guidelines in the Guide for Authors of the relevant journal. It is not necessary to typeset your manuscript in exactly the same way as an article, unless you are submitting to a camera-ready copy (CRC) journal.
426
+
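For reference, a minimal document using the class could look like this (the class options, title, and author are placeholders, not prescribed by this template):

```latex
\documentclass[preprint,12pt]{elsarticle}
\begin{document}
\begin{frontmatter}
\title{A Sample Title}
\author{An Author}
\begin{abstract}
A short abstract.
\end{abstract}
\end{frontmatter}
Body text.
\end{document}
```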
427
+ *Fully documented templates are available in the elsarticle package on CTAN.
428
+
429
+ *Corresponding author
430
+
431
+ Email address: support@elsevier.com (Global Customer Service)
432
+
433
+ URL: www.elsevier.com (Elsevier Inc)
434
+
435
+ <sup>1</sup>Since 1880.
436
+
437
+ Functionality. The Elsevier article class is based on the standard article class and supports almost all of the functionality of that class. In addition, it features commands and options to format the
438
+
439
+ - document style
440
+ - baselineskip
441
+ - front matter
442
+ - keywords and MSC codes
443
+ - theorems, definitions and proofs
444
+ - labels of enumerations
445
+ - citation style and labeling.
446
+
447
+ # 2. Front matter
448
+
449
+ The author names and affiliations could be formatted in two ways:
450
+
451
+ (1) Group the authors per affiliation.
452
+ (2) Use footnotes to indicate the affiliations.
453
+
454
+ See the front matter of this document for examples. You are recommended to conform your choice to the journal you are submitting to.
455
+
456
+ # 3. Bibliography styles
457
+
458
+ There are various bibliography styles available. You can select the style of your choice in the preamble of this document. These styles are Elsevier styles based on standard styles like Harvard and Vancouver. Please use BibTeX to generate your bibliography and include DOIs whenever available.
459
+
460
+ Here are two sample references: [?, ?].
461
+
462
+ # References
2501.10xxx/2501.10007/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:56c9797eb5f3af238ef8b97390f00be764d492df7b0bd0841fe03b8ce0b25b88
3
+ size 1009487
2501.10xxx/2501.10007/layout.json ADDED
The diff for this file is too large to render. See raw diff
 
2501.10xxx/2501.10016/5e75b0e2-014d-42a0-97da-a7e002559710_content_list.json ADDED
The diff for this file is too large to render. See raw diff
 
2501.10xxx/2501.10016/5e75b0e2-014d-42a0-97da-a7e002559710_model.json ADDED
The diff for this file is too large to render. See raw diff
 
2501.10xxx/2501.10016/5e75b0e2-014d-42a0-97da-a7e002559710_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:e9be194c7bf7cd24389322eddec943474f56247a2225b83660221e52a80c329f
3
+ size 1432408
2501.10xxx/2501.10016/full.md ADDED
@@ -0,0 +1,507 @@
 
 
 
 
1
+ # Infrastructure Deployment in Vehicular Communication Networks Using a Parallel Multiobjective Evolutionary Algorithm
2
+
3
+ Renzo Massobrio<sup>1*</sup>, Jamal Toutouh<sup>2</sup>, Sergio Nesmachnow<sup>1</sup>, Enrique Alba<sup>2</sup>
4
+
5
+ <sup>1</sup> Universidad de la República, Herrera y Reissig 565, Montevideo, 11300, Uruguay
6
+ <sup>2</sup> Dept. de Lenguajes y Ciencias de la Computación, University of Malaga, Malaga, 29071, Spain
7
+
8
+ This article describes the application of a multiobjective evolutionary algorithm for locating roadside infrastructure for vehicular communication networks over realistic urban areas. A multiobjective formulation of the problem is introduced, considering quality-of-service and cost objectives. The experimental analysis is performed over a real map of Malaga, using real traffic information and antennas, and scenarios that model different combinations of traffic patterns and applications (text/audio/video) in the communications. The proposed multiobjective evolutionary algorithm computes accurate trade-off solutions, significantly improving over state-of-the-art algorithms previously applied to the problem. © 2016 Wiley Periodicals, Inc.
9
+
10
+ KEY WORDS: VANETs, infrastructure placement, multiobjective evolutionary algorithms, smart cities.
11
+
12
+ # 1. INTRODUCTION
13
+
14
+ Vehicular traffic is a major concern in modern cities. $^{14}$ Several problems related to mobility, traffic safety, environment, etc. can be efficiently solved by applying smart computational methods. In this context, the concept of smart cities has emerged as a key issue in modern urbanization. A smart city applies information technologies to enhance quality, performance, and interactivity of urban services and/or to reduce costs and resource consumption. Road traffic management is a specific area that makes use of smart city applications, with the goal of improving the management of urban flows and allowing for real time responses to challenges that have great impact on the citizens. $^{11}$
15
+
16
+ A number of smart city solutions are based on intelligent transport systems (ITS). The main idea behind these systems is to share information about traffic conditions with road users and authorities. Better informed citizens can make better driving decisions, positively influencing global traffic safety and efficiency.
17
+
18
+ Vehicular ad hoc networks (VANETs) emerge as a promising technology to allow continuous data exchange between vehicles equipped with an on-board unit (OBU). Vehicles can also communicate with roadside unit (RSU) elements via dedicated short-range communications (DSRC). Depending on the type of nodes involved, two main types of communications can occur in a VANET: vehicle-to-vehicle (V2V) communications, when the vehicles communicate directly with each other, and vehicle-to-infrastructure (V2I) communications, when the vehicles exchange data with RSUs.
19
+
20
+ VANETs allow developing a large set of powerful applications to improve road transport experience for both drivers and passengers. Typically, these applications are categorized into safety and non-safety applications. The first ones aim at improving road safety and avoiding hazardous situations (road accidents), e.g., cooperative driving and intersection collision avoidance
21
+
22
+ ![](images/008866074aac55fff79505173e23d5ca0344508019805fb483c9266ee10b3038.jpg)
23
+ Figure 1. Global VANET architecture.
24
+
25
+ applications. Non-safety applications include a collection of different solutions aimed at enhancing traffic efficiency (e.g., travel times, fuel consumption, $\mathrm{CO}_{2}$ emissions, etc.). These applications also help to improve the comfort and entertainment of passengers.
26
+
27
+ Safety and traffic efficiency applications, such as Cooperative Vehicle Safety, gather real-time data from diverse sources (vehicle sensors, information received from other nodes, or both), process it, and disseminate it to the other nodes. Most of these applications rely on periodic message broadcasting or beaconing. These applications require very short data delivery times (on the order of milliseconds), since larger delivery times increase the uncertainty in the system and may cause hazardous situations. Infotainment VANET applications (e.g., audio or video broadcasting, on-line gaming, etc.) mainly rely on continuous data streams. Their real-time requirements are lower than those of safety and efficiency applications, but they require larger transmission data rates (on the order of tens of kilobytes per second) to keep the quality of the service provided.
28
+
29
+ In this study, we focus on a specific element of the VANET architecture, the RSUs, which are devices usually installed on roadside infrastructure elements, e.g., traffic lights. In addition, they may be fixed along the roadside as specific dedicated VANET elements. RSUs include a network interface to exchange information with other VANET nodes through DSRC. They may also be equipped with other network interfaces to connect to other networks or to the Internet. RSUs perform three main functions: i) acting as an information transmitter or receiver in VANET applications, e.g., warning about the existence of roadworks, accidents, etc.; ii) extending the effective communication range by forwarding data to other VANET nodes (OBUs or RSUs) through multi-hop communications; and iii) providing Internet connectivity to other nodes in the VANET.
30
+
31
+ Figure 1 illustrates a typical VANET scenario and the importance of including RSUs in the VANET architecture. In the figure, the coverage of the OBUs is shaded in blue, and represents the maximum distance within which two vehicles may utilize V2V to communicate with each other (i.e., vehicles 2 and 3 and vehicles 1 and 2 are the only ones that are able to exchange information with each other). The communication range of the RSU is shaded in orange, and therefore, vehicles 2, 3, and 4 can communicate with the RSU via V2I. In the presented example, the ambulance (vehicle 4) is approaching vehicles 1, 2, and 3. These vehicles are outside the coverage of the OBU of the ambulance, so V2V communications cannot be used. The only way to warn that the ambulance is approaching is by using a RSU to forward messages to vehicles 2 and 3. Thus, the RSU extends the effective communication range of the ambulance. In turn, the RSU can inform vehicles about possible new and more efficient routes, helping to improve the ambulance trip. Furthermore, all vehicles in the scenario can use the RSU connectivity to access traffic services, Internet, etc.
32
+
33
+ Consequently, the deployment of a fixed infrastructure of RSUs along the roads is of vital importance when deploying modern and powerful ITS, which helps to mitigate the serious road traffic problems that have to be confronted in modern cities.
34
+
35
+ Deploying the RSU infrastructure for VANETs is a challenge because network designers must decide about the number, type, and location of RSUs to maximize quality-of-service (QoS) of the
36
+
37
+ VANET, while satisfying and/or minimizing the deployment cost requirements. At this point, the network designers have to take into account that VANETs are used by different types of applications and services (safety and non-safety), and therefore, the final effective QoS of the network has to satisfy the required transmission data rates and delivery times of such applications.
38
+
39
+ The RSU Deployment Problem (RSU-DP) consists in placing a set of RSU terminals in a given area. We study a multiobjective version of the RSU-DP, which proposes maximizing the network QoS and minimizing the deployment costs. This is a hard-to-solve optimization problem on city-scaled areas, as the number of possible solutions is very large.[31] Heuristics and metaheuristics[28] are promising methods to deal with the RSU-DP; they allow computing good infrastructure designs in reduced execution times.[6,34] In this article, we propose applying the NSGA-II evolutionary algorithm[13] to design the RSU infrastructure within a city-scaled network in Malaga (Spain). In order to obtain realistic results, we consider real information about road traffic (road map and traffic flow), hardware (network capabilities and costs), and VANET applications.
40
+
41
+ This article extends our previous conference paper,[24] where the problem was first presented and preliminary results of applying a multiobjective evolutionary algorithm (MOEA) were reported.
42
+
43
+ The main contributions of the research reported in this article are: i) the multiobjective formulation of the RSU-DP considers in the QoS evaluation the maximum number of vehicles that can be simultaneously attended by a given RSU type, extending our previous study that just took into account the effective radio range of each RSU type; ii) we solve realistic scenarios, larger than those previously solved in the related literature, including the latest traffic data published by the Málaga city council (2015); iii) we model a set of real VANET applications, considered in the QoS metric applied in the problem formulation to evaluate a set of potential locations for RSUs (these applications were not present in the previous conference paper); iv) we adapt two state-of-the-art heuristic methods (deterministic and randomized) for the problem, to be used as a baseline for the comparison of the proposed MOEA; v) we propose a parallel master-slave MOEA, including a new initialization operator based on the Randomized Knapsack algorithm as a novelty regarding our previous study; and vi) we report accurate results for cost and QoS for the problem instances solved: the proposed NSGA-II is able to improve over the results computed by the best baseline heuristics up to $24.68\%$ and $52.71\%$ in terms of cost, and up to $34.09\%$ and $39.48\%$ in terms of QoS.
44
+
45
+ The article is organized as follows. Section 2 introduces the multiobjective version of the RSU-DP. Section 3 presents a review of works solving the RSU location problem and related radio network design problems. Section 4 introduces the methods applied to solve the problem. The specific features of the proposed MOEA to solve the RSU-DP are described in Section 5. Section 6 describes the heuristic methods proposed as a baseline for comparing the results computed using the proposed MOEA, and reports the experimental evaluation of the proposed method on a set of realistic scenarios in the city of Málaga, using real infrastructure and VANET applications. Finally, Section 7 formulates the conclusions and the main lines for future work.
46
+
47
+ # 2. THE RSU DEPLOYMENT PROBLEM
48
+
49
+ The mathematical formulation of the RSU-DP considers the following elements:
50
+
51
+ - A set of RSUs $R = \{R_{1},\dots,R_{q}\}$ to be installed in a city scenario for providing efficient VANET communications.
52
+ - A set of RSU types $T = \{t_1, t_2, \dots, t_k\}$ . Each RSU type is characterized by a given deployment cost and a coverage determined by the transmission power and the antenna gain. The type of a RSU is given by the function type: $R \to T$ .
53
+ - A set of road segments $S = \{s_1, s_2, \ldots, s_n\}$ , which are potential locations for placing RSUs along the city streets. Each segment $s_i$ is defined by a pair of points $(p_j, p_k)$ , with $p_j, p_k \in P = \{p_1, p_2, \ldots, p_m\}$ . Each point $p_j$ is identified by its geographical coordinates (latitude, longitude). The length of a given segment $s_i$ is given by the function $len: S \to \mathbb{R}^+$ . RSUs can be placed at any location within each segment $s_i$ .
54
+
55
+ - An estimation of the number of vehicles per time period across each segment $s_i$ , given by function $NV \colon S \to \mathbb{N}^+$ , and the average vehicle speed for each segment, given by function $sp \colon S \to \mathbb{R}^+$ .
56
+ - A cost function $C\colon T \to \mathbb{R}^+$ , where $C(t_g)$ indicates the monetary cost of placing a RSU of type $t_g$ in the deployed infrastructure.
57
+ - A set of applications $A = \{A_{1}, A_{2}, \ldots, A_{u}\}$ to be used over the VANET. Each application has specific QoS requirements, given by function $Q$ : $A \to \mathbb{N}^{+} \times \mathbb{N}^{+}$ . $Q(A_{h})$ is a vector with two elements, indicating the QoS requirements for packet delivery ratio (PDR) and end-to-end delay (E2ED) for application $A_{h}$ . On a given scenario, $Q(A_{h})$ is used to define the maximum number of users to be served by each RSU, given by function $MU$ : $R \times A \to \mathbb{N}^{+}$ .
58
+
59
+ Solutions of the problem are defined by a set of RSUs placed over the road segments of the city, represented by a set $sol = \{R_1, R_2, \dots, R_l\}$ , where $l$ is the number of RSUs (#RSU) in solution $sol$ ( $l \leq n$ ). Each RSU is installed in a specific coordinate within a segment $s_i$ . The segments covered by a RSU are given by the function $cov: R \to S$ , and the portion of segment $s_k$ covered by RSU $R_j$ is given by the function $cp: R \times S \to [0, 1]$ .
60
+
61
+ The multiobjective version of the RSU-DP proposes to find a set of locations and the type of RSU to deploy in each location, with the goal of maximizing the service time given by the whole RSU infrastructure, while simultaneously minimizing the total cost of deployment. The service time is a metric related to the QoS offered to the VANET users. It is related to the number of vehicles attended by RSUs, the time they are served (considering the coverage and average speed per each road segment), and the type of applications used in the studied scenario.
62
+
63
+ Formally, the problem is defined as the simultaneous optimization of two objective functions: maximize the QoS, given by $f_{1}(sol, A_{h})$ (Equation 1) and minimize the cost, given by $f_{2}(sol)$ (Equation 2). The corresponding values for function $MU$ for each RSU and application type are computed by simulations (see Section 6.2).
64
+
65
+ $$
66
+ \max f_{1}(sol, A_{h}) = \sum_{R_{j} \in sol} \max\left( MU(R_{j}, A_{h}), \sum_{s_{i} \in cov(R_{j})} NV(s_{i}) \times \frac{cp(R_{j}, s_{i}) \times len(s_{i})}{sp(s_{i})} \right) \tag{1}
67
+ $$
68
+
69
+ $$
70
+ \min f_{2}(sol) = \sum_{R_{j} \in sol} C(type(R_{j})) \tag{2}
71
+ $$
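Equations 1 and 2 can be sketched directly in code. The following is a minimal illustration, assuming the scenario data (segment lengths, vehicle counts, speeds, and per-RSU coverage fractions) are available as plain data structures; the class and function names (`Segment`, `Rsu`, `f1_qos`, `f2_cost`) are illustrative and not from the paper's implementation.

```python
from dataclasses import dataclass

@dataclass
class Segment:
    length: float    # len(s_i)
    vehicles: int    # NV(s_i), vehicles per time period
    speed: float     # sp(s_i), average speed

@dataclass
class Rsu:
    cost: float       # C(type(R_j))
    max_users: int    # MU(R_j, A_h) for the application under study
    coverage: dict    # segment index -> cp(R_j, s_i) in [0, 1]

def f1_qos(solution, segments):
    """Service-time objective (Equation 1), to be maximized."""
    total = 0.0
    for rsu in solution:
        service_time = sum(
            segments[i].vehicles * cov * segments[i].length / segments[i].speed
            for i, cov in rsu.coverage.items()
        )
        total += max(rsu.max_users, service_time)  # outer max as written in Eq. 1
    return total

def f2_cost(solution):
    """Deployment-cost objective (Equation 2), to be minimized."""
    return sum(rsu.cost for rsu in solution)
```

A single RSU covering half of a 100 m segment carrying 10 vehicles at 10 m/s contributes a service time of 10 × 0.5 × 100 / 10 = 50 to $f_1$, while $f_2$ simply accumulates the per-type costs.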
72
+
73
+ # 3. RELATED WORK
74
+
75
+ Including RSUs in the network loop improves the global VANET performance in terms of connectivity, transmission delays, and communication ranges.[20] Deploying a low cost and high coverage RSU infrastructure is often a capital issue for the success of VANETs in real cities. This section reviews computational intelligence methods applied to the RSU-DP and related problems.
76
+
77
+ In the related literature, different studies address the RSU-DP. Most of these works analyze the RSU-DP as a version of the Radio Network Design problem.[25] However, as most nodes in VANETs are vehicles, the design of the roadside platform prioritizes locations taking into account road traffic information, such as vehicle speed, traffic density, etc.
78
+
79
+ Both exact methods and heuristics have been applied to solve the RSU-DP and related problems.
80
+
81
+ Trullols et al.3 defined the Maximum Coverage with Time Threshold Problem (MCTTP) to maximize the number of vehicles that get in contact with a given number of RSUs for a given amount of time over a certain area. The authors proposed three greedy algorithms with different knowledge of the road topology and identity of the vehicles. These approaches were applied over a scenario with real road and mobility data from Zurich, Switzerland. The results showed that knowledge of vehicular mobility is the main factor to achieve an almost-optimal roadside deployment. Given such knowledge, the heuristics successfully planned a deployment capable of informing more than $95\%$ of vehicles.
82
+
83
+ Aslam et al.² applied the Balloon Expansion Heuristic (BEH) and Binary Integer Programming (BIP) to minimize the reporting time, installing a fixed number of RSUs in Miami, USA. The methods used information about speed, traffic density, and likelihood of incidents. BEH performed better than BIP in the reported experiments.
84
+
85
+ A Voronoi-based algorithm was applied by Patil and Gokhale $^{30}$ to optimize packet loss, communications delays, and network coverage, while minimizing the number of RSUs required in a deployed vehicular network in an area of Nashville, USA. The algorithm used information about the speed of vehicles and the traffic density to evaluate the solutions.
86
+
87
+ Ben Brahim et al. solved a variant of the RSU-DP in Doha, Qatar, considering the traffic network as a graph with weighted links. The weights of the links are computed according to road traffic and mobility-based parameters, such as road traffic density and average speed. Afterwards, all potential positions for the RSUs are computed by applying two different approaches: a dynamic algorithm based on a 0-1 Knapsack problem solver (KP_DynAlg) and the PageRank algorithm. The KP_DynAlg improved over the results computed by the PageRank method.
88
+
89
+ Some studies have proposed applying evolutionary algorithms (EAs) for solving variants of the RSU-DP, in order to obtain accurate solutions while consuming reasonable computational resources. An early approach studied applying a genetic algorithm (GA) that uses a VANET simulator to evaluate the QoS of the computed solutions in a given area of $16 \times 16 \mathrm{~km}^2$ in the city of Brunswick, Germany, with about $500 \mathrm{~km}$ of roads and 10000 vehicles.[21] The authors introduce a domain aggregation scheme to minimize the required overall bandwidth for the VANET and propose a GA to locate static roadside units (called supporting units) to deal with a highly partitioned VANET in an early deployment stage. The proposed GA was useful to improve the travel time savings achieved by a given vector of active SU locations.
90
+
91
+ Cavalcante et al. compared a GA against the greedy approach proposed by Trullols et al. to solve the MCTTP, taking into account real data from four different regions: Zurich downtown, Winterthur, Baden, and Baar. The proposed GA uses a greedy method to initialize the population. The results showed that the GA solutions obtained better vehicle coverage: up to $11\%$ better than those computed by the greedy approach.
92
+
93
+ Another GA proposal is by Cheng et al., who used geometry-based coverage information about the roads (without vehicle mobility data) of Yukon Territory, Canada, for the solution evaluation. The GA computed the fitness in terms of the ratio between the covered road area and the whole road area, computed using a square grid of $1m \times 1m$ . This approach improved over the results computed by the $\alpha$ -coverage algorithm, which proposes placing the RSUs in the center of the junctions.
94
+
95
+ A summary of the main related works about heuristics and computational intelligence methods (following evolutionary and non-evolutionary approaches) to solve the RSU-DP and related problems is presented in Table I.
96
+
97
+ Table I. Summary: related work about heuristics and computational intelligence methods applied to the RSU-DP.
98
+
99
+ <table><tr><td>author</td><td>year</td><td>problem</td><td>method</td><td>scenario</td></tr><tr><td colspan="5">non-evolutionary approaches</td></tr><tr><td>Trullols et al.33</td><td>2010</td><td>vehicle maximization</td><td>greedy algorithms</td><td>Zurich, Switzerland</td></tr><tr><td>Aslam et al.2</td><td>2012</td><td>RSU installation</td><td>BEH, BIP</td><td>Miami, USA</td></tr><tr><td>Patil and Gokhale30</td><td>2013</td><td>RSU optimization</td><td>Voronoi-based algorithm</td><td>Nashville, USA</td></tr><tr><td>Ben Brahim et al.4</td><td>2014</td><td>mobility/traffic based</td><td>Knapsack, PageRank</td><td>Doha, Qatar</td></tr><tr><td colspan="5">evolutionary approaches</td></tr><tr><td>Lochert et al.21</td><td>2008</td><td>supporting units location</td><td>GA</td><td>Brunswick, Germany</td></tr><tr><td>Cavalcante et al.6</td><td>2012</td><td>MCTTP-RSU-DP</td><td>GA</td><td>Switzerland cities</td></tr><tr><td>Cheng et al.9</td><td>2013</td><td>geometry-based coverage</td><td>GA</td><td>Yukon territory</td></tr><tr><td>Massobrio et al.23,24</td><td>2015</td><td>multiobjective RSU-DP</td><td>MOEA</td><td>Málaga, Spain</td></tr></table>
100
+
101
+ Our previous works[23,24] were the first studies that applied an explicit multiobjective approach to solve the RSU-DP. Our proposal was oriented to maximize the coverage, in terms of the time that vehicles are connected to the RSUs, and minimize the deployment cost. We considered real information concerning both traffic (speed, traffic density, and road map) and hardware (costs and capabilities) for the case of urban locations in Malaga, Spain. The proposed MOEA obtained significantly better results than ad hoc greedy approaches, but the computed solutions did not cover the map properly, focusing instead on streets with a high number of vehicles.
102
+
103
+ In this article, we extend our previous work $^{24}$ by considering a more realistic QoS model that includes a set of different VANET applications, which are taken into account to compute the maximum number of vehicles that can be simultaneously attended by a given RSU type. A more comprehensive experimental analysis is performed, including updated traffic data and modeling real VANET applications. Finally, the results are compared against those computed using specific heuristics, adapted from the work of Ben Brahim et al., $^{4}$ in terms of cost, QoS, and multiobjective optimization metrics.
104
+
105
+ # 4. METAHEURISTICS AND EVOLUTIONARY COMPUTATION
106
+
107
+ This section introduces the methods applied to solve the problem: metaheuristics, evolutionary algorithms and multiobjective evolutionary algorithms.
108
+
109
+ # 4.1. Metaheuristics
110
+
111
+ Metaheuristics are strategies to define algorithmic frameworks that allow designing efficient techniques to find approximate solutions for search, optimization, and learning problems.[15] They define high-level, heuristic-based, soft computing methods that can be applied to solve different optimization problems, by instantiating a generic resolution procedure.[28]
112
+
113
+ In practice, many optimization problems arising in today's real-world applications in science and technology are NP-hard and intrinsically complex. Substantial computing effort is required to solve them, for a number of reasons: they have very high-dimensional search spaces, they include hard constraints that make the search space very sparse, they are multimodal or multiobjective problems with hard-to-evaluate objective functions, or they manage very large volumes of data. This is the case for the problem solved in this article: the deployment of roadside infrastructure for VANETs, which is a variant of the well-known Radio Network Design problem.[25]
114
+
115
+ Metaheuristics provide efficient and accurate methods for solving realistic problem instances, which often cannot be solved using exact optimization methods because these are extremely time-consuming. In this article, we apply a multiobjective evolutionary metaheuristic to solve the RSU-DP. The main features of EAs and their multiobjective variants are described next.
116
+
117
+ # 4.2. Evolutionary algorithms
118
+
119
+ EAs are non-deterministic methods that emulate the evolutionary process of species in nature to solve optimization, search, and other related problems. $^{3,16}$ In the last thirty years, EAs have been successfully applied for solving problems underlying many real and complex applications.
120
+
121
+ Algorithm 1 shows the generic schema of an EA. It is an iterative technique (each iteration is called a generation) that works by applying stochastic operators on a set of individuals (the population $P$ ) in order to improve their fitness, a measure that evaluates how good a solution is for the problem. Every individual in the population encodes a candidate solution for the problem. The initial population is generated by a random method or by using a specific heuristic for the problem (line 2 in Algorithm 1). An evaluation function associates a fitness value to every individual (line 4). The search is guided by a probabilistic selection-of-the-best technique (for both parents and offspring) toward tentative solutions of higher quality (line 5). Iteratively, solutions are modified by the probabilistic application of variation operators (line 6), including the recombination of parts from two individuals or random changes (mutations) in their contents, which are applied for building new solutions during the search.
122
+
123
+ The stopping criterion usually involves a fixed number of generations or execution time, a quality threshold on the best fitness value, or the detection of a stagnation situation. Specific policies are used to select the groups of individuals to recombine (the selection method) and to determine which new individuals are inserted in the population in each new generation (the replacement criterion). The EA returns the best solution ever found in the iterative process, taking into account the fitness function.
124
+
125
+ Algorithm 1 Generic schema for an EA.
126
1: $t\gets 0$ {generation counter}
127
+ 2: initialize $(P(0))$
128
+ 3: while not stopcriterion do
129
+ 4: evaluate $(P(t))$
130
+ 5: parents $\leftarrow$ selection $(P(t))$
131
+ 6: offspring $\leftarrow$ variation operators(parents)
132
7: $P(t + 1)\gets$ replacement(offspring, $P(t)$)
133
+ 8: $t\gets t + 1$
134
+ 9: end while
135
+ 10: return best solution ever found
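The loop of Algorithm 1 can be sketched in Python for a toy problem. This is a generic illustration, not the paper's RSU-DP implementation: the operators passed in (binary tournament selection, mutation-only variation, generational replacement) are simple placeholders.

```python
import random

def evolutionary_algorithm(fitness, init, mutate, pop_size=20, generations=100):
    """Generic EA loop following Algorithm 1: evaluate, select, vary, replace."""
    population = [init() for _ in range(pop_size)]
    best = max(population, key=fitness)          # best solution ever found
    for _ in range(generations):
        # selection: binary tournament on the current population
        parents = [max(random.sample(population, 2), key=fitness)
                   for _ in range(pop_size)]
        # variation: mutation only, for brevity (a GA would also recombine)
        offspring = [mutate(p) for p in parents]
        # replacement: generational
        population = offspring
        best = max(population + [best], key=fitness)
    return best
```

For example, maximizing `lambda x: -(x - 3.0) ** 2` with uniform initialization and Gaussian mutation converges close to the optimum at 3.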
136
+
137
+ One of the most popular variants of EA in the literature is the genetic algorithm (GA), which has been extensively used to solve optimization problems mainly due to its simplicity and versatility.
138
+
139
+ The classic GA formulation was presented by Goldberg.16 Based on the generic schema of an EA, a GA defines selection, recombination and mutation operators, applying them to the population of potential solutions in each generation. In a classic application of a GA, the recombination operator is mainly used to guide the search (by exploiting the characteristics of suitable individuals), while the mutation is used as the operator aimed at providing diversity for exploring different zones of the search space.
140
+
141
+ Parallel models for metaheuristics and EAs have been proposed to reduce the computing time required for the search when dealing with complex objective functions and hard search spaces. In this work, we apply a master-slave model for parallelization, in order to reduce the execution time of computing the QoS objective when solving the RSU-DP problem (see details in Section 5).
142
+
143
+ # 4.3. Multiobjective evolutionary algorithms and NSGA-II
144
+
145
+ Multiobjective evolutionary algorithms (MOEAs) $^{10,13}$ are specific evolutionary optimization methods conceived to solve problems with many conflicting objective functions. MOEAs have obtained accurate results when used to solve difficult real-life optimization problems in many research areas.
146
+
147
+ Unlike many traditional methods for multiobjective optimization, MOEAs find a set with several solutions in a single execution, since they work with a population of tentative solutions. MOEAs are designed to fulfill two goals at the same time: $i)$ approximate the Pareto front, and $ii)$ maintain diversity, instead of converging to a section of the Pareto front. A Pareto-based evolutionary search leads to the first goal, while the second is accomplished using specific techniques from multi-modal function optimization (e.g., sharing, crowding, etc.).
148
+
149
+ In this work, we apply NSGA-II (Non-dominated Sorting Genetic Algorithm, version II), $^{12}$ a popular state-of-the-art MOEA that has been successfully applied in many areas. A schema of NSGA-II is presented in Algorithm 2 (where $N$ is the population size). The fitness calculation is based on Pareto dominance, building fronts of solutions. The evolutionary search on NSGA-II improves over the previous version (NSGA), using: $i$ ) a non-dominated, elitist sorting that reduces the complexity of the dominance check; $ii$ ) a crowding technique for diversity preservation; and $iii$ ) a fitness assignment that considers crowding distance values.
150
+
151
+ The NSGA-II algorithm proposed in this work has been engineered to compute accurate solutions for the RSU-DP. The main implementation details are presented in the next section.
152
+
153
+ Algorithm 2 Schema of the NSGA-II algorithm.
154
1: $t\gets 0$ {generation counter}
155
+ 2: offspring $\leftarrow \emptyset$
156
+ 3: initialize $(P(0))$
157
+ 4: while not stopcriterion do
158
+ 5: evaluate $(P(t))$
159
+ 6: R $\leftarrow P(t)\cup$ offspring
160
7: fronts $\leftarrow$ non-dominated sorting(R)
161
+ 8: $P(t + 1)\gets \emptyset$
162
+ 9: $i\gets 1$
163
+ 10: while $|P(t + 1)| + |fronts(i)|\leq N$ do
164
+ 11: crowding distance (fronts(i))
165
+ 12: $P(t + 1)\gets P(t + 1)\cup$ fronts(i)
166
+ 13: $i\gets i + 1$
167
+ 14: end while
168
+ 15: sorting by distance (fronts(i))
169
16: $P(t + 1)\gets P(t + 1)\cup$ fronts(i)[1:( $N - |P(t + 1)|$ )]
170
+ 17: selected $\leftarrow$ selection $(P(t + 1))$
171
+ 18: offspring $\leftarrow$ variation operators(selected)
172
+ 19: $t\gets t + 1$
173
+ 20: end while
174
+ 21: return computed Pareto front
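The non-dominated sorting step at the core of Algorithm 2 (line 7) can be sketched as follows for objective vectors where every objective is minimized (a maximized objective such as QoS would be negated first). This is a simple illustrative version, not NSGA-II's fast bookkeeping scheme, although the resulting fronts are the same.

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (all objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_sort(points):
    """Split objective vectors into successive non-dominated fronts.

    Naive O(N^2)-per-front version: front 0 holds the points dominated by
    nobody, front 1 the points dominated only by front 0, and so on.
    """
    remaining = list(points)
    fronts = []
    while remaining:
        front = [p for p in remaining
                 if not any(dominates(q, p) for q in remaining if q is not p)]
        fronts.append(front)
        remaining = [p for p in remaining if p not in front]
    return fronts
```

For instance, with cost/QoS-style pairs `[(1, 5), (2, 2), (5, 1), (3, 3), (4, 4)]`, the first front is `[(1, 5), (2, 2), (5, 1)]` and the dominated points fall into later fronts.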
175
+
176
+ # 5. THE PROPOSED NSGA-II ALGORITHM FOR THE RSU-DP
177
+
178
+ This section presents the details of the proposed NSGA-II evolutionary algorithm for the RSU-DP.
179
+
180
+ # 5.1. Solution encoding
181
+
182
+ In the proposed NSGA-II, solutions are represented as vectors of real numbers, having length $n = \# S$ (the number of elements in the set of road segments $S$ ). Each position on the vector contains the information about the RSU to install (if any) on the corresponding segment: i) the type of the RSU is given by the integer part of the real number (0 stands for the absence of RSU in the considered segment, and integers $1 \ldots k$ represent types $t_1 \ldots t_k$ , respectively); and ii) the candidate location to install the RSU within the segment is given by the fractional part of the real number, mapping the interval [0, 1) to points in the segment $[p_j, p_k)$ .
183
+
184
+ Figure 2 shows an example encoding for a scenario consisting of four segments, where three RSUs are placed. For instance, the value 1.50 in position 2 of the vector indicates that the solution proposes to install a RSU of type 1 (integer part of 1.50) at the middle (fractional part of $1.50 = 0.50$ ) of segment $s_2 = (p_2, p_3)$ . The same holds for value 2.16 in the first position of the vector, corresponding to segment $s_1$ , where a RSU of type 2 is installed at $0.16 \times \text{len}(s_1)$ within segment $s_1 = (p_1, p_2)$ . Finally, the value 0.33 in the fourth position of the vector indicates that the solution proposes not installing a RSU in segment $s_4 = (p_1, p_4)$ (the fractional part of the value encoded is irrelevant in this case). In fact, considering the coverage radii of the RSUs placed in the map in Figure 2, this could be a wise decision, because segment $s_4$ is fully covered by RSUs $t_2$ and $t_3$ (of course, this decision reduces the installation cost, but the QoS for users might be reduced, depending on the number of vehicles and the applications used in the considered scenario).
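The decoding rule described above is mechanical, so it can be sketched in a few lines. The segment lengths used in the example below are hypothetical values chosen for illustration; the chromosome mirrors the values discussed for Figure 2.

```python
def decode(chromosome, segment_lengths):
    """Decode a real-valued RSU-DP chromosome into (segment, type, offset) triples.

    chromosome[i] encodes segment i: an integer part of 0 means no RSU, while an
    integer part t >= 1 means an RSU of type t placed at a distance of
    (fractional part) * len(s_i) from the start of the segment.
    """
    placements = []
    for i, gene in enumerate(chromosome):
        rsu_type = int(gene)        # integer part: 0 = empty, 1..k = RSU type
        if rsu_type == 0:
            continue                # fractional part is irrelevant here
        offset = (gene - rsu_type) * segment_lengths[i]
        placements.append((i, rsu_type, offset))
    return placements
```

Decoding `[2.16, 1.50, 3.00, 0.33]` with assumed lengths `[100, 200, 50, 80]` yields a type-2 RSU 16 m into segment 1, a type-1 RSU at the middle of segment 2, a type-3 RSU at the start of segment 3, and no RSU in segment 4.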
185
+
186
+ # 5.2. Evolutionary operators
187
+
188
+ The proposed MOEA applies different evolutionary operators and a parallel model to efficiently address the RSU-DP problem. This section describes such operators and parallel model.
189
+
190
+ ![](images/6caf37443062630ce45ac9ff06b71ff5d57ab312191f31fbcc6a446d6519ad5d.jpg)
191
+ Figure 2. Encoding for RSU-DP solutions.
192
+
193
+ 5.2.1. Initialization Instead of starting from a random set of solutions, we decided to seed the initial population with the solutions computed by the Randomized Knapsack heuristic, explained in Section 6.3. This decision allows focusing the evolutionary search on a subspace of good quality solutions. The Randomized Knapsack is a constructive method, providing a range of partial solutions that are useful to generate the initial population of the MOEA. Particularly, we employ solutions with a high number of RSUs, since we are not interested in exploring the areas of the Pareto front with negligible values for QoS (this region has configurations that are not useful in practice, in a real-world scenario).
194
+ 5.2.2. Selection The selection operator used is the tournament selection, as originally proposed in the NSGA-II algorithm. $^{13}$ The tournament size is two individuals and the fittest individual survives.
195
+ 5.2.3. Exploitation: recombination The recombination operator used is the well-known two-point crossover (2PX), where offspring are generated by swapping genes in the parents' chromosomes that fall between two randomly selected cutting points.
196
+ 5.2.4. Exploration: mutation We designed an ad-hoc mutation operator in order to provide enough diversity to the search, preventing NSGA-II from getting stuck in a specific region of the Pareto front. The mutation operator probabilistically applies three variations on solutions. These variations work as follows:
197
+
198
+ 1. With probability $\pi_A$ , the mutation operator changes the integer part of the selected gene value to 0, thus removing the RSU (if any) from the corresponding segment (see Figure 3a).
199
+ 2. With probability $\pi_B$ , the mutation operator changes the integer part of the selected gene value for a different one randomly picked in $[1, k]$ , thus changing the type of the RSU (or adding one if there was none) to a random type picked uniformly in $T$ (see Figure 3b).
200
+ 3. With probability $\pi_C = 1 - \pi_A - \pi_B$ , the variation applied is a Gaussian mutation with standard deviation $\sigma$ over the selected gene value, thus changing the position of the RSU within the segment (see Figure 3c).
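The three variations can be sketched as follows; the probabilities `pi_a`, `pi_b` and the deviation `sigma` are free parameters of the operator, and the default values below are illustrative only, not taken from the paper:

```python
import random

def mutate_gene(g, k, pi_a=0.2, pi_b=0.4, sigma=0.1):
    """Apply one of the three variations to gene g, for k available RSU types.
    The default probabilities and deviation are illustrative values."""
    r = random.random()
    frac = g - int(g)                        # current relative location in [0, 1)
    if r < pi_a:
        return frac                          # variation A: remove the RSU (type -> 0)
    if r < pi_a + pi_b:
        return random.randint(1, k) + frac   # variation B: random new RSU type in [1, k]
    # variation C: Gaussian move of the location, clamped to stay in [0, 1)
    new_frac = min(max(frac + random.gauss(0.0, sigma), 0.0), 1.0 - 1e-9)
    return int(g) + new_frac
```

Note that every variation keeps the gene well formed: the integer part stays in [0, k] and the fractional part stays in [0, 1).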
201
+
202
+ 5.2.5. Parallel model We apply a master-slave parallel model for metaheuristics<sup>1</sup> to reduce the execution time required to evaluate the objective functions for each individual in the population. The decomposition approach allows NSGA-II to efficiently compute the objective functions for the set of candidate solutions in the population, as described in the next subsection.
203
+
204
+ ![](images/f74c5635065232fd1bd34c85b691c0ade70d35c91646f023c3aab131194cf334.jpg)
205
+ (a) Mutation applied with probability $\pi_A$ (remove RSU).
206
+
207
+ ![](images/d9d5a26f49957b609ee480bd2eb29a0c25153a65ef04f531de528556f6307488.jpg)
208
+ (b) Mutation applied with probability $\pi_B$ (modify RSU type).
209
+
210
+ ![](images/3c4bda073df462ba4036fe8521bf75a332866afae438fc3e7d49d249c519d01e.jpg)
211
+ (c) Mutation applied with probability $\pi_C$ (modify RSU location within segment).
212
+ Figure 3. Variations applied by the mutation operator.
213
+
214
+ # 5.3. Computing the objective functions
215
+
216
+ The two objective functions to be optimized in the RSU-DP are the installation cost and the quality of service. The procedures used to compute them are presented next.
217
+
218
+ 5.3.1. Installation cost The total installation cost is simply computed by adding the cost of each RSU placed in the solution, taking into account the corresponding RSU type.
219
+
220
+ 5.3.2. Quality of service For computing the QoS, we consider the distances and values shown in the diagram in Figure 4 (intersection of streets A and B). The RSU placed in the point “ $\times$ ” covers the subsegments $c_{1}$ (in $s_{1}$ ) and $c_{2}$ (in $s_{2}$ ), both in street A, and $c_{3}$ (in $s_{3}$ ) and $c_{4}$ (in $s_{4}$ ), both in street B. The number of effective vehicles attended is computed by $\sum_{i=1}^{4} NV(s_{i}) \times \frac{c_{i}}{sp(s_{i})}$ . The computation requires finding the intersections between the road segments and the circle defining the coverage of the RSU. Given that the distances involved are relatively small, we use straight lines in the latitude-longitude space as an approximation, with negligible error. This approximation makes the computation faster, improving the overall performance of the algorithm.
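The sum above can be computed directly; in this sketch each covered sub-segment is a tuple holding $NV(s_i)$, the covered length $c_i$, and the average speed $sp(s_i)$, an interface chosen purely for illustration:

```python
def attended_vehicles(subsegments):
    """Effective vehicles attended by one RSU:
    sum_i NV(s_i) * c_i / sp(s_i) over the covered sub-segments.
    subsegments: iterable of (nv, c, sp) tuples."""
    return sum(nv * c / sp for nv, c, sp in subsegments)
```

Each term is the number of vehicles on the segment weighted by the time they spend inside the covered portion, so slower and busier segments contribute more to the QoS.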
221
+
222
+ ![](images/ebc3727f01d8b08d7d58890f15957f19dee372f1efbb2715a8b78d77ed45fc7f.jpg)
223
+ Figure 4. Calculation of the vehicles attended by a RSU.
224
+
225
+ In order to avoid situations where the fitness function counts the same area several times (i.e., some vehicles in that area would be counted multiple times), it is necessary to keep track of the number of vehicles that each installed RSU has already served. That number of vehicles depends on the type of the data streams that the RSU is providing (data, voice, or video), since the minimum QoS required by each type of stream limits the maximum number of vehicles that can be simultaneously attended by a given RSU type.
226
+
227
+ # 6. EXPERIMENTAL ANALYSIS
228
+
229
+ This section presents the details of the experimental analysis performed to evaluate the proposed NSGA-II to solve the RSU-DP.
230
+
231
+ # 6.1. Development and execution platform
232
+
233
+ The proposed MOEA was implemented using the ECJ library, a Java-based evolutionary computation research system developed at the ECLab Evolutionary Computation Laboratory, George Mason University.[35] ECJ includes easily modifiable classes for solving multiobjective optimization problems using the NSGA-II algorithm.
234
+
235
+ The experimental analysis was performed on an AMD Opteron 6172 2.10 GHz server with 24 cores and 24 GB RAM at Cluster FING, the High Performance Computing facility of Universidad de la República, Uruguay.[27]
236
+
237
+ Since computing the fitness of an individual is highly CPU-intensive, the evaluation of the population is performed in parallel using 24 Java threads. Thus, each thread evaluates 3 individuals of the population.
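The same master-slave evaluation could be sketched in Python with a thread pool (the actual implementation uses Java threads within ECJ; for CPU-bound Python code a process pool would be needed for true parallelism):

```python
from concurrent.futures import ThreadPoolExecutor

def evaluate_population(population, fitness, workers=24):
    """Master-slave model: the master distributes individuals among the
    workers, which evaluate the fitness function concurrently; results
    are returned in the original population order."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(fitness, population))
```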
238
+
239
+ For each problem instance (map, traffic, and application), we performed 30 independent executions of the proposed MOEA. For the heuristics used as a baseline for the results comparison,
240
+
241
+ we performed one execution for the PageRank algorithm (deterministic) and 30 independent executions for the Knapsack algorithm (randomized). Including the parameter setting and validation experiments, we performed a total number of 1080 executions of NSGA-II and 279 executions of the heuristic algorithms.
242
+
243
+ # 6.2. Problem instances
244
+
245
+ In order to evaluate the proposed MOEA, we defined a real world problem scenario that is relevant for our community. We included real information for a number of elements: a real map of the city of Málaga, real road traffic data from the Málaga Council, real RSU network interfaces/antennas, and real applications to be executed over the VANET. The main details about these elements are presented next.
246
+
247
+ 6.2.1. Map Figure 5 shows the map of Málaga considered in the experimental analysis. The map covers an area of $42.557\mathrm{km}^2$ in the city, including a total number of 106 points, which define 128 segments with lengths between 55.5 and $1248.2\mathrm{m}$ , and an average length of $483.9\mathrm{m}$ . All major traffic ways, including avenues and important streets in Málaga, are sampled. Some important avenues with large traffic volume define multiple segments in the map (e.g., Avenida de Andalucía, Avenida de Velázquez, Avenida de Valle Inclán, and Paseo Marítimo Pablo Ruiz Picasso, all of them with more than six segments defined in the map).
248
+
249
+ ![](images/78f1abf200661b86f3b390fceade13b06639f23f4e001aec5d8f8c88933480dc.jpg)
250
+ Figure 5. Segments defined over the real map of Málaga.
251
+
252
+ 6.2.2. Road traffic data The traffic data used in our experiments are based on the information collected by the Málaga City Council using a set of sensors located along the roads. These sensors returned the total number of vehicles that circulated during the first six months of 2015. The information is publicly available at the Málaga Council Mobility website.[26]
253
+
254
+ We used the traffic information to define the normal traffic pattern in our RSU-DP scenario. In addition, we applied two probabilistic multiplicative factors over the normal pattern to define a low pattern, randomly reducing the traffic by up to $20\%$ , and a high pattern, randomly increasing the traffic by up to $20\%$ . These patterns represent situations with low and high road traffic density, respectively, according to the real data from the Málaga City Council (in fact, studying the traffic statistics for peak hours provided by the Council, we verified that the number of vehicles on all main roads is about $20\%$ higher than in a normal traffic scenario).
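A sketch of how the two derived patterns can be generated (function name and interface are illustrative):

```python
import random

def scale_traffic(nv_normal, pattern):
    """Derive the low/high traffic patterns from the normal one by applying
    a random multiplicative factor of up to 20%."""
    f = random.uniform(0.0, 0.20)
    if pattern == "low":
        return nv_normal * (1.0 - f)     # low pattern: reduce by up to 20%
    if pattern == "high":
        return nv_normal * (1.0 + f)     # high pattern: increase by up to 20%
    return nv_normal                     # normal pattern: unchanged
```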
255
+
256
+ 6.2.3. RSUs The initial effort to standardize the DSRC radio technology took place in the ASTM 2313 working group in the U.S. This effort migrated to the IEEE 802.11 standard group that
257
+
258
+ proposed IEEE 802.11p, which stands for wireless access in vehicular environments (WAVE).<sup>18</sup> Following these specifications, the RSUs considered in our study are equipped with a real-world IEEE 802.11p commercial network interface. This hardware defines a realistic scenario and allows computing useful results from the point of view of both the research and technological communities.
259
+
260
+ Each network interface is connected to an external antenna to improve the communication capabilities, according to a given antenna gain. The gain, measured in decibels relative to an isotropic radiator (dBi), indicates the power of the radio signal radiating from the antenna. Generally, the higher the gain of an antenna, the longer the radio range that can be obtained, and the better the QoS provided by the infrastructure.
261
+
262
+ The RSUs analyzed in the problem instances solved in the experimental analysis differ in the antenna connected. Three different commercial omni-directional antennas, which can be found in online shops (e.g., Cetacea $^7$ ), are considered. The antennas differ in the gain offered and the cost. A summary of the main features of such antennas is presented in Table II. Our study does not exclude the possibility of incorporating new communication devices (network interfaces and antennas) for the city infrastructure. The proposed algorithms do not depend on the type or features of the RSUs considered.
263
+
264
+ Table II. General information about the antennas used to define the different RSU types.
265
+
266
+ <table><tr><td>type</td><td>commercial model</td><td>gain</td><td>cost</td></tr><tr><td>t1</td><td>Echo Series Omni Site 6dBi</td><td>6 dBi</td><td>$121.70</td></tr><tr><td>t2</td><td>Echo Series Omni Site 9dBi</td><td>9 dBi</td><td>$139.20</td></tr><tr><td>t3</td><td>Echo Series Omni Site 12dBi</td><td>12 dBi</td><td>$227.50</td></tr></table>
267
+
268
+ One of the main features that defines a given RSU is its effective radio range (ERR). ERR indicates the farthest distance at which the RSU may exchange data packets with the vehicles, and it is a relevant metric to evaluate the performance of the communications provided by the VANET infrastructure. In fact, ERR is one of the components we evaluate in the fitness function to compute the QoS provided by each RSU deployed in the studied scenario.
269
+
270
+ To determine the ERR metric for each studied RSU, we performed realistic VANET simulations evaluating the PDR at different distances (from 0 to $650\mathrm{m}$ ). The experiments were performed by using the ns-2 simulator[29] to evaluate the communications. The simulated VANET scenario is defined by a given RSU and 10 moving cars at $40\mathrm{km/h}$ ( $11.11\mathrm{m/s}$ ) that utilize IEEE 802.11p network devices in an urban area. During the simulations, the RSU sent continuous data streams at 256 kbps to the vehicles. The probabilistic Nakagami radio propagation model[32] was used to represent the channel fading characteristics of urban scenarios. Each scenario was simulated 15 times to obtain robust average PDR values. In order to ensure a realistic QoS that guarantees reliable communications, we defined the ERR of each RSU as the distance at which the average PDR is equal to or higher than $66.67\%$ (i.e., less than one packet lost for every three packets transmitted).
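The rule used to extract the ERR from the simulated PDR curve can be sketched as follows (the list-of-pairs interface is an assumption):

```python
def effective_radio_range(pdr_by_distance, threshold=2.0 / 3.0):
    """ERR: the farthest simulated distance at which the average PDR is
    still equal to or higher than the threshold (66.67%, i.e., at most
    one packet lost out of three). pdr_by_distance: (distance_m, avg_pdr) pairs."""
    reliable = [d for d, pdr in pdr_by_distance if pdr >= threshold]
    return max(reliable) if reliable else 0.0
```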
271
+
272
+ Figure 6 reports the experimental results of the simulations to obtain the ERR for each RSU type. According to the PDR threshold defined, the ERR for the RSU type $t_1$ is $243.12\mathrm{m}$ , for RSU type $t_2$ is $338.70\mathrm{m}$ , and for RSU type $t_3$ is $503.93\mathrm{m}$ (see the values at the bottom of Figure 6).
273
+
274
+ 6.2.4. Applications As we introduced in Section 1, VANET communications comprise mainly three types of applications: road safety, traffic efficiency, and infotainment. The first two types of applications rely on the exchange of small data packets with very short communication delays. The infotainment applications comprise a wide variety of applications that are principally based on receiving audio/voice and video streams.[5,17]
275
+
276
+ One of the main contributions of the study reported in this article is that we defined problem instances taking into account the requirements of the different types of VANET applications. This analysis allows the designers to prioritize the applications' constraints in the final RSU deployment. Thus, for each RSU type, the computed QoS metric takes into account the maximum number of vehicles that can be served while fulfilling the requirements for each VANET application
277
+
278
+ ![](images/c9eba29e64d6f747642420e60de811871451815e236132a96ea92dd6209d2b38.jpg)
279
+ Figure 6. ERR experimental results.
280
+
281
+ <table><tr><td>type</td><td>t1</td><td>t2</td><td>t3</td></tr><tr><td>ERR</td><td>243.12 m</td><td>338.70 m</td><td>503.93 m</td></tr></table>
282
+
283
+ type $(MU)$ . The QoS constraints for each VANET application type evaluated in this study are summarized in Table III, based on the study by Chantaksinopas et al.
284
+
285
+ Table III. QoS requirements of VANET applications taken into account in this study $(Q(A_{j}))$
286
+
287
+ <table><tr><td>application type</td><td>packet size (bytes)</td><td>generated data flow</td><td>QoS requirements</td></tr><tr><td>data (A1)</td><td>238 bytes</td><td>19 kbps (10 packets/s)</td><td>E2ED&lt;100 ms &amp; PDR=100%</td></tr><tr><td>voice/audio (A2)</td><td>238 bytes</td><td>25 kbps</td><td>E2ED&lt;400 ms &amp; PDR&gt;16%</td></tr><tr><td>video (A3)</td><td>791 bytes</td><td>384 kbps</td><td>E2ED&lt;400 ms &amp; PDR&gt;8.33%</td></tr></table>
288
+
289
+ In order to evaluate the $MU$ function described in Section 2, we defined an iterative procedure based on performing realistic urban VANET simulations for each RSU $(R_{i})$ and application $(A_{j})$ .
290
+
291
+ The procedure consists in simulating a given scenario that includes a RSU of a given type $R_{i}$ and a varying number of vehicles spread through a circular area with radius $ERR(R_{i})$ . In these simulations, the VANET nodes generate data flows (traffic) according to the $A_{j}$ application (see the third column of Table III). Therefore, the VANET scenario is defined according to three different parameters $(n,R_i,A_j)$ .
292
+
293
+ The evaluation procedure starts by simulating the scenario with one vehicle $(1, R_i, A_j)$ and computing the two relevant QoS metrics defined for the problem (PDR and E2ED). After that, a new vehicle is added and the new configuration is simulated. The iterative method stops when the computed QoS metrics do not fulfill the requirements defined by the $Q(A_j)$ function. Finally, $MU(R_i, A_j)$ is the number of vehicles in the last configuration that fulfilled the requirements (i.e., one vehicle fewer than in the last simulated configuration). The computed results for each RSU and application type are summarized in Table IV.
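The iterative procedure can be sketched as follows; `simulate` and `qos_ok` are stand-ins for the ns-2 simulations and the per-application requirement check:

```python
def max_users(simulate, qos_ok, n_max=200):
    """Add vehicles one at a time until the simulated QoS metrics violate
    the application requirements; return the last vehicle count that still
    fulfilled them. simulate(n) -> (pdr, e2ed); qos_ok(pdr, e2ed) -> bool."""
    mu = 0
    for n in range(1, n_max + 1):
        if not qos_ok(*simulate(n)):
            break                        # requirements violated: stop
        mu = n                           # n vehicles still served adequately
    return mu
```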
294
+
295
+ Table IV. Number of vehicles that can be served for each RSU and application type.
296
+
297
+ <table><tr><td rowspan="2">RSU type</td><td colspan="3">application type</td></tr><tr><td>safety</td><td>voice/audio</td><td>video</td></tr><tr><td>t1</td><td>45</td><td>34</td><td>31</td></tr><tr><td>t2</td><td>45</td><td>44</td><td>34</td></tr><tr><td>t3</td><td>46</td><td>52</td><td>37</td></tr></table>
298
+
299
+ According to the results presented in Table IV, for example, all RSUs equipped with an antenna of type $t_2$ can provide an adequate QoS to 34 vehicles executing a video stream application.
300
+
301
+ # 6.3. Heuristic methods used as a baseline for the comparison
302
+
303
+ In order to evaluate the quality of the solutions computed by the proposed NSGA-II, we implemented versions of two well-known heuristic methods originally proposed by Ben Brahim et al.$^4$ for a different variant of the RSU location problem: the PageRank heuristic and the Knapsack algorithm.
304
+
305
+ 6.3.1. Constructive PageRank heuristic PageRank is a voting algorithm, initially developed to compute the importance of web pages on the Internet by taking into account the number of inbound and outbound links from and to other web pages.[19]
306
+
307
+ The PageRank algorithm has been previously applied to solve the RSU deployment problem by Ben Brahim et al. In that study, the authors applied the PageRank version for weighted graphs to rank the potential locations for RSUs (road intersections) according to mobility-related information (e.g., traffic density, average speed of vehicles). The road traffic network is modeled as a graph with weighted links, which represent roads (segments), and vertices, which represent the intersections. The weight of each link is given by mobility-related information (e.g., density, average speed).
308
+
309
+ The weighted PageRank is applied to a given directed graph $G = (V, E)$ defined by a set of vertices $V$ and a set of edges $E$ . The algorithm starts by setting the PageRank value of all vertices $v_i$ to a fixed value $d$ : $PR^W(v_i) = d$ , $\forall v_i \in V$ . $d$ is known as the damping parameter and its default value is 0.85. Then, an iterative process is performed until a stop condition is reached (the convergence value falls below a given threshold or a maximum number of iterations is performed). In this iterative process, for a given vertex $v_i$ , $PR^W(v_i)$ is computed by
310
+
311
+ $$
312
+ PR^{W}(v_i) = (1 - d) + d \times \left( \sum_{v_j \in In(v_i)} w_{ij} \times \frac{PR^{W}(v_j)}{\sum_{v_k \in Out(v_j)} w_{jk}} \right) \tag{3}
313
+ $$
314
+
315
+ where $\operatorname{In}(v_i)$ is the set of vertices that point to $v_i$ (predecessors), $\operatorname{Out}(v_i)$ is the set of vertices that $v_i$ points to (successors), and $w_{ij}$ is the weight of the edge that connects $v_i$ and $v_j$ .
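A small sketch of this iteration; the graph interface (a dict of directed edge weights) is an assumption:

```python
def weighted_pagerank(nodes, edges, d=0.85, iters=100, tol=1e-9):
    """Weighted PageRank in the spirit of Eq. (3): edges is {(u, v): w}
    over directed edges; returns {node: PR^W(node)}."""
    pr = {v: d for v in nodes}                 # initial value d for every vertex
    out_w = {v: 0.0 for v in nodes}            # sum of outgoing edge weights
    for (u, v), w in edges.items():
        out_w[u] += w
    for _ in range(iters):
        new = {}
        for v in nodes:
            s = sum(w * pr[u] / out_w[u]
                    for (u, t), w in edges.items() if t == v and out_w[u] > 0)
            new[v] = (1 - d) + d * s
        delta = max(abs(new[v] - pr[v]) for v in nodes)
        pr = new
        if delta < tol:                        # converged below the threshold
            break
    return pr
```

On a symmetric two-vertex graph every vertex converges to the fixed point $x = (1 - d) + d \cdot x = 1$.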
316
+
317
+ In our study, we consider all the road segments as potential locations to install the RSUs, and not just the intersections as proposed by Ben Brahim et al. Therefore, we adapt the weighted PageRank algorithm with the purpose of sorting the segments (the edges of the graph) according to the rank value, and after that, applying a constructive heuristic over the sorted vector of segments.
318
+
319
+ The weighted graph $G = (V, E)$ is defined by the set of points $P$ and the set of segments $S$ in the RSU-DP formulation ( $V = P$ and $E = S$ ). The weight of each edge, $w_{jk}$ , is given by the weight of the represented segment $W(s_i)$ , $s_i = (p_j, p_k)$ , defined in Equation 4.
320
+
321
+ $$
322
+ w_{jk} = W(s_i) = NV(s_i) \times \frac{\operatorname{len}(s_i)}{\operatorname{sp}(s_i)} \tag{4}
323
+ $$
324
+
325
+ The rank value for each segment $s_i = (p_j, p_k)$ is computed as the sum of the PageRank values of $p_j$ and $p_k$ , i.e., $SR^W(s_i) = PR^W(p_j) + PR^W(p_k)$ . Thus, the segments are ranked in a sorted vector $S^{PR}$ in which $s_i^{PR}, s_j^{PR} \in S$ , $i < j \Leftrightarrow SR^W(s_i^{PR}) > SR^W(s_j^{PR})$ .
326
+
327
+ Once the segments are sorted in $S^{PR}$ , a constructive heuristic is applied to select and locate the RSUs. The heuristic iterates over the sorted vector $S^{PR}$ starting with the first segment $(s_1^{PR})$ , which is the best ranked one. For each segment $s_i^{PR} \in S^{PR}$ , the heuristic computes the QoS provided by each of the three RSU types when installed at each of 10 equidistant points within the segment, located at positions $n \times 0.1 \times len(s_i^{PR})$ , $n \in [0,9]$ . Thus, the constructive algorithm evaluates the QoS of 30 different possible configurations (3 RSU types × 10 locations) in each segment. It keeps the configuration (RSU type and location) that provides the best QoS, but only if the overall QoS increases by at least 1% with respect to the previous iteration. Otherwise, the constructive PageRank heuristic does not locate any RSU in the current segment $s_i^{PR}$ .
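The constructive step over the PageRank-sorted segments can be sketched as follows; `qos_with` is a stand-in evaluator for the QoS objective, and the function name is illustrative:

```python
def constructive_pagerank(sorted_segments, rsu_types, qos_with):
    """Greedy construction: per segment, try every RSU type at 10 equidistant
    relative locations and keep the best configuration only if the overall
    QoS improves by at least 1%. qos_with(solution) -> QoS value."""
    solution, best_qos = [], qos_with([])
    for seg in sorted_segments:
        # 3 types x 10 locations = 30 candidate configurations per segment
        candidates = [(seg, t, n * 0.1) for t in rsu_types for n in range(10)]
        cand = max(candidates, key=lambda c: qos_with(solution + [c]))
        q = qos_with(solution + [cand])
        if q > best_qos and q >= best_qos * 1.01:   # at least 1% improvement
            solution.append(cand)
            best_qos = q
    return solution
```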
328
+
329
+ 6.3.2. Randomized Knapsack algorithm The 0-1 Knapsack problem $^{22}$ is a well-known optimization problem that assumes having a bag with a capacity $W$ and a set of objects characterized by their benefit value $p_i$ and their weight $w_i$ . The goal of the 0-1 Knapsack problem is to find the optimal subset of objects to include in the bag, maximizing the total benefit $P$ without exceeding the capacity $W$ .
330
+
331
+ The RSU-DP problem can be reduced to a 0-1 Knapsack problem, which can be solved by applying a dynamic programming algorithm. In this work, we adapt the Knapsack algorithm by Ben Brahim et al.<sup>4</sup> to solve the RSU-DP following a non-deterministic dynamic programming approach, the Randomized Knapsack (RandKS) presented in Algorithm 3.
332
+
333
+ Algorithm 3 Schema of the RandKS(B,SS,C,n,Ksol).
334
+ 1: if $(n == 0$ or $B\leq 0)$ then
335
+ 2: $Ksol \gets \emptyset$ ▷ No more segments or budget
336
+ 3: return 0
337
+ 4: else
338
+ 5: for $k \gets 1$ to $K$ do ▷ For all RSU types
339
+ 6: if $B < C(t_k)$ then ▷ Not enough budget for type $t_k$
340
+ 7: $cov_k \gets \mathrm{RandKS}(B, SS, C, n-1, Ksol)$
341
+ 8: else
342
+ 9: loc $\leftarrow$ random $\in [0,1)$
343
+ 10: $cov_k \gets \mathrm{coverage}((s_n, t_k, loc), Ksol) + \mathrm{RandKS}(B - C(t_k), SS, C, n-1, Ksol)$
344
+ 11: end if
345
+ 12: rsu_index $\gets$ getIndex(max($cov_k$))
346
+ 13: if rsu_index $\neq 0$ then
347
+ 14: $Ksol \gets Ksol \cup (s_n, t_{rsu\_index}, loc)$
348
+ 15: end if
349
+ 16: end for
350
+ 17: end if
351
+
352
+ The Randomized Knapsack algorithm defines the set $SS$ , which stores all the possible pairs of road segments and RSU types ( $S \times T$ ), i.e., $SS = \{(s_1, t_1), (s_1, t_2), \ldots, (s_1, t_K), \ldots, (s_n, t_K)\}$ . The elements of $SS$ are the ones to include in the Knapsack bag when building a RSU infrastructure for a VANET. The final solution is stored in the set $Ksol$ , which stores tuples that include information about the installed RSUs: the road segment $s_i$ , the RSU type $t_k$ , and the location of the RSU in the segment, i.e., $Ksol$ includes $(s_i, t_k, location)$ tuples.
353
+
354
+ The location $loc$ of the RSU in the segment is a real number in the range [0, 1) that represents the relative position within the segment, as explained for the solution encoding in NSGA-II in Section 5.1. In the original (deterministic) Knapsack algorithm by Ben Brahim et al.,$^4$ the location of the RSU is limited to the corners of the streets, i.e., the extremes of each segment. In our Randomized Knapsack algorithm, the location of the RSU is picked randomly within each segment, because we are working with an infinite set of possible locations, which cannot be explored one by one.
355
+
356
+ Two new functions are defined for the RandKS algorithm: i) coverage $((s_n, t_k, \mathrm{loc}), sol)$ , which computes the complete coverage provided by the RSUs stored in $sol$ plus the RSU located in the segment $s_n$ defined by $(s_n, t_k, \mathrm{loc})$ ; and ii) getIndex $(\max(cov_i))$ , which returns the index of the RSU type that obtained the best (maximum) coverage.
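A simplified functional sketch of the randomized recursion follows. Unlike Algorithm 3, the coverage evaluator here scores each placement in isolation (an assumption that keeps the sketch self-contained), and the best option per segment is returned together with its total coverage instead of being accumulated in a shared set:

```python
import random

def rand_ks(budget, segments, cost, coverage):
    """For each segment, either skip it or install the RSU type (at a random
    relative location in [0, 1)) that yields the best total coverage within
    the remaining budget. cost: {type: price}; coverage(seg, t, loc) -> float.
    Returns (total_coverage, [(segment, type, loc), ...])."""
    if not segments or budget <= 0:
        return 0.0, []                        # no more segments or budget
    seg, rest = segments[0], segments[1:]
    best = rand_ks(budget, rest, cost, coverage)   # option: no RSU on this segment
    for t, c in cost.items():
        if budget >= c:                       # enough budget for type t
            loc = random.random()             # random relative location
            cov, sol = rand_ks(budget - c, rest, cost, coverage)
            cand = (coverage(seg, t, loc) + cov, [(seg, t, loc)] + sol)
            if cand[0] > best[0]:
                best = cand
    return best
```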
357
+
358
+ # 6.4. Multiobjective optimization metrics
359
+
360
+ A large number of metrics have been proposed in the literature to evaluate MOEAs. $^{10,12}$ In this work, we apply two relevant metrics in order to evaluate the results obtained by the NSGA-II algorithm: hypervolume and relative hypervolume (RHV). These two metrics allow evaluating, in terms of both convergence and correct sampling, the set of non-dominated solutions of the problem.
361
+
362
+ The hypervolume measures the volume (in the objective functions space) covered by the computed Pareto front. The relative hypervolume is the ratio between the volumes (in the objective
363
+
364
+ functions space) covered by the computed Pareto front and the true Pareto front. The ideal RHV value is 1.
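For the bi-objective RSU-DP both metrics can be sketched in 2-D; in this sketch both objectives are treated as minimized (a maximized QoS can be negated first), and `ref` is the reference point bounding the dominated region:

```python
def hypervolume_2d(front, ref):
    """2-D hypervolume (minimization): area dominated by the front and
    bounded by the reference point ref = (rx, ry)."""
    hv, prev_y = 0.0, ref[1]
    for x, y in sorted(front):            # ascending in the first objective
        if y < prev_y:                    # staircase contribution of this point
            hv += (ref[0] - x) * (prev_y - y)
            prev_y = y
    return hv

def relative_hypervolume(front, true_front, ref):
    """RHV = HV(computed front) / HV(true front); the ideal value is 1."""
    return hypervolume_2d(front, ref) / hypervolume_2d(true_front, ref)
```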
365
+
366
+ The true Pareto front (which is unknown for the problem instances studied) is approximated by the set of non-dominated solutions found for each problem instance solved, in each one of the 30 executions of the proposed MOEA. $^{10,12}$
367
+
368
+ # 6.5. Parametric configuration
369
+
370
+ Due to the stochastic nature of EAs, a parameter configuration study is mandatory prior to the experimental analysis. For this purpose, we calibrated two important parameters of the NSGA-II algorithm: the crossover probability $(p_c)$ and the mutation probability $(p_m)$ .
371
+
372
+ For the parameter setting, we used a set of problem instances different from the real-world problem instance, in order to avoid bias in the experimental analysis. We considered a smaller region of the map of Málaga, comprising 74 segments, and we defined 3 different instances over this region, corresponding to each type of application (data, voice, and video). For the parameter configuration we considered the scenario with normal traffic. For each parameter, three candidate values were tested: $p_c \in \{0.5, 0.7, 0.9\}$ and $p_m \in \{0.1, 0.01, 0.001\}$ .
373
+
374
+ We performed 30 independent executions of 10000 generations over each problem instance using each one of the different combinations of the candidate parameter values, thus totalling 810 executions of the proposed MOEA.
375
+
376
+ Table V reports the hypervolume achieved using each parameter configuration on the scenario involving a video application, which is representative of the set of scenarios used in the parameter tuning. The mean, median, and standard deviation $(\sigma)$ are shown along with the minimum (min) and maximum (max) values achieved. Furthermore, the $p$ -value corresponding to the Shapiro-Wilk test for normality is displayed as well as the Friedman rank values corresponding to each parameter configuration.
377
+
378
+ The results from the Shapiro-Wilk test did not allow us to confidently state whether the result samples follow a normal distribution (in six out of nine configurations, the $p$ -value was larger than 0.05). Therefore, the non-parametric Friedman rank test was used to compare the different parameter configurations against each other. The $p$ -values for the Friedman rank test did not allow us to state that one configuration outperforms all the others with statistical significance. Therefore, we decided to use the configuration $p_c = 0.7$ , $p_m = 0.1$ , which achieved the best mean and median hypervolume values.
379
+
380
+ Table V. Parameter configuration results for the scenario with a video application.
381
+
382
+ <table><tr><td rowspan="2">pc</td><td rowspan="2">pm</td><td colspan="5">hypervolume (×106)</td><td rowspan="2">p-value S-W</td><td rowspan="2">Friedman Rank</td></tr><tr><td>mean</td><td>median</td><td>σ</td><td>min</td><td>max</td></tr><tr><td rowspan="3">0.5</td><td>0.001</td><td>9.27</td><td>9.27</td><td>0.04</td><td>9.18</td><td>9.36</td><td>0.99</td><td>148.00</td></tr><tr><td>0.01</td><td>9.27</td><td>9.28</td><td>0.05</td><td>9.16</td><td>9.36</td><td>0.86</td><td>160.00</td></tr><tr><td>0.1</td><td>9.26</td><td>9.27</td><td>0.05</td><td>9.11</td><td>9.34</td><td>0.04</td><td>147.00</td></tr><tr><td rowspan="3">0.7</td><td>0.001</td><td>9.27</td><td>9.29</td><td>0.06</td><td>9.14</td><td>9.38</td><td>0.15</td><td>143.00</td></tr><tr><td>0.01</td><td>9.28</td><td>9.27</td><td>0.05</td><td>9.21</td><td>9.40</td><td>0.15</td><td>149.00</td></tr><tr><td>0.1</td><td>9.29</td><td>9.30</td><td>0.05</td><td>9.20</td><td>9.38</td><td>0.76</td><td>171.00</td></tr><tr><td rowspan="3">0.9</td><td>0.001</td><td>9.25</td><td>9.26</td><td>0.07</td><td>9.07</td><td>9.36</td><td>0.34</td><td>134.00</td></tr><tr><td>0.01</td><td>9.25</td><td>9.28</td><td>0.06</td><td>9.10</td><td>9.34</td><td>0.01</td><td>134.00</td></tr><tr><td>0.1</td><td>9.28</td><td>9.28</td><td>0.04</td><td>9.14</td><td>9.35</td><td>0.03</td><td>164.00</td></tr></table>
383
+
384
+ # 6.6. Numerical results
385
+
386
+ This subsection reports the numerical results achieved in the experimental evaluation of the proposed NSGA-II algorithm for the RSU-DP. The results shown are those corresponding to the 30 independent executions performed on each of the 9 problem instances studied: 3 different
387
+
388
+ applications (data, voice, video), each with 3 different traffic levels (normal, high, low) for the proposed NSGA-II and the baseline heuristics (PageRank/Knapsack).
389
+
390
+ The RHV metric is used to compare the solutions computed by NSGA-II against those computed using the baseline heuristics. RHV is a good indicator of both convergence towards an ideal Pareto front and diversity among the set of non-dominated solutions. The ideal Pareto front for a given problem instance is approximated with the set of non-dominated solutions computed by all algorithms on every independent execution of that instance.
391
+
392
+ In order to compare the RHV values achieved by each algorithm, we used two statistical tests. We first performed the Shapiro-Wilk test to assess normality over the RHV values obtained by each algorithm on each problem instance. In seven out of nine instances, the results from the Shapiro-Wilk test did not allow us to state whether the samples follow a normal distribution or not. Therefore, the Friedman rank test was used to assess whether the results achieved by one algorithm significantly outperformed the others.
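The rank values of Table VI can be reproduced with a small helper (a sketch only: ties are not handled, and the full Friedman test additionally derives a chi-squared statistic and $p$-value from these ranks):

```python
def friedman_rank_sums(results):
    """Sum of per-run ranks for each algorithm: in every run the worst RHV
    gets rank 1 and the best gets rank len(results). results[a] is the list
    of per-run RHV values of algorithm a, aligned across runs."""
    algos = list(results)
    rank_sums = {a: 0.0 for a in algos}
    for i in range(len(results[algos[0]])):
        # rank algorithms within run i, worst first
        for r, a in enumerate(sorted(algos, key=lambda a: results[a][i]), 1):
            rank_sums[a] += r
    return rank_sums
```

With 30 runs and three algorithms, an algorithm that always wins accumulates a rank sum of 90, matching the Rank rows of Table VI.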
393
+
394
+ Table VI reports the RHV values achieved by NSGA-II, Knapsack and PageRank on all 9 problem instances. The mean, standard deviation $(\sigma)$ , and best $(max)$ values were computed over the 30 independent executions performed for each algorithm. In addition, the table also reports the $p$ -value from the Shapiro-Wilk test $(S - W)$ , the rank from the Friedman test $(Rank)$ , and the $p$ -value from the Friedman test.
395
+
396
+ Table VI. Relative hypervolume values achieved by NSGA-II, Knapsack, and PageRank.
397
+
398
+ <table><tr><td rowspan="2" colspan="2"></td><td colspan="3">normal</td><td colspan="3">high</td><td colspan="3">low</td></tr><tr><td>data</td><td>voice</td><td>video</td><td>data</td><td>voice</td><td>video</td><td>data</td><td>voice</td><td>video</td></tr><tr><td rowspan="3">max</td><td>NSGA-II</td><td>0.99</td><td>0.99</td><td>0.99</td><td>0.99</td><td>0.99</td><td>0.99</td><td>0.99</td><td>0.99</td><td>0.99</td></tr><tr><td>Knapsack</td><td>0.82</td><td>0.85</td><td>0.95</td><td>0.8</td><td>0.84</td><td>0.96</td><td>0.83</td><td>0.86</td><td>0.95</td></tr><tr><td>PageRank</td><td>0.73</td><td>0.82</td><td>0.73</td><td>0.7</td><td>0.81</td><td>0.71</td><td>0.76</td><td>0.83</td><td>0.74</td></tr><tr><td rowspan="3">mean</td><td>NSGA-II</td><td>0.98</td><td>0.98</td><td>0.98</td><td>0.99</td><td>0.95</td><td>0.98</td><td>0.99</td><td>0.99</td><td>0.98</td></tr><tr><td>Knapsack</td><td>0.8</td><td>0.84</td><td>0.94</td><td>0.79</td><td>0.82</td><td>0.94</td><td>0.82</td><td>0.85</td><td>0.94</td></tr><tr><td>PageRank</td><td>0.73</td><td>0.82</td><td>0.73</td><td>0.7</td><td>0.81</td><td>0.71</td><td>0.76</td><td>0.83</td><td>0.74</td></tr><tr><td rowspan="3">σ (×10-3)</td><td>NSGA-II</td><td>3.72</td><td>2.83</td><td>3.00</td><td>2.73</td><td>176</td><td>3.79</td><td>3.74</td><td>3.65</td><td>3.33</td></tr><tr><td>Knapsack</td><td>5.57</td><td>4.57</td><td>7.77</td><td>5.88</td><td>6.81</td><td>6.43</td><td>7.26</td><td>6.53</td><td>8.05</td></tr><tr><td>PageRank</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td></tr><tr><td rowspan="3">S-W</td><td>NSGA-II</td><td>0.93</td><td>0.17</td><td>0.6</td><td>0.1</td><td>0</td><td>0.25</td><td>0.3</td><td>0.11</td><td>0.35</td></tr><tr><td>Knapsack</td><td>0.49</td><td>0.91</td><td>0.6</td><td>0.29</td><td>0.88</td><td>0.94</td><td>0.33</td><td>0.37</td><td>0.15</td></tr><tr><td>PageRank</td><td>1</td><td>1</td><td>1</td><td>1</td><td>1</td><td>1</td><td>1</td><td>1</td><td>1</td></tr><tr><td rowspan="3">Rank</td><td>NSGA-II</td><td>90</td><td>90</td><td>90</td><td>90</td><td>88</td><td>90</td><td>90</td><td>90</td><td>90</td></tr><tr><td>Knapsack</td><td>60</td><td>60</td><td>60</td><td>60</td><td>60</td><td>60</td><td>60</td><td>60</td><td>60</td></tr><tr><td>PageRank</td><td>30</td><td>30</td><td>30</td><td>30</td><td>32</td><td>30</td><td>30</td><td>30</td><td>30</td></tr><tr><td colspan="2">p-value Friedman</td><td>&lt;10-6</td><td>&lt;10-6</td><td>&lt;10-6</td><td>&lt;10-6</td><td>&lt;10-6</td><td>&lt;10-6</td><td>&lt;10-6</td><td>&lt;10-6</td><td>&lt;10-6</td></tr></table>
399
+
400
+ The results in Table VI demonstrate that NSGA-II achieves significantly better RHV values than the two baseline heuristics adapted from the literature, both on average and in the best case. The Friedman rank test allows stating with statistical confidence that NSGA-II outperforms both the Knapsack and PageRank algorithms on all problem instances ( $p$ -value $< 10^{-6}$ in all comparisons). This result suggests that NSGA-II accurately computes fronts that converge towards an ideal Pareto front of the problem while simultaneously maintaining diversity among the set of non-dominated solutions.
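The rank values in Table VI (90/60/30) follow directly from the Friedman procedure: each of the 30 executions is a block in which the three algorithms are ranked, and the rank sums feed the test statistic. A pure-Python sketch (no tie correction; the constant inputs below are illustrative, not the paper's raw data):

```python
def friedman_statistic(samples):
    """Friedman test for k related samples (one list per algorithm), measured
    over the same n blocks (here: 30 independent executions per instance).
    Returns (rank_sums, statistic); assumes no ties within a block."""
    k, n = len(samples), len(samples[0])
    rank_sums = [0.0] * k
    for block in zip(*samples):
        # Rank 1 goes to the worst (smallest) RHV value in the block.
        for rank, j in enumerate(sorted(range(k), key=lambda i: block[i]), 1):
            rank_sums[j] += rank
    stat = 12.0 * sum(r * r for r in rank_sums) / (n * k * (k + 1)) - 3.0 * n * (k + 1)
    return rank_sums, stat
```

With 30 blocks in which NSGA-II always ranks first, Knapsack second, and PageRank third, the rank sums are 90, 60, and 30, matching the table, and the resulting statistic (60, on 2 degrees of freedom for a chi-squared approximation) corresponds to a $p$ -value far below $10^{-6}$.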
401
+
402
+ Table VII reports the improvements achieved by NSGA-II over Knapsack and PageRank heuristics. The reported improvements are those achieved when comparing specific (realistic) solutions from the global Pareto fronts computed by each algorithm (i.e., combining the results of all 30 independent executions performed), as usual in the MOEA literature.[10,12]
403
+
404
+ In Table VII, the improvements regarding each problem objective are evaluated by comparing the values of QoS/cost computed by each studied algorithm when considering a fixed cost/QoS used as a
405
+
406
+ Table VII. NSGA-II improvements over Knapsack and PageRank algorithms.
407
+
408
+ <table><tr><td rowspan="3" colspan="2"></td><td colspan="4">improvement in QoS (%)</td><td colspan="2">improvement in cost (%)</td></tr><tr><td colspan="2">cost=$10000</td><td colspan="2">cost=$15000</td><td colspan="2">QoS=2500</td></tr><tr><td>Knapsack</td><td>Pagerank</td><td>Knapsack</td><td>Pagerank</td><td>Knapsack</td><td>Pagerank</td></tr><tr><td rowspan="3">normal</td><td>data</td><td>22.37</td><td>44.89</td><td>-</td><td>-</td><td>31.37</td><td>39.48</td></tr><tr><td>voice</td><td>19.06</td><td>29.76</td><td>-</td><td>-</td><td>25.34</td><td>28.93</td></tr><tr><td>video</td><td>2.58</td><td>49.97</td><td>5.97</td><td>24.63</td><td>6.46</td><td>31.18</td></tr><tr><td rowspan="3">high</td><td>data</td><td>24.68</td><td>52.71</td><td>-</td><td>-</td><td>30.17</td><td>42.4</td></tr><tr><td>voice</td><td>22.11</td><td>33.82</td><td>-</td><td>-</td><td>22.35</td><td>28.26</td></tr><tr><td>video</td><td>0.83</td><td>52.55</td><td>6.42</td><td>34.13</td><td>1.84</td><td>34.18</td></tr><tr><td rowspan="3">low</td><td>data</td><td>18.44</td><td>35.61</td><td>-</td><td>-</td><td>34.09</td><td>33.52</td></tr><tr><td>voice</td><td>17.83</td><td>25.78</td><td>-</td><td>-</td><td>33.75</td><td>25.51</td></tr><tr><td>video</td><td>6.91</td><td>46.89</td><td>5.04</td><td>14.03</td><td>15.06</td><td>28.53</td></tr></table>
409
+
410
+ reference value, respectively. The improvements in QoS are measured by comparing the QoS values achieved by each algorithm at a fixed cost of $10000 (all cost values are expressed in US dollars). This cost is a reasonable budget to invest in deploying an RSU infrastructure for VANETs over a city-scale area such as Málaga. In addition, for the scenarios corresponding to video applications, the QoS improvements at a fixed cost of $15000 are also reported, since the deployment costs of infrastructures that support this type of application tend to be higher. Following a similar approach, the improvements in cost are reported by comparing the cost values of the solutions achieved by each algorithm at a fixed QoS of 2500.
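The fixed-budget comparison described above can be sketched as follows, assuming each global Pareto front is a list of (QoS, cost) pairs; the helper names and sample values are illustrative only:

```python
def qos_at_budget(front, budget):
    """Best QoS reachable on a Pareto front without exceeding a cost budget."""
    feasible = [qos for qos, cost in front if cost <= budget]
    return max(feasible) if feasible else None

def qos_improvement_pct(front_a, front_b, budget):
    """Percent QoS improvement of front_a over front_b at a fixed budget."""
    qa, qb = qos_at_budget(front_a, budget), qos_at_budget(front_b, budget)
    return 100.0 * (qa - qb) / qb
```

The improvement in cost at a fixed QoS is computed symmetrically, taking the minimum cost among solutions that reach the required QoS level.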
411
+
412
+ In any case, since the proposed algorithm solves the problem following a multiobjective approach, a human decision-maker always has the last word on the solution to implement: the decision-maker from the City Council will be in charge of selecting one solution from the set of non-dominated ones for the real implementation over the city.
413
+
414
+ The results in Table VII clearly show that NSGA-II improves upon the solutions computed by both the Knapsack and PageRank algorithms in all scenarios. In terms of QoS, the maximum improvements are $24.68\%$ over Knapsack and $52.71\%$ over PageRank, both in the instance with high traffic and a data application. The cost improvements are up to $34.09\%$ over Knapsack (instance with low traffic and a data application) and $39.48\%$ over PageRank (instance with normal traffic and a data application).
415
+
416
+ In order to provide better insight into the main advantages of the proposed NSGA-II algorithm, Figure 7 shows the global Pareto fronts achieved by each algorithm on each problem instance. It can be observed that NSGA-II computes accurate Pareto fronts with good convergence and diversity properties. The improvements of NSGA-II on both QoS and cost are clear for data and voice applications, in all three traffic patterns studied. When considering a more demanding application (video), the improvements of NSGA-II over Knapsack are observed for large QoS values (i.e., $\mathrm{QoS} > 2500$ ). The PageRank algorithm was consistently the worst method for solving the RSU-DP in all problem instances studied.
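A global Pareto front such as those in Figure 7 is obtained by merging the solutions of all independent executions and discarding dominated points. A minimal sketch for the bi-objective case (maximize QoS, minimize cost):

```python
def non_dominated(points):
    """Global Pareto front of (QoS, cost) points: maximize QoS, minimize cost.
    Merges solutions from all independent runs and drops dominated ones."""
    best = []
    for qos, cost in sorted(set(points), key=lambda p: (-p[0], p[1])):
        # Sorted by decreasing QoS: a point survives iff its cost is
        # strictly lower than the minimum cost seen so far.
        if not best or cost < best[-1][1]:
            best.append((qos, cost))
    return best
```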
417
+
418
+ # 7. CONCLUSIONS AND FUTURE WORK
419
+
420
+ This article reports the application of a multiobjective evolutionary approach to solve the problem of locating roadside infrastructure for vehicular networks over realistic urban areas.
421
+
422
+ A multiobjective formulation of the problem was introduced, considering the QoS and cost objectives. A specific NSGA-II evolutionary algorithm was designed, by including a problem-related encoding and ad-hoc mutation operators to explore the set of possible locations. A parallel model was applied in order to efficiently perform the evaluation of solutions in the proposed MOEA.
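The master-slave parallel evaluation mentioned above can be sketched as follows; the fitness function is a hypothetical placeholder, and a thread pool stands in for the distributed evaluation model actually used in the paper:

```python
from concurrent.futures import ThreadPoolExecutor

def evaluate(solution):
    """Hypothetical fitness: a solution is a list of (coverage, price) pairs,
    one per deployed RSU; QoS aggregates coverage, cost aggregates price."""
    qos = sum(coverage for coverage, _ in solution)
    cost = sum(price for _, price in solution)
    return qos, cost

def evaluate_population(population, workers=4):
    """Master-slave parallel evaluation: the fitness of each individual is
    computed concurrently, since evaluations are independent of each other."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(evaluate, population))
```

Because solution evaluations are independent, this pattern scales with the number of workers and leaves the evolutionary loop itself sequential.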
423
+
424
+ ![](images/ce4517f72b1ccb14b47b1d0bf3573026cd9fa8ffbb6ef25ed3eddf0a6e6222cb.jpg)
425
+ (a) Normal traffic, data app.
426
+
427
+ ![](images/495f255200b96b931f7330b2a023852d9b37fa9d1b928aaa9c5bfc0e24622240.jpg)
428
+ (b) Normal traffic, voice app.
429
+
430
+ ![](images/4260027d3202ba1c3416a80cd18b70532dd02b574cf173e056c8f299c1c4f1ba.jpg)
431
+ (c) Normal traffic, video app.
432
+
433
+ ![](images/82619a8505f3b1735ce3ec44318b635e440fe8e45304166a38396eef11dc7b53.jpg)
434
+ (d) High traffic, data app.
435
+
436
+ ![](images/e0ff52eb4da1e59f2892e999692a7c7d2734e97f3d68c319e9653f3b68b7268c.jpg)
437
+ (e) High traffic, voice app.
438
+
439
+ ![](images/b3e08af1ea6c4dc71ca7cbd17c32cfe51e76efd6012bce445b3634cdba2ca97b.jpg)
440
+ (f) High traffic, video app.
441
+
442
+ ![](images/2793fb1ce838bc3dd0f4d4959d28b90f0ff59cf9e4851ab42f30c2500ef1d8ef.jpg)
443
+ (g) Low traffic, data app.
444
+
445
+ ![](images/bf161f46b1f6f2570058abfacae5b8555ec3de5d812e12de30e4628664ac1d3d.jpg)
446
+ (h) Low traffic, voice app.
447
+ Figure 7. Global Pareto fronts.
448
+
449
+ ![](images/09d7a3a445a82df892e251d3948b47db6ae04343538120fb17fe0d170f0b9a2c.jpg)
450
+ (i) Low traffic, video app.
451
+
452
+ The NSGA-II algorithm was evaluated on a city-scale area using the real map and up-to-date (year 2015) traffic information from the city of Málaga, Spain. The problem instances were built considering three different types of commercial antennas and a set of realistic VANET parameterizations that support demanding applications for data, voice, and video communications. The scenarios used for the experimental evaluation considered three different traffic patterns and three different types of applications, each with different infrastructure requirements.
453
+
454
+ Two heuristics were implemented to solve the problem, based on related works from the literature: a Randomized Knapsack algorithm and a constructive PageRank heuristic. These traditional methods for RSU planning were used as a baseline against which to compare the solutions computed by the proposed NSGA-II algorithm. In the experiments performed, the proposed MOEA showed good problem-solving capabilities, computing accurate Pareto fronts for the problem. NSGA-II also improved over the two baseline heuristics regarding both the multiobjective optimization metrics evaluated and the two problem objectives.
455
+
456
+ According to the results from the experimental analysis, NSGA-II consistently computed better results than the baseline heuristics. In the best case, NSGA-II outperformed the Randomized Knapsack algorithm by up to $24.68\%$ and the PageRank heuristic by up to $52.71\%$ in terms of QoS. The improvements in cost were up to $34.09\%$ and $39.48\%$ over Knapsack and PageRank, respectively. The computed Pareto fronts indicate that NSGA-II provides better and more robust solutions to the problem, especially for the scenarios considering data and voice applications. The experimental analysis also demonstrates that, when considering a more realistic scenario (including real applications, real traffic information, etc.), the evolutionary approach achieves larger improvements over the heuristics than those reported in previous related works.
459
+
460
+ The main lines for future work are related to extending the experimental analysis to consider other real areas from cities in other countries. We are currently working on building a real RSU-DP scenario for VANETs in the city of Montevideo, Uruguay, using real data from the local authorities and GPS information from public transport. We also plan to extend the problem formulation to consider additional information about important events in vehicular networks (such as accidents, traffic jams, etc.) in order to model a more realistic scenario for the problem.
461
+
462
+ # ACKNOWLEDGMENTS
463
+
464
+ The work of R. Massobrio and S. Nesmachnow has been partially funded by ANII and PEDECIBA, Uruguay. The work of J. Toutouh and E. Alba has been partially funded by the Spanish MINECO and FEDER project TIN2014-57341-R.
465
+
466
+ # CONTRIBUTIONS
467
+
468
+ Renzo Massobrio and Jamal Toutouh carried out most of the experimental study, including programming the multiobjective evolutionary approach and the heuristics to solve the problem, and performing the VANET experiments to build the real scenario studied in the experimental analysis. They also participated in the writing of the paper. Sergio Nesmachnow worked on the design and implementation of the software methods and the design and evaluation of the case study in Málaga, and is responsible for the writing of the paper. Enrique Alba contributed to the revision of the manuscript and the writing of the paper. The international cooperation was coordinated by S. Nesmachnow (Universidad de la República) and J. Toutouh (University of Málaga). S. Nesmachnow and E. Alba were responsible for the funds dedicated to the international cooperation.
469
+
470
+ # REFERENCES
471
+
472
+ 1. E. Alba, G. Luque, and S. Nesmachnow. Parallel metaheuristics: Recent advances and new trends. International Transactions in Operational Research, 20(1):1-48, 2013.
473
+ 2. B. Aslam, F. Amjad, and C. Zou. Optimal roadside units placement in urban areas for vehicular networks. In IEEE Symposium on Computers and Communications, pages 423-429, July 2012.
474
+ 3. T. Bäck, D. Fogel, and Z. Michalewicz, editors. Handbook of evolutionary computation. Oxford University Press, 1997.
475
+ 4. Mohamed Ben Brahim, Wassim Dira, and Fethi Filali. Roadside units placement within city-scaled area in vehicular ad-hoc networks. In $3^{rd}$ International Conference on Connected Vehicles and Expo, pages 1-7. IEEE, 2014.
476
+ 5. C. Campolo, A. Molinaro, and R. Scopigno, editors. *Vehicular ad hoc Networks - Standards, Solutions, and Research*. Springer, 2015.
477
+ 6. E. Cavalcante, A. Aquino, G. Pappa, and A. Loureiro. Roadside unit deployment for information dissemination in a VANET: An evolutionary approach. In $14^{th}$ Genetic and Evolutionary Computation Conference, pages 27-34, 2012.
478
+ 7. Cetacea Wireless Solutions Company shop. Online https://shop.cetacea.com/. Retrieved December 2015.
479
+ 8. I. Chantaksinopas, W. Lee, A. Prayote, and P. Oothongsap. Delay-sensitive applications in VANET and seamless connectivity: The limitation of UMTS network. International Journal Computer Science & Network Security, 12:54-61, 2012.
480
+ 9. H. Cheng, X. Fei, A. Boukerche, A. Mammeri, and M. Almulla. A geometry-based coverage strategy over urban VANETs. In $10^{th}$ ACM Symposium on Performance Evaluation of Wireless Ad Hoc, Sensor, & Ubiquitous Networks, pages 121-128, 2013.
481
+ 10. C. Coello, D. Van Veldhuizen, and G. Lamont. Evolutionary algorithms for solving multi-objective problems. Kluwer, New York, 2002.
482
+ 11. M. Deakin. Smart Cities: Governing, Modelling, and Analysing the Transition. Routledge, 2013.
483
+ 12. K. Deb. Multi-Objective Optimization using Evolutionary Algorithms. J. Wiley & Sons, Chichester, 2001.
484
+
485
+ 13. K. Deb, A. Pratap, S. Agarwal, and T. Meyarivan. A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Transactions on Evolutionary Computation, 6(2):182-197, 2002.
486
+ 14. J. Falcocchio and H. Levinson. *Road Traffic Congestion: A Concise Guide*. Springer Tracts on Transportation and Traffic. Springer International, 2015.
487
+ 15. F. Glover. Future paths for integer programming and links to artificial intelligence. Computers and Operations Research, 13(5):533-549, 1986.
488
+ 16. D. Goldberg. Genetic algorithms in search, optimization, and machine learning. Addison Wesley, New York, 1989.
489
+ 17. H. Hartenstein, K. Laberteaux, and I. Ebrary. VANET: vehicular applications and inter-networking technologies. Wiley Online Library, 2010.
490
+ 18. D. Jiang and L. Delgrossi. IEEE 802.11p: Towards an international standard for wireless access in vehicular environments. In IEEE Vehicular Technology Conference, pages 2036-2040, 2008.
491
+ 19. A. Langville and C. Meyer. Google's PageRank and beyond: The science of search engine rankings. Princeton University Press, 2011.
492
+ 20. Y. Liang, H. Liu, and D. Rajan. Optimal placement and configuration of roadside units in vehicular networks. In IEEE $75^{th}$ Vehicular Technology Conference, pages 1-6. IEEE, 2012.
493
+ 21. C. Lochert, B. Scheuermann, C. Wewetzer, A. Luebke, and M. Mauve. Data aggregation and roadside unit placement for a VANET traffic information system. In $5^{th}$ ACM International Workshop on Vehicular Inter-Networking, pages 58-65, 2008.
494
+ 22. S. Martello and P. Toth. Knapsack Problems: Algorithms and Computer Implementations. John Wiley & Sons, Inc., New York, NY, USA, 1990.
495
+ 23. R. Massobrio, S. Bertinat, S. Nesmachnow, J. Toutouh, and E. Alba. Smart placement of RSU for vehicular networks using multiobjective evolutionary algorithms. In Latin America Conference on Computational Intelligence, pages 1-6, 2015.
496
+ 24. R. Massobrio, J. Toutouh, and S. Nesmachnow. A multiobjective evolutionary algorithm for infrastructure location in vehicular networks. In $7^{th}$ European Symposium on Computational Intelligence and Mathematics, pages 1-6, 2015.
497
+ 25. S. Mendes, J. Gomez-Pulido, M. Vega-Rodriguez, J. Sanchez-Perez, Y. Saez, and P. Isasi. The radio network design optimization problem. In Biologically-Inspired Optimisation Methods, pages 219-260. Springer Science + Business Media, 2009.
498
+ 26. Área de Movilidad, Ayuntamiento de Málaga. Online http://movilidad.malaga.eu/. Retrieved December 2015.
499
+ 27. S. Nesmachnow. Computación científica de alto desempeño en la Facultad de Ingeniería, Universidad de la República. Revista de la Asociación de Ingenieros del Uruguay, 61:12-15, 2010.
500
+ 28. S. Nesmachnow. An overview of metaheuristics: accurate and efficient methods for optimisation. International Journal of Metaheuristics, 3(4):320-347, 2014.
501
+ 29. The Network Simulator ns-2. Online http://www.isi.edu/nsnam/ns. Retrieved December 2015.
502
+ 30. P. Patil and A. Gokhale. Voronoi-based placement of road-side units to improve dynamic resource management in VANETs. In International Conference on Collaboration Technologies and Systems, pages 389-396, 2013.
503
+ 31. A. Reis, S. Sargento, F. Neves, and O. Tonguz. Deploying roadside units in sparse vehicular networks: What really works and what does not. IEEE Transactions on Vehicular Technology, 63(6):2794-2806, 2014.
504
+ 32. S. Saunders and A. Aragon. Antennas and Propagation for Wireless Communication Systems. Wiley, New York, NY, USA, 1999.
505
+ 33. O. Trullols, M. Fiore, C. Casetti, C. Chiasserini, and J. Ordinas. Planning roadside infrastructure for information dissemination in intelligent transportation systems. Computer Communications, 33(4):432-442, 2010.
506
+ 34. C. Wang, X. Li, F. Li, and H. Lu. A mobility clustering-based roadside units deployment for VANET. In $16^{th}$ Asia-Pacific Network Operations and Management Symposium, pages 1-6, 2014.
507
+ 35. D. White. Software review: the ECJ toolkit. Genetic Programming and Evolvable Machines, 13(1):65-67, 2011.
2501.10xxx/2501.10016/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:287a5ab6fb63b8ef6513609377ac82ec66c50d64b94e46eb8fde7be012f4fe99
3
+ size 837293
2501.10xxx/2501.10016/layout.json ADDED
The diff for this file is too large to render. See raw diff
 
2501.10xxx/2501.10018/8bff6f61-9aa1-458e-a9e1-d00224986bd3_content_list.json ADDED
@@ -0,0 +1,1785 @@
1
+ [
2
+ {
3
+ "type": "text",
4
+ "text": "DiffuEraser: A Diffusion Model for Video Inpainting",
5
+ "text_level": 1,
6
+ "bbox": [
7
+ 217,
8
+ 130,
9
+ 751,
10
+ 152
11
+ ],
12
+ "page_idx": 0
13
+ },
14
+ {
15
+ "type": "text",
16
+ "text": "TECHNICAL REPORT",
17
+ "text_level": 1,
18
+ "bbox": [
19
+ 406,
20
+ 157,
21
+ 563,
22
+ 171
23
+ ],
24
+ "page_idx": 0
25
+ },
26
+ {
27
+ "type": "text",
28
+ "text": "Xiaowen Li",
29
+ "bbox": [
30
+ 217,
31
+ 191,
32
+ 313,
33
+ 208
34
+ ],
35
+ "page_idx": 0
36
+ },
37
+ {
38
+ "type": "text",
39
+ "text": "Haolan Xue",
40
+ "bbox": [
41
+ 361,
42
+ 193,
43
+ 459,
44
+ 208
45
+ ],
46
+ "page_idx": 0
47
+ },
48
+ {
49
+ "type": "text",
50
+ "text": "Peiran Ren",
51
+ "bbox": [
52
+ 516,
53
+ 193,
54
+ 606,
55
+ 209
56
+ ],
57
+ "page_idx": 0
58
+ },
59
+ {
60
+ "type": "text",
61
+ "text": "Liefeng Bo",
62
+ "bbox": [
63
+ 663,
64
+ 193,
65
+ 756,
66
+ 210
67
+ ],
68
+ "page_idx": 0
69
+ },
70
+ {
71
+ "type": "text",
72
+ "text": "Tongyi Lab, Alibaba Group",
73
+ "bbox": [
74
+ 383,
75
+ 214,
76
+ 604,
77
+ 232
78
+ ],
79
+ "page_idx": 0
80
+ },
81
+ {
82
+ "type": "text",
83
+ "text": "{1xw262398, haolan.xhl, peiran.rpr, liefeng.bo}@alibaba-inc.com",
84
+ "bbox": [
85
+ 217,
86
+ 234,
87
+ 772,
88
+ 250
89
+ ],
90
+ "page_idx": 0
91
+ },
92
+ {
93
+ "type": "text",
94
+ "text": "https://github.com/lixiaowen-xw/DiffuEraser.git",
95
+ "bbox": [
96
+ 285,
97
+ 252,
98
+ 702,
99
+ 266
100
+ ],
101
+ "page_idx": 0
102
+ },
103
+ {
104
+ "type": "image",
105
+ "img_path": "images/e134b517769412cb776a736deec30c026d64ac01ac63d8427bcda63492a15e0d.jpg",
106
+ "image_caption": [
107
+ "(a)"
108
+ ],
109
+ "image_footnote": [],
110
+ "bbox": [
111
+ 114,
112
+ 290,
113
+ 521,
114
+ 545
115
+ ],
116
+ "page_idx": 0
117
+ },
118
+ {
119
+ "type": "image",
120
+ "img_path": "images/c608b5a7dac3a4d0a6ab09797293662db06914c856524ba439f3140a3f5de0c2.jpg",
121
+ "image_caption": [
122
+ "(b)",
123
+ "Figure 1. Performance comparison between the proposed model, DiffuEraser, and Propainter. (a) Texture Quality: DiffuEraser generates more detailed and refined textures compared to the transformer-based Propainter. (b) Temporal Consistency: DiffuEraser demonstrates superior temporal consistency in the inpainted content compared to Propainter."
124
+ ],
125
+ "image_footnote": [],
126
+ "bbox": [
127
+ 531,
128
+ 287,
129
+ 848,
130
+ 542
131
+ ],
132
+ "page_idx": 0
133
+ },
134
+ {
135
+ "type": "text",
136
+ "text": "Abstract",
137
+ "text_level": 1,
138
+ "bbox": [
139
+ 233,
140
+ 642,
141
+ 310,
142
+ 657
143
+ ],
144
+ "page_idx": 0
145
+ },
146
+ {
147
+ "type": "text",
148
+ "text": "Recent video inpainting algorithms integrate flow-based pixel propagation with transformer-based generation to leverage optical flow for restoring textures and objects using information from neighboring frames, while completing masked regions through visual Transformers. However, these approaches often encounter blurring and temporal inconsistencies when dealing with large masks, highlighting the need for models with enhanced generative capabilities. Recently, diffusion models have emerged as a prominent technique in image and video generation due to their impressive performance. In this paper, we introduce DifuEraser, a video inpainting model based on stable diffusion, designed to fill masked regions with greater details and more coherent structures. We incorporate prior information to provide initialization and weak conditioning,",
149
+ "bbox": [
150
+ 73,
151
+ 674,
152
+ 470,
153
+ 902
154
+ ],
155
+ "page_idx": 0
156
+ },
157
+ {
158
+ "type": "text",
159
+ "text": "which helps mitigate noisy artifacts and suppress hallucinations. Additionally, to improve temporal consistency during long-sequence inference, we expand the temporal receptive fields of both the prior model and DiffuEraser, and further enhance consistency by leveraging the temporal smoothing property of Video Diffusion Models. Experimental results demonstrate that our proposed method outperforms state-of-the-art techniques in both content completeness and temporal consistency while maintaining acceptable efficiency.",
160
+ "bbox": [
161
+ 496,
162
+ 643,
163
+ 890,
164
+ 780
165
+ ],
166
+ "page_idx": 0
167
+ },
168
+ {
169
+ "type": "text",
170
+ "text": "1. Introduction",
171
+ "text_level": 1,
172
+ "bbox": [
173
+ 500,
174
+ 813,
175
+ 630,
176
+ 829
177
+ ],
178
+ "page_idx": 0
179
+ },
180
+ {
181
+ "type": "text",
182
+ "text": "Video inpainting aims to complete masked regions with content that is both plausible and temporally consistent. Previous video inpainting algorithms primarily rely on two mechanisms:",
183
+ "bbox": [
184
+ 496,
185
+ 839,
186
+ 890,
187
+ 898
188
+ ],
189
+ "page_idx": 0
190
+ },
191
+ {
192
+ "type": "aside_text",
193
+ "text": "arXiv:2501.10018v1 [cs.CV] 17 Jan 2025",
194
+ "bbox": [
195
+ 22,
196
+ 267,
197
+ 57,
198
+ 707
199
+ ],
200
+ "page_idx": 0
201
+ },
202
+ {
203
+ "type": "list",
204
+ "sub_type": "text",
205
+ "list_items": [
206
+ "1) Flow-based pixel propagation methods, which utilize optical flow to restore texture details and objects by leveraging information from adjacent frames; and",
207
+ "2) Transformer-based video inpainting methods, which excel at completing the structural aspects of objects [26]."
208
+ ],
209
+ "bbox": [
210
+ 76,
211
+ 90,
212
+ 467,
213
+ 181
214
+ ],
215
+ "page_idx": 1
216
+ },
217
+ {
218
+ "type": "text",
219
+ "text": "Current mainstream algorithms typically combine these two approaches, consisting of three modules or stages:",
220
+ "bbox": [
221
+ 76,
222
+ 184,
223
+ 467,
224
+ 213
225
+ ],
226
+ "page_idx": 1
227
+ },
228
+ {
229
+ "type": "list",
230
+ "sub_type": "text",
231
+ "list_items": [
232
+ "1) Flow completion,",
233
+ "2) Feature propagation, and",
234
+ "3) Content generation."
235
+ ],
236
+ "bbox": [
237
+ 96,
238
+ 215,
239
+ 294,
240
+ 261
241
+ ],
242
+ "page_idx": 1
243
+ },
244
+ {
245
+ "type": "text",
246
+ "text": "This solution categorizes masked pixels into two types:",
247
+ "bbox": [
248
+ 96,
249
+ 263,
250
+ 460,
251
+ 277
252
+ ],
253
+ "page_idx": 1
254
+ },
255
+ {
256
+ "type": "list",
257
+ "sub_type": "text",
258
+ "list_items": [
259
+ "1) Known pixels, which have appeared in some masked frames and can be propagated to other frames through flow completion and feature propagation modules, ensuring consistency between the completed content and the unmasked regions; and",
260
+ "2) Unknown pixels, which have never appeared in any masked frames and are generated by the content generation module, thereby enhancing the structural integrity of the results."
261
+ ],
262
+ "bbox": [
263
+ 76,
264
+ 279,
265
+ 467,
266
+ 412
267
+ ],
268
+ "page_idx": 1
269
+ },
270
+ {
271
+ "type": "text",
272
+ "text": "The state-of-the-art algorithm, Propainter [46], exemplifies this approach and comprises three key modules: recurrent flow completion, dual-domain propagation, and mask-guided sparse Transformer. It effectively propagates known pixels across all frames and demonstrates an initial ability to generate unknown pixels. However, when the mask size is large, the generative capability of the Transformer model proves insufficient, leading to significant artifacts, as illustrated in Figure 1.",
273
+ "bbox": [
274
+ 76,
275
+ 416,
276
+ 467,
277
+ 551
278
+ ],
279
+ "page_idx": 1
280
+ },
281
+ {
282
+ "type": "text",
283
+ "text": "Consequently, there is a need for more powerful models with enhanced generative capabilities. The Stable Diffusion model, which has recently gained prominence in the field of image and video generation, presents a promising candidate.",
284
+ "bbox": [
285
+ 76,
286
+ 551,
287
+ 467,
288
+ 626
289
+ ],
290
+ "page_idx": 1
291
+ },
292
+ {
293
+ "type": "text",
294
+ "text": "In this work, we first decompose the video inpainting task into three sub-problems and then propose corresponding solutions for each. Specifically, the three key challenges are: the propagation of known pixels, the generation of unknown pixels, and the temporal consistency of the completed content. Our main contributions are summarized as follows:",
295
+ "bbox": [
296
+ 76,
297
+ 628,
298
+ 467,
299
+ 733
300
+ ],
301
+ "page_idx": 1
302
+ },
303
+ {
304
+ "type": "list",
305
+ "sub_type": "text",
306
+ "list_items": [
307
+ "1. Video Inpainting Diffusion: We introduce a motion module for the image inpainting model BrushNet, which is based on diffusion models. The powerful generative capability of diffusion models overcomes the blurring and mosaic artifacts associated with Transformer-based models, thereby completing object structures and generating more detailed content.",
308
+ "2. Injected Priors: We incorporate priors into the diffusion model, enabling easier initialization to mitigate"
309
+ ],
310
+ "bbox": [
311
+ 88,
312
+ 750,
313
+ 467,
314
+ 900
315
+ ],
316
+ "page_idx": 1
317
+ },
318
+ {
319
+ "type": "text",
320
+ "text": "noisy artifacts and serving as a weak condition to suppress the generation of unwanted objects.",
321
+ "bbox": [
322
+ 529,
323
+ 90,
324
+ 888,
325
+ 121
326
+ ],
327
+ "page_idx": 1
328
+ },
329
+ {
330
+ "type": "text",
331
+ "text": "3. Enhanced Temporal Consistency: We improve the temporal consistency of long-sequence inference by expanding the temporal receptive fields of both the prior model and the diffusion model. Additionally, we further enhance temporal continuity at the intersections between clips by leveraging the temporal smoothing property of the Video Diffusion Model.",
332
+ "bbox": [
333
+ 511,
334
+ 133,
335
+ 890,
336
+ 239
337
+ ],
338
+ "page_idx": 1
339
+ },
340
+ {
341
+ "type": "text",
342
+ "text": "2. Related Works",
343
+ "text_level": 1,
344
+ "bbox": [
345
+ 500,
346
+ 253,
347
+ 648,
348
+ 270
349
+ ],
350
+ "page_idx": 1
351
+ },
352
+ {
353
+ "type": "text",
354
+ "text": "Diffusion Models. The advent of diffusion models [14, 32, 34] has significantly enhanced the quality and creativity of image and video generation. In the realm of image synthesis, diffusion models have driven substantial progress across various tasks, including text-to-image generation [5, 29], controllable image generation [24, 43], image editing [1, 12, 22], personalized image generation [6, 28], and image inpainting [27, 16], among others. Building on these advancements, video diffusion models incorporating additional motion modules have also gained significant traction. Key applications in this domain include text-to-video generation [11, 8, 10, 13, 15, 31], controllable video generation [3, 4, 36, 39], video editing [19, 23, 38, 21], and various training-free video synthesis methods [44, 25].",
355
+ "bbox": [
356
+ 496,
357
+ 280,
358
+ 890,
359
+ 491
360
+ ],
361
+ "page_idx": 1
362
+ },
363
+ {
364
+ "type": "text",
365
+ "text": "Video Inpainting. Video inpainting aims to fill masked regions in videos with plausible content while maintaining temporal consistency. Early approaches based on 3D convolution and shifting operations exhibited limited performance. The emergence of methods leveraging optical flow and Transformer architectures has significantly improved the quality of video inpainting. Flow-based pixel propagation methods [7, 41, 42] excel at restoring textures and details by utilizing information from adjacent frames. In contrast, Transformer-based methods [40, 20, 18, 46] are adept at completing the structural aspects of objects. Among these, Propainter [46] stands out as a representative approach, comprising recurrent flow completion, dual-domain propagation, and a mask-guided sparse Transformer. Propainter effectively propagates known pixels across all frames and demonstrates an initial ability to generate unknown pixels. However, its generative capacity is limited when dealing with large masks, leading to noticeable artifacts.",
366
+ "bbox": [
367
+ 496,
368
+ 492,
369
+ 890,
370
+ 777
371
+ ],
372
+ "page_idx": 1
373
+ },
374
+ {
375
+ "type": "text",
376
+ "text": "With the rising popularity of diffusion models, diffusion-based video inpainting methods have begun to emerge [17, 37, 30, 9, 45, 47]. These approaches leverage the powerful generative capabilities of diffusion models to enhance both the detail and structural integrity of the inpainted regions, addressing some of the limitations observed in Transformer-based methods. BIVDiff [30] is a training-free framework that bridges image and video diffusion models. AVID [45]",
377
+ "bbox": [
378
+ 496,
379
+ 779,
380
+ 890,
381
+ 900
382
+ ],
383
+ "page_idx": 1
384
+ },
385
+ {
386
+ "type": "image",
387
+ "img_path": "images/e4c86f62880d1c434c0679dab98ef6549303cedd0a66dcffe763a8e8b788e7d0.jpg",
388
+ "image_caption": [
389
+ "Figure 2. Overview of the proposed video inpainting model DiffuEraser, based on stable diffusion. The main denoising UNet performs the denoising process to generate the final output. The BrushNet branch extracts features from masked images, which are added to the main denoising UNet layer by layer after a zero convolution block. Temporal attention is incorporated after self-attention and cross-attention to improve temporal consistency."
390
+ ],
391
+ "image_footnote": [],
392
+ "bbox": [
393
+ 158,
394
+ 85,
395
+ 816,
396
+ 321
397
+ ],
398
+ "page_idx": 2
399
+ },
400
+ {
401
+ "type": "text",
402
+ "text": "and CoCoCo [47] improve text-guided video inpainting by integrating a motion module into a Text-to-Image (T2I) model. [37] proposes language-driven video inpainting via Multimodal Large Language Models, using natural language instructions to guide the inpainting process. Nevertheless, these methods often suffer from the inherent hallucinations of diffusion models. FloED [9], which exhibits fewer hallucinations, proposes a dedicated dual-branch architecture that incorporates motion guidance with a multi-scale flow adapter to enhance temporal consistency, focusing on object removal and background restoration. FFF-VDI [17] propagates the noise latent information of future frames to fill the masked area of the first frame's noise latent code, improving temporal consistency and suppressing hallucination effects. However, these methods do not effectively address the temporal consistency and stability needed for long-sequence inference, and there remains room for improvement in detail and structural integrity. In contrast, DiffuEraser generates temporally consistent results with enhanced detail and more complete structure for long-sequence inference, all without requiring a text prompt.",
403
+ "bbox": [
404
+ 75,
405
+ 419,
406
+ 472,
407
+ 736
408
+ ],
409
+ "page_idx": 2
410
+ },
411
+ {
412
+ "type": "text",
413
+ "text": "3. Methodology",
414
+ "text_level": 1,
415
+ "bbox": [
416
+ 76,
417
+ 757,
418
+ 210,
419
+ 773
420
+ ],
421
+ "page_idx": 2
422
+ },
423
+ {
424
+ "type": "text",
425
+ "text": "3.1. Network Overview",
426
+ "text_level": 1,
427
+ "bbox": [
428
+ 76,
429
+ 784,
430
+ 256,
431
+ 799
432
+ ],
433
+ "page_idx": 2
434
+ },
435
+ {
436
+ "type": "text",
437
+ "text": "Our network architecture is inspired by AnimateDiff [11], integrating a motion module into the image inpainting model. For the image inpainting component, we select BrushNet [16], which enhances the main denoising UNet by adding an additional branch to extract features from masked images. An overview of our proposed model, DiffuEraser,",
438
+ "bbox": [
439
+ 75,
440
+ 809,
441
+ 470,
442
+ 902
443
+ ],
444
+ "page_idx": 2
445
+ },
446
+ {
447
+ "type": "text",
448
+ "text": "is depicted in Figure 2. The architecture comprises the primary denoising UNet and an auxiliary BrushNet. The BrushNet branch receives a conditional latent input composed of masked images, masks, and noisy latents, with dimensions $[n,f,h / 4,w / 4,9]$ . Features extracted by BrushNet are integrated into the denoising UNet layer by layer after a zero convolution block. The denoising UNet processes noisy latents with dimensions $[n,f,h / 4,w / 4,4]$ . To enhance temporal consistency, temporal attention mechanisms are incorporated following both self-attention and cross-attention layers. After denoising, the generated images are blended with the input masked images using blurred masks.",
449
+ "bbox": [
450
+ 496,
451
+ 419,
452
+ 890,
453
+ 599
454
+ ],
455
+ "page_idx": 2
456
+ },
457
+ {
458
+ "type": "text",
459
+ "text": "We define the video inpainting problem by decomposing it into three sub-problems: propagation of known pixels (pixels that have appeared in some masked frames), generation of unknown pixels (pixels that have never appeared in any masked frames), and maintaining temporal consistency of the completed content. Specifically:",
460
+ "bbox": [
461
+ 496,
462
+ 599,
463
+ 890,
464
+ 691
465
+ ],
466
+ "page_idx": 2
467
+ },
468
+ {
469
+ "type": "text",
470
+ "text": "1. Propagation of Known Pixels: The motion module inherently supports temporal propagation, allowing the restoration of texture details and objects in the current frame using information from adjacent frames. Additionally, we leverage the enhanced propagation capabilities of the prior model, which offers a longer propagation range and a more sophisticated propagation mechanism. Specifically, we apply DDIM inversion on the inpainting results from the prior model and incorporate them into the noisy latent. See Section 3.2 for details. We utilize Propainter as our prior model. Beyond supporting the propagation of known pixels, the injected prior facilitates easier initialization",
471
+ "bbox": [
472
+ 513,
473
+ 703,
474
+ 893,
475
+ 900
476
+ ],
477
+ "page_idx": 2
478
+ },
479
+ {
480
+ "type": "list",
481
+ "sub_type": "text",
482
+ "list_items": [
483
+ "for DiffuEraser, enabling the generation of meaningful completed content and suppressing noisy artifacts and visual hallucinations commonly associated with diffusion models.",
484
+ "2. Generation of Unknown Pixels: Utilizing the robust generative capabilities of the stable diffusion model, our approach can generate plausible content with more details and textures for unknown pixels.",
485
+ "3. Temporal Consistency of Completed Content: While the motion module ensures temporal consistency within individual inferences (each handling a clip of 22 frames in our setting), discrepancies arise at the boundaries between clips during long-sequence processing. To address this, we expand the temporal receptive field of the model. This is achieved by performing pre-inference, where video frames are sampled at an optimal rate and processed collectively as a single clip. This enables the model to \"see\" frames from a broader temporal context. Subsequently, the insights gained from pre-inference are used to guide the frame-by-frame inference, incorporating information from distant frames and thereby enhancing the overall temporal continuity. See Section 3.3 for details."
486
+ ],
487
+ "bbox": [
488
+ 89,
489
+ 90,
490
+ 472,
491
+ 455
492
+ ],
493
+ "page_idx": 3
494
+ },
495
+ {
496
+ "type": "text",
497
+ "text": "As demonstrated in other studies, the generative capability of stable diffusion models and the temporal consistency provided by motion modules are well-established. In this paper, we focus on illustrating the advantages of incorporating priors and optimizing temporal consistency across clips during long-sequence inference.",
498
+ "bbox": [
499
+ 75,
500
+ 467,
501
+ 468,
502
+ 556
503
+ ],
504
+ "page_idx": 3
505
+ },
506
+ {
507
+ "type": "text",
508
+ "text": "3.2. Incorporation of Priors",
509
+ "text_level": 1,
510
+ "bbox": [
511
+ 76,
512
+ 566,
513
+ 294,
514
+ 583
515
+ ],
516
+ "page_idx": 3
517
+ },
518
+ {
519
+ "type": "text",
520
+ "text": "As illustrated in Figure 3, our model occasionally generates meaningless noisy artifacts within masked regions. For instance, the masked area above the sea level may appear as random noise instead of coherent content.",
521
+ "bbox": [
522
+ 75,
523
+ 590,
524
+ 468,
525
+ 650
526
+ ],
527
+ "page_idx": 3
528
+ },
529
+ {
530
+ "type": "image",
531
+ "img_path": "images/6bf643e9eac95b32b23b37c7600b96d676732a92e76bb601ca6c13a014a839c6.jpg",
532
+ "image_caption": [
533
+ "Figure 3. Example of noisy artifacts generated by the model. The masked region above the sea level is not completed correctly and resembles random noise."
534
+ ],
535
+ "image_footnote": [],
536
+ "bbox": [
537
+ 98,
538
+ 662,
539
+ 450,
540
+ 756
541
+ ],
542
+ "page_idx": 3
543
+ },
544
+ {
545
+ "type": "text",
546
+ "text": "To address these artifacts, we enhance the noisy latent—an integral part of the model's input. Inspired by DDIM Inversion [33], we introduce priors during inference. Specifically, we perform DDIM Inversion on the outputs of a chosen lightweight inpainting model and incorporate the",
547
+ "bbox": [
548
+ 75,
549
+ 825,
550
+ 470,
551
+ 902
552
+ ],
553
+ "page_idx": 3
554
+ },
555
+ {
556
+ "type": "text",
557
+ "text": "inverted results into the noisy latent, as depicted in Figure 4. The prior provides initialization information that enables the model to generate meaningful and stable completed content, effectively eliminating the noisy artifacts shown in Figure 3. Additionally, the prior acts as a weak condition to suppress the generation of unwanted objects, mitigating visual hallucinations often encountered in diffusion models.",
558
+ "bbox": [
559
+ 496,
560
+ 90,
561
+ 890,
562
+ 196
563
+ ],
564
+ "page_idx": 3
565
+ },
566
+ {
567
+ "type": "image",
568
+ "img_path": "images/06c8213943e1fdd2f300429c635a96a09dc968fdc9c5d4f73652fcdd89c74e2d.jpg",
569
+ "image_caption": [
570
+ "Figure 4. Incorporation of priors. We introduce priors during inference by performing DDIM inversion on the outputs of the prior model and adding them to the noisy latent."
571
+ ],
572
+ "image_footnote": [],
573
+ "bbox": [
574
+ 501,
575
+ 208,
576
+ 893,
577
+ 402
578
+ ],
579
+ "page_idx": 3
580
+ },
581
+ {
582
+ "type": "text",
583
+ "text": "The selection of the prior model significantly impacts the final results. After experimental comparisons, we selected Propainter as our prior model. Notably, any blur and mosaic artifacts present in the prior do not adversely affect our model's outputs; instead, they are refined and eliminated, resulting in inpainted regions with richer textures and greater detail.",
584
+ "bbox": [
585
+ 496,
586
+ 473,
587
+ 890,
588
+ 578
589
+ ],
590
+ "page_idx": 3
591
+ },
592
+ {
593
+ "type": "text",
594
+ "text": "Figure 5 compares the results before and after incorporating priors, demonstrating that the introduction of priors effectively suppresses noisy artifacts and the emergence of unwanted objects, thereby significantly enhancing the accuracy and stability of the inpainting results.",
595
+ "bbox": [
596
+ 496,
597
+ 579,
598
+ 890,
599
+ 655
600
+ ],
601
+ "page_idx": 3
602
+ },
603
+ {
604
+ "type": "image",
605
+ "img_path": "images/ece35d2d7b7d0f52d96289698367ff67e1a560ff5ae85a40abb1c848b382b962.jpg",
606
+ "image_caption": [
607
+ "Figure 5. Comparison of inpainting results before and after incorporating priors."
608
+ ],
609
+ "image_footnote": [],
610
+ "bbox": [
611
+ 519,
612
+ 670,
613
+ 870,
614
+ 840
615
+ ],
616
+ "page_idx": 3
617
+ },
618
+ {
619
+ "type": "image",
620
+ "img_path": "images/a456d5de36d936497eab6db907d20d6126432885ae67ec309d00dcc865bba36c.jpg",
621
+ "image_caption": [
622
+ "Figure 6. Utilizing the temporal smoothing property of the Video Diffusion Model (VDM) to enhance consistency at the intersections of clips."
623
+ ],
624
+ "image_footnote": [],
625
+ "bbox": [
626
+ 81,
627
+ 89,
628
+ 893,
629
+ 234
630
+ ],
631
+ "page_idx": 4
632
+ },
633
+ {
634
+ "type": "text",
635
+ "text": "3.3. Optimizing Temporal Consistency for Long-Sequence Inference",
636
+ "text_level": 1,
637
+ "bbox": [
638
+ 76,
639
+ 303,
640
+ 470,
641
+ 335
642
+ ],
643
+ "page_idx": 4
644
+ },
645
+ {
646
+ "type": "text",
647
+ "text": "While the motion module maintains good temporal consistency within individual clips (for example, 22 frames), noticeable discrepancies emerge at the boundaries between consecutive clips during long-sequence inference, as shown in Figure 7. To ensure seamless temporal consistency across the entire video, we implement the following optimizations.",
648
+ "bbox": [
649
+ 75,
650
+ 342,
651
+ 470,
652
+ 434
653
+ ],
654
+ "page_idx": 4
655
+ },
656
+ {
657
+ "type": "text",
658
+ "text": "3.3.1 Leveraging the Temporal Smoothing Property of the Video Diffusion Model (VDM)",
659
+ "text_level": 1,
660
+ "bbox": [
661
+ 76,
662
+ 452,
663
+ 470,
664
+ 483
665
+ ],
666
+ "page_idx": 4
667
+ },
668
+ {
669
+ "type": "text",
670
+ "text": "The absence of specific temporal conditioning leads to significant changes in completed content between clips, a problem that cannot be resolved by merely overlapping neighboring clips. Inspired by the concept of interpolating between timesteps to obtain intermediate results [9], we adopt a staggered denoising approach along sequential timesteps. This method utilizes the inherent temporal smoothing property of VDM to enhance consistency between clips.",
671
+ "bbox": [
672
+ 75,
673
+ 492,
674
+ 468,
675
+ 628
676
+ ],
677
+ "page_idx": 4
678
+ },
679
+ {
680
+ "type": "text",
681
+ "text": "During inference, even-numbered timesteps are inferred from the starting position of the clip, while odd-numbered timesteps are inferred from the midpoint of the clip, as illustrated in Figure 6. This staggered denoising leverages VDM's temporal smoothing property to blend frames at clip intersections smoothly. The underlying rationale is that, despite identical latent inputs, the denoising results for overlapped frames from adjacent clips differ due to VDM's temporal smoothing property, which adjusts overlapped frames to be temporally consistent with the starting frame. By applying this smoothing property at clip intersections, we achieve more seamless transitions.",
682
+ "bbox": [
683
+ 75,
684
+ 628,
685
+ 468,
686
+ 808
687
+ ],
688
+ "page_idx": 4
689
+ },
690
+ {
691
+ "type": "text",
692
+ "text": "When processing long videos divided into multiple clips, preliminary optimizations lead to multiple adjustments at clip intersections. After optimization, these transitions are smoothed into a single gradual change from the first to the last frame of the entire video. However, complete consistency across the entire video remains unattainable due to",
693
+ "bbox": [
694
+ 75,
695
+ 810,
696
+ 470,
697
+ 900
698
+ ],
699
+ "page_idx": 4
700
+ },
701
+ {
702
+ "type": "text",
703
+ "text": "inherent inconsistencies between the first and last frames.",
704
+ "bbox": [
705
+ 500,
706
+ 304,
707
+ 879,
708
+ 320
709
+ ],
710
+ "page_idx": 4
711
+ },
712
+ {
713
+ "type": "image",
714
+ "img_path": "images/551fa82f28ca4156c47b16d5a5a7670628ec03e9c00d6bc0677f1edf02269c2f.jpg",
715
+ "image_caption": [
716
+ "Figure 7. Temporal consistency optimization for long-sequence inference."
717
+ ],
718
+ "image_footnote": [],
719
+ "bbox": [
720
+ 508,
721
+ 327,
722
+ 893,
723
+ 428
724
+ ],
725
+ "page_idx": 4
726
+ },
727
+ {
728
+ "type": "text",
729
+ "text": "3.3.2 Expanding the Temporal Receptive Field",
730
+ "text_level": 1,
731
+ "bbox": [
732
+ 500,
733
+ 500,
734
+ 838,
735
+ 515
736
+ ],
737
+ "page_idx": 4
738
+ },
739
+ {
740
+ "type": "text",
741
+ "text": "A single inference pass can process only a limited number of frames (for instance, 22 frames in our setting), which restricts the temporal receptive field and prevents the propagation of known pixels from distant frames. Additionally, information sharing between different clips is constrained, resulting in inconsistencies in detailed content despite similar semantics across clips. This leads to frequent and noticeable changes during long-sequence inference, as illustrated in Figure 7. To mitigate this, we expand the temporal receptive field of the inference process through the following two strategies.",
742
+ "bbox": [
743
+ 496,
744
+ 523,
745
+ 890,
746
+ 688
747
+ ],
748
+ "page_idx": 4
749
+ },
750
+ {
751
+ "type": "text",
752
+ "text": "1. Enhancing Priors for Comprehensive Pixel Propagation",
753
+ "text_level": 1,
754
+ "bbox": [
755
+ 498,
756
+ 689,
757
+ 890,
758
+ 718
759
+ ],
760
+ "page_idx": 4
761
+ },
762
+ {
763
+ "type": "text",
764
+ "text": "Using Propainter as an example, we first sample the input video frames and perform pre-propagation to extend known pixels across the entire time domain, surpassing the temporal limitations of a single propagation pass (which typically handles dozens of frames), as shown in Figure 8(a). Full propagation of known pixels ensures that the completed content remains consistent with the unmasked regions, thereby stabilizing the results.",
765
+ "bbox": [
766
+ 496,
767
+ 719,
768
+ 890,
769
+ 839
770
+ ],
771
+ "page_idx": 4
772
+ },
773
+ {
774
+ "type": "text",
775
+ "text": "Subsequently, the inpainting results of the sampled frames guide frame-by-frame propagation, allowing the information obtained from pre-propagation to be integrated into every frame, as depicted in Figure 9(a).",
776
+ "bbox": [
777
+ 496,
778
+ 840,
779
+ 890,
780
+ 900
781
+ ],
782
+ "page_idx": 4
783
+ },
784
+ {
785
+ "type": "text",
786
+ "text": "This optimization enables Propainter to utilize information from distant frames more effectively, ensuring that known pixels are stably propagated across the entire time domain. Consequently, the prior provided to DiffuEraser is more accurate and stable. Nonetheless, DiffuEraser's limited temporal receptive field still results in significant changes at clip intersections.",
787
+ "bbox": [
788
+ 75,
789
+ 90,
790
+ 472,
791
+ 196
792
+ ],
793
+ "page_idx": 5
794
+ },
795
+ {
796
+ "type": "image",
797
+ "img_path": "images/992d21174ef8be4dd3d6f82225b043e935a2cb698ed7077ce08ae9679ee73ff1.jpg",
798
+ "image_caption": [
799
+ "(a) Pre-propagation"
800
+ ],
801
+ "image_footnote": [],
802
+ "bbox": [
803
+ 81,
804
+ 212,
805
+ 460,
806
+ 343
807
+ ],
808
+ "page_idx": 5
809
+ },
810
+ {
811
+ "type": "image",
812
+ "img_path": "images/2aeea5110bb1da2ec9fd097599a37a9e87e8f02824c7b219fa70849a875afc7f.jpg",
813
+ "image_caption": [
814
+ "(b) Pre-inference",
815
+ "Figure 8. Perform pre-propagation or pre-inference to expand the temporal receptive field of model."
816
+ ],
817
+ "image_footnote": [],
818
+ "bbox": [
819
+ 78,
820
+ 369,
821
+ 460,
822
+ 500
823
+ ],
824
+ "page_idx": 5
825
+ },
826
+ {
827
+ "type": "text",
828
+ "text": "2. Expanding the Temporal Receptive Field of DiffuEraser for consistent generation of unknown pixels",
829
+ "text_level": 1,
830
+ "bbox": [
831
+ 75,
832
+ 566,
833
+ 468,
834
+ 597
835
+ ],
836
+ "page_idx": 5
837
+ },
838
+ {
839
+ "type": "text",
840
+ "text": "To further enhance temporal consistency, we also expand the temporal receptive field of DiffuEraser. Similar to the prior optimization, we introduce a pre-inference step where video frames are sampled and processed as a single inference pass, thereby broadening the temporal context and ensuring consistent content generation across the entire video, as shown in Figure 8(b).",
841
+ "bbox": [
842
+ 75,
843
+ 598,
844
+ 468,
845
+ 703
846
+ ],
847
+ "page_idx": 5
848
+ },
849
+ {
850
+ "type": "text",
851
+ "text": "Following pre-inference, the results guide frame-by-frame inference, ensuring that the content consistency established during pre-inference is maintained throughout all remaining frames, as illustrated in Figure 9(b).",
852
+ "bbox": [
853
+ 75,
854
+ 704,
855
+ 468,
856
+ 763
857
+ ],
858
+ "page_idx": 5
859
+ },
860
+ {
861
+ "type": "text",
862
+ "text": "The core principle behind these optimizations—both for priors and DiffuEraser—is to extend the temporal receptive field to encompass the entire video duration, rather than being confined to individual clips. The optimization of the prior ensures comprehensive propagation of known pixels, maintaining result correctness, while the optimization of DiffuEraser focuses on the consistent generation of unknown pixels, ensuring overall stability. Together, these enhancements effectively resolve the temporal consistency is-",
863
+ "bbox": [
864
+ 75,
865
+ 763,
866
+ 468,
867
+ 901
868
+ ],
869
+ "page_idx": 5
870
+ },
871
+ {
872
+ "type": "image",
873
+ "img_path": "images/4e49403c2aacd5c4f48e3e6920f79d506bbee54a511a79fd5151ff8f5e48fa0a.jpg",
874
+ "image_caption": [
875
+ "(a) Frame-by-frame propagation"
876
+ ],
877
+ "image_footnote": [],
878
+ "bbox": [
879
+ 504,
880
+ 87,
881
+ 893,
882
+ 208
883
+ ],
884
+ "page_idx": 5
885
+ },
886
+ {
887
+ "type": "image",
888
+ "img_path": "images/1b6487e002ce5cc1bc002e6dce812265c72cdf669ef5b71a54cd9dc78c2b37c9.jpg",
889
+ "image_caption": [
890
+ "(b) Frame-by-frame inference",
891
+ "Figure 9. The temporal consistency obtained from pre-propagation or pre-inference is maintained throughout all remaining frames."
892
+ ],
893
+ "image_footnote": [],
894
+ "bbox": [
895
+ 503,
896
+ 236,
897
+ 893,
898
+ 362
899
+ ],
900
+ "page_idx": 5
901
+ },
902
+ {
903
+ "type": "text",
904
+ "text": "sues inherent in long-sequence inference, as demonstrated in Figure 7.",
905
+ "bbox": [
906
+ 498,
907
+ 445,
908
+ 890,
909
+ 476
910
+ ],
911
+ "page_idx": 5
912
+ },
913
+ {
914
+ "type": "text",
915
+ "text": "4. Experiments",
916
+ "text_level": 1,
917
+ "bbox": [
918
+ 500,
919
+ 493,
920
+ 632,
921
+ 510
922
+ ],
923
+ "page_idx": 5
924
+ },
925
+ {
926
+ "type": "text",
927
+ "text": "Datasets. We utilized the Panda-70M dataset [2], splitting videos at scene cuts and filtering them based on matching scores to obtain 3,183,727 short video clips paired with captions. During training, we generated mask sequences with random rates, directions, and shapes to simulate video inpainting and object removal tasks.",
928
+ "bbox": [
929
+ 498,
930
+ 518,
931
+ 890,
932
+ 609
933
+ ],
934
+ "page_idx": 5
935
+ },
936
+ {
937
+ "type": "text",
938
+ "text": "Training Details and Metrics. We employed a two-stage training strategy at a resolution of 512. In the first stage, we trained BrushNet and the main denoising UNet without the motion module to enhance content generation capabilities. In the second stage, we trained the motion module of the main denoising UNet to improve temporal consistency. The first stage was trained on 4 NVIDIA A100 GPUs for 100,000 steps with a batch size of 16, and the second stage on 8 NVIDIA A100 GPUs for 80,000 steps with 22-frame video sequences and a batch size of 1. Both stages were optimized using the L2 loss and a learning rate of 1e-5.",
939
+ "bbox": [
940
+ 496,
941
+ 611,
942
+ 892,
943
+ 792
944
+ ],
945
+ "page_idx": 5
946
+ },
947
+ {
948
+ "type": "text",
949
+ "text": "Efficiency. Leveraging Phased Consistency Models (PCM) [35], our model can generate samples in only two steps, significantly improving inference efficiency. For instance, processing a 10-second video at $540\\mathrm{p}$ and 25 FPS on an NVIDIA L20 GPU takes about 200 seconds.",
950
+ "bbox": [
951
+ 496,
952
+ 792,
953
+ 890,
954
+ 869
955
+ ],
956
+ "page_idx": 5
957
+ },
958
+ {
959
+ "type": "text",
960
+ "text": "Qualitative Comparison. Figure 1 illustrates a comparison between our model and Propainter in both texture",
961
+ "bbox": [
962
+ 498,
963
+ 869,
964
+ 890,
965
+ 901
966
+ ],
967
+ "page_idx": 5
968
+ },
969
+ {
970
+ "type": "text",
971
+ "text": "quality and temporal consistency. For more comparison results, see Figures 10, 11, 12, and 13. Our model effectively propagates known pixels—those that appear in some masked frames—to all frames, while also generating unknown pixels—those that never appear in any masked frames—with high consistency and stability.",
972
+ "bbox": [
973
+ 75,
974
+ 90,
975
+ 472,
976
+ 183
977
+ ],
978
+ "page_idx": 6
979
+ },
980
+ {
981
+ "type": "text",
982
+ "text": "5. Conclusion and Discussion",
983
+ "text_level": 1,
984
+ "bbox": [
985
+ 76,
986
+ 194,
987
+ 326,
988
+ 209
989
+ ],
990
+ "page_idx": 6
991
+ },
992
+ {
993
+ "type": "text",
994
+ "text": "In this paper, we introduce DiffuEraser, a video inpainting model based on stable diffusion. We address the video inpainting task by decomposing it into three subproblems: propagation of known pixels (pixels appearing in some masked frames), generation of unknown pixels (pixels never appearing in any masked frames), and maintaining temporal consistency of the completed content. For each sub-problem, we propose tailored solutions.",
995
+ "bbox": [
996
+ 75,
997
+ 219,
998
+ 468,
999
+ 339
1000
+ ],
1001
+ "page_idx": 6
1002
+ },
1003
+ {
1004
+ "type": "text",
1005
+ "text": "For the generation of unknown pixels, the powerful generative capabilities of the stable diffusion model help DiffuEraser effectively overcome the blurring and mosaic issues prevalent in Transformer-based models. Additionally, we mitigate the inherent hallucinations of stable diffusion models by incorporating priors, ensuring more accurate and realistic inpainting results.",
1006
+ "bbox": [
1007
+ 75,
1008
+ 340,
1009
+ 468,
1010
+ 445
1011
+ ],
1012
+ "page_idx": 6
1013
+ },
1014
+ {
1015
+ "type": "text",
1016
+ "text": "In terms of propagating known pixels, the motion module within the denoising UNet, combined with the enhanced propagation properties provided by priors, ensures the sufficient and consistent propagation of known pixels across frames. This prevents conflicts between the completed content and the unmasked regions, thereby improving the correctness and stability of the results.",
1017
+ "bbox": [
1018
+ 75,
1019
+ 446,
1020
+ 468,
1021
+ 551
1022
+ ],
1023
+ "page_idx": 6
1024
+ },
1025
+ {
1026
+ "type": "text",
1027
+ "text": "To address temporal inconsistencies between clips for long-sequence inference, we expand the temporal receptive field of both the prior model and DiffuEraser, significantly enhancing the consistency of completed content across all frames. Furthermore, we leverage the temporal smoothing property of VDM to further enhance temporal coherence at the intersections between clips.",
1028
+ "bbox": [
1029
+ 75,
1030
+ 551,
1031
+ 468,
1032
+ 657
1033
+ ],
1034
+ "page_idx": 6
1035
+ },
1036
+ {
1037
+ "type": "text",
1038
+ "text": "The concepts of incorporating priors and the methods to improve temporal consistency for long-sequence inference are also applicable to a variety of other video editing tasks, such as object replacement and local stylization. These applications will be explored in future work. Experimental results demonstrate that DiffuEraser outperforms state-of-the-art methods in both content completeness and temporal consistency, establishing it as a superior approach for video inpainting tasks.",
1039
+ "bbox": [
1040
+ 75,
1041
+ 657,
1042
+ 468,
1043
+ 792
1044
+ ],
1045
+ "page_idx": 6
1046
+ },
1047
+ {
1048
+ "type": "text",
1049
+ "text": "References",
1050
+ "text_level": 1,
1051
+ "bbox": [
1052
+ 76,
1053
+ 805,
1054
+ 174,
1055
+ 821
1056
+ ],
1057
+ "page_idx": 6
1058
+ },
1059
+ {
1060
+ "type": "list",
1061
+ "sub_type": "ref_text",
1062
+ "list_items": [
1063
+ "[1] Tim Brooks, Aleksander Holynski, and Alexei A Efros. Instructpix2pix: Learning to follow image editing instructions. arXiv preprint arXiv:2211.09800, 2022.",
1064
+ "[2] Tsai-Shien Chen, Aliaksandr Siarohin, Willi Menapace, Ekaterina Deyneka, Hsiang-wei Chao, Byung Eun Jeon,"
1065
+ ],
1066
+ "bbox": [
1067
+ 84,
1068
+ 829,
1069
+ 467,
1070
+ 901
1071
+ ],
1072
+ "page_idx": 6
1073
+ },
1074
+ {
1075
+ "type": "list",
1076
+ "sub_type": "ref_text",
1077
+ "list_items": [
1078
+ "Yuwei Fang, Hsin-Ying Lee, Jian Ren, Ming-Hsuan Yang, and Sergey Tulyakov. Panda-70m: Captioning 70m videos with multiple cross-modality teachers. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2024.",
1079
+ "[3] Weifeng Chen, Jie Wu, Pan Xie, Hefeng Wu, Jiashi Li, Xin Xia, Xuefeng Xiao, and Liang Lin. Control-a-video: Controllable text-to-video generation with diffusion models, 2023.",
1080
+ "[4] Patrick Esser, Johnathan Chiu, Parmida Atighechian, Jonathan Granskog, and Anastasis Germanidis. Structure and content-guided video synthesis with diffusion models, 2023.",
1081
+ "[5] Aditya Ramesh et al. Hierarchical text-conditional image generation with clip latents, 2022.",
1082
+ "[6] Rinon Gal, Yuval Alaluf, Yuval Atzmon, Or Patashnik, Amit H. Bermano, Gal Chechik, and Daniel Cohen-Or. An image is worth one word: Personalizing text-to-image generation using textual inversion, 2022.",
1083
+ "[7] Chen Gao, Ayush Saraf, Jia-Bin Huang, and Johannes Kopf. Flow-edge guided video completion. In Proc. European Conference on Computer Vision (ECCV), 2020.",
1084
+ "[8] Songwei Ge, Seungjun Nah, Guilin Liu, Tyler Poon, Andrew Tao, Bryan Catanzaro, David Jacobs, Jia-Bin Huang, Ming-Yu Liu, and Yogesh Balaji. Preserve your own correlation: A noise prior for video diffusion models, 2024.",
1085
+ "[9] Bohai Gu, Hao Luo, Song Guo, and Peiran Dong. Advanced video inpainting using optical flow-guided efficient diffusion. arXiv preprint arXiv:2412.00857, 2024.",
1086
+ "[10] Jiaxi Gu, Shicong Wang, Haoyu Zhao, Tianyi Lu, Xing Zhang, Zuxuan Wu, Songcen Xu, Wei Zhang, Yu-Gang Jiang, and Hang Xu. Reuse and diffuse: Iterative denoising for text-to-video generation.",
1087
+ "[11] Yuwei Guo, Ceyuan Yang, Anyi Rao, Zhengyang Liang, Yaohui Wang, Yu Qiao, Maneesh Agrawala, Dahua Lin, and Bo Dai. Animatediff: Animate your personalized text-to-image diffusion models without specific tuning. International Conference on Learning Representations, 2024.",
1088
+ "[12] Amir Hertz, Ron Mokady, Jay Tenenbaum, Kfir Aberman, Yael Pritch, and Daniel Cohen-Or. Prompt-to-prompt image editing with cross attention control. arXiv preprint arXiv:2208.01626, 2022.",
1089
+ "[13] Jonathan Ho, William Chan, Chitwan Saharia, Jay Whang, Ruiqi Gao, Alexey Gritsenko, Diederik P. Kingma, Ben Poole, Mohammad Norouzi, David J. Fleet, and Tim Salimans. Imagen video: High definition video generation with diffusion models, 2022.",
1090
+ "[14] Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. arXiv preprint arxiv:2006.11239, 2020.",
1091
+ "[15] Jonathan Ho, Tim Salimans, Alexey Gritsenko, William Chan, Mohammad Norouzi, and David J Fleet. Video diffusion models. arXiv:2204.03458, 2022.",
1092
+ "[16] Xuan Ju, Xian Liu, Xintao Wang, Yuxuan Bian, Ying Shan, and Qiang Xu. Brushnet: A plug-and-play image inpainting model with decomposed dual-branch diffusion, 2024."
1093
+ ],
1094
+ "bbox": [
1095
+ 501,
1096
+ 90,
1097
+ 893,
1098
+ 900
1099
+ ],
1100
+ "page_idx": 6
1101
+ },
1102
+ {
1103
+ "type": "image",
1104
+ "img_path": "images/d3bc7ebbccf09fd4e1c9e17d35e2046263513c0baf8988931df18bde087afe80.jpg",
1105
+ "image_caption": [
1106
+ "Masked Frames"
1107
+ ],
1108
+ "image_footnote": [],
1109
+ "bbox": [
1110
+ 101,
1111
+ 114,
1112
+ 228,
1113
+ 287
1114
+ ],
1115
+ "page_idx": 7
1116
+ },
1117
+ {
1118
+ "type": "image",
1119
+ "img_path": "images/98e8e9ff32e4f294d8d178275d75f7c0d1a7a8a3b11af938e704f11ccd40ff16.jpg",
1120
+ "image_caption": [
1121
+ "Propainter"
1122
+ ],
1123
+ "image_footnote": [],
1124
+ "bbox": [
1125
+ 230,
1126
+ 114,
1127
+ 356,
1128
+ 286
1129
+ ],
1130
+ "page_idx": 7
1131
+ },
1132
+ {
1133
+ "type": "image",
1134
+ "img_path": "images/a93cd07329c0d84ca839cf34943af72d1dcc6063198e532eac9a991c00a07a92.jpg",
1135
+ "image_caption": [
1136
+ "Ours"
1137
+ ],
1138
+ "image_footnote": [],
1139
+ "bbox": [
1140
+ 357,
1141
+ 114,
1142
+ 483,
1143
+ 286
1144
+ ],
1145
+ "page_idx": 7
1146
+ },
1147
+ {
1148
+ "type": "image",
1149
+ "img_path": "images/11418655095a5b62846078b0cbfb6f34edc1b93930dc408eeeff4f7a88a9a6c7.jpg",
1150
+ "image_caption": [
1151
+ "Masked Frames"
1152
+ ],
1153
+ "image_footnote": [],
1154
+ "bbox": [
1155
+ 488,
1156
+ 114,
1157
+ 614,
1158
+ 287
1159
+ ],
1160
+ "page_idx": 7
1161
+ },
1162
+ {
1163
+ "type": "image",
1164
+ "img_path": "images/6e847a85323e143c3bc063b73b6b5cf327ce01f60632d339f5e002c8952ecaf4.jpg",
1165
+ "image_caption": [
1166
+ "Propainter"
1167
+ ],
1168
+ "image_footnote": [],
1169
+ "bbox": [
1170
+ 616,
1171
+ 114,
1172
+ 743,
1173
+ 287
1174
+ ],
1175
+ "page_idx": 7
1176
+ },
1177
+ {
1178
+ "type": "image",
1179
+ "img_path": "images/6fdacd3b8e8b72b621da2f80ca2ddfd517831ab88a609e94baf54a8df2c43beb.jpg",
1180
+ "image_caption": [
1181
+ "Ours"
1182
+ ],
1183
+ "image_footnote": [],
1184
+ "bbox": [
1185
+ 745,
1186
+ 114,
1187
+ 870,
1188
+ 287
1189
+ ],
1190
+ "page_idx": 7
1191
+ },
1192
+ {
1193
+ "type": "image",
1194
+ "img_path": "images/43f357df08024d9ac07ff5849aea6ec639318ac82e8f47daedb73647fcfb934a.jpg",
1195
+ "image_caption": [],
1196
+ "image_footnote": [],
1197
+ "bbox": [
1198
+ 101,
1199
+ 289,
1200
+ 228,
1201
+ 463
1202
+ ],
1203
+ "page_idx": 7
1204
+ },
1205
+ {
1206
+ "type": "image",
1207
+ "img_path": "images/28b6ebccac20543b867d9a6ba97205e651fbaab2d093cd7059bbf64fc1dfd32c.jpg",
1208
+ "image_caption": [],
1209
+ "image_footnote": [],
1210
+ "bbox": [
1211
+ 230,
1212
+ 289,
1213
+ 356,
1214
+ 463
1215
+ ],
1216
+ "page_idx": 7
1217
+ },
1218
+ {
1219
+ "type": "image",
1220
+ "img_path": "images/ecf75d7d5fd96e43203dc690083293b9e4f26e67df0f8fbc190e2e63b3155b68.jpg",
1221
+ "image_caption": [],
1222
+ "image_footnote": [],
1223
+ "bbox": [
1224
+ 357,
1225
+ 289,
1226
+ 483,
1227
+ 463
1228
+ ],
1229
+ "page_idx": 7
1230
+ },
1231
+ {
1232
+ "type": "image",
1233
+ "img_path": "images/02bdfb7fe858f8dacc4aa80281c833792dfc26b5f14769cd1460b3dc08a024a6.jpg",
1234
+ "image_caption": [],
1235
+ "image_footnote": [],
1236
+ "bbox": [
1237
+ 488,
1238
+ 289,
1239
+ 614,
1240
+ 463
1241
+ ],
1242
+ "page_idx": 7
1243
+ },
1244
+ {
1245
+ "type": "image",
1246
+ "img_path": "images/fb50e89052b20739b4236421a0b0af93dcc01f7b55e2eb91f8f29d1fac75b18e.jpg",
1247
+ "image_caption": [],
1248
+ "image_footnote": [],
1249
+ "bbox": [
1250
+ 616,
1251
+ 290,
1252
+ 743,
1253
+ 463
1254
+ ],
1255
+ "page_idx": 7
1256
+ },
1257
+ {
1258
+ "type": "image",
1259
+ "img_path": "images/fa92b6b82223760863095401c8d4f567afa5d87207110ef2a2967b49fd8fae7b.jpg",
1260
+ "image_caption": [],
1261
+ "image_footnote": [],
1262
+ "bbox": [
1263
+ 745,
1264
+ 290,
1265
+ 870,
1266
+ 463
1267
+ ],
1268
+ "page_idx": 7
1269
+ },
1270
+ {
1271
+ "type": "image",
1272
+ "img_path": "images/301786ce1cc9ba2c85fcadac39b1f02133566f30176edfbfe010657a33c8ea67.jpg",
1273
+ "image_caption": [],
1274
+ "image_footnote": [],
1275
+ "bbox": [
1276
+ 101,
1277
+ 465,
1278
+ 228,
1279
+ 638
1280
+ ],
1281
+ "page_idx": 7
1282
+ },
1283
+ {
1284
+ "type": "image",
1285
+ "img_path": "images/eecb89909217669e40b6abacdac0711be70055ed1680ee093e2fe664820a50f2.jpg",
1286
+ "image_caption": [],
1287
+ "image_footnote": [],
1288
+ "bbox": [
1289
+ 230,
1290
+ 465,
1291
+ 356,
1292
+ 638
1293
+ ],
1294
+ "page_idx": 7
1295
+ },
1296
+ {
1297
+ "type": "image",
1298
+ "img_path": "images/d4ed5d25a3ac258c013ae2352602519d8f0b61bde2e586cc734d007296f43b06.jpg",
1299
+ "image_caption": [],
1300
+ "image_footnote": [],
1301
+ "bbox": [
1302
+ 357,
1303
+ 465,
1304
+ 483,
1305
+ 638
1306
+ ],
1307
+ "page_idx": 7
1308
+ },
1309
+ {
1310
+ "type": "image",
1311
+ "img_path": "images/a92c094784b41cc3e31946d216f47e5beb550e828acfcff39963701032b9c4aa.jpg",
1312
+ "image_caption": [],
1313
+ "image_footnote": [],
1314
+ "bbox": [
1315
+ 488,
1316
+ 465,
1317
+ 614,
1318
+ 638
1319
+ ],
1320
+ "page_idx": 7
1321
+ },
1322
+ {
1323
+ "type": "image",
1324
+ "img_path": "images/47118d1f93aa3f5b6a0bef6c9b9b458ed5620d4dfd61d439a4a60ad2d3d537de.jpg",
1325
+ "image_caption": [],
1326
+ "image_footnote": [],
1327
+ "bbox": [
1328
+ 616,
1329
+ 465,
1330
+ 743,
1331
+ 638
1332
+ ],
1333
+ "page_idx": 7
1334
+ },
1335
+ {
1336
+ "type": "image",
1337
+ "img_path": "images/17a32a532767ea8d847634d3cd377f1c381caa36c55290db0094d126c9462275.jpg",
1338
+ "image_caption": [],
1339
+ "image_footnote": [],
1340
+ "bbox": [
1341
+ 745,
1342
+ 465,
1343
+ 870,
1344
+ 638
1345
+ ],
1346
+ "page_idx": 7
1347
+ },
1348
+ {
1349
+ "type": "image",
1350
+ "img_path": "images/3b2c16f3867803a648be0eddcadd96d170c51551092aa903345740fa959cba42.jpg",
1351
+ "image_caption": [
1352
+ "Figure 10. Texture quality comparison between DiffuEraser and Propainter."
1353
+ ],
1354
+ "image_footnote": [],
1355
+ "bbox": [
1356
+ 101,
1357
+ 640,
1358
+ 228,
1359
+ 813
1360
+ ],
1361
+ "page_idx": 7
1362
+ },
1363
+ {
1364
+ "type": "image",
1365
+ "img_path": "images/77f2e0c6bb1b670359b8423e3bc6a7f8b5477fca1bc7b6afdb82f93b91cede43.jpg",
1366
+ "image_caption": [],
1367
+ "image_footnote": [],
1368
+ "bbox": [
1369
+ 230,
1370
+ 640,
1371
+ 356,
1372
+ 813
1373
+ ],
1374
+ "page_idx": 7
1375
+ },
1376
+ {
1377
+ "type": "image",
1378
+ "img_path": "images/a4acc2c94d7f84a6a2558a359f9e44a96f15f100cb8d1609f449a80ee1cc2396.jpg",
1379
+ "image_caption": [],
1380
+ "image_footnote": [],
1381
+ "bbox": [
1382
+ 357,
1383
+ 640,
1384
+ 483,
1385
+ 813
1386
+ ],
1387
+ "page_idx": 7
1388
+ },
1389
+ {
1390
+ "type": "image",
1391
+ "img_path": "images/fdecb4ade98a1f1c1762c796fcaddbf24a8ceb9f4b7a61c4ed631a04b04b618d.jpg",
1392
+ "image_caption": [],
1393
+ "image_footnote": [],
1394
+ "bbox": [
1395
+ 488,
1396
+ 640,
1397
+ 614,
1398
+ 813
1399
+ ],
1400
+ "page_idx": 7
1401
+ },
1402
+ {
1403
+ "type": "image",
1404
+ "img_path": "images/9a560ea480cbafa4d264df0dd864cb926ff48141032322d91add0d88f66d43db.jpg",
1405
+ "image_caption": [],
1406
+ "image_footnote": [],
1407
+ "bbox": [
1408
+ 616,
1409
+ 640,
1410
+ 743,
1411
+ 813
1412
+ ],
1413
+ "page_idx": 7
1414
+ },
1415
+ {
1416
+ "type": "image",
1417
+ "img_path": "images/4a7ba9903024c0b291b1204fc0600f39db2485953cab56c00be93052b57956f7.jpg",
1418
+ "image_caption": [],
1419
+ "image_footnote": [],
1420
+ "bbox": [
1421
+ 745,
1422
+ 640,
1423
+ 870,
1424
+ 813
1425
+ ],
1426
+ "page_idx": 7
1427
+ },
1428
+ {
1429
+ "type": "image",
1430
+ "img_path": "images/675cd0f6ec16ef5cde532b3fde32290602ec89b4d40b4a40bf06ab6467f5bfaf.jpg",
1431
+ "image_caption": [
1432
+ "Masked Frames"
1433
+ ],
1434
+ "image_footnote": [],
1435
+ "bbox": [
1436
+ 99,
1437
+ 109,
1438
+ 356,
1439
+ 223
1440
+ ],
1441
+ "page_idx": 8
1442
+ },
1443
+ {
1444
+ "type": "image",
1445
+ "img_path": "images/0236dc24af1832b4dba15ab654a97dd33feb90b0906965560240cd68db0c2791.jpg",
1446
+ "image_caption": [
1447
+ "Propainter"
1448
+ ],
1449
+ "image_footnote": [],
1450
+ "bbox": [
1451
+ 357,
1452
+ 111,
1453
+ 614,
1454
+ 222
1455
+ ],
1456
+ "page_idx": 8
1457
+ },
1458
+ {
1459
+ "type": "image",
1460
+ "img_path": "images/40a085a8d3817ba09f05d0e2212f0a1e76dd68ae3db2cdc78768b66b2cd95ee8.jpg",
1461
+ "image_caption": [
1462
+ "Ours"
1463
+ ],
1464
+ "image_footnote": [],
1465
+ "bbox": [
1466
+ 616,
1467
+ 111,
1468
+ 867,
1469
+ 222
1470
+ ],
1471
+ "page_idx": 8
1472
+ },
1473
+ {
1474
+ "type": "image",
1475
+ "img_path": "images/587e7352f647a8e985618f7301b6a372805533ff26471a9682762a67c42b7d3d.jpg",
1476
+ "image_caption": [],
1477
+ "image_footnote": [],
1478
+ "bbox": [
1479
+ 99,
1480
+ 224,
1481
+ 356,
1482
+ 335
1483
+ ],
1484
+ "page_idx": 8
1485
+ },
1486
+ {
1487
+ "type": "image",
1488
+ "img_path": "images/8ee5ec11e7d0726a49540d0c999f472ce9b6eda13371b24fe9ed0e275a61c03f.jpg",
1489
+ "image_caption": [],
1490
+ "image_footnote": [],
1491
+ "bbox": [
1492
+ 357,
1493
+ 224,
1494
+ 614,
1495
+ 335
1496
+ ],
1497
+ "page_idx": 8
1498
+ },
1499
+ {
1500
+ "type": "image",
1501
+ "img_path": "images/04b263f49307ef0c9201cbb4a923bd66757952d3565f409435b7df9adca672a1.jpg",
1502
+ "image_caption": [],
1503
+ "image_footnote": [],
1504
+ "bbox": [
1505
+ 616,
1506
+ 224,
1507
+ 867,
1508
+ 335
1509
+ ],
1510
+ "page_idx": 8
1511
+ },
1512
+ {
1513
+ "type": "image",
1514
+ "img_path": "images/f6f8994b31c384693ad5747ad644c6ac510c29cb478a3d09d45931ebc0e61ad1.jpg",
1515
+ "image_caption": [],
1516
+ "image_footnote": [],
1517
+ "bbox": [
1518
+ 99,
1519
+ 338,
1520
+ 356,
1521
+ 449
1522
+ ],
1523
+ "page_idx": 8
1524
+ },
1525
+ {
1526
+ "type": "image",
1527
+ "img_path": "images/500f3b0cc6f00bc469d2d59818d10b6798f0f8ec3a99f00108bfcb6d87e583e7.jpg",
1528
+ "image_caption": [],
1529
+ "image_footnote": [],
1530
+ "bbox": [
1531
+ 357,
1532
+ 338,
1533
+ 614,
1534
+ 449
1535
+ ],
1536
+ "page_idx": 8
1537
+ },
1538
+ {
1539
+ "type": "image",
1540
+ "img_path": "images/74ab6cd90dbb119bcdb32b26f6b7fc1bff0436ae04ae4421a56e51350a078b90.jpg",
1541
+ "image_caption": [],
1542
+ "image_footnote": [],
1543
+ "bbox": [
1544
+ 616,
1545
+ 338,
1546
+ 870,
1547
+ 449
1548
+ ],
1549
+ "page_idx": 8
1550
+ },
1551
+ {
1552
+ "type": "image",
1553
+ "img_path": "images/d7c0cd08cba41d04c3ca293d8116f3ca101e1ad02fb3cd182b601e1c124501a7.jpg",
1554
+ "image_caption": [],
1555
+ "image_footnote": [],
1556
+ "bbox": [
1557
+ 99,
1558
+ 450,
1559
+ 354,
1560
+ 563
1561
+ ],
1562
+ "page_idx": 8
1563
+ },
1564
+ {
1565
+ "type": "image",
1566
+ "img_path": "images/c6606cde700ab79b43d941192a6e4cdcf402a529fb46019f7eb3c565817c0902.jpg",
1567
+ "image_caption": [],
1568
+ "image_footnote": [],
1569
+ "bbox": [
1570
+ 359,
1571
+ 450,
1572
+ 614,
1573
+ 563
1574
+ ],
1575
+ "page_idx": 8
1576
+ },
1577
+ {
1578
+ "type": "image",
1579
+ "img_path": "images/b1a3499022b3c185936f308a44e1fad115ab1dfae0b4e17b9dc26f3a2d9f09fc.jpg",
1580
+ "image_caption": [],
1581
+ "image_footnote": [],
1582
+ "bbox": [
1583
+ 616,
1584
+ 450,
1585
+ 870,
1586
+ 563
1587
+ ],
1588
+ "page_idx": 8
1589
+ },
1590
+ {
1591
+ "type": "image",
1592
+ "img_path": "images/3432449190a09e2e9cf554bd513e5559188b4a0e68d88a5fd28e1aa322f0be91.jpg",
1593
+ "image_caption": [],
1594
+ "image_footnote": [],
1595
+ "bbox": [
1596
+ 99,
1597
+ 564,
1598
+ 356,
1599
+ 688
1600
+ ],
1601
+ "page_idx": 8
1602
+ },
1603
+ {
1604
+ "type": "image",
1605
+ "img_path": "images/fb802a6349e40deb83613e5f4264bed8e555afe8db980b894721a81acbbf1b93.jpg",
1606
+ "image_caption": [],
1607
+ "image_footnote": [],
1608
+ "bbox": [
1609
+ 357,
1610
+ 564,
1611
+ 614,
1612
+ 688
1613
+ ],
1614
+ "page_idx": 8
1615
+ },
1616
+ {
1617
+ "type": "image",
1618
+ "img_path": "images/3435540395fcaa6c13d62c9d734998717745532cb3bef601e118d938a3b52b28.jpg",
1619
+ "image_caption": [],
1620
+ "image_footnote": [],
1621
+ "bbox": [
1622
+ 616,
1623
+ 564,
1624
+ 870,
1625
+ 688
1626
+ ],
1627
+ "page_idx": 8
1628
+ },
1629
+ {
1630
+ "type": "image",
1631
+ "img_path": "images/dd11c3a4b5fe4be76cecdeb43b2ddad446caab267f2a575c29f1ef8d35110321.jpg",
1632
+ "image_caption": [
1633
+ "Figure 11. Texture quality comparison between DiffuEraser and Propainter."
1634
+ ],
1635
+ "image_footnote": [],
1636
+ "bbox": [
1637
+ 99,
1638
+ 688,
1639
+ 354,
1640
+ 795
1641
+ ],
1642
+ "page_idx": 8
1643
+ },
1644
+ {
1645
+ "type": "image",
1646
+ "img_path": "images/12581ba0d1893d85450e733f8c781255b5e1ab400ad30a57843f20cb12902196.jpg",
1647
+ "image_caption": [],
1648
+ "image_footnote": [],
1649
+ "bbox": [
1650
+ 357,
1651
+ 688,
1652
+ 614,
1653
+ 795
1654
+ ],
1655
+ "page_idx": 8
1656
+ },
1657
+ {
1658
+ "type": "image",
1659
+ "img_path": "images/acd6f5a68c784151e7d1cf5fef786c47f342fe780418155cd9b787dc01654ec9.jpg",
1660
+ "image_caption": [],
1661
+ "image_footnote": [],
1662
+ "bbox": [
1663
+ 616,
1664
+ 688,
1665
+ 870,
1666
+ 795
1667
+ ],
1668
+ "page_idx": 8
1669
+ },
1670
+ {
1671
+ "type": "image",
1672
+ "img_path": "images/03784d385d4e9d3840d208b1ebb4406daed6989dae605b97f493e0bb4303cdaa.jpg",
1673
+ "image_caption": [
1674
+ "Figure 12. Temporal consistency comparison between DiffuEraser and Propainter."
1675
+ ],
1676
+ "image_footnote": [],
1677
+ "bbox": [
1678
+ 223,
1679
+ 83,
1680
+ 750,
1681
+ 501
1682
+ ],
1683
+ "page_idx": 9
1684
+ },
1685
+ {
1686
+ "type": "image",
1687
+ "img_path": "images/1a6d7eb5424d91352b089f3858db3c45460bcb263bbd637dade7a53ecfde0782.jpg",
1688
+ "image_caption": [
1689
+ "Figure 13. Temporal consistency comparison between DiffuEraser and Propainter."
1690
+ ],
1691
+ "image_footnote": [],
1692
+ "bbox": [
1693
+ 124,
1694
+ 535,
1695
+ 848,
1696
+ 743
1697
+ ],
1698
+ "page_idx": 9
1699
+ },
1700
+ {
1701
+ "type": "list",
1702
+ "sub_type": "ref_text",
1703
+ "list_items": [
1704
+ "[17] Minhyeok Lee, Suhwan Cho, Chajin Shin, Jungho Lee, Sunghun Yang, and Sangyoun Lee. Video diffusion models are strong video inpainter, 2024.",
1705
+ "[18] Zhen Li, Cheng-Ze Lu, Jianhua Qin, Chun-Le Guo, and Ming-Ming Cheng. Towards an end-to-end framework for flow-guided video inpainting. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2022.",
1706
+ "[19] Jun Hao Liew, Hanshu Yan, Jianfeng Zhang, Zhongcong Xu,"
1707
+ ],
1708
+ "bbox": [
1709
+ 78,
1710
+ 787,
1711
+ 467,
1712
+ 901
1713
+ ],
1714
+ "page_idx": 9
1715
+ },
1716
+ {
1717
+ "type": "list",
1718
+ "sub_type": "ref_text",
1719
+ "list_items": [
1720
+ "and Jiashi Feng. Magicedit: High-fidelity and temporally coherent video editing. In arXiv, 2023.",
1721
+ "[20] Rui Liu, Hanming Deng, Yangyi Huang, Xiaoyu Shi, Lewei Lu, Wenxiu Sun, Xiaogang Wang, Jifeng Dai, and Hongsheng Li. Fuseformer: Fusing fine-grained information in transformers for video inpainting. In International Conference on Computer Vision (ICCV), 2021.",
1722
+ "[21] Shaoteng Liu, Yuechen Zhang, Wenbo Li, Zhe Lin, and Jiaya"
1723
+ ],
1724
+ "bbox": [
1725
+ 501,
1726
+ 787,
1727
+ 890,
1728
+ 900
1729
+ ],
1730
+ "page_idx": 9
1731
+ },
1732
+ {
1733
+ "type": "list",
1734
+ "sub_type": "ref_text",
1735
+ "list_items": [
1736
+ "Jia. Video-p2p: Video editing with cross-attention control, 2023.",
1737
+ "[22] Ron Mokady, Amir Hertz, Kfir Aberman, Yael Pritch, and Daniel Cohen-Or. Null-text inversion for editing real images using guided diffusion models. arXiv preprint arXiv:2211.09794, 2022.",
1738
+ "[23] Eyal Molad, Eliahu Horwitz, Dani Valevski, Alex Rav Acha, Yossi Matias, Yael Pritch, Yaniv Leviathan, and Yedid Hoshen. Dreamix: Video diffusion models are general video editors, 2023.",
1739
+ "[24] Chong Mou, Xintao Wang, Liangbin Xie, Yanze Wu, Jian Zhang, Zhongang Qi, Ying Shan, and Xiaohu Qie. T2i-adapter: Learning adapters to dig out more controllable ability for text-to-image diffusion models. arXiv preprint arXiv:2302.08453, 2023.",
1740
+ "[25] Chenyang Qi, Xiaodong Cun, Yong Zhang, Chenyang Lei, Xintao Wang, Ying Shan, and Qifeng Chen. Fatezero: Fusing attentions for zero-shot text-based video editing. arXiv:2303.09535, 2023.",
1741
+ "[26] Weize Quan, Jiaxi Chen, Yanli Liu, Dong-Ming Yan, and Peter Wonka. Deep learning-based image and video inpainting: A survey, 2024.",
1742
+ "[27] Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models, 2021.",
1743
+ "[28] Nataniel Ruiz, Yuanzhen Li, Varun Jampani, Yael Pritch, Michael Rubinstein, and Kfir Aberman. Dreambooth: Fine tuning text-to-image diffusion models for subject-driven generation. In arXiv preprint arxiv:2208.12242, 2022.",
1744
+ "[29] Chitwan Saharia, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily Denton, Seyed Kamyar Seyed Ghasemipour, Burcu Karagol Ayan, S. Sara Mahdavi, Raphael Gontijo Lopes, Tim Salimans, Jonathan Ho, David J Fleet, and Mohammad Norouzi. Photorealistic text-to-image diffusion models with deep language understanding, 2022.",
1745
+ "[30] Fengyuan Shi, Jiaxi Gu, Hang Xu, Songcen Xu, Wei Zhang, and Limin Wang. Bivdiff: A training-free framework for general-purpose video synthesis via bridging image and video diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 7393-7402, June 2024.",
1746
+ "[31] Uriel Singer, Adam Polyak, Thomas Hayes, Xi Yin, Jie An, Songyang Zhang, Qiyuan Hu, Harry Yang, Oron Ashual, Oran Gafni, Devi Parikh, Sonal Gupta, and Yaniv Taigman. Make-a-video: Text-to-video generation without text-video data, 2022.",
1747
+ "[32] Jascha Sohl-Dickstein, Eric A. Weiss, Niru Maheswaranathan, and Surya Ganguli. Deep unsupervised learning using nonequilibrium thermodynamics, 2015.",
1748
+ "[33] Jiaming Song, Chenlin Meng, and Stefano Ermon. Denoising diffusion implicit models. arXiv:2010.02502, October 2020.",
1749
+ "[34] Yang Song, Jascha Sohl-Dickstein, Diederik P Kingma, Abhishek Kumar, Stefano Ermon, and Ben Poole. Score-based generative modeling through stochastic differential equations. In International Conference on Learning Representations, 2021."
1750
+ ],
1751
+ "bbox": [
1752
+ 78,
1753
+ 90,
1754
+ 468,
1755
+ 898
1756
+ ],
1757
+ "page_idx": 10
1758
+ },
1759
+ {
1760
+ "type": "list",
1761
+ "sub_type": "ref_text",
1762
+ "list_items": [
1763
+ "[35] Fu-Yun Wang, Zhaoyang Huang, Alexander William Bergman, Dazhong Shen, Peng Gao, Michael Lingelbach, Keqiang Sun, Weikang Bian, Guanglu Song, Yu Liu, et al. Phased consistency model. arXiv preprint arXiv:2405.18407, 2024.",
1764
+ "[36] Xiang Wang, Hangjie Yuan, Shiwei Zhang, Dayou Chen, Jiuniu Wang, Yingya Zhang, Yujun Shen, Deli Zhao, and Jingren Zhou. Videocomposer: Compositional video synthesis with motion controllability, 2023.",
1765
+ "[37] Jianzong Wu, Xiangtai Li, Chenyang Si, Shangchen Zhou, Jingkang Yang, Jiangning Zhang, Yining Li, Kai Chen, Yunhai Tong, Ziwei Liu, et al. Towards language-driven video inpainting via multimodal large language models. arXiv preprint arXiv:2401.10226, 2024.",
1766
+ "[38] Jay Zhangjie Wu, Yixiao Ge, Xintao Wang, Stan Weixian Lei, Yuchao Gu, Yufei Shi, Wynne Hsu, Ying Shan, Xiaohu Qie, and Mike Zheng Shou. Tune-a-video: One-shot tuning of image diffusion models for text-to-video generation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 7623-7633, 2023.",
1767
+ "[39] Jinbo Xing, Menghan Xia, Yuxin Liu, Yuechen Zhang, Yong Zhang, Yingqing He, Hanyuan Liu, Haoxin Chen, Xiaodong Cun, Xintao Wang, et al. Make-your-video: Customized video generation using textual and structural guidance. arXiv preprint arXiv:2306.00943, 2023.",
1768
+ "[40] Yanhong Zeng, Jianlong Fu, and Hongyang Chao. Learning joint spatial-temporal transformations for video inpainting. In The Proceedings of the European Conference on Computer Vision (ECCV), 2020.",
1769
+ "[41] Kaidong Zhang, Jingjing Fu, and Dong Liu. Flow-guided transformer for video inpainting. In European Conference on Computer Vision, pages 74-90. Springer, 2022.",
1770
+ "[42] Kaidong Zhang, Jingjing Fu, and Dong Liu. Inertia-guided flow completion and style fusion for video inpainting. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 5982-5991, June 2022.",
1771
+ "[43] Lvmin Zhang, Anyi Rao, and Maneesh Agrawala. Adding conditional control to text-to-image diffusion models, 2023.",
1772
+ "[44] Yabo Zhang, Yuxiang Wei, Dongsheng Jiang, Xiaopeng Zhang, Wangmeng Zuo, and Qi Tian. Controlvideo: Training-free controllable text-to-video generation. arXiv preprint arXiv:2305.13077, 2023.",
1773
+ "[45] Zhixing Zhang, Bichen Wu, Xiaoyan Wang, Yaqiao Luo, Luxin Zhang, Yinan Zhao, Peter Vajda, Dimitris Metaxas, and Licheng Yu. Avid: Any-length video inpainting with diffusion model. arXiv preprint arXiv:2312.03816, 2023.",
1774
+ "[46] Shangchen Zhou, Chongyi Li, Kelvin C.K Chan, and Chen Change Loy. ProPainter: Improving propagation and transformer for video inpainting. In Proceedings of IEEE International Conference on Computer Vision (ICCV), 2023.",
1775
+ "[47] Bojia Zi, Shihao Zhao, Xianbiao Qi, Jianan Wang, Yukai Shi, Qianyu Chen, Bin Liang, Kam-Fai Wong, and Lei Zhang. Cococo: Improving text-guided video inpainting for better consistency, controllability and compatibility. ArXiv, abs/2403.12035, 2024."
1776
+ ],
1777
+ "bbox": [
1778
+ 501,
1779
+ 92,
1780
+ 890,
1781
+ 880
1782
+ ],
1783
+ "page_idx": 10
1784
+ }
1785
+ ]
2501.10xxx/2501.10018/8bff6f61-9aa1-458e-a9e1-d00224986bd3_model.json ADDED
@@ -0,0 +1,2532 @@
1
+ [
2
+ [
3
+ {
4
+ "type": "aside_text",
5
+ "bbox": [
6
+ 0.023,
7
+ 0.268,
8
+ 0.058,
9
+ 0.708
10
+ ],
11
+ "angle": 270,
12
+ "content": "arXiv:2501.10018v1 [cs.CV] 17 Jan 2025"
13
+ },
14
+ {
15
+ "type": "title",
16
+ "bbox": [
17
+ 0.218,
18
+ 0.131,
19
+ 0.753,
20
+ 0.153
21
+ ],
22
+ "angle": 0,
23
+ "content": "DiffuEraser: A Diffusion Model for Video Inpainting"
24
+ },
25
+ {
26
+ "type": "title",
27
+ "bbox": [
28
+ 0.407,
29
+ 0.159,
30
+ 0.564,
31
+ 0.172
32
+ ],
33
+ "angle": 0,
34
+ "content": "TECHNICAL REPORT"
35
+ },
36
+ {
37
+ "type": "text",
38
+ "bbox": [
39
+ 0.218,
40
+ 0.193,
41
+ 0.314,
42
+ 0.209
43
+ ],
44
+ "angle": 0,
45
+ "content": "Xiaowen Li"
46
+ },
47
+ {
48
+ "type": "text",
49
+ "bbox": [
50
+ 0.362,
51
+ 0.194,
52
+ 0.46,
53
+ 0.209
54
+ ],
55
+ "angle": 0,
56
+ "content": "Haolan Xue"
57
+ },
58
+ {
59
+ "type": "text",
60
+ "bbox": [
61
+ 0.517,
62
+ 0.194,
63
+ 0.607,
64
+ 0.21
65
+ ],
66
+ "angle": 0,
67
+ "content": "Peiran Ren"
68
+ },
69
+ {
70
+ "type": "text",
71
+ "bbox": [
72
+ 0.665,
73
+ 0.194,
74
+ 0.757,
75
+ 0.211
76
+ ],
77
+ "angle": 0,
78
+ "content": "Liefeng Bo"
79
+ },
80
+ {
81
+ "type": "text",
82
+ "bbox": [
83
+ 0.385,
84
+ 0.215,
85
+ 0.605,
86
+ 0.233
87
+ ],
88
+ "angle": 0,
89
+ "content": "Tongyi Lab, Alibaba Group"
90
+ },
91
+ {
92
+ "type": "text",
93
+ "bbox": [
94
+ 0.218,
95
+ 0.235,
96
+ 0.773,
97
+ 0.251
98
+ ],
99
+ "angle": 0,
100
+ "content": "{lxw262398, haolan.xhl, peiran.rpr, liefeng.bo}@alibaba-inc.com"
101
+ },
102
+ {
103
+ "type": "text",
104
+ "bbox": [
105
+ 0.286,
106
+ 0.253,
107
+ 0.703,
108
+ 0.267
109
+ ],
110
+ "angle": 0,
111
+ "content": "https://github.com/lixiaowen-xw/DiffuEraser.git"
112
+ },
113
+ {
114
+ "type": "image",
115
+ "bbox": [
116
+ 0.116,
117
+ 0.291,
118
+ 0.522,
119
+ 0.546
120
+ ],
121
+ "angle": 0,
122
+ "content": null
123
+ },
124
+ {
125
+ "type": "image_caption",
126
+ "bbox": [
127
+ 0.31,
128
+ 0.548,
129
+ 0.328,
130
+ 0.56
131
+ ],
132
+ "angle": 0,
133
+ "content": "(a)"
134
+ },
135
+ {
136
+ "type": "image",
137
+ "bbox": [
138
+ 0.532,
139
+ 0.288,
140
+ 0.849,
141
+ 0.544
142
+ ],
143
+ "angle": 0,
144
+ "content": null
145
+ },
146
+ {
147
+ "type": "image_caption",
148
+ "bbox": [
149
+ 0.691,
150
+ 0.547,
151
+ 0.709,
152
+ 0.559
153
+ ],
154
+ "angle": 0,
155
+ "content": "(b)"
156
+ },
157
+ {
158
+ "type": "image_caption",
159
+ "bbox": [
160
+ 0.076,
161
+ 0.568,
162
+ 0.892,
163
+ 0.611
164
+ ],
165
+ "angle": 0,
166
+ "content": "Figure 1. Performance comparison between the proposed model, DiffuEraser, and Propainter. (a) Texture Quality: DiffuEraser generates more detailed and refined textures compared to the transformer-based Propainter. (b) Temporal Consistency: DiffuEraser demonstrates superior temporal consistency in the inpainted content compared to Propainter."
167
+ },
168
+ {
169
+ "type": "title",
170
+ "bbox": [
171
+ 0.235,
172
+ 0.643,
173
+ 0.312,
174
+ 0.658
175
+ ],
176
+ "angle": 0,
177
+ "content": "Abstract"
178
+ },
179
+ {
180
+ "type": "text",
181
+ "bbox": [
182
+ 0.075,
183
+ 0.675,
184
+ 0.471,
185
+ 0.903
186
+ ],
187
+ "angle": 0,
188
+ "content": "Recent video inpainting algorithms integrate flow-based pixel propagation with transformer-based generation to leverage optical flow for restoring textures and objects using information from neighboring frames, while completing masked regions through visual Transformers. However, these approaches often encounter blurring and temporal inconsistencies when dealing with large masks, highlighting the need for models with enhanced generative capabilities. Recently, diffusion models have emerged as a prominent technique in image and video generation due to their impressive performance. In this paper, we introduce DiffuEraser, a video inpainting model based on stable diffusion, designed to fill masked regions with greater details and more coherent structures. We incorporate prior information to provide initialization and weak conditioning,"
189
+ },
190
+ {
191
+ "type": "text",
192
+ "bbox": [
193
+ 0.498,
194
+ 0.644,
195
+ 0.892,
196
+ 0.781
197
+ ],
198
+ "angle": 0,
199
+ "content": "which helps mitigate noisy artifacts and suppress hallucinations. Additionally, to improve temporal consistency during long-sequence inference, we expand the temporal receptive fields of both the prior model and DiffuEraser, and further enhance consistency by leveraging the temporal smoothing property of Video Diffusion Models. Experimental results demonstrate that our proposed method outperforms state-of-the-art techniques in both content completeness and temporal consistency while maintaining acceptable efficiency."
200
+ },
201
+ {
202
+ "type": "title",
203
+ "bbox": [
204
+ 0.501,
205
+ 0.814,
206
+ 0.631,
207
+ 0.83
208
+ ],
209
+ "angle": 0,
210
+ "content": "1. Introduction"
211
+ },
212
+ {
213
+ "type": "text",
214
+ "bbox": [
215
+ 0.498,
216
+ 0.84,
217
+ 0.892,
218
+ 0.9
219
+ ],
220
+ "angle": 0,
221
+ "content": "Video inpainting aims to complete masked regions with content that is both plausible and temporally consistent. Previous video inpainting algorithms primarily rely on two mechanisms:"
222
+ }
223
+ ],
224
+ [
225
+ {
226
+ "type": "text",
227
+ "bbox": [
228
+ 0.077,
229
+ 0.092,
230
+ 0.468,
231
+ 0.136
232
+ ],
233
+ "angle": 0,
234
+ "content": "1) Flow-based pixel propagation methods, which utilize optical flow to restore texture details and objects by leveraging information from adjacent frames; and"
235
+ },
236
+ {
237
+ "type": "text",
238
+ "bbox": [
239
+ 0.078,
240
+ 0.138,
241
+ 0.468,
242
+ 0.182
243
+ ],
244
+ "angle": 0,
245
+ "content": "2) Transformer-based video inpainting methods, which excel at completing the structural aspects of objects [26]."
246
+ },
247
+ {
248
+ "type": "list",
249
+ "bbox": [
250
+ 0.077,
251
+ 0.092,
252
+ 0.468,
253
+ 0.182
254
+ ],
255
+ "angle": 0,
256
+ "content": null
257
+ },
258
+ {
259
+ "type": "text",
260
+ "bbox": [
261
+ 0.078,
262
+ 0.185,
263
+ 0.468,
264
+ 0.214
265
+ ],
266
+ "angle": 0,
267
+ "content": "Current mainstream algorithms typically combine these two approaches, consisting of three modules or stages:"
268
+ },
269
+ {
270
+ "type": "text",
271
+ "bbox": [
272
+ 0.098,
273
+ 0.216,
274
+ 0.239,
275
+ 0.229
276
+ ],
277
+ "angle": 0,
278
+ "content": "1) Flow completion,"
279
+ },
280
+ {
281
+ "type": "text",
282
+ "bbox": [
283
+ 0.097,
284
+ 0.231,
285
+ 0.295,
286
+ 0.245
287
+ ],
288
+ "angle": 0,
289
+ "content": "2) Feature propagation, and"
290
+ },
291
+ {
292
+ "type": "text",
293
+ "bbox": [
294
+ 0.097,
295
+ 0.248,
296
+ 0.258,
297
+ 0.262
298
+ ],
299
+ "angle": 0,
300
+ "content": "3) Content generation."
301
+ },
302
+ {
303
+ "type": "list",
304
+ "bbox": [
305
+ 0.097,
306
+ 0.216,
307
+ 0.295,
308
+ 0.262
309
+ ],
310
+ "angle": 0,
311
+ "content": null
312
+ },
313
+ {
314
+ "type": "text",
315
+ "bbox": [
316
+ 0.097,
317
+ 0.264,
318
+ 0.462,
319
+ 0.278
320
+ ],
321
+ "angle": 0,
322
+ "content": "This solution categorizes masked pixels into two types:"
323
+ },
324
+ {
325
+ "type": "text",
326
+ "bbox": [
327
+ 0.077,
328
+ 0.28,
329
+ 0.468,
330
+ 0.353
331
+ ],
332
+ "angle": 0,
333
+ "content": "1) Known pixels, which have appeared in some masked frames and can be propagated to other frames through flow completion and feature propagation modules, ensuring consistency between the completed content and the unmasked regions; and"
334
+ },
335
+ {
336
+ "type": "text",
337
+ "bbox": [
338
+ 0.077,
339
+ 0.356,
340
+ 0.468,
341
+ 0.413
342
+ ],
343
+ "angle": 0,
344
+ "content": "2) Unknown pixels, which have never appeared in any masked frames and are generated by the content generation module, thereby enhancing the structural integrity of the results."
345
+ },
346
+ {
347
+ "type": "list",
348
+ "bbox": [
349
+ 0.077,
350
+ 0.28,
351
+ 0.468,
352
+ 0.413
353
+ ],
354
+ "angle": 0,
355
+ "content": null
356
+ },
357
+ {
358
+ "type": "text",
359
+ "bbox": [
360
+ 0.077,
361
+ 0.417,
362
+ 0.468,
363
+ 0.552
364
+ ],
365
+ "angle": 0,
366
+ "content": "The state-of-the-art algorithm, Propainter [46], exemplifies this approach and comprises three key modules: recurrent flow completion, dual-domain propagation, and mask-guided sparse Transformer. It effectively propagates known pixels across all frames and demonstrates an initial ability to generate unknown pixels. However, when the mask size is large, the generative capability of the Transformer model proves insufficient, leading to significant artifacts, as illustrated in Figure 1."
367
+ },
368
+ {
369
+ "type": "text",
370
+ "bbox": [
371
+ 0.077,
372
+ 0.553,
373
+ 0.468,
374
+ 0.627
375
+ ],
376
+ "angle": 0,
377
+ "content": "Consequently, there is a need for more powerful models with enhanced generative capabilities. The Stable Diffusion model, which has recently gained prominence in the field of image and video generation, presents a promising candidate."
378
+ },
379
+ {
380
+ "type": "text",
381
+ "bbox": [
382
+ 0.077,
383
+ 0.63,
384
+ 0.468,
385
+ 0.734
386
+ ],
387
+ "angle": 0,
388
+ "content": "In this work, we first decompose the video inpainting task into three sub-problems and then propose corresponding solutions for each. Specifically, the three key challenges are: the propagation of known pixels, the generation of unknown pixels, and the temporal consistency of the completed content. Our main contributions are summarized as follows:"
389
+ },
390
+ {
391
+ "type": "text",
392
+ "bbox": [
393
+ 0.09,
394
+ 0.751,
395
+ 0.468,
396
+ 0.856
397
+ ],
398
+ "angle": 0,
399
+ "content": "1. Video Inpainting Diffusion: We introduce a motion module for the image inpainting model BrushNet, which is based on diffusion models. The powerful generative capability of diffusion models overcomes the blurring and mosaic artifacts associated with Transformer-based models, thereby completing object structures and generating more detailed content."
400
+ },
401
+ {
402
+ "type": "text",
403
+ "bbox": [
404
+ 0.089,
405
+ 0.871,
406
+ 0.468,
407
+ 0.901
408
+ ],
409
+ "angle": 0,
410
+ "content": "2. Injected Priors: We incorporate priors into the diffusion model, enabling easier initialization to mitigate"
411
+ },
412
+ {
413
+ "type": "list",
414
+ "bbox": [
415
+ 0.089,
416
+ 0.751,
417
+ 0.468,
418
+ 0.901
419
+ ],
420
+ "angle": 0,
421
+ "content": null
422
+ },
423
+ {
424
+ "type": "text",
425
+ "bbox": [
426
+ 0.531,
427
+ 0.092,
428
+ 0.89,
429
+ 0.122
430
+ ],
431
+ "angle": 0,
432
+ "content": "noisy artifacts and serving as a weak condition to suppress the generation of unwanted objects."
433
+ },
434
+ {
435
+ "type": "text",
436
+ "bbox": [
437
+ 0.513,
438
+ 0.134,
439
+ 0.892,
440
+ 0.24
441
+ ],
442
+ "angle": 0,
443
+ "content": "3. Enhanced Temporal Consistency: We improve the temporal consistency of long-sequence inference by expanding the temporal receptive fields of both the prior model and the diffusion model. Additionally, we further enhance temporal continuity at the intersections between clips by leveraging the temporal smoothing property of the Video Diffusion Model."
444
+ },
445
+ {
446
+ "type": "title",
447
+ "bbox": [
448
+ 0.5,
449
+ 0.255,
450
+ 0.65,
451
+ 0.271
452
+ ],
453
+ "angle": 0,
454
+ "content": "2. Related Works"
455
+ },
456
+ {
457
+ "type": "text",
458
+ "bbox": [
459
+ 0.498,
460
+ 0.281,
461
+ 0.892,
462
+ 0.492
463
+ ],
464
+ "angle": 0,
465
+ "content": "Diffusion Models. The advent of diffusion models [14, 32, 34] has significantly enhanced the quality and creativity of image and video generation. In the realm of image synthesis, diffusion models have driven substantial progress across various tasks, including text-to-image generation [5, 29], controllable image generation [24, 43], image editing [1, 12, 22], personalized image generation [6, 28], and image inpainting [27, 16], among others. Building on these advancements, video diffusion models incorporating additional motion modules have also gained significant traction. Key applications in this domain include text-to-video generation [11, 8, 10, 13, 15, 31], controllable video generation [3, 4, 36, 39], video editing [19, 23, 38, 21], and various training-free video synthesis methods [44, 25]."
466
+ },
467
+ {
468
+ "type": "text",
469
+ "bbox": [
470
+ 0.498,
471
+ 0.493,
472
+ 0.892,
473
+ 0.779
474
+ ],
475
+ "angle": 0,
476
+ "content": "Video Inpainting. Video inpainting aims to fill masked regions in videos with plausible content while maintaining temporal consistency. Early approaches based on 3D convolution and shifting operations exhibited limited performance. The emergence of methods leveraging optical flow and Transformer architectures has significantly improved the quality of video inpainting. Flow-based pixel propagation methods [7, 41, 42] excel at restoring textures and details by utilizing information from adjacent frames. In contrast, Transformer-based methods [40, 20, 18, 46] are adept at completing the structural aspects of objects. Among these, Propainter [46] stands out as a representative approach, comprising recurrent flow completion, dual-domain propagation, and a mask-guided sparse Transformer. Propainter effectively propagates known pixels across all frames and demonstrates an initial ability to generate unknown pixels. However, its generative capacity is limited when dealing with large masks, leading to noticeable artifacts."
477
+ },
478
+ {
479
+ "type": "text",
480
+ "bbox": [
481
+ 0.498,
482
+ 0.78,
483
+ 0.892,
484
+ 0.901
485
+ ],
486
+ "angle": 0,
487
+ "content": "With the rising popularity of diffusion models, diffusion-based video inpainting methods have begun to emerge [17, 37, 30, 9, 45, 47]. These approaches leverage the powerful generative capabilities of diffusion models to enhance both the detail and structural integrity of the inpainted regions, addressing some of the limitations observed in Transformer-based methods. BIVDiff[30] is a training-free framework via bridging image and video diffusion models. AVID[45]"
488
+ }
489
+ ],
490
+ [
491
+ {
492
+ "type": "image",
493
+ "bbox": [
494
+ 0.159,
495
+ 0.087,
496
+ 0.817,
497
+ 0.322
498
+ ],
499
+ "angle": 0,
500
+ "content": null
501
+ },
502
+ {
503
+ "type": "image_caption",
504
+ "bbox": [
505
+ 0.076,
506
+ 0.339,
507
+ 0.895,
508
+ 0.396
509
+ ],
510
+ "angle": 0,
511
+ "content": "Figure 2. Overview of the proposed video inpainting model DiffuEraser, based on stable diffusion. The main denoising UNet performs the denoising process to generate the final output. The BrushNet branch extracts features from masked images, which are added to the main denoising UNet layer by layer after a zero convolution block. Temporal attention is incorporated after self-attention and cross-attention to improve temporal consistency."
512
+ },
513
+ {
514
+ "type": "text",
515
+ "bbox": [
516
+ 0.076,
517
+ 0.42,
518
+ 0.473,
519
+ 0.737
520
+ ],
521
+ "angle": 0,
522
+ "content": "and CoCoCo[47] improved text-guided video inpainting by integrating motion module to Text-to-Image(T2I) model. [37] proposes language-driven video inpainting via Multimodal Large Language Models, which uses natural language instructions to guide the inpainting process. Nevertheless, they always suffer from the inherent hallucinations of diffusion models. FloED[9] with less hallucination proposes a dedicated dual-branch architecture that incorporates motion guidance with a multi-scale flow adapter to enhance temporal consistency, focusing on object removal and background restoration. FFF-VDI[17] propagates the noise latent information of future frames to fill the masked area of the first frame's noise latent code, improving temporal consistency and suppressing hallucination effects. However, these methods do not effectively address the temporal consistency and stability needed for long-sequence inference and there is still room for improvement in detail and structural integrity. In contrast, DiffuEraser can generate temporally consistent results with enhanced detail and a more complete structure for long-sequence inference, all without requiring a text prompt."
523
+ },
524
+ {
525
+ "type": "title",
526
+ "bbox": [
527
+ 0.077,
528
+ 0.758,
529
+ 0.212,
530
+ 0.775
531
+ ],
532
+ "angle": 0,
533
+ "content": "3. Methodology"
534
+ },
535
+ {
536
+ "type": "title",
537
+ "bbox": [
538
+ 0.077,
539
+ 0.785,
540
+ 0.258,
541
+ 0.8
542
+ ],
543
+ "angle": 0,
544
+ "content": "3.1. Network Overview"
545
+ },
546
+ {
547
+ "type": "text",
548
+ "bbox": [
549
+ 0.076,
550
+ 0.81,
551
+ 0.471,
552
+ 0.903
553
+ ],
554
+ "angle": 0,
555
+ "content": "Our network architecture is inspired byAnimateDiff [11], integrating a motion module into the image inpainting model. For the image inpainting component, we select BrushNet [16], which enhances the main denoising UNet by adding an additional branch to extract features from masked images. An overview of our proposed model, DiffuEraser,"
556
+ },
557
+ {
558
+ "type": "text",
559
+ "bbox": [
560
+ 0.498,
561
+ 0.42,
562
+ 0.892,
563
+ 0.6
564
+ ],
565
+ "angle": 0,
566
+ "content": "is depicted in Figure 2. The architecture comprises the primary denoising UNet and an auxiliary BrushNet. The BrushNet branch receives a conditional latent input composed of masked images, masks, and noisy latents, with dimensions \\([n,f,h / 4,w / 4,9]\\). Features extracted by BrushNet are integrated into the denoising UNet layer by layer after a zero convolution block. The denoising UNet processes noisy latents with dimensions \\([n,f,h / 4,w / 4,4]\\). To enhance temporal consistency, temporal attention mechanisms are incorporated following both self-attention and cross-attention layers. After denoising, the generated images are blended with the input masked images using blurred masks."
567
+ },
568
+ {
569
+ "type": "text",
570
+ "bbox": [
571
+ 0.498,
572
+ 0.601,
573
+ 0.892,
574
+ 0.692
575
+ ],
576
+ "angle": 0,
577
+ "content": "We define the video inpainting problem by decomposing it into three sub-problems: propagation of known pixels (pixels that have appeared in some masked frames), generation of unknown pixels (pixels that have never appeared in any masked frames), and maintaining temporal consistency of the completed content. Specifically:"
578
+ },
579
+ {
580
+ "type": "text",
581
+ "bbox": [
582
+ 0.514,
583
+ 0.704,
584
+ 0.895,
585
+ 0.901
586
+ ],
587
+ "angle": 0,
588
+ "content": "1. Propagation of Known Pixels: The motion module inherently supports temporal propagation, allowing the restoration of texture details and objects in the current frame using information from adjacent frames. Additionally, we leverage the enhanced propagation capabilities of the prior model, which offers a longer propagation range and a more sophisticated propagation mechanism. Specifically, we apply DDIM inversion on the inpainting results from the prior model and incorporate them into the noisy latent. See Section 3.2 for details. We utilize Propainter as our prior model. Beyond supporting the propagation of known pixels, the injected prior facilitates easier initialization"
589
+ }
590
+ ],
591
+ [
592
+ {
593
+ "type": "text",
594
+ "bbox": [
595
+ 0.109,
596
+ 0.092,
597
+ 0.47,
598
+ 0.152
599
+ ],
600
+ "angle": 0,
601
+ "content": "for DiffuEraser, enabling the generation of meaningful completed content and suppressing noisy artifacts and visual hallucinations commonly associated with diffusion models."
602
+ },
603
+ {
604
+ "type": "text",
605
+ "bbox": [
606
+ 0.09,
607
+ 0.161,
608
+ 0.47,
609
+ 0.222
610
+ ],
611
+ "angle": 0,
612
+ "content": "2. Generation of Unknown Pixels: Utilizing the robust generative capabilities of the stable diffusion model, our approach can generate plausible content with more details and textures for unknown pixels."
613
+ },
614
+ {
615
+ "type": "text",
616
+ "bbox": [
617
+ 0.09,
618
+ 0.231,
619
+ 0.473,
620
+ 0.457
621
+ ],
622
+ "angle": 0,
623
+ "content": "3. Temporal Consistency of Completed Content: While the motion module ensures temporal consistency within individual inferences (each handling a clip of 22 frames in our setting), discrepancies arise at the boundaries between clips during long-sequence processing. To address this, we expand the temporal receptive field of the model. This is achieved by performing pre-inference, where video frames are sampled at an optimal rate and processed collectively as a single clip. This enables the model to \"see\" frames from a broader temporal context. Subsequently, the insights gained from pre-inference are used to guide the frame-by-frame inference, incorporating information from distant frames and thereby enhancing the overall temporal continuity. See Section 3.3 for details."
624
+ },
625
+ {
626
+ "type": "list",
627
+ "bbox": [
628
+ 0.09,
629
+ 0.092,
630
+ 0.473,
631
+ 0.457
632
+ ],
633
+ "angle": 0,
634
+ "content": null
635
+ },
636
+ {
637
+ "type": "text",
638
+ "bbox": [
639
+ 0.076,
640
+ 0.468,
641
+ 0.47,
642
+ 0.558
643
+ ],
644
+ "angle": 0,
645
+ "content": "As demonstrated in other studies, the generative capability of stable diffusion models and the temporal consistency provided by motion modules are well-established. In this paper, we focus on illustrating the advantages of incorporating priors and optimizing temporal consistency across clips during long-sequence inference."
646
+ },
647
+ {
648
+ "type": "title",
649
+ "bbox": [
650
+ 0.077,
651
+ 0.567,
652
+ 0.295,
653
+ 0.584
654
+ ],
655
+ "angle": 0,
656
+ "content": "3.2. Incorporation of Priors"
657
+ },
658
+ {
659
+ "type": "text",
660
+ "bbox": [
661
+ 0.076,
662
+ 0.591,
663
+ 0.47,
664
+ 0.651
665
+ ],
666
+ "angle": 0,
667
+ "content": "As illustrated in Figure 3, our model occasionally generates meaningless noisy artifacts within masked regions. For instance, the masked area above the sea level may appear as random noise instead of coherent content."
668
+ },
669
+ {
670
+ "type": "image",
671
+ "bbox": [
672
+ 0.099,
673
+ 0.663,
674
+ 0.451,
675
+ 0.757
676
+ ],
677
+ "angle": 0,
678
+ "content": null
679
+ },
680
+ {
681
+ "type": "image_caption",
682
+ "bbox": [
683
+ 0.076,
684
+ 0.77,
685
+ 0.47,
686
+ 0.811
687
+ ],
688
+ "angle": 0,
689
+ "content": "Figure 3. Example of noisy artifacts generated by the model. The masked region above the sea level is not completed correctly and resembles random noise."
690
+ },
691
+ {
692
+ "type": "text",
693
+ "bbox": [
694
+ 0.076,
695
+ 0.826,
696
+ 0.471,
697
+ 0.903
698
+ ],
699
+ "angle": 0,
700
+ "content": "To address these artifacts, we enhance the noisy latent—an integral part of the model's input. Inspired by DDIM Inversion [33], we introduce priors during inference. Specifically, we perform DDIM Inversion on the outputs of a chosen lightweight inpainting model and incorporate the"
701
+ },
702
+ {
703
+ "type": "text",
704
+ "bbox": [
705
+ 0.498,
706
+ 0.092,
707
+ 0.892,
708
+ 0.198
709
+ ],
710
+ "angle": 0,
711
+ "content": "inverted results into the noisy latent, as depicted in Figure 4. The prior provides initialization information that enables the model to generate meaningful and stable completed content, effectively eliminating the noisy artifacts shown in Figure 3. Additionally, the prior acts as a weak condition to suppress the generation of unwanted objects, mitigating visual hallucinations often encountered in diffusion models."
712
+ },
713
+ {
714
+ "type": "image",
715
+ "bbox": [
716
+ 0.502,
717
+ 0.209,
718
+ 0.895,
719
+ 0.403
720
+ ],
721
+ "angle": 0,
722
+ "content": null
723
+ },
724
+ {
725
+ "type": "image_caption",
726
+ "bbox": [
727
+ 0.499,
728
+ 0.415,
729
+ 0.893,
730
+ 0.458
731
+ ],
732
+ "angle": 0,
733
+ "content": "Figure 4. Incorporation of priors. We introduce priors during inference by performing DDIM inversion on the outputs of the prior model and adding them to the noisy latent."
734
+ },
735
+ {
736
+ "type": "text",
737
+ "bbox": [
738
+ 0.498,
739
+ 0.474,
740
+ 0.892,
741
+ 0.579
742
+ ],
743
+ "angle": 0,
744
+ "content": "The selection of the prior model significantly impacts the final results. After experimental comparisons, we selected Propainter as our prior model. Notably, any blur and mosaic artifacts present in the prior do not adversely affect our model's outputs; instead, they are refined and eliminated, resulting in inpainted regions with richer textures and greater detail."
745
+ },
746
+ {
747
+ "type": "text",
748
+ "bbox": [
749
+ 0.498,
750
+ 0.58,
751
+ 0.892,
752
+ 0.656
753
+ ],
754
+ "angle": 0,
755
+ "content": "Figure 5 compares the results before and after incorporating priors, demonstrating that the introduction of priors effectively suppresses noisy artifacts and the emergence of unwanted objects, thereby significantly enhancing the accuracy and stability of the inpainting results."
756
+ },
757
+ {
758
+ "type": "image",
759
+ "bbox": [
760
+ 0.521,
761
+ 0.671,
762
+ 0.872,
763
+ 0.842
764
+ ],
765
+ "angle": 0,
766
+ "content": null
767
+ },
768
+ {
769
+ "type": "image_caption",
770
+ "bbox": [
771
+ 0.5,
772
+ 0.856,
773
+ 0.892,
774
+ 0.884
775
+ ],
776
+ "angle": 0,
777
+ "content": "Figure 5. Comparison of inpainting results before and after incorporating priors."
778
+ }
779
+ ],
780
+ [
781
+ {
782
+ "type": "image",
783
+ "bbox": [
784
+ 0.083,
785
+ 0.09,
786
+ 0.895,
787
+ 0.236
788
+ ],
789
+ "angle": 0,
790
+ "content": null
791
+ },
792
+ {
793
+ "type": "image_caption",
794
+ "bbox": [
795
+ 0.076,
796
+ 0.253,
797
+ 0.895,
798
+ 0.281
799
+ ],
800
+ "angle": 0,
801
+ "content": "Figure 6. Utilizing the temporal smoothing property of the Video Diffusion Model (VDM) to enhance consistency at the intersections of clips."
802
+ },
803
+ {
804
+ "type": "title",
805
+ "bbox": [
806
+ 0.077,
807
+ 0.304,
808
+ 0.471,
809
+ 0.336
810
+ ],
811
+ "angle": 0,
812
+ "content": "3.3. Optimizing Temporal Consistency for Long-Sequence Inference"
813
+ },
814
+ {
815
+ "type": "text",
816
+ "bbox": [
817
+ 0.076,
818
+ 0.343,
819
+ 0.471,
820
+ 0.435
821
+ ],
822
+ "angle": 0,
823
+ "content": "While the motion module maintains good temporal consistency within individual clips (for example, 22 frames), noticeable discrepancies emerge at the boundaries between consecutive clips during long-sequence inference, as shown in Figure 7. To ensure seamless temporal consistency across the entire video, we implement the following optimizations."
824
+ },
825
+ {
826
+ "type": "title",
827
+ "bbox": [
828
+ 0.077,
829
+ 0.453,
830
+ 0.471,
831
+ 0.484
832
+ ],
833
+ "angle": 0,
834
+ "content": "3.3.1 Leveraging the Temporal Smoothing Property of the Video Diffusion Model (VDM)"
835
+ },
836
+ {
837
+ "type": "text",
838
+ "bbox": [
839
+ 0.076,
840
+ 0.493,
841
+ 0.47,
842
+ 0.629
843
+ ],
844
+ "angle": 0,
845
+ "content": "The absence of specific temporal conditioning leads to significant changes in completed content between clips, a problem that cannot be resolved by merely overlapping neighboring clips. Inspired by the concept of interpolating between timesteps to obtain intermediate results [9], we adopt a staggered denoising approach along sequential timesteps. This method utilizes the inherent temporal smoothing property of VDM to enhance consistency between clips."
846
+ },
847
+ {
848
+ "type": "text",
849
+ "bbox": [
850
+ 0.076,
851
+ 0.63,
852
+ 0.47,
853
+ 0.809
854
+ ],
855
+ "angle": 0,
856
+ "content": "During inference, even-numbered timesteps remain inferred from the starting position of the clip, while odd-numbered timesteps are inferred from the midpoint of the clip, Figure 6. This staggered denoising leverages VDM's temporal smoothing property to blend frames at clip intersections smoothly. The underlying rationale is that, despite identical latent inputs, the denoising results for overlapped frames from adjacent clips differ due to VDM's temporal smoothing property, which adjusts overlapped frames to be temporally consistent with the starting frame. By applying this smoothing property at clip intersections, we achieve more seamless transitions."
857
+ },
858
+ {
859
+ "type": "text",
860
+ "bbox": [
861
+ 0.076,
862
+ 0.811,
863
+ 0.471,
864
+ 0.901
865
+ ],
866
+ "angle": 0,
867
+ "content": "When processing long videos divided into multiple clips, preliminary optimizations lead to multiple adjustments at clip intersections. After optimization, these transitions are smoothed into a single gradual change from the first to the last frame of the entire video. However, complete consistency across the entire video remains unattainable due to"
868
+ },
869
+ {
870
+ "type": "text",
871
+ "bbox": [
872
+ 0.5,
873
+ 0.305,
874
+ 0.88,
875
+ 0.321
876
+ ],
877
+ "angle": 0,
878
+ "content": "inherent inconsistencies between the first and last frames."
879
+ },
880
+ {
881
+ "type": "image",
882
+ "bbox": [
883
+ 0.509,
884
+ 0.328,
885
+ 0.895,
886
+ 0.429
887
+ ],
888
+ "angle": 0,
889
+ "content": null
890
+ },
891
+ {
892
+ "type": "image_caption",
893
+ "bbox": [
894
+ 0.5,
895
+ 0.441,
896
+ 0.893,
897
+ 0.469
898
+ ],
899
+ "angle": 0,
900
+ "content": "Figure 7. Temporal consistency optimization for long-sequence inference."
901
+ },
902
+ {
903
+ "type": "title",
904
+ "bbox": [
905
+ 0.5,
906
+ 0.5,
907
+ 0.839,
908
+ 0.516
909
+ ],
910
+ "angle": 0,
911
+ "content": "3.3.2 Expanding the Temporal Receptive Field"
912
+ },
913
+ {
914
+ "type": "text",
915
+ "bbox": [
916
+ 0.498,
917
+ 0.524,
918
+ 0.892,
919
+ 0.689
920
+ ],
921
+ "angle": 0,
922
+ "content": "A single inference pass can process only a limited number of frames (for instance, 22 frames in our setting), which restricts the temporal receptive field and prevents the propagation of known pixels from distant frames. Additionally, information sharing between different clips is constrained, resulting in inconsistencies in detailed content despite similar semantics across clips. This leads to frequent and noticeable changes during long-sequence inference, as illustrated in Figure 7. To mitigate this, we expand the temporal receptive field of the inference process through the following two strategies."
923
+ },
924
+ {
925
+ "type": "title",
926
+ "bbox": [
927
+ 0.499,
928
+ 0.69,
929
+ 0.891,
930
+ 0.719
931
+ ],
932
+ "angle": 0,
933
+ "content": "1. Enhancing Priors for Comprehensive Pixel Propagation"
934
+ },
935
+ {
936
+ "type": "text",
937
+ "bbox": [
938
+ 0.498,
939
+ 0.72,
940
+ 0.892,
941
+ 0.84
942
+ ],
943
+ "angle": 0,
944
+ "content": "Using Propainter as an example, we first sample the input video frames and perform pre-propagation to extend known pixels across the entire time domain, surpassing the temporal limitations of a single propagation pass (which typically handles dozens of frames), as shown in Figure 8(a). Full propagation of known pixels ensures that the completed content remains consistent with the unmasked regions, thereby stabilizing the results."
945
+ },
946
+ {
947
+ "type": "text",
948
+ "bbox": [
949
+ 0.498,
950
+ 0.841,
951
+ 0.892,
952
+ 0.901
953
+ ],
954
+ "angle": 0,
955
+ "content": "Subsequently, the inpainting results of the sampled frames guide frame-by-frame propagation, allowing the information obtained from pre-propagation to be integrated into every frame, as depicted in Figure 9(a)."
956
+ }
957
+ ],
958
+ [
959
+ {
960
+ "type": "text",
961
+ "bbox": [
962
+ 0.076,
963
+ 0.092,
964
+ 0.473,
965
+ 0.198
966
+ ],
967
+ "angle": 0,
968
+ "content": "This optimization enables Propainter to utilize information from distant frames more effectively, ensuring that known pixels are stably propagated across the entire time domain. Consequently, the prior provided to DiffuEraser is more accurate and stable. Nonetheless, DiffuEraser's limited temporal receptive field still results in significant changes at clip intersections."
969
+ },
970
+ {
971
+ "type": "image",
972
+ "bbox": [
973
+ 0.082,
974
+ 0.213,
975
+ 0.462,
976
+ 0.344
977
+ ],
978
+ "angle": 0,
979
+ "content": null
980
+ },
981
+ {
982
+ "type": "image_caption",
983
+ "bbox": [
984
+ 0.219,
985
+ 0.347,
986
+ 0.321,
987
+ 0.36
988
+ ],
989
+ "angle": 0,
990
+ "content": "(a) Pre-propagation"
991
+ },
992
+ {
993
+ "type": "image",
994
+ "bbox": [
995
+ 0.08,
996
+ 0.37,
997
+ 0.462,
998
+ 0.5
999
+ ],
1000
+ "angle": 0,
1001
+ "content": null
1002
+ },
1003
+ {
1004
+ "type": "image_caption",
1005
+ "bbox": [
1006
+ 0.22,
1007
+ 0.502,
1008
+ 0.31,
1009
+ 0.514
1010
+ ],
1011
+ "angle": 0,
1012
+ "content": "(b) Pre-inference"
1013
+ },
1014
+ {
1015
+ "type": "image_caption",
1016
+ "bbox": [
1017
+ 0.076,
1018
+ 0.524,
1019
+ 0.47,
1020
+ 0.553
1021
+ ],
1022
+ "angle": 0,
1023
+ "content": "Figure 8. Perform pre-propagation or pre-inference to expand the temporal receptive field of model."
1024
+ },
1025
+ {
1026
+ "type": "title",
1027
+ "bbox": [
1028
+ 0.076,
1029
+ 0.568,
1030
+ 0.469,
1031
+ 0.598
1032
+ ],
1033
+ "angle": 0,
1034
+ "content": "2. Expanding the Temporal Receptive Field of DiffuEraser for consistent generation of unknown pixels"
1035
+ },
1036
+ {
1037
+ "type": "text",
1038
+ "bbox": [
1039
+ 0.076,
1040
+ 0.599,
1041
+ 0.469,
1042
+ 0.704
1043
+ ],
1044
+ "angle": 0,
1045
+ "content": "To further enhance temporal consistency, we also expand the temporal receptive field of DiffuEraser. Similar to the prior optimization, we introduce a pre-inference step where video frames are sampled and processed as a single inference pass, thereby broadening the temporal context and ensuring consistent content generation across the entire video, as shown in Figure 8(b)."
1046
+ },
1047
+ {
1048
+ "type": "text",
1049
+ "bbox": [
1050
+ 0.076,
1051
+ 0.705,
1052
+ 0.469,
1053
+ 0.765
1054
+ ],
1055
+ "angle": 0,
1056
+ "content": "Following pre-inference, the results guide frame-by-frame inference, ensuring that the content consistency established during pre-inference is maintained throughout all remaining frames, as illustrated in Figure 9(b)."
1057
+ },
1058
+ {
1059
+ "type": "text",
1060
+ "bbox": [
1061
+ 0.076,
1062
+ 0.765,
1063
+ 0.47,
1064
+ 0.902
1065
+ ],
1066
+ "angle": 0,
1067
+ "content": "The core principle behind these optimizations—both for priors and DiffuEraser—is to extend the temporal receptive field to encompass the entire video duration, rather than being confined to individual clips. The optimization of prior ensures comprehensive propagation of known pixels, maintaining result correctness, while the optimization of DiffuEraser focuses on the consistent generation of unknown pixels, ensuring overall stability. Together, these enhancements effectively resolve the temporal consistency is-"
1068
+ },
1069
+ {
1070
+ "type": "image",
1071
+ "bbox": [
1072
+ 0.505,
1073
+ 0.088,
1074
+ 0.895,
1075
+ 0.209
1076
+ ],
1077
+ "angle": 0,
1078
+ "content": null
1079
+ },
1080
+ {
1081
+ "type": "image_caption",
1082
+ "bbox": [
1083
+ 0.604,
1084
+ 0.213,
1085
+ 0.785,
1086
+ 0.227
1087
+ ],
1088
+ "angle": 0,
1089
+ "content": "(a) Frame-by-frame propagation"
1090
+ },
1091
+ {
1092
+ "type": "image",
1093
+ "bbox": [
1094
+ 0.504,
1095
+ 0.237,
1096
+ 0.895,
1097
+ 0.363
1098
+ ],
1099
+ "angle": 0,
1100
+ "content": null
1101
+ },
1102
+ {
1103
+ "type": "image_caption",
1104
+ "bbox": [
1105
+ 0.603,
1106
+ 0.367,
1107
+ 0.769,
1108
+ 0.38
1109
+ ],
1110
+ "angle": 0,
1111
+ "content": "(b) Frame-by-frame inference"
1112
+ },
1113
+ {
1114
+ "type": "image_caption",
1115
+ "bbox": [
1116
+ 0.499,
1117
+ 0.39,
1118
+ 0.892,
1119
+ 0.419
1120
+ ],
1121
+ "angle": 0,
1122
+ "content": "Figure 9. The temporal consistency obtained from pre-propagation or pre-inference is maintained throughout all remaining frames."
1123
+ },
1124
+ {
1125
+ "type": "text",
1126
+ "bbox": [
1127
+ 0.499,
1128
+ 0.446,
1129
+ 0.892,
1130
+ 0.477
1131
+ ],
1132
+ "angle": 0,
1133
+ "content": "sues inherent in long-sequence inference, as demonstrated in Figure 7."
1134
+ },
1135
+ {
1136
+ "type": "title",
1137
+ "bbox": [
1138
+ 0.5,
1139
+ 0.494,
1140
+ 0.633,
1141
+ 0.511
1142
+ ],
1143
+ "angle": 0,
1144
+ "content": "4. Experiments"
1145
+ },
1146
+ {
1147
+ "type": "text",
1148
+ "bbox": [
1149
+ 0.499,
1150
+ 0.52,
1151
+ 0.892,
1152
+ 0.61
1153
+ ],
1154
+ "angle": 0,
1155
+ "content": "Datasets. We utilized the Panda-70M dataset [2], splitting videos at scene cuts and filtering them based on matching scores to obtain 3,183,727 short video clips paired with captions. During training, we generated mask sequences with random rates, directions, and shapes to simulate video inpainting and object removal tasks."
1156
+ },
1157
+ {
1158
+ "type": "text",
1159
+ "bbox": [
1160
+ 0.498,
1161
+ 0.612,
1162
+ 0.893,
1163
+ 0.793
1164
+ ],
1165
+ "angle": 0,
1166
+ "content": "Training Details and Metrics. We employed a two-stage training strategy with a resolution of 512. In the first stage, we trained the BrushNet and the main denoising UNet without the motion module to enhance content generation capabilities. In the second stage, we trained the motion module of the main denoising UNet to improve temporal consistency. The first stage is trained on 4 NVIDIA A100 GPUs for 100,000 steps with a batch size of 16, and the second stage is trained on 8 NVIDIA A100 GPUs for 80,000 steps with 22-frame video sequences and a batch size of 1. Both models were optimized using the L2 loss function and a learning rate of 1e-5."
1167
+ },
1168
+ {
1169
+ "type": "text",
1170
+ "bbox": [
1171
+ 0.498,
1172
+ 0.794,
1173
+ 0.892,
1174
+ 0.87
1175
+ ],
1176
+ "angle": 0,
1177
+ "content": "Efficiency. Leveraging Phased Consistency Models (PCM) [35], our model can generate samples in only two steps, significantly improving inference efficiency. For instance, processing a 10-second video at \\(540\\mathrm{p}\\) and 25 FPS using Nvidia GPU L20 requires about 200 seconds."
1178
+ },
1179
+ {
1180
+ "type": "text",
1181
+ "bbox": [
1182
+ 0.499,
1183
+ 0.871,
1184
+ 0.892,
1185
+ 0.902
1186
+ ],
1187
+ "angle": 0,
1188
+ "content": "Qualitative Comparison. Figure 1 illustrates a comparison between our model and Propainter both in texture"
1189
+ }
1190
+ ],
1191
+ [
1192
+ {
1193
+ "type": "text",
1194
+ "bbox": [
1195
+ 0.076,
1196
+ 0.092,
1197
+ 0.473,
1198
+ 0.184
1199
+ ],
1200
+ "angle": 0,
1201
+ "content": "quality and temporal consistency. For more comparison results, see Figure 10,11,12,13. Our model effectively propagates known pixels—those that appear in some masked frames—to all frames, while also generating unknown pixels—those that never appear in any masked frames—with high consistency and stability."
1202
+ },
1203
+ {
1204
+ "type": "title",
1205
+ "bbox": [
1206
+ 0.077,
1207
+ 0.195,
1208
+ 0.327,
1209
+ 0.21
1210
+ ],
1211
+ "angle": 0,
1212
+ "content": "5. Conclusion and Discussion"
1213
+ },
1214
+ {
1215
+ "type": "text",
1216
+ "bbox": [
1217
+ 0.076,
1218
+ 0.22,
1219
+ 0.47,
1220
+ 0.34
1221
+ ],
1222
+ "angle": 0,
1223
+ "content": "In this paper, we introduce DiffuEraser, a video inpainting model based on stable diffusion. We address the video inpainting task by decomposing it into three subproblems: propagation of known pixels (pixels appearing in some masked frames), generation of unknown pixels (pixels never appearing in any masked frames), and maintaining temporal consistency of the completed content. For each sub-problem, we propose tailored solutions."
1224
+ },
1225
+ {
1226
+ "type": "text",
1227
+ "bbox": [
1228
+ 0.076,
1229
+ 0.341,
1230
+ 0.47,
1231
+ 0.446
1232
+ ],
1233
+ "angle": 0,
1234
+ "content": "For the generation of unknown pixels, the powerful generative capabilities of the stable diffusion model help DiffuEraser effectively overcome the blurring and mosaic issues prevalent in Transformer-based models. Additionally, we mitigate the inherent hallucinations of stable diffusion models by incorporating priors, ensuring more accurate and realistic inpainting results."
1235
+ },
1236
+ {
1237
+ "type": "text",
1238
+ "bbox": [
1239
+ 0.076,
1240
+ 0.447,
1241
+ 0.47,
1242
+ 0.552
1243
+ ],
1244
+ "angle": 0,
1245
+ "content": "In terms of propagating known pixels, the motion module within the denoising UNet, combined with the enhanced propagation properties provided by priors, ensures the sufficient and consistent propagation of known pixels across frames. This prevents conflicts between the completed content and the unmasked regions, thereby improving the correctness and stability of the results."
1246
+ },
1247
+ {
1248
+ "type": "text",
1249
+ "bbox": [
1250
+ 0.076,
1251
+ 0.552,
1252
+ 0.47,
1253
+ 0.658
1254
+ ],
1255
+ "angle": 0,
1256
+ "content": "To address temporal inconsistencies between clips for long-sequence inference, we expand the temporal receptive field for both prior model and DiffuEraser, significantly enhancing the consistency of completed content across all frames. Furthermore, we leverage the temporal smoothing property of VDM to further enhance temporal coherence at the intersections between clips."
1257
+ },
1258
+ {
1259
+ "type": "text",
1260
+ "bbox": [
1261
+ 0.076,
1262
+ 0.658,
1263
+ 0.47,
1264
+ 0.794
1265
+ ],
1266
+ "angle": 0,
1267
+ "content": "The concepts of incorporating priors and the methods to improve temporal consistency for long-sequence inference are also applicable to a variety of other video editing tasks, such as object replacement and local stylization. These applications will be further explored in future works. Experimental results demonstrate that DiffuEraser outperforms state-of-the-art methods in both content completeness and temporal consistency, establishing it as a superior approach for video inpainting tasks."
1268
+ },
1269
+ {
1270
+ "type": "title",
1271
+ "bbox": [
1272
+ 0.078,
1273
+ 0.806,
1274
+ 0.175,
1275
+ 0.822
1276
+ ],
1277
+ "angle": 0,
1278
+ "content": "References"
1279
+ },
1280
+ {
1281
+ "type": "ref_text",
1282
+ "bbox": [
1283
+ 0.085,
1284
+ 0.83,
1285
+ 0.468,
1286
+ 0.873
1287
+ ],
1288
+ "angle": 0,
1289
+ "content": "[1] Tim Brooks, Aleksander Holynski, and Alexei A Efros. Instructpix2pix: Learning to follow image editing instructions. arXiv preprint arXiv:2211.09800, 2022."
1290
+ },
1291
+ {
1292
+ "type": "ref_text",
1293
+ "bbox": [
1294
+ 0.087,
1295
+ 0.874,
1296
+ 0.468,
1297
+ 0.902
1298
+ ],
1299
+ "angle": 0,
1300
+ "content": "[2] Tsai-Shien Chen, Aliaksandr Siarohin, Willi Menapace, Ekaterina Deyneka, Hsiang-wei Chao, Byung Eun Jeon,"
1301
+ },
1302
+ {
1303
+ "type": "list",
1304
+ "bbox": [
1305
+ 0.085,
1306
+ 0.83,
1307
+ 0.468,
1308
+ 0.902
1309
+ ],
1310
+ "angle": 0,
1311
+ "content": null
1312
+ },
1313
+ {
1314
+ "type": "ref_text",
1315
+ "bbox": [
1316
+ 0.533,
1317
+ 0.092,
1318
+ 0.895,
1319
+ 0.162
1320
+ ],
1321
+ "angle": 0,
1322
+ "content": "Yuwei Fang, Hsin-Ying Lee, Jian Ren, Ming-Hsuan Yang, and Sergey Tulyakov. Panda-70m: Captioning 70m videos with multiple cross-modality teachers. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2024."
1323
+ },
1324
+ {
1325
+ "type": "ref_text",
1326
+ "bbox": [
1327
+ 0.509,
1328
+ 0.164,
1329
+ 0.895,
1330
+ 0.219
1331
+ ],
1332
+ "angle": 0,
1333
+ "content": "[3] Weifeng Chen, Jie Wu, Pan Xie, Hefeng Wu, Jiashi Li, Xin Xia, Xuefeng Xiao, and Liang Lin. Control-a-video: Controllable text-to-video generation with diffusion models, 2023."
1334
+ },
1335
+ {
1336
+ "type": "ref_text",
1337
+ "bbox": [
1338
+ 0.509,
1339
+ 0.222,
1340
+ 0.895,
1341
+ 0.276
1342
+ ],
1343
+ "angle": 0,
1344
+ "content": "[4] Patrick Esser, Johnathan Chiu, Parmida Atighechian, Jonathan Granskog, and Anastasis Germanidis. Structure and content-guided video synthesis with diffusion models, 2023."
1345
+ },
1346
+ {
1347
+ "type": "ref_text",
1348
+ "bbox": [
1349
+ 0.509,
1350
+ 0.279,
1351
+ 0.895,
1352
+ 0.308
1353
+ ],
1354
+ "angle": 0,
1355
+ "content": "[5] Aditya Ramesh et al. Hierarchical text-conditional image generation with clip latents, 2022."
1356
+ },
1357
+ {
1358
+ "type": "ref_text",
1359
+ "bbox": [
1360
+ 0.509,
1361
+ 0.31,
1362
+ 0.895,
1363
+ 0.365
1364
+ ],
1365
+ "angle": 0,
1366
+ "content": "[6] Rinon Gal, Yuval Alaluf, Yuval Atzmon, Or Patashnik, Amit H. Bermano, Gal Chechik, and Daniel Cohen-Or. An image is worth one word: Personalizing text-to-image generation using textual inversion, 2022."
1367
+ },
1368
+ {
1369
+ "type": "ref_text",
1370
+ "bbox": [
1371
+ 0.509,
1372
+ 0.368,
1373
+ 0.895,
1374
+ 0.409
1375
+ ],
1376
+ "angle": 0,
1377
+ "content": "[7] Chen Gao, Ayush Saraf, Jia-Bin Huang, and Johannes Kopf. Flow-edge guided video completion. In Proc. European Conference on Computer Vision (ECCV), 2020."
1378
+ },
1379
+ {
1380
+ "type": "ref_text",
1381
+ "bbox": [
1382
+ 0.509,
1383
+ 0.412,
1384
+ 0.895,
1385
+ 0.467
1386
+ ],
1387
+ "angle": 0,
1388
+ "content": "[8] Songwei Ge, Seungjun Nah, Guilin Liu, Tyler Poon, Andrew Tao, Bryan Catanzaro, David Jacobs, Jia-Bin Huang, Ming-Yu Liu, and Yogesh Balaji. Preserve your own correlation: A noise prior for video diffusion models, 2024."
1389
+ },
1390
+ {
1391
+ "type": "ref_text",
1392
+ "bbox": [
1393
+ 0.509,
1394
+ 0.47,
1395
+ 0.895,
1396
+ 0.511
1397
+ ],
1398
+ "angle": 0,
1399
+ "content": "[9] Bohai Gu, Hao Luo, Song Guo, and Peiran Dong. Advanced video inpainting using optical flow-guided efficient diffusion. arXiv preprint arXiv:2412.00857, 2024."
1400
+ },
1401
+ {
1402
+ "type": "ref_text",
1403
+ "bbox": [
1404
+ 0.503,
1405
+ 0.513,
1406
+ 0.892,
1407
+ 0.568
1408
+ ],
1409
+ "angle": 0,
1410
+ "content": "[10] Jiaxi Gu, Shicong Wang, Haoyu Zhao, Tianyi Lu, Xing Zhang, Zuxuan Wu, Songcen Xu, Wei Zhang, Yu-Gang Jiang, and Hang Xu. Reuse and diffuse: Iterative denoising for text-to-video generation."
1411
+ },
1412
+ {
1413
+ "type": "ref_text",
1414
+ "bbox": [
1415
+ 0.502,
1416
+ 0.571,
1417
+ 0.892,
1418
+ 0.641
1419
+ ],
1420
+ "angle": 0,
1421
+ "content": "[11] Yuwei Guo, Ceyuan Yang, Anyi Rao, Zhengyang Liang, Yaohui Wang, Yu Qiao, Maneesh Agrawala, Dahua Lin, and Bo Dai. Animatediff: Animate your personalized text-to-image diffusion models without specific tuning. International Conference on Learning Representations, 2024."
1422
+ },
1423
+ {
1424
+ "type": "ref_text",
1425
+ "bbox": [
1426
+ 0.503,
1427
+ 0.643,
1428
+ 0.892,
1429
+ 0.697
1430
+ ],
1431
+ "angle": 0,
1432
+ "content": "[12] Amir Hertz, Ron Mokady, Jay Tenenbaum, Kfir Aberman, Yael Pritch, and Daniel Cohen-Or. Prompt-to-prompt image editing with cross attention control. arXiv preprint arXiv:2208.01626, 2022."
1433
+ },
1434
+ {
1435
+ "type": "ref_text",
1436
+ "bbox": [
1437
+ 0.503,
1438
+ 0.7,
1439
+ 0.892,
1440
+ 0.769
1441
+ ],
1442
+ "angle": 0,
1443
+ "content": "[13] Jonathan Ho, William Chan, Chitwan Saharia, Jay Whang, Ruiqi Gao, Alexey Gritsenko, Diederik P. Kingma, Ben Poole, Mohammad Norouzi, David J. Fleet, and Tim Salimans. Imagen video: High definition video generation with diffusion models, 2022."
1444
+ },
1445
+ {
1446
+ "type": "ref_text",
1447
+ "bbox": [
1448
+ 0.503,
1449
+ 0.772,
1450
+ 0.892,
1451
+ 0.812
1452
+ ],
1453
+ "angle": 0,
1454
+ "content": "[14] Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. arXiv preprint arxiv:2006.11239, 2020."
1455
+ },
1456
+ {
1457
+ "type": "ref_text",
1458
+ "bbox": [
1459
+ 0.503,
1460
+ 0.816,
1461
+ 0.892,
1462
+ 0.857
1463
+ ],
1464
+ "angle": 0,
1465
+ "content": "[15] Jonathan Ho, Tim Salimans, Alexey Gritsenko, William Chan, Mohammad Norouzi, and David J Fleet. Video diffusion models. arXiv:2204.03458, 2022."
1466
+ },
1467
+ {
1468
+ "type": "ref_text",
1469
+ "bbox": [
1470
+ 0.503,
1471
+ 0.86,
1472
+ 0.892,
1473
+ 0.901
1474
+ ],
1475
+ "angle": 0,
1476
+ "content": "[16] Xuan Ju, Xian Liu, Xintao Wang, Yuxuan Bian, Ying Shan, and Qiang Xu. Brushnet: A plug-and-play image inpainting model with decomposed dual-branch diffusion, 2024."
1477
+ },
1478
+ {
1479
+ "type": "list",
1480
+ "bbox": [
1481
+ 0.502,
1482
+ 0.092,
1483
+ 0.895,
1484
+ 0.901
1485
+ ],
1486
+ "angle": 0,
1487
+ "content": null
1488
+ }
1489
+ ],
1490
+ [
1491
+ {
1492
+ "type": "image_caption",
1493
+ "bbox": [
1494
+ 0.112,
1495
+ 0.096,
1496
+ 0.209,
1497
+ 0.108
1498
+ ],
1499
+ "angle": 0,
1500
+ "content": "Masked Frames"
1501
+ },
1502
+ {
1503
+ "type": "image_caption",
1504
+ "bbox": [
1505
+ 0.26,
1506
+ 0.096,
1507
+ 0.323,
1508
+ 0.108
1509
+ ],
1510
+ "angle": 0,
1511
+ "content": "Propainter"
1512
+ },
1513
+ {
1514
+ "type": "image_caption",
1515
+ "bbox": [
1516
+ 0.405,
1517
+ 0.096,
1518
+ 0.436,
1519
+ 0.108
1520
+ ],
1521
+ "angle": 0,
1522
+ "content": "Ours"
1523
+ },
1524
+ {
1525
+ "type": "image_caption",
1526
+ "bbox": [
1527
+ 0.503,
1528
+ 0.096,
1529
+ 0.599,
1530
+ 0.108
1531
+ ],
1532
+ "angle": 0,
1533
+ "content": "Masked Frames"
1534
+ },
1535
+ {
1536
+ "type": "image_caption",
1537
+ "bbox": [
1538
+ 0.648,
1539
+ 0.096,
1540
+ 0.71,
1541
+ 0.108
1542
+ ],
1543
+ "angle": 0,
1544
+ "content": "Propainter"
1545
+ },
1546
+ {
1547
+ "type": "image_caption",
1548
+ "bbox": [
1549
+ 0.792,
1550
+ 0.096,
1551
+ 0.823,
1552
+ 0.108
1553
+ ],
1554
+ "angle": 0,
1555
+ "content": "Ours"
1556
+ },
1557
+ {
1558
+ "type": "image",
1559
+ "bbox": [
1560
+ 0.102,
1561
+ 0.115,
1562
+ 0.229,
1563
+ 0.288
1564
+ ],
1565
+ "angle": 0,
1566
+ "content": null
1567
+ },
1568
+ {
1569
+ "type": "image",
1570
+ "bbox": [
1571
+ 0.231,
1572
+ 0.115,
1573
+ 0.357,
1574
+ 0.287
1575
+ ],
1576
+ "angle": 0,
1577
+ "content": null
1578
+ },
1579
+ {
1580
+ "type": "image",
1581
+ "bbox": [
1582
+ 0.359,
1583
+ 0.115,
1584
+ 0.484,
1585
+ 0.287
1586
+ ],
1587
+ "angle": 0,
1588
+ "content": null
1589
+ },
1590
+ {
1591
+ "type": "image",
1592
+ "bbox": [
1593
+ 0.489,
1594
+ 0.115,
1595
+ 0.615,
1596
+ 0.288
1597
+ ],
1598
+ "angle": 0,
1599
+ "content": null
1600
+ },
1601
+ {
1602
+ "type": "image",
1603
+ "bbox": [
1604
+ 0.617,
1605
+ 0.115,
1606
+ 0.744,
1607
+ 0.288
1608
+ ],
1609
+ "angle": 0,
1610
+ "content": null
1611
+ },
1612
+ {
1613
+ "type": "image",
1614
+ "bbox": [
1615
+ 0.746,
1616
+ 0.115,
1617
+ 0.871,
1618
+ 0.288
1619
+ ],
1620
+ "angle": 0,
1621
+ "content": null
1622
+ },
1623
+ {
1624
+ "type": "image",
1625
+ "bbox": [
1626
+ 0.102,
1627
+ 0.29,
1628
+ 0.229,
1629
+ 0.464
1630
+ ],
1631
+ "angle": 0,
1632
+ "content": null
1633
+ },
1634
+ {
1635
+ "type": "image",
1636
+ "bbox": [
1637
+ 0.231,
1638
+ 0.29,
1639
+ 0.357,
1640
+ 0.464
1641
+ ],
1642
+ "angle": 0,
1643
+ "content": null
1644
+ },
1645
+ {
1646
+ "type": "image",
1647
+ "bbox": [
1648
+ 0.359,
1649
+ 0.29,
1650
+ 0.484,
1651
+ 0.464
1652
+ ],
1653
+ "angle": 0,
1654
+ "content": null
1655
+ },
1656
+ {
1657
+ "type": "image",
1658
+ "bbox": [
1659
+ 0.489,
1660
+ 0.29,
1661
+ 0.615,
1662
+ 0.464
1663
+ ],
1664
+ "angle": 0,
1665
+ "content": null
1666
+ },
1667
+ {
1668
+ "type": "image",
1669
+ "bbox": [
1670
+ 0.617,
1671
+ 0.291,
1672
+ 0.744,
1673
+ 0.464
1674
+ ],
1675
+ "angle": 0,
1676
+ "content": null
1677
+ },
1678
+ {
1679
+ "type": "image",
1680
+ "bbox": [
1681
+ 0.746,
1682
+ 0.291,
1683
+ 0.871,
1684
+ 0.464
1685
+ ],
1686
+ "angle": 0,
1687
+ "content": null
1688
+ },
1689
+ {
1690
+ "type": "image",
1691
+ "bbox": [
1692
+ 0.102,
1693
+ 0.466,
1694
+ 0.229,
1695
+ 0.639
1696
+ ],
1697
+ "angle": 0,
1698
+ "content": null
1699
+ },
1700
+ {
1701
+ "type": "image",
1702
+ "bbox": [
1703
+ 0.231,
1704
+ 0.466,
1705
+ 0.357,
1706
+ 0.639
1707
+ ],
1708
+ "angle": 0,
1709
+ "content": null
1710
+ },
1711
+ {
1712
+ "type": "image",
1713
+ "bbox": [
1714
+ 0.359,
1715
+ 0.466,
1716
+ 0.484,
1717
+ 0.639
1718
+ ],
1719
+ "angle": 0,
1720
+ "content": null
1721
+ },
1722
+ {
1723
+ "type": "image",
1724
+ "bbox": [
1725
+ 0.489,
1726
+ 0.466,
1727
+ 0.615,
1728
+ 0.639
1729
+ ],
1730
+ "angle": 0,
1731
+ "content": null
1732
+ },
1733
+ {
1734
+ "type": "image",
1735
+ "bbox": [
1736
+ 0.617,
1737
+ 0.466,
1738
+ 0.744,
1739
+ 0.639
1740
+ ],
1741
+ "angle": 0,
1742
+ "content": null
1743
+ },
1744
+ {
1745
+ "type": "image",
1746
+ "bbox": [
1747
+ 0.746,
1748
+ 0.466,
1749
+ 0.871,
1750
+ 0.639
1751
+ ],
1752
+ "angle": 0,
1753
+ "content": null
1754
+ },
1755
+ {
1756
+ "type": "image",
1757
+ "bbox": [
1758
+ 0.102,
1759
+ 0.641,
1760
+ 0.229,
1761
+ 0.814
1762
+ ],
1763
+ "angle": 0,
1764
+ "content": null
1765
+ },
1766
+ {
1767
+ "type": "image",
1768
+ "bbox": [
1769
+ 0.231,
1770
+ 0.641,
1771
+ 0.357,
1772
+ 0.814
1773
+ ],
1774
+ "angle": 0,
1775
+ "content": null
1776
+ },
1777
+ {
1778
+ "type": "image",
1779
+ "bbox": [
1780
+ 0.359,
1781
+ 0.641,
1782
+ 0.484,
1783
+ 0.814
1784
+ ],
1785
+ "angle": 0,
1786
+ "content": null
1787
+ },
1788
+ {
1789
+ "type": "image",
1790
+ "bbox": [
1791
+ 0.489,
1792
+ 0.641,
1793
+ 0.615,
1794
+ 0.814
1795
+ ],
1796
+ "angle": 0,
1797
+ "content": null
1798
+ },
1799
+ {
1800
+ "type": "image",
1801
+ "bbox": [
1802
+ 0.617,
1803
+ 0.641,
1804
+ 0.744,
1805
+ 0.814
1806
+ ],
1807
+ "angle": 0,
1808
+ "content": null
1809
+ },
1810
+ {
1811
+ "type": "image",
1812
+ "bbox": [
1813
+ 0.746,
1814
+ 0.641,
1815
+ 0.871,
1816
+ 0.814
1817
+ ],
1818
+ "angle": 0,
1819
+ "content": null
1820
+ },
1821
+ {
1822
+ "type": "image_caption",
1823
+ "bbox": [
1824
+ 0.258,
1825
+ 0.832,
1826
+ 0.712,
1827
+ 0.847
1828
+ ],
1829
+ "angle": 0,
1830
+ "content": "Figure 10. Texture quality comparison between DiffuEraser and Propainter."
1831
+ }
1832
+ ],
1833
+ [
1834
+ {
1835
+ "type": "image_caption",
1836
+ "bbox": [
1837
+ 0.186,
1838
+ 0.097,
1839
+ 0.274,
1840
+ 0.107
1841
+ ],
1842
+ "angle": 0,
1843
+ "content": "Masked Frames"
1844
+ },
1845
+ {
1846
+ "type": "image_caption",
1847
+ "bbox": [
1848
+ 0.46,
1849
+ 0.096,
1850
+ 0.518,
1851
+ 0.107
1852
+ ],
1853
+ "angle": 0,
1854
+ "content": "Propainter"
1855
+ },
1856
+ {
1857
+ "type": "image_caption",
1858
+ "bbox": [
1859
+ 0.729,
1860
+ 0.099,
1861
+ 0.757,
1862
+ 0.108
1863
+ ],
1864
+ "angle": 0,
1865
+ "content": "Ours"
1866
+ },
1867
+ {
1868
+ "type": "image",
1869
+ "bbox": [
1870
+ 0.101,
1871
+ 0.111,
1872
+ 0.357,
1873
+ 0.224
1874
+ ],
1875
+ "angle": 0,
1876
+ "content": null
1877
+ },
1878
+ {
1879
+ "type": "image",
1880
+ "bbox": [
1881
+ 0.358,
1882
+ 0.112,
1883
+ 0.615,
1884
+ 0.223
1885
+ ],
1886
+ "angle": 0,
1887
+ "content": null
1888
+ },
1889
+ {
1890
+ "type": "image",
1891
+ "bbox": [
1892
+ 0.617,
1893
+ 0.112,
1894
+ 0.869,
1895
+ 0.223
1896
+ ],
1897
+ "angle": 0,
1898
+ "content": null
1899
+ },
1900
+ {
1901
+ "type": "image",
1902
+ "bbox": [
1903
+ 0.101,
1904
+ 0.225,
1905
+ 0.357,
1906
+ 0.337
1907
+ ],
1908
+ "angle": 0,
1909
+ "content": null
1910
+ },
1911
+ {
1912
+ "type": "image",
1913
+ "bbox": [
1914
+ 0.358,
1915
+ 0.225,
1916
+ 0.615,
1917
+ 0.337
1918
+ ],
1919
+ "angle": 0,
1920
+ "content": null
1921
+ },
1922
+ {
1923
+ "type": "image",
1924
+ "bbox": [
1925
+ 0.617,
1926
+ 0.225,
1927
+ 0.869,
1928
+ 0.337
1929
+ ],
1930
+ "angle": 0,
1931
+ "content": null
1932
+ },
1933
+ {
1934
+ "type": "image",
1935
+ "bbox": [
1936
+ 0.101,
1937
+ 0.339,
1938
+ 0.357,
1939
+ 0.45
1940
+ ],
1941
+ "angle": 0,
1942
+ "content": null
1943
+ },
1944
+ {
1945
+ "type": "image",
1946
+ "bbox": [
1947
+ 0.358,
1948
+ 0.339,
1949
+ 0.615,
1950
+ 0.45
1951
+ ],
1952
+ "angle": 0,
1953
+ "content": null
1954
+ },
1955
+ {
1956
+ "type": "image",
1957
+ "bbox": [
1958
+ 0.617,
1959
+ 0.339,
1960
+ 0.871,
1961
+ 0.45
1962
+ ],
1963
+ "angle": 0,
1964
+ "content": null
1965
+ },
1966
+ {
1967
+ "type": "image",
1968
+ "bbox": [
1969
+ 0.101,
1970
+ 0.451,
1971
+ 0.356,
1972
+ 0.564
1973
+ ],
1974
+ "angle": 0,
1975
+ "content": null
1976
+ },
1977
+ {
1978
+ "type": "image",
1979
+ "bbox": [
1980
+ 0.36,
1981
+ 0.451,
1982
+ 0.615,
1983
+ 0.564
1984
+ ],
1985
+ "angle": 0,
1986
+ "content": null
1987
+ },
1988
+ {
1989
+ "type": "image",
1990
+ "bbox": [
1991
+ 0.617,
1992
+ 0.451,
1993
+ 0.871,
1994
+ 0.564
1995
+ ],
1996
+ "angle": 0,
1997
+ "content": null
1998
+ },
1999
+ {
2000
+ "type": "image",
2001
+ "bbox": [
2002
+ 0.101,
2003
+ 0.565,
2004
+ 0.357,
2005
+ 0.689
2006
+ ],
2007
+ "angle": 0,
2008
+ "content": null
2009
+ },
2010
+ {
2011
+ "type": "image",
2012
+ "bbox": [
2013
+ 0.358,
2014
+ 0.565,
2015
+ 0.615,
2016
+ 0.689
2017
+ ],
2018
+ "angle": 0,
2019
+ "content": null
2020
+ },
2021
+ {
2022
+ "type": "image",
2023
+ "bbox": [
2024
+ 0.617,
2025
+ 0.565,
2026
+ 0.871,
2027
+ 0.689
2028
+ ],
2029
+ "angle": 0,
2030
+ "content": null
2031
+ },
2032
+ {
2033
+ "type": "image",
2034
+ "bbox": [
2035
+ 0.101,
2036
+ 0.689,
2037
+ 0.356,
2038
+ 0.796
2039
+ ],
2040
+ "angle": 0,
2041
+ "content": null
2042
+ },
2043
+ {
2044
+ "type": "image",
2045
+ "bbox": [
2046
+ 0.358,
2047
+ 0.689,
2048
+ 0.615,
2049
+ 0.796
2050
+ ],
2051
+ "angle": 0,
2052
+ "content": null
2053
+ },
2054
+ {
2055
+ "type": "image",
2056
+ "bbox": [
2057
+ 0.617,
2058
+ 0.689,
2059
+ 0.871,
2060
+ 0.796
2061
+ ],
2062
+ "angle": 0,
2063
+ "content": null
2064
+ },
2065
+ {
2066
+ "type": "image_caption",
2067
+ "bbox": [
2068
+ 0.258,
2069
+ 0.813,
2070
+ 0.712,
2071
+ 0.827
2072
+ ],
2073
+ "angle": 0,
2074
+ "content": "Figure 11. Texture quality comparison between DiffuEraser and Propainter."
2075
+ }
2076
+ ],
2077
+ [
2078
+ {
2079
+ "type": "image",
2080
+ "bbox": [
2081
+ 0.225,
2082
+ 0.084,
2083
+ 0.75,
2084
+ 0.502
2085
+ ],
2086
+ "angle": 0,
2087
+ "content": null
2088
+ },
2089
+ {
2090
+ "type": "image_caption",
2091
+ "bbox": [
2092
+ 0.238,
2093
+ 0.507,
2094
+ 0.731,
2095
+ 0.521
2096
+ ],
2097
+ "angle": 0,
2098
+ "content": "Figure 12. Temporal consistency comparison between DiffuEraser and Propainter."
2099
+ },
2100
+ {
2101
+ "type": "image",
2102
+ "bbox": [
2103
+ 0.125,
2104
+ 0.536,
2105
+ 0.849,
2106
+ 0.744
2107
+ ],
2108
+ "angle": 0,
2109
+ "content": null
2110
+ },
2111
+ {
2112
+ "type": "image_caption",
2113
+ "bbox": [
2114
+ 0.238,
2115
+ 0.751,
2116
+ 0.731,
2117
+ 0.765
2118
+ ],
2119
+ "angle": 0,
2120
+ "content": "Figure 13. Temporal consistency comparison between DiffuEraser and Propainter."
2121
+ },
2122
+ {
2123
+ "type": "ref_text",
2124
+ "bbox": [
2125
+ 0.079,
2126
+ 0.788,
2127
+ 0.468,
2128
+ 0.829
2129
+ ],
2130
+ "angle": 0,
2131
+ "content": "[17] Minhyeok Lee, Suhwan Cho, Chajin Shin, Jungho Lee, Sunghun Yang, and Sangyoun Lee. Video diffusion models are strong video inpainter, 2024."
2132
+ },
2133
+ {
2134
+ "type": "ref_text",
2135
+ "bbox": [
2136
+ 0.08,
2137
+ 0.831,
2138
+ 0.468,
2139
+ 0.886
2140
+ ],
2141
+ "angle": 0,
2142
+ "content": "[18] Zhen Li, Cheng-Ze Lu, Jianhua Qin, Chun-Le Guo, and Ming-Ming Cheng. Towards an end-to-end framework for flow-guided video inpainting. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2022."
2143
+ },
2144
+ {
2145
+ "type": "ref_text",
2146
+ "bbox": [
2147
+ 0.08,
2148
+ 0.887,
2149
+ 0.468,
2150
+ 0.902
2151
+ ],
2152
+ "angle": 0,
2153
+ "content": "[19] Jun Hao Liew, Hanshu Yan, Jianfeng Zhang, Zhongcong Xu,"
2154
+ },
2155
+ {
2156
+ "type": "list",
2157
+ "bbox": [
2158
+ 0.079,
2159
+ 0.788,
2160
+ 0.468,
2161
+ 0.902
2162
+ ],
2163
+ "angle": 0,
2164
+ "content": null
2165
+ },
2166
+ {
2167
+ "type": "ref_text",
2168
+ "bbox": [
2169
+ 0.533,
2170
+ 0.788,
2171
+ 0.892,
2172
+ 0.816
2173
+ ],
2174
+ "angle": 0,
2175
+ "content": "and Jiashi Feng. Magicedit: High-fidelity and temporally coherent video editing. In arXiv, 2023."
2176
+ },
2177
+ {
2178
+ "type": "ref_text",
2179
+ "bbox": [
2180
+ 0.502,
2181
+ 0.817,
2182
+ 0.892,
2183
+ 0.886
2184
+ ],
2185
+ "angle": 0,
2186
+ "content": "[20] Rui Liu, Hanming Deng, Yangyi Huang, Xiaoyu Shi, Lewei Lu, Wenxiu Sun, Xiaogang Wang, Jifeng Dai, and Hongsheng Li. Fuseformer: Fusing fine-grained information in transformers for video inpainting. In International Conference on Computer Vision (ICCV), 2021."
2187
+ },
2188
+ {
2189
+ "type": "ref_text",
2190
+ "bbox": [
2191
+ 0.503,
2192
+ 0.887,
2193
+ 0.892,
2194
+ 0.901
2195
+ ],
2196
+ "angle": 0,
2197
+ "content": "[21] Shaoteng Liu, Yuechen Zhang, Wenbo Li, Zhe Lin, and Jiaya"
2198
+ },
2199
+ {
2200
+ "type": "list",
2201
+ "bbox": [
2202
+ 0.502,
2203
+ 0.788,
2204
+ 0.892,
2205
+ 0.901
2206
+ ],
2207
+ "angle": 0,
2208
+ "content": null
2209
+ }
2210
+ ],
2211
+ [
2212
+ {
2213
+ "type": "ref_text",
2214
+ "bbox": [
2215
+ 0.11,
2216
+ 0.092,
2217
+ 0.468,
2218
+ 0.119
2219
+ ],
2220
+ "angle": 0,
2221
+ "content": "Jia. Video-p2p: Video editing with cross-attention control, 2023."
2222
+ },
2223
+ {
2224
+ "type": "ref_text",
2225
+ "bbox": [
2226
+ 0.08,
2227
+ 0.122,
2228
+ 0.469,
2229
+ 0.176
2230
+ ],
2231
+ "angle": 0,
2232
+ "content": "[22] Ron Mokady, Amir Hertz, Kfir Aberman, Yael Pritch, and Daniel Cohen-Or. Null-text inversion for editing real images using guided diffusion models. arXiv preprint arXiv:2211.09794, 2022."
2233
+ },
2234
+ {
2235
+ "type": "ref_text",
2236
+ "bbox": [
2237
+ 0.08,
2238
+ 0.179,
2239
+ 0.469,
2240
+ 0.233
2241
+ ],
2242
+ "angle": 0,
2243
+ "content": "[23] Eyal Molad, Eliahu Horwitz, Dani Valevski, Alex Rav Acha, Yossi Matias, Yael Pritch, Yaniv Leviathan, and Yedid Hoshen. Dreamix: Video diffusion models are general video editors, 2023."
2244
+ },
2245
+ {
2246
+ "type": "ref_text",
2247
+ "bbox": [
2248
+ 0.08,
2249
+ 0.236,
2250
+ 0.469,
2251
+ 0.303
2252
+ ],
2253
+ "angle": 0,
2254
+ "content": "[24] Chong Mou, Xintao Wang, Liangbin Xie, Yanze Wu, Jian Zhang, Zhongang Qi, Ying Shan, and Xiaohu Qie. T2i-adapter: Learning adapters to dig out more controllable ability for text-to-image diffusion models. arXiv preprint arXiv:2302.08453, 2023."
2255
+ },
2256
+ {
2257
+ "type": "ref_text",
2258
+ "bbox": [
2259
+ 0.08,
2260
+ 0.307,
2261
+ 0.468,
2262
+ 0.36
2263
+ ],
2264
+ "angle": 0,
2265
+ "content": "[25] Chenyang Qi, Xiaodong Cun, Yong Zhang, Chenyang Lei, Xintao Wang, Ying Shan, and Qifeng Chen. Fatezero: Fusing attentions for zero-shot text-based video editing. arXiv:2303.09535, 2023."
2266
+ },
2267
+ {
2268
+ "type": "ref_text",
2269
+ "bbox": [
2270
+ 0.08,
2271
+ 0.363,
2272
+ 0.468,
2273
+ 0.404
2274
+ ],
2275
+ "angle": 0,
2276
+ "content": "[26] Weize Quan, Jiaxi Chen, Yanli Liu, Dong-Ming Yan, and Peter Wonka. Deep learning-based image and video inpainting: A survey, 2024."
2277
+ },
2278
+ {
2279
+ "type": "ref_text",
2280
+ "bbox": [
2281
+ 0.08,
2282
+ 0.407,
2283
+ 0.468,
2284
+ 0.447
2285
+ ],
2286
+ "angle": 0,
2287
+ "content": "[27] Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models, 2021."
2288
+ },
2289
+ {
2290
+ "type": "ref_text",
2291
+ "bbox": [
2292
+ 0.08,
2293
+ 0.449,
2294
+ 0.468,
2295
+ 0.504
2296
+ ],
2297
+ "angle": 0,
2298
+ "content": "[28] Nataniel Ruiz, Yuanzhen Li, Varun Jampani, Yael Pritch, Michael Rubinstein, and Kfir Aberman. Dreambooth: Fine tuning text-to-image diffusion models for subject-driven generation. In arXiv preprint arxiv:2208.12242, 2022."
2299
+ },
2300
+ {
2301
+ "type": "ref_text",
2302
+ "bbox": [
2303
+ 0.08,
2304
+ 0.507,
2305
+ 0.469,
2306
+ 0.589
2307
+ ],
2308
+ "angle": 0,
2309
+ "content": "[29] Chitwan Sahara, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily Denton, Seyed Kamyar Seyed Ghasemipour, Burcu Karagol Ayan, S. Sara Mahdavi, Rapha Gontijo Lopes, Tim Salimans, Jonathan Ho, David J Fleet, and Mohammad Norouzi. Photorealistic text-to-image diffusion models with deep language understanding, 2022."
2310
+ },
2311
+ {
2312
+ "type": "ref_text",
2313
+ "bbox": [
2314
+ 0.08,
2315
+ 0.591,
2316
+ 0.469,
2317
+ 0.673
2318
+ ],
2319
+ "angle": 0,
2320
+ "content": "[30] Fengyuan Shi, Jiaxi Gu, Hang Xu, Songcen Xu, Wei Zhang, and Limin Wang. Bivdiff: A training-free framework for general-purpose video synthesis via bridging image and video diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 7393-7402, June 2024."
2321
+ },
2322
+ {
2323
+ "type": "ref_text",
2324
+ "bbox": [
2325
+ 0.08,
2326
+ 0.676,
2327
+ 0.469,
2328
+ 0.743
2329
+ ],
2330
+ "angle": 0,
2331
+ "content": "[31] Uriel Singer, Adam Polyak, Thomas Hayes, Xi Yin, Jie An, Songyang Zhang, Qiyuan Hu, Harry Yang, Oron Ashual, Oran Gafni, Devi Parikh, Sonal Gupta, and Yaniv Taigman. Make-a-video: Text-to-video generation without text-video data, 2022."
2332
+ },
2333
+ {
2334
+ "type": "ref_text",
2335
+ "bbox": [
2336
+ 0.08,
2337
+ 0.746,
2338
+ 0.469,
2339
+ 0.788
2340
+ ],
2341
+ "angle": 0,
2342
+ "content": "[32] Jascha Sohl-Dickstein, Eric A. Weiss, Niru Maheswaranathan, and Surya Ganguli. Deep unsupervised learning using nonequilibrium thermodynamics, 2015."
2343
+ },
2344
+ {
2345
+ "type": "ref_text",
2346
+ "bbox": [
2347
+ 0.08,
2348
+ 0.79,
2349
+ 0.469,
2350
+ 0.829
2351
+ ],
2352
+ "angle": 0,
2353
+ "content": "[33] Jiaming Song, Chenlin Meng, and Stefano Ermon. Denoising diffusion implicit models. arXiv:2010.02502, October 2020."
2354
+ },
2355
+ {
2356
+ "type": "ref_text",
2357
+ "bbox": [
2358
+ 0.08,
2359
+ 0.832,
2360
+ 0.469,
2361
+ 0.9
2362
+ ],
2363
+ "angle": 0,
2364
+ "content": "[34] Yang Song, Jascha Sohl-Dickstein, Diederik P Kingma, Abhishek Kumar, Stefano Ermon, and Ben Poole. Score-based generative modeling through stochastic differential equations. In International Conference on Learning Representations, 2021."
2365
+ },
2366
+ {
2367
+ "type": "list",
2368
+ "bbox": [
2369
+ 0.08,
2370
+ 0.092,
2371
+ 0.469,
2372
+ 0.9
2373
+ ],
2374
+ "angle": 0,
2375
+ "content": null
2376
+ },
2377
+ {
2378
+ "type": "ref_text",
2379
+ "bbox": [
2380
+ 0.503,
2381
+ 0.093,
2382
+ 0.892,
2383
+ 0.16
2384
+ ],
2385
+ "angle": 0,
2386
+ "content": "[35] Fu-Yun Wang, Zhaoyang Huang, Alexander William Bergman, Dazhong Shen, Peng Gao, Michael Lingelbach, Keqiang Sun, Weikang Bian, Guanglu Song, Yu Liu, et al. Phased consistency model. arXiv preprint arXiv:2405.18407, 2024."
2387
+ },
2388
+ {
2389
+ "type": "ref_text",
2390
+ "bbox": [
2391
+ 0.503,
2392
+ 0.164,
2393
+ 0.892,
2394
+ 0.218
2395
+ ],
2396
+ "angle": 0,
2397
+ "content": "[36] Xiang Wang, Hangjie Yuan, Shiwei Zhang, Dayou Chen, Jiuniu Wang, Yingya Zhang, Yujun Shen, Deli Zhao, and Jingren Zhou. Videocomposer: Compositional video synthesis with motion controllability, 2023."
2398
+ },
2399
+ {
2400
+ "type": "ref_text",
2401
+ "bbox": [
2402
+ 0.503,
2403
+ 0.221,
2404
+ 0.892,
2405
+ 0.289
2406
+ ],
2407
+ "angle": 0,
2408
+ "content": "[37] Jianzong Wu, Xiangtai Li, Chenyang Si, Shangchen Zhou, Jingkang Yang, Jiangning Zhang, Yining Li, Kai Chen, Yunhai Tong, Ziwei Liu, et al. Towards language-driven video inpainting via multimodal large language models. arXiv preprint arXiv:2401.10226, 2024."
2409
+ },
2410
+ {
2411
+ "type": "ref_text",
2412
+ "bbox": [
2413
+ 0.504,
2414
+ 0.291,
2415
+ 0.892,
2416
+ 0.373
2417
+ ],
2418
+ "angle": 0,
2419
+ "content": "[38] Jay Zhangjie Wu, Yixiao Ge, Xintao Wang, Stan Weixian Lei, Yuchao Gu, Yufei Shi, Wynne Hsu, Ying Shan, Xiaohu Qie, and Mike Zheng Shou. Tune-a-video: One-shot tuning of image diffusion models for text-to-video generation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 7623-7633, 2023."
2420
+ },
2421
+ {
2422
+ "type": "ref_text",
2423
+ "bbox": [
2424
+ 0.504,
2425
+ 0.375,
2426
+ 0.892,
2427
+ 0.443
2428
+ ],
2429
+ "angle": 0,
2430
+ "content": "[39] Jinbo Xing, Menghan Xia, Yuxin Liu, Yuechen Zhang, Yong Zhang, Yingqing He, Hanyuan Liu, Haoxin Chen, Xiaodong Cun, Xintao Wang, et al. Make-your-video: Customized video generation using textual and structural guidance. arXiv preprint arXiv:2306.00943, 2023."
2431
+ },
2432
+ {
2433
+ "type": "ref_text",
2434
+ "bbox": [
2435
+ 0.504,
2436
+ 0.446,
2437
+ 0.892,
2438
+ 0.5
2439
+ ],
2440
+ "angle": 0,
2441
+ "content": "[40] Yanhong Zeng, Jianlong Fu, and Hongyang Chao. Learning joint spatial-temporal transformations for video inpainting. In The Proceedings of the European Conference on Computer Vision (ECCV), 2020."
2442
+ },
2443
+ {
2444
+ "type": "ref_text",
2445
+ "bbox": [
2446
+ 0.504,
2447
+ 0.502,
2448
+ 0.892,
2449
+ 0.543
2450
+ ],
2451
+ "angle": 0,
2452
+ "content": "[41] Kaidong Zhang, Jingjing Fu, and Dong Liu. Flow-guided transformer for video inpainting. In European Conference on Computer Vision, pages 74-90. Springer, 2022."
2453
+ },
2454
+ {
2455
+ "type": "ref_text",
2456
+ "bbox": [
2457
+ 0.504,
2458
+ 0.545,
2459
+ 0.892,
2460
+ 0.612
2461
+ ],
2462
+ "angle": 0,
2463
+ "content": "[42] Kaidong Zhang, Jingjing Fu, and Dong Liu. Inertia-guided flow completion and style fusion for video inpainting. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 5982-5991, June 2022."
2464
+ },
2465
+ {
2466
+ "type": "ref_text",
2467
+ "bbox": [
2468
+ 0.504,
2469
+ 0.615,
2470
+ 0.892,
2471
+ 0.642
2472
+ ],
2473
+ "angle": 0,
2474
+ "content": "[43] Lvmin Zhang, Anyi Rao, and Maneesh Agrawala. Adding conditional control to text-to-image diffusion models, 2023."
2475
+ },
2476
+ {
2477
+ "type": "ref_text",
2478
+ "bbox": [
2479
+ 0.504,
2480
+ 0.644,
2481
+ 0.892,
2482
+ 0.699
2483
+ ],
2484
+ "angle": 0,
2485
+ "content": "[44] Yabo Zhang, Yuxiang Wei, Dongsheng Jiang, Xiaopeng Zhang, Wangmeng Zuo, and Qi Tian. Controlvideo: Training-free controllable text-to-video generation. arXiv preprint arXiv:2305.13077, 2023."
2486
+ },
2487
+ {
2488
+ "type": "ref_text",
2489
+ "bbox": [
2490
+ 0.504,
2491
+ 0.701,
2492
+ 0.892,
2493
+ 0.756
2494
+ ],
2495
+ "angle": 0,
2496
+ "content": "[45] Zhixing Zhang, Bichen Wu, Xiaoyan Wang, Yaqiao Luo, Luxin Zhang, Yinan Zhao, Peter Vajda, Dimitris Metaxas, and Licheng Yu. Avid: Any-length video inpainting with diffusion model. arXiv preprint arXiv:2312.03816, 2023."
2497
+ },
2498
+ {
2499
+ "type": "ref_text",
2500
+ "bbox": [
2501
+ 0.504,
2502
+ 0.758,
2503
+ 0.892,
2504
+ 0.812
2505
+ ],
2506
+ "angle": 0,
2507
+ "content": "[46] Shangchen Zhou, Chongyi Li, Kelvin C.K Chan, and Chen Change Loy. ProPainter: Improving propagation and transformer for video inpainting. In Proceedings of IEEE International Conference on Computer Vision (ICCV), 2023."
2508
+ },
2509
+ {
2510
+ "type": "ref_text",
2511
+ "bbox": [
2512
+ 0.504,
2513
+ 0.814,
2514
+ 0.892,
2515
+ 0.881
2516
+ ],
2517
+ "angle": 0,
2518
+ "content": "[47] Bojia Zi, Shihao Zhao, Xianbiao Qi, Jianan Wang, Yukai Shi, Qianyu Chen, Bin Liang, Kam-Fai Wong, and Lei Zhang. Cococo: Improving text-guided video inpainting for better consistency, controllability and compatibility. ArXiv, abs/2403.12035, 2024."
2519
+ },
2520
+ {
2521
+ "type": "list",
2522
+ "bbox": [
2523
+ 0.503,
2524
+ 0.093,
2525
+ 0.892,
2526
+ 0.881
2527
+ ],
2528
+ "angle": 0,
2529
+ "content": null
2530
+ }
2531
+ ]
2532
+ ]
2501.10xxx/2501.10018/8bff6f61-9aa1-458e-a9e1-d00224986bd3_origin.pdf ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:b7f24349dc05bc112b2cfa9fa6db6f1b3971a2ebedcd1d68f79442d41de87941
3
+ size 13892811
2501.10xxx/2501.10018/full.md ADDED
@@ -0,0 +1,352 @@
1
+ # DiffuEraser: A Diffusion Model for Video Inpainting
2
+
3
+ # TECHNICAL REPORT
4
+
5
+ Xiaowen Li
6
+
7
+ Haolan Xue
8
+
9
+ Peiran Ren
10
+
11
+ Liefeng Bo
12
+
13
+ Tongyi Lab, Alibaba Group
14
+
15
+ {1xw262398, haolan.xhl, peiran.rpr, liefeng.bo}@alibaba-inc.com
16
+
17
+ https://github.com/lixiaowen-xw/DiffuEraser.git
18
+
19
+ ![](images/e134b517769412cb776a736deec30c026d64ac01ac63d8427bcda63492a15e0d.jpg)
20
+ (a)
21
+
22
+ ![](images/c608b5a7dac3a4d0a6ab09797293662db06914c856524ba439f3140a3f5de0c2.jpg)
23
+ (b)
24
+ Figure 1. Performance comparison between the proposed model, DiffuEraser, and Propainter. (a) Texture Quality: DiffuEraser generates more detailed and refined textures compared to the transformer-based Propainter. (b) Temporal Consistency: DiffuEraser demonstrates superior temporal consistency in the inpainted content compared to Propainter.
25
+
26
+ # Abstract
27
+
28
+ Recent video inpainting algorithms integrate flow-based pixel propagation with transformer-based generation to leverage optical flow for restoring textures and objects using information from neighboring frames, while completing masked regions through visual Transformers. However, these approaches often encounter blurring and temporal inconsistencies when dealing with large masks, highlighting the need for models with enhanced generative capabilities. Recently, diffusion models have emerged as a prominent technique in image and video generation due to their impressive performance. In this paper, we introduce DiffuEraser, a video inpainting model based on stable diffusion, designed to fill masked regions with greater details and more coherent structures. We incorporate prior information to provide initialization and weak conditioning,
29
+
30
+ which helps mitigate noisy artifacts and suppress hallucinations. Additionally, to improve temporal consistency during long-sequence inference, we expand the temporal receptive fields of both the prior model and DiffuEraser, and further enhance consistency by leveraging the temporal smoothing property of Video Diffusion Models. Experimental results demonstrate that our proposed method outperforms state-of-the-art techniques in both content completeness and temporal consistency while maintaining acceptable efficiency.
31
+
32
+ # 1. Introduction
33
+
34
+ Video inpainting aims to complete masked regions with content that is both plausible and temporally consistent. Previous video inpainting algorithms primarily rely on two mechanisms:
35
+
36
+ 1) Flow-based pixel propagation methods, which utilize optical flow to restore texture details and objects by leveraging information from adjacent frames; and
37
+ 2) Transformer-based video inpainting methods, which excel at completing the structural aspects of objects [26].
38
+
39
+ Current mainstream algorithms typically combine these two approaches, consisting of three modules or stages:
40
+
41
+ 1) Flow completion,
42
+ 2) Feature propagation, and
43
+ 3) Content generation.
44
+
45
+ This solution categorizes masked pixels into two types:
46
+
47
+ 1) Known pixels, which have appeared in some masked frames and can be propagated to other frames through flow completion and feature propagation modules, ensuring consistency between the completed content and the unmasked regions; and
48
+ 2) Unknown pixels, which have never appeared in any masked frames and are generated by the content generation module, thereby enhancing the structural integrity of the results.
49
+
50
+ The state-of-the-art algorithm, Propainter [46], exemplifies this approach and comprises three key modules: recurrent flow completion, dual-domain propagation, and mask-guided sparse Transformer. It effectively propagates known pixels across all frames and demonstrates an initial ability to generate unknown pixels. However, when the mask size is large, the generative capability of the Transformer model proves insufficient, leading to significant artifacts, as illustrated in Figure 1.
51
+
52
+ Consequently, there is a need for more powerful models with enhanced generative capabilities. The Stable Diffusion model, which has recently gained prominence in the field of image and video generation, presents a promising candidate.
53
+
54
+ In this work, we first decompose the video inpainting task into three sub-problems and then propose corresponding solutions for each. Specifically, the three key challenges are: the propagation of known pixels, the generation of unknown pixels, and the temporal consistency of the completed content. Our main contributions are summarized as follows:
55
+
56
+ 1. Video Inpainting Diffusion: We introduce a motion module for the image inpainting model BrushNet, which is based on diffusion models. The powerful generative capability of diffusion models overcomes the blurring and mosaic artifacts associated with Transformer-based models, thereby completing object structures and generating more detailed content.
57
+ 2. Injected Priors: We incorporate priors into the diffusion model, enabling easier initialization to mitigate
58
+
59
+ noisy artifacts and serving as a weak condition to suppress the generation of unwanted objects.
60
+
61
+ 3. Enhanced Temporal Consistency: We improve the temporal consistency of long-sequence inference by expanding the temporal receptive fields of both the prior model and the diffusion model. Additionally, we further enhance temporal continuity at the intersections between clips by leveraging the temporal smoothing property of the Video Diffusion Model.
62
+
63
+ # 2. Related Works
64
+
65
+ Diffusion Models. The advent of diffusion models [14, 32, 34] has significantly enhanced the quality and creativity of image and video generation. In the realm of image synthesis, diffusion models have driven substantial progress across various tasks, including text-to-image generation [5, 29], controllable image generation [24, 43], image editing [1, 12, 22], personalized image generation [6, 28], and image inpainting [27, 16], among others. Building on these advancements, video diffusion models incorporating additional motion modules have also gained significant traction. Key applications in this domain include text-to-video generation [11, 8, 10, 13, 15, 31], controllable video generation [3, 4, 36, 39], video editing [19, 23, 38, 21], and various training-free video synthesis methods [44, 25].
66
+
67
+ Video Inpainting. Video inpainting aims to fill masked regions in videos with plausible content while maintaining temporal consistency. Early approaches based on 3D convolution and shifting operations exhibited limited performance. The emergence of methods leveraging optical flow and Transformer architectures has significantly improved the quality of video inpainting. Flow-based pixel propagation methods [7, 41, 42] excel at restoring textures and details by utilizing information from adjacent frames. In contrast, Transformer-based methods [40, 20, 18, 46] are adept at completing the structural aspects of objects. Among these, Propainter [46] stands out as a representative approach, comprising recurrent flow completion, dual-domain propagation, and a mask-guided sparse Transformer. Propainter effectively propagates known pixels across all frames and demonstrates an initial ability to generate unknown pixels. However, its generative capacity is limited when dealing with large masks, leading to noticeable artifacts.
68
+
69
+ With the rising popularity of diffusion models, diffusion-based video inpainting methods have begun to emerge [17, 37, 30, 9, 45, 47]. These approaches leverage the powerful generative capabilities of diffusion models to enhance both the detail and structural integrity of the inpainted regions, addressing some of the limitations observed in Transformer-based methods. BIVDiff [30] is a training-free framework that bridges image and video diffusion models. AVID [45]
70
+
71
+ ![](images/e4c86f62880d1c434c0679dab98ef6549303cedd0a66dcffe763a8e8b788e7d0.jpg)
72
+ Figure 2. Overview of the proposed video inpainting model DiffuEraser, based on stable diffusion. The main denoising UNet performs the denoising process to generate the final output. The BrushNet branch extracts features from masked images, which are added to the main denoising UNet layer by layer after a zero convolution block. Temporal attention is incorporated after self-attention and cross-attention to improve temporal consistency.
73
+
74
+ and CoCoCo [47] improve text-guided video inpainting by integrating a motion module into a Text-to-Image (T2I) model. [37] proposes language-driven video inpainting via Multimodal Large Language Models, using natural language instructions to guide the inpainting process. Nevertheless, these methods often suffer from the inherent hallucinations of diffusion models. FloED [9], which exhibits fewer hallucinations, proposes a dedicated dual-branch architecture that incorporates motion guidance with a multi-scale flow adapter to enhance temporal consistency, focusing on object removal and background restoration. FFF-VDI [17] propagates the noise latent information of future frames to fill the masked area of the first frame's noise latent code, improving temporal consistency and suppressing hallucination effects. However, these methods do not effectively address the temporal consistency and stability needed for long-sequence inference, and there is still room for improvement in detail and structural integrity. In contrast, DiffuEraser generates temporally consistent results with enhanced detail and a more complete structure for long-sequence inference, all without requiring a text prompt.
75
+
76
+ # 3. Methodology
77
+
78
+ # 3.1. Network Overview
79
+
80
+ Our network architecture is inspired by AnimateDiff [11], integrating a motion module into the image inpainting model. For the image inpainting component, we select BrushNet [16], which enhances the main denoising UNet by adding an additional branch to extract features from masked images. An overview of our proposed model, DiffuEraser,
81
+
82
+ is depicted in Figure 2. The architecture comprises the primary denoising UNet and an auxiliary BrushNet. The BrushNet branch receives a conditional latent input composed of masked images, masks, and noisy latents, with dimensions $[n,f,h / 4,w / 4,9]$ . Features extracted by BrushNet are integrated into the denoising UNet layer by layer after a zero convolution block. The denoising UNet processes noisy latents with dimensions $[n,f,h / 4,w / 4,4]$ . To enhance temporal consistency, temporal attention mechanisms are incorporated following both self-attention and cross-attention layers. After denoising, the generated images are blended with the input masked images using blurred masks.
83
+
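As a rough illustration of the layer-by-layer merge described above (a ControlNet/BrushNet-style design), the sketch below shows how an auxiliary branch's features can be added to a UNet feature map through a 1x1 convolution initialized to zero, so the branch contributes nothing at the start of training. All names and shapes here are illustrative assumptions, not the actual DiffuEraser implementation.

```python
import torch
import torch.nn as nn

def zero_conv(channels: int) -> nn.Conv2d:
    """1x1 convolution initialized to zero, so the auxiliary branch
    contributes nothing when training begins."""
    conv = nn.Conv2d(channels, channels, kernel_size=1)
    nn.init.zeros_(conv.weight)
    nn.init.zeros_(conv.bias)
    return conv

# Hypothetical per-layer merge: UNet feature + zero_conv(branch feature).
channels = 64
merge = zero_conv(channels)
unet_feat = torch.randn(1, channels, 32, 32)      # feature map inside the denoising UNet
brushnet_feat = torch.randn(1, channels, 32, 32)  # feature map from the BrushNet branch
out = unet_feat + merge(brushnet_feat)
# At initialization the zero conv outputs zeros, so the merge is an identity.
```

The zero initialization means the pretrained UNet's behavior is untouched until the branch learns something useful, which stabilizes fine-tuning.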
84
+ We define the video inpainting problem by decomposing it into three sub-problems: propagation of known pixels (pixels that have appeared in some masked frames), generation of unknown pixels (pixels that have never appeared in any masked frames), and maintaining temporal consistency of the completed content. Specifically:
85
+
86
+ 1. Propagation of Known Pixels: The motion module inherently supports temporal propagation, allowing the restoration of texture details and objects in the current frame using information from adjacent frames. Additionally, we leverage the enhanced propagation capabilities of the prior model, which offers a longer propagation range and a more sophisticated propagation mechanism. Specifically, we apply DDIM inversion on the inpainting results from the prior model and incorporate them into the noisy latent. See Section 3.2 for details. We utilize Propainter as our prior model. Beyond supporting the propagation of known pixels, the injected prior facilitates easier initialization
87
+
88
+ for DiffuEraser, enabling the generation of meaningful completed content and suppressing noisy artifacts and visual hallucinations commonly associated with diffusion models.
89
+ 2. Generation of Unknown Pixels: Utilizing the robust generative capabilities of the stable diffusion model, our approach can generate plausible content with more details and textures for unknown pixels.
90
+ 3. Temporal Consistency of Completed Content: While the motion module ensures temporal consistency within individual inferences (each handling a clip of 22 frames in our setting), discrepancies arise at the boundaries between clips during long-sequence processing. To address this, we expand the temporal receptive field of the model. This is achieved by performing pre-inference, where video frames are sampled at an optimal rate and processed collectively as a single clip. This enables the model to "see" frames from a broader temporal context. Subsequently, the insights gained from pre-inference are used to guide the frame-by-frame inference, incorporating information from distant frames and thereby enhancing the overall temporal continuity. See Section 3.3 for details.
91
+
92
+ As demonstrated in other studies, the generative capability of stable diffusion models and the temporal consistency provided by motion modules are well-established. In this paper, we focus on illustrating the advantages of incorporating priors and optimizing temporal consistency across clips during long-sequence inference.
93
+
94
+ # 3.2. Incorporation of Priors
95
+
96
+ As illustrated in Figure 3, our model occasionally generates meaningless noisy artifacts within masked regions. For instance, the masked area above the sea level may appear as random noise instead of coherent content.
97
+
98
+ ![](images/6bf643e9eac95b32b23b37c7600b96d676732a92e76bb601ca6c13a014a839c6.jpg)
99
+ Figure 3. Example of noisy artifacts generated by the model. The masked region above the sea level is not completed correctly and resembles random noise.
100
+
101
+ To address these artifacts, we enhance the noisy latent—an integral part of the model's input. Inspired by DDIM Inversion [33], we introduce priors during inference. Specifically, we perform DDIM Inversion on the outputs of a chosen lightweight inpainting model and incorporate the
102
+
103
+ inverted results into the noisy latent, as depicted in Figure 4. The prior provides initialization information that enables the model to generate meaningful and stable completed content, effectively eliminating the noisy artifacts shown in Figure 3. Additionally, the prior acts as a weak condition to suppress the generation of unwanted objects, mitigating visual hallucinations often encountered in diffusion models.
104
+
105
+ ![](images/06c8213943e1fdd2f300429c635a96a09dc968fdc9c5d4f73652fcdd89c74e2d.jpg)
106
+ Figure 4. Incorporation of priors. We introduce priors during inference by performing DDIM inversion on the outputs of the prior model and adding them to the noisy latent.
107
+
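The prior-injection mechanism can be sketched as a deterministic DDIM inversion applied to the prior model's output latent, whose result replaces pure Gaussian noise as the starting point for denoising. The toy NumPy code below runs the standard DDIM inversion update with a made-up alpha-bar schedule and a random stand-in for the noise prediction; it only illustrates the arithmetic, not the paper's actual model or schedule.

```python
import numpy as np

# Toy alpha-bar schedule over T timesteps (illustrative, not the paper's).
T = 50
alphas_bar = np.linspace(0.999, 0.01, T)

def ddim_inversion_step(x_t, eps, t):
    """One deterministic DDIM inversion step: map the latent at timestep t
    to timestep t+1, given a noise prediction eps."""
    a_t, a_next = alphas_bar[t], alphas_bar[t + 1]
    x0_pred = (x_t - np.sqrt(1 - a_t) * eps) / np.sqrt(a_t)
    return np.sqrt(a_next) * x0_pred + np.sqrt(1 - a_next) * eps

# Start from the prior model's inpainted latent (random stand-in here).
rng = np.random.default_rng(0)
prior_latent = rng.standard_normal((4, 8, 8))
x = prior_latent
for t in range(T - 1):
    eps = rng.standard_normal(x.shape)  # placeholder for the UNet's prediction
    x = ddim_inversion_step(x, eps, t)
noisy_latent = x  # would initialize the denoising process instead of pure noise
```

Because the inverted latent still encodes the prior's content, denoising from it acts as the weak condition described above.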
108
+ The selection of the prior model significantly impacts the final results. After experimental comparisons, we selected Propainter as our prior model. Notably, any blur and mosaic artifacts present in the prior do not adversely affect our model's outputs; instead, they are refined and eliminated, resulting in inpainted regions with richer textures and greater detail.
109
+
110
+ Figure 5 compares the results before and after incorporating priors, demonstrating that the introduction of priors effectively suppresses noisy artifacts and the emergence of unwanted objects, thereby significantly enhancing the accuracy and stability of the inpainting results.
111
+
112
+ ![](images/ece35d2d7b7d0f52d96289698367ff67e1a560ff5ae85a40abb1c848b382b962.jpg)
113
+ Figure 5. Comparison of inpainting results before and after incorporating priors.
114
+
115
+ ![](images/a456d5de36d936497eab6db907d20d6126432885ae67ec309d00dcc865bba36c.jpg)
116
+ Figure 6. Utilizing the temporal smoothing property of the Video Diffusion Model (VDM) to enhance consistency at the intersections of clips.
117
+
118
+ # 3.3. Optimizing Temporal Consistency for Long-Sequence Inference
119
+
120
+ While the motion module maintains good temporal consistency within individual clips (for example, 22 frames), noticeable discrepancies emerge at the boundaries between consecutive clips during long-sequence inference, as shown in Figure 7. To ensure seamless temporal consistency across the entire video, we implement the following optimizations.
121
+
122
+ # 3.3.1 Leveraging the Temporal Smoothing Property of the Video Diffusion Model (VDM)
123
+
124
+ The absence of specific temporal conditioning leads to significant changes in completed content between clips, a problem that cannot be resolved by merely overlapping neighboring clips. Inspired by the concept of interpolating between timesteps to obtain intermediate results [9], we adopt a staggered denoising approach along sequential timesteps. This method utilizes the inherent temporal smoothing property of VDM to enhance consistency between clips.
125
+
126
+ During inference, even-numbered timesteps are denoised starting from the beginning of each clip, while odd-numbered timesteps are denoised starting from the midpoint of the clip, as illustrated in Figure 6. This staggered denoising leverages VDM's temporal smoothing property to blend frames at clip intersections smoothly. The underlying rationale is that, despite identical latent inputs, the denoising results for overlapped frames from adjacent clips differ due to VDM's temporal smoothing property, which adjusts overlapped frames to be temporally consistent with the starting frame. By applying this smoothing property at clip intersections, we achieve more seamless transitions.
127
+
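The staggered clip layout can be made concrete with a small helper that tiles the video into clips whose boundaries shift by half a clip on odd timesteps, so boundaries from consecutive timesteps never coincide. This is a sketch under the paper's description; the function name and tiling rule are our assumptions.

```python
def clip_windows(num_frames: int, clip_len: int, timestep: int):
    """Return (start, end) frame windows covering the whole video for one
    denoising timestep. Even timesteps tile clips from frame 0; odd
    timesteps shift the tiling by half a clip, so clip boundaries from
    consecutive timesteps fall in different places."""
    offset = 0 if timestep % 2 == 0 else clip_len // 2
    edges = sorted({0, num_frames, *range(offset, num_frames, clip_len)})
    return list(zip(edges[:-1], edges[1:]))

# With 66 frames and 22-frame clips, interior boundaries alternate between
# {22, 44} on even timesteps and {11, 33, 55} on odd timesteps.
even = clip_windows(66, 22, 0)  # [(0, 22), (22, 44), (44, 66)]
odd = clip_windows(66, 22, 1)   # [(0, 11), (11, 33), (33, 55), (55, 66)]
```

Alternating the boundary positions means every frame is, on some timesteps, denoised in the interior of a clip, which is what lets the VDM smooth over the seams.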
128
+ When processing long videos divided into multiple clips, preliminary optimizations lead to multiple adjustments at clip intersections. After optimization, these transitions are smoothed into a single gradual change from the first to the last frame of the entire video. However, complete consistency across the entire video remains unattainable due to
129
+
130
+ inherent inconsistencies between the first and last frames.
131
+
132
+ ![](images/551fa82f28ca4156c47b16d5a5a7670628ec03e9c00d6bc0677f1edf02269c2f.jpg)
133
+ Figure 7. Temporal consistency optimization for long-sequence inference.
134
+
135
+ # 3.3.2 Expanding the Temporal Receptive Field
136
+
137
+ A single inference pass can process only a limited number of frames (for instance, 22 frames in our setting), which restricts the temporal receptive field and prevents the propagation of known pixels from distant frames. Additionally, information sharing between different clips is constrained, resulting in inconsistencies in detailed content despite similar semantics across clips. This leads to frequent and noticeable changes during long-sequence inference, as illustrated in Figure 7. To mitigate this, we expand the temporal receptive field of the inference process through the following two strategies.
138
+
139
+ # 1. Enhancing Priors for Comprehensive Pixel Propagation
140
+
141
+ Using Propainter as an example, we first sample the input video frames and perform pre-propagation to extend known pixels across the entire time domain, surpassing the temporal limitations of a single propagation pass (which typically handles dozens of frames), as shown in Figure 8(a). Full propagation of known pixels ensures that the completed content remains consistent with the unmasked regions, thereby stabilizing the results.
142
+
143
+ Subsequently, the inpainting results of the sampled frames guide frame-by-frame propagation, allowing the information obtained from pre-propagation to be integrated into every frame, as depicted in Figure 9(a).
144
+
145
+ This optimization enables Propainter to utilize information from distant frames more effectively, ensuring that known pixels are stably propagated across the entire time domain. Consequently, the prior provided to DiffuEraser is more accurate and stable. Nonetheless, DiffuEraser's limited temporal receptive field still results in significant changes at clip intersections.
146
+
147
+ ![](images/992d21174ef8be4dd3d6f82225b043e935a2cb698ed7077ce08ae9679ee73ff1.jpg)
148
+ (a) Pre-propagation
149
+
150
+ ![](images/2aeea5110bb1da2ec9fd097599a37a9e87e8f02824c7b219fa70849a875afc7f.jpg)
151
+ (b) Pre-inference
152
+ Figure 8. Perform pre-propagation or pre-inference to expand the temporal receptive field of the model.
153
+
154
+ # 2. Expanding the Temporal Receptive Field of DiffuEraser for consistent generation of unknown pixels
155
+
156
+ To further enhance temporal consistency, we also expand the temporal receptive field of DiffuEraser. Similar to the prior optimization, we introduce a pre-inference step where video frames are sampled and processed as a single inference pass, thereby broadening the temporal context and ensuring consistent content generation across the entire video, as shown in Figure 8(b).
157
+
158
+ Following pre-inference, the results guide frame-by-frame inference, ensuring that the content consistency established during pre-inference is maintained throughout all remaining frames, as illustrated in Figure 9(b).
159
+
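A minimal sketch of the pre-inference sampling step: pick a clip's worth of frame indices spread uniformly across the whole video, so that a single inference pass "sees" the full temporal range before guiding frame-by-frame inference. The function name and rounding scheme are our assumptions.

```python
def sample_for_preinference(num_frames: int, clip_len: int):
    """Pick clip_len frame indices spread uniformly over the whole video,
    from the first frame to the last, so one inference pass covers the
    full temporal range."""
    if num_frames <= clip_len:
        return list(range(num_frames))
    step = (num_frames - 1) / (clip_len - 1)
    return [round(i * step) for i in range(clip_len)]

# A 250-frame video is summarized by 22 frames spanning first to last;
# their inpainting result then guides ordinary frame-by-frame inference.
indices = sample_for_preinference(250, 22)
```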
160
+ The core principle behind these optimizations—both for priors and DiffuEraser—is to extend the temporal receptive field to encompass the entire video duration, rather than being confined to individual clips. The optimization of the prior ensures comprehensive propagation of known pixels, maintaining result correctness, while the optimization of DiffuEraser focuses on the consistent generation of unknown pixels, ensuring overall stability. Together, these enhancements effectively resolve the temporal consistency issues
161
+
162
+ ![](images/4e49403c2aacd5c4f48e3e6920f79d506bbee54a511a79fd5151ff8f5e48fa0a.jpg)
163
+ (a) Frame-by-frame propagation
164
+
165
+ ![](images/1b6487e002ce5cc1bc002e6dce812265c72cdf669e94baf54a8df2c43beb.jpg)
166
+ (b) Frame-by-frame inference
167
+ Figure 9. The temporal consistency obtained from pre-propagation or pre-inference is maintained throughout all remaining frames.
168
+
169
+ inherent in long-sequence inference, as demonstrated in Figure 7.
170
+
171
+ # 4. Experiments
172
+
173
+ Datasets. We utilized the Panda-70M dataset [2], splitting videos at scene cuts and filtering them based on matching scores to obtain 3,183,727 short video clips paired with captions. During training, we generated mask sequences with random rates, directions, and shapes to simulate video inpainting and object removal tasks.
174
+
175
+ Training Details and Metrics. We employed a two-stage training strategy with a resolution of 512. In the first stage, we trained the BrushNet and the main denoising UNet without the motion module to enhance content generation capabilities. In the second stage, we trained the motion module of the main denoising UNet to improve temporal consistency. The first stage is trained on 4 NVIDIA A100 GPUs for 100,000 steps with a batch size of 16, and the second stage is trained on 8 NVIDIA A100 GPUs for 80,000 steps with 22-frame video sequences and a batch size of 1. Both models were optimized using the L2 loss function and a learning rate of 1e-5.
176
+
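The stated L2 objective corresponds to the standard epsilon-prediction diffusion loss. The sketch below implements that loss with a toy denoiser standing in for the real UNet; the schedule, shapes, and module are illustrative assumptions only.

```python
import torch
import torch.nn.functional as F

def diffusion_l2_loss(model, x0, t, alphas_bar):
    """Epsilon-prediction L2 objective: noise the clean latent x0 to
    timestep t, then regress the injected noise."""
    eps = torch.randn_like(x0)
    a = alphas_bar[t].view(-1, 1, 1, 1)
    x_t = a.sqrt() * x0 + (1 - a).sqrt() * eps
    return F.mse_loss(model(x_t, t), eps)

class TinyDenoiser(torch.nn.Module):
    """Toy stand-in for the denoising UNet (the real model is far larger)."""
    def __init__(self):
        super().__init__()
        self.conv = torch.nn.Conv2d(4, 4, 3, padding=1)
    def forward(self, x, t):
        return self.conv(x)

torch.manual_seed(0)
model = TinyDenoiser()
alphas_bar = torch.linspace(0.999, 0.01, 1000)
x0 = torch.randn(2, 4, 16, 16)        # batch of clean latents
t = torch.randint(0, 1000, (2,))      # random timestep per sample
loss = diffusion_l2_loss(model, x0, t, alphas_bar)
loss.backward()                        # an optimizer step would use lr 1e-5
```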
177
+ Efficiency. Leveraging Phased Consistency Models (PCM) [35], our model can generate samples in only two steps, significantly improving inference efficiency. For instance, processing a 10-second video at 540p and 25 FPS on an NVIDIA L20 GPU takes about 200 seconds.
178
+
179
+ Qualitative Comparison. Figure 1 illustrates a comparison between our model and Propainter both in texture
180
+
181
+ quality and temporal consistency. For more comparison results, see Figures 10, 11, 12, and 13. Our model effectively propagates known pixels—those that appear in some masked frames—to all frames, while also generating unknown pixels—those that never appear in any masked frames—with high consistency and stability.
182
+
183
+ # 5. Conclusion and Discussion
184
+
185
+ In this paper, we introduce DiffuEraser, a video inpainting model based on stable diffusion. We address the video inpainting task by decomposing it into three subproblems: propagation of known pixels (pixels appearing in some masked frames), generation of unknown pixels (pixels never appearing in any masked frames), and maintaining temporal consistency of the completed content. For each sub-problem, we propose tailored solutions.
186
+
187
+ For the generation of unknown pixels, the powerful generative capabilities of the stable diffusion model help DiffuEraser effectively overcome the blurring and mosaic issues prevalent in Transformer-based models. Additionally, we mitigate the inherent hallucinations of stable diffusion models by incorporating priors, ensuring more accurate and realistic inpainting results.
188
+
189
+ In terms of propagating known pixels, the motion module within the denoising UNet, combined with the enhanced propagation properties provided by priors, ensures the sufficient and consistent propagation of known pixels across frames. This prevents conflicts between the completed content and the unmasked regions, thereby improving the correctness and stability of the results.
190
+
191
+ To address temporal inconsistencies between clips for long-sequence inference, we expand the temporal receptive field for both prior model and DiffuEraser, significantly enhancing the consistency of completed content across all frames. Furthermore, we leverage the temporal smoothing property of VDM to further enhance temporal coherence at the intersections between clips.
192
+
193
+ The concepts of incorporating priors and the methods to improve temporal consistency for long-sequence inference are also applicable to a variety of other video editing tasks, such as object replacement and local stylization. These applications will be further explored in future works. Experimental results demonstrate that DiffuEraser outperforms state-of-the-art methods in both content completeness and temporal consistency, establishing it as a superior approach for video inpainting tasks.
194
+
195
+ # References
196
+
197
+ [1] Tim Brooks, Aleksander Holynski, and Alexei A Efros. Instructpix2pix: Learning to follow image editing instructions. arXiv preprint arXiv:2211.09800, 2022.
198
+ [2] Tsai-Shien Chen, Aliaksandr Siarohin, Willi Menapace, Ekaterina Deyneka, Hsiang-wei Chao, Byung Eun Jeon,
199
+
200
+ Yuwei Fang, Hsin-Ying Lee, Jian Ren, Ming-Hsuan Yang, and Sergey Tulyakov. Panda-70m: Captioning 70m videos with multiple cross-modality teachers. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2024.
201
+ [3] Weifeng Chen, Jie Wu, Pan Xie, Hefeng Wu, Jiashi Li, Xin Xia, Xuefeng Xiao, and Liang Lin. Control-a-video: Controllable text-to-video generation with diffusion models, 2023.
202
+ [4] Patrick Esser, Johnathan Chiu, Parmida Atighechian, Jonathan Granskog, and Anastasis Germanidis. Structure and content-guided video synthesis with diffusion models, 2023.
203
+ [5] Aditya Ramesh et al. Hierarchical text-conditional image generation with clip latents, 2022.
204
+ [6] Rinon Gal, Yuval Alaluf, Yuval Atzmon, Or Patashnik, Amit H. Bermano, Gal Chechik, and Daniel Cohen-Or. An image is worth one word: Personalizing text-to-image generation using textual inversion, 2022.
205
+ [7] Chen Gao, Ayush Saraf, Jia-Bin Huang, and Johannes Kopf. Flow-edge guided video completion. In Proc. European Conference on Computer Vision (ECCV), 2020.
206
+ [8] Songwei Ge, Seungjun Nah, Guilin Liu, Tyler Poon, Andrew Tao, Bryan Catanzaro, David Jacobs, Jia-Bin Huang, Ming-Yu Liu, and Yogesh Balaji. Preserve your own correlation: A noise prior for video diffusion models, 2024.
207
+ [9] Bohai Gu, Hao Luo, Song Guo, and Peiran Dong. Advanced video inpainting using optical flow-guided efficient diffusion. arXiv preprint arXiv:2412.00857, 2024.
208
+ [10] Jiaxi Gu, Shicong Wang, Haoyu Zhao, Tianyi Lu, Xing Zhang, Zuxuan Wu, Songcen Xu, Wei Zhang, Yu-Gang Jiang, and Hang Xu. Reuse and diffuse: Iterative denoising for text-to-video generation.
209
+ [11] Yuwei Guo, Ceyuan Yang, Anyi Rao, Zhengyang Liang, Yaohui Wang, Yu Qiao, Maneesh Agrawala, Dahua Lin, and Bo Dai. Animatediff: Animate your personalized text-to-image diffusion models without specific tuning. International Conference on Learning Representations, 2024.
210
+ [12] Amir Hertz, Ron Mokady, Jay Tenenbaum, Kfir Aberman, Yael Pritch, and Daniel Cohen-Or. Prompt-to-prompt image editing with cross attention control. arXiv preprint arXiv:2208.01626, 2022.
211
+ [13] Jonathan Ho, William Chan, Chitwan Saharia, Jay Whang, Ruiqi Gao, Alexey Gritsenko, Diederik P. Kingma, Ben Poole, Mohammad Norouzi, David J. Fleet, and Tim Salimans. Imagen video: High definition video generation with diffusion models, 2022.
212
+ [14] Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. arXiv preprint arxiv:2006.11239, 2020.
213
+ [15] Jonathan Ho, Tim Salimans, Alexey Gritsenko, William Chan, Mohammad Norouzi, and David J Fleet. Video diffusion models. arXiv:2204.03458, 2022.
214
+ [16] Xuan Ju, Xian Liu, Xintao Wang, Yuxuan Bian, Ying Shan, and Qiang Xu. Brushnet: A plug-and-play image inpainting model with decomposed dual-branch diffusion, 2024.
215
+
216
+ ![](images/d3bc7ebbccf09fd4e1c9e17d35e2046263513c0baf8988931df18bde087afe80.jpg)
217
+ Masked Frames
218
+
219
+ ![](images/98e8e9ff32e4f294d8d178275d75f7c0d1a7a8a3b11af938e704f11ccd40ff16.jpg)
220
+ Propainter
221
+
222
+ ![](images/a93cd07329c0d84ca839cf34943af72d1dcc6063198e532eac9a991c00a07a92.jpg)
223
+ Ours
224
+
225
+ ![](images/11418655095a5b62846078b0cbfb6f34edc1b93930dc408eeeff4f7a88a9a6c7.jpg)
226
+ Masked Frames
227
+
228
+ ![](images/6e847a85323e143c3bc063b73b6b5cf327ce01f60632d339f5e002c8952ecaf4.jpg)
229
+ Propainter
230
+
231
+ ![](images/6fdacd3b8e8b72b621da2f80ca2ddfd517831ab88a609e94baf54a8df2c43beb.jpg)
232
+ Ours
233
+
234
+ ![](images/43f357df08024d9ac07ff5849aea6ec639318ac82e8f47daedb73647fcfb934a.jpg)
235
+
236
+ ![](images/28b6ebccac20543b867d9a6ba97205e651fbaab2d093cd7059bbf64fc1dfd32c.jpg)
+
+ ![](images/ecf75d7d5fd96e43203dc690083293b9e4f26e67df0f8fbc190e2e63b3155b68.jpg)
+
+ ![](images/02bdfb7fe858f8dacc4aa80281c833792dfc26b5f14769cd1460b3dc08a024a6.jpg)
+
+ ![](images/fb50e89052b20739b4236421a0b0af93dcc01f7b55e2eb91f8f29d1fac75b18e.jpg)
+
+ ![](images/fa92b6b82223760863095401c8d4f567afa5d87207110ef2a2967b49fd8fae7b.jpg)
+
+ ![](images/301786ce1cc9ba2c85fcadac39b1f02133566f30176edfbfe010657a33c8ea67.jpg)
+
+ ![](images/eecb89909217669e40b6abacdac0711be70055ed1680ee093e2fe664820a50f2.jpg)
+
+ ![](images/d4ed5d25a3ac258c013ae2352602519d8f0b61bde2e586cc734d007296f43b06.jpg)
+
+ ![](images/a92c094784b41cc3e31946d216f47e5beb550e828acfcff39963701032b9c4aa.jpg)
+
+ ![](images/47118d1f93aa3f5b6a0bef6c9b9b458ed5620d4dfd61d439a4a60ad2d3d537de.jpg)
+
+ ![](images/17a32a532767ea8d847634d3cd377f1c381caa36c55290db0094d126c9462275.jpg)
+
+ ![](images/3b2c16f3867803a648be0eddcadd96d170c51551092aa903345740fa959cba42.jpg)
+ Figure 10. Texture quality comparison between DiffuEraser and Propainter.
+
+ ![](images/77f2e0c6bb1b670359b8423e3bc6a7f8b5477fca1bc7b6afdb82f93b91cede43.jpg)
+
+ ![](images/a4acc2c94d7f84a6a2558a359f9e44a96f15f100cb8d1609f449a80ee1cc2396.jpg)
+
+ ![](images/fdecb4ade98a1f1c1762c796fcaddbf24a8ceb9f4b7a61c4ed631a04b04b618d.jpg)
+
+ ![](images/9a560ea480cbafa4d264df0dd864cb926ff48141032322d91add0d88f66d43db.jpg)
+
+ ![](images/4a7ba9903024c0b291b1204fc0600f39db2485953cab56c00be93052b57956f7.jpg)
+
+ ![](images/675cd0f6ec16ef5cde532b3fde32290602ec89b4d40b4a40bf06ab6467f5bfaf.jpg)
+ Masked Frames
+
+ ![](images/0236dc24af1832b4dba15ab654a97dd33feb90b0906965560240cd68db0c2791.jpg)
+ Propainter
+
+ ![](images/40a085a8d3817ba09f05d0e2212f0a1e76dd68ae3db2cdc78768b66b2cd95ee8.jpg)
+ Ours
+
+ ![](images/587e7352f647a8e985618f7301b6a372805533ff26471a9682762a67c42b7d3d.jpg)
+
+ ![](images/8ee5ec11e7d0726a49540d0c999f472ce9b6eda13371b24fe9ed0e275a61c03f.jpg)
+
+ ![](images/04b263f49307ef0c9201cbb4a923bd66757952d3565f409435b7df9adca672a1.jpg)
+
+ ![](images/f6f8994b31c384693ad5747ad644c6ac510c29cb478a3d09d45931ebc0e61ad1.jpg)
+
+ ![](images/500f3b0cc6f00bc469d2d59818d10b6798f0f8ec3a99f00108bfcb6d87e583e7.jpg)
+
+ ![](images/74ab6cd90dbb119bcdb32b26f6b7fc1bff0436ae04ae4421a56e51350a078b90.jpg)
+
+ ![](images/d7c0cd08cba41d04c3ca293d8116f3ca101e1ad02fb3cd182b601e1c124501a7.jpg)
+
+ ![](images/c6606cde700ab79b43d941192a6e4cdcf402a529fb46019f7eb3c565817c0902.jpg)
+
+ ![](images/b1a3499022b3c185936f308a44e1fad115ab1dfae0b4e17b9dc26f3a2d9f09fc.jpg)
+
+ ![](images/3432449190a09e2e9cf554bd513e5559188b4a0e68d88a5fd28e1aa322f0be91.jpg)
+
+ ![](images/fb802a6349e40deb83613e5f4264bed8e555afe8db980b894721a81acbbf1b93.jpg)
+
+ ![](images/3435540395fcaa6c13d62c9d734998717745532cb3bef601e118d938a3b52b28.jpg)
+
+ ![](images/dd11c3a4b5fe4be76cecdeb43b2ddad446caab267f2a575c29f1ef8d35110321.jpg)
+ Figure 11. Texture quality comparison between DiffuEraser and Propainter.
+
+ ![](images/12581ba0d1893d85450e733f8c781255b5e1ab400ad30a57843f20cb12902196.jpg)
+
+ ![](images/acd6f5a68c784151e7d1cf5fef786c47f342fe780418155cd9b787dc01654ec9.jpg)
+
+ ![](images/03784d385d4e9d3840d208b1ebb4406daed6989dae605b97f493e0bb4303cdaa.jpg)
+ Figure 12. Temporal consistency comparison between DiffuEraser and Propainter.
+
+ ![](images/1a6d7eb5424d91352b089f3858db3c45460bcb263bbd637dade7a53ecfde0782.jpg)
+ Figure 13. Temporal consistency comparison between DiffuEraser and Propainter.
+
+ [17] Minhyeok Lee, Suhwan Cho, Chajin Shin, Jungho Lee, Sunghun Yang, and Sangyoun Lee. Video diffusion models are strong video inpainter, 2024.
+ [18] Zhen Li, Cheng-Ze Lu, Jianhua Qin, Chun-Le Guo, and Ming-Ming Cheng. Towards an end-to-end framework for flow-guided video inpainting. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2022.
+ [19] Jun Hao Liew, Hanshu Yan, Jianfeng Zhang, Zhongcong Xu, and Jiashi Feng. Magicedit: High-fidelity and temporally coherent video editing. arXiv preprint, 2023.
+ [20] Rui Liu, Hanming Deng, Yangyi Huang, Xiaoyu Shi, Lewei Lu, Wenxiu Sun, Xiaogang Wang, Jifeng Dai, and Hongsheng Li. Fuseformer: Fusing fine-grained information in transformers for video inpainting. In International Conference on Computer Vision (ICCV), 2021.
+ [21] Shaoteng Liu, Yuechen Zhang, Wenbo Li, Zhe Lin, and Jiaya Jia. Video-p2p: Video editing with cross-attention control, 2023.
+ [22] Ron Mokady, Amir Hertz, Kfir Aberman, Yael Pritch, and Daniel Cohen-Or. Null-text inversion for editing real images using guided diffusion models. arXiv preprint arXiv:2211.09794, 2022.
+ [23] Eyal Molad, Eliahu Horwitz, Dani Valevski, Alex Rav Acha, Yossi Matias, Yael Pritch, Yaniv Leviathan, and Yedid Hoshen. Dreamix: Video diffusion models are general video editors, 2023.
+ [24] Chong Mou, Xintao Wang, Liangbin Xie, Yanze Wu, Jian Zhang, Zhongang Qi, Ying Shan, and Xiaohu Qie. T2i-adapter: Learning adapters to dig out more controllable ability for text-to-image diffusion models. arXiv preprint arXiv:2302.08453, 2023.
+ [25] Chenyang Qi, Xiaodong Cun, Yong Zhang, Chenyang Lei, Xintao Wang, Ying Shan, and Qifeng Chen. Fatezero: Fusing attentions for zero-shot text-based video editing. arXiv preprint arXiv:2303.09535, 2023.
+ [26] Weize Quan, Jiaxi Chen, Yanli Liu, Dong-Ming Yan, and Peter Wonka. Deep learning-based image and video inpainting: A survey, 2024.
+ [27] Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models, 2021.
+ [28] Nataniel Ruiz, Yuanzhen Li, Varun Jampani, Yael Pritch, Michael Rubinstein, and Kfir Aberman. Dreambooth: Fine tuning text-to-image diffusion models for subject-driven generation. arXiv preprint arXiv:2208.12242, 2022.
+ [29] Chitwan Saharia, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily Denton, Seyed Kamyar Seyed Ghasemipour, Burcu Karagol Ayan, S. Sara Mahdavi, Raphael Gontijo Lopes, Tim Salimans, Jonathan Ho, David J. Fleet, and Mohammad Norouzi. Photorealistic text-to-image diffusion models with deep language understanding, 2022.
+ [30] Fengyuan Shi, Jiaxi Gu, Hang Xu, Songcen Xu, Wei Zhang, and Limin Wang. Bivdiff: A training-free framework for general-purpose video synthesis via bridging image and video diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 7393-7402, June 2024.
+ [31] Uriel Singer, Adam Polyak, Thomas Hayes, Xi Yin, Jie An, Songyang Zhang, Qiyuan Hu, Harry Yang, Oron Ashual, Oran Gafni, Devi Parikh, Sonal Gupta, and Yaniv Taigman. Make-a-video: Text-to-video generation without text-video data, 2022.
+ [32] Jascha Sohl-Dickstein, Eric A. Weiss, Niru Maheswaranathan, and Surya Ganguli. Deep unsupervised learning using nonequilibrium thermodynamics, 2015.
+ [33] Jiaming Song, Chenlin Meng, and Stefano Ermon. Denoising diffusion implicit models. arXiv preprint arXiv:2010.02502, 2020.
+ [34] Yang Song, Jascha Sohl-Dickstein, Diederik P. Kingma, Abhishek Kumar, Stefano Ermon, and Ben Poole. Score-based generative modeling through stochastic differential equations. In International Conference on Learning Representations, 2021.
+
+ [35] Fu-Yun Wang, Zhaoyang Huang, Alexander William Bergman, Dazhong Shen, Peng Gao, Michael Lingelbach, Keqiang Sun, Weikang Bian, Guanglu Song, Yu Liu, et al. Phased consistency model. arXiv preprint arXiv:2405.18407, 2024.
+ [36] Xiang Wang, Hangjie Yuan, Shiwei Zhang, Dayou Chen, Jiuniu Wang, Yingya Zhang, Yujun Shen, Deli Zhao, and Jingren Zhou. Videocomposer: Compositional video synthesis with motion controllability, 2023.
+ [37] Jianzong Wu, Xiangtai Li, Chenyang Si, Shangchen Zhou, Jingkang Yang, Jiangning Zhang, Yining Li, Kai Chen, Yunhai Tong, Ziwei Liu, et al. Towards language-driven video inpainting via multimodal large language models. arXiv preprint arXiv:2401.10226, 2024.
+ [38] Jay Zhangjie Wu, Yixiao Ge, Xintao Wang, Stan Weixian Lei, Yuchao Gu, Yufei Shi, Wynne Hsu, Ying Shan, Xiaohu Qie, and Mike Zheng Shou. Tune-a-video: One-shot tuning of image diffusion models for text-to-video generation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 7623-7633, 2023.
+ [39] Jinbo Xing, Menghan Xia, Yuxin Liu, Yuechen Zhang, Yong Zhang, Yingqing He, Hanyuan Liu, Haoxin Chen, Xiaodong Cun, Xintao Wang, et al. Make-your-video: Customized video generation using textual and structural guidance. arXiv preprint arXiv:2306.00943, 2023.
+ [40] Yanhong Zeng, Jianlong Fu, and Hongyang Chao. Learning joint spatial-temporal transformations for video inpainting. In Proceedings of the European Conference on Computer Vision (ECCV), 2020.
+ [41] Kaidong Zhang, Jingjing Fu, and Dong Liu. Flow-guided transformer for video inpainting. In European Conference on Computer Vision, pages 74-90. Springer, 2022.
+ [42] Kaidong Zhang, Jingjing Fu, and Dong Liu. Inertia-guided flow completion and style fusion for video inpainting. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 5982-5991, June 2022.
+ [43] Lvmin Zhang, Anyi Rao, and Maneesh Agrawala. Adding conditional control to text-to-image diffusion models, 2023.
+ [44] Yabo Zhang, Yuxiang Wei, Dongsheng Jiang, Xiaopeng Zhang, Wangmeng Zuo, and Qi Tian. Controlvideo: Training-free controllable text-to-video generation. arXiv preprint arXiv:2305.13077, 2023.
+ [45] Zhixing Zhang, Bichen Wu, Xiaoyan Wang, Yaqiao Luo, Luxin Zhang, Yinan Zhao, Peter Vajda, Dimitris Metaxas, and Licheng Yu. Avid: Any-length video inpainting with diffusion model. arXiv preprint arXiv:2312.03816, 2023.
+ [46] Shangchen Zhou, Chongyi Li, Kelvin C.K. Chan, and Chen Change Loy. ProPainter: Improving propagation and transformer for video inpainting. In Proceedings of IEEE International Conference on Computer Vision (ICCV), 2023.
+ [47] Bojia Zi, Shihao Zhao, Xianbiao Qi, Jianan Wang, Yukai Shi, Qianyu Chen, Bin Liang, Kam-Fai Wong, and Lei Zhang. Cococo: Improving text-guided video inpainting for better consistency, controllability and compatibility. arXiv preprint arXiv:2403.12035, 2024.
2501.10xxx/2501.10018/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7be9deb82c9db708241f126af14777c8fc275924c6b1b6a9a305830155548546
+ size 1236743
2501.10xxx/2501.10018/layout.json ADDED
The diff for this file is too large to render. See raw diff
 
2501.10xxx/2501.10040/36e18068-f16d-4586-b181-9c17384f5431_content_list.json ADDED
The diff for this file is too large to render. See raw diff