SlowGuess committed on
Commit b526dcd · verified · 1 Parent(s): 38b3ec8

Add Batch f437104a-1b19-49e4-bd66-b357b0d8eeb1

.gitattributes CHANGED
@@ -8908,3 +8908,6 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
8908
  2203.11xxx/2203.11941/3d477d2f-3b43-4365-a4c7-3ee962e7266d_origin.pdf filter=lfs diff=lfs merge=lfs -text
8909
  2201.07xxx/2201.07211/6f4f7845-69d0-47da-a6c8-c27b41ddefb8_origin.pdf filter=lfs diff=lfs merge=lfs -text
8910
  2201.04xxx/2201.04944/9304506d-347f-4b61-b827-6c63e9db0b54_origin.pdf filter=lfs diff=lfs merge=lfs -text
8911
+ 2203.02xxx/2203.02697/9f953933-84bb-40cc-b218-d16d7f5c68c3_origin.pdf filter=lfs diff=lfs merge=lfs -text
8912
+ 2203.02xxx/2203.02700/3d69ca7f-39a5-4c99-b9df-bcb921fe9d04_origin.pdf filter=lfs diff=lfs merge=lfs -text
8913
+ 2203.02xxx/2203.02719/6a2a0cc6-43cc-4438-b5c9-38412d95a76d_origin.pdf filter=lfs diff=lfs merge=lfs -text
2203.02xxx/2203.02697/9f953933-84bb-40cc-b218-d16d7f5c68c3_content_list.json ADDED
@@ -0,0 +1,1986 @@
1
+ [
2
+ {
3
+ "type": "text",
4
+ "text": "Pareto Optimization or Cascaded Weighted Sum: A Comparison of Concepts",
5
+ "text_level": 1,
6
+ "bbox": [
7
+ 77,
8
+ 198,
9
+ 722,
10
+ 247
11
+ ],
12
+ "page_idx": 0
13
+ },
14
+ {
15
+ "type": "text",
16
+ "text": "Wilfried Jakob $^{1, *}$ and Christian Blume",
17
+ "bbox": [
18
+ 77,
19
+ 263,
20
+ 436,
21
+ 280
22
+ ],
23
+ "page_idx": 0
24
+ },
25
+ {
26
+ "type": "list",
27
+ "sub_type": "text",
28
+ "list_items": [
29
+ "$^{1}$ Karlsruhe Institute of Technology (KIT), Institute of Applied Computer Science (IAI), P.O. Box 3640, Karlsruhe 76021, Germany",
30
+ "$^{2}$ Cologne University of Applied Sciences, Institute of Automation and Industrial IT, Steinmüllerallee 1, Gummersbach 51643, Germany; E-Mail: blume@gm.fh-koeln.de",
31
+ "* Author to whom correspondence should be addressed; E-Mail: wilfried.jakob@kit.edu; Tel.: +49-721-608-24663; Fax: +49-721-608-22602."
32
+ ],
33
+ "bbox": [
34
+ 77,
35
+ 297,
36
+ 810,
37
+ 430
38
+ ],
39
+ "page_idx": 0
40
+ },
41
+ {
42
+ "type": "text",
43
+ "text": "Received: 22 January 2014; in revised form: 3 March 2014 / Accepted: 14 March 2014 /",
44
+ "bbox": [
45
+ 77,
46
+ 449,
47
+ 803,
48
+ 466
49
+ ],
50
+ "page_idx": 0
51
+ },
52
+ {
53
+ "type": "text",
54
+ "text": "Published: 21 March 2014",
55
+ "bbox": [
56
+ 80,
57
+ 469,
58
+ 300,
59
+ 483
60
+ ],
61
+ "page_idx": 0
62
+ },
63
+ {
64
+ "type": "text",
65
+ "text": "Abstract: Looking at articles or conference papers published since the turn of the century, Pareto optimization is the dominating assessment method for multi-objective nonlinear optimization problems. However, is it always the method of choice for real-world applications, where either more than four objectives have to be considered, or the same type of task is repeated again and again with only minor modifications, in an automated optimization or planning process? This paper presents a classification of application scenarios and compares the Pareto approach with an extended version of the weighted sum, called cascaded weighted sum, for the different scenarios. Its range of application within the field of multi-objective optimization is discussed as well as its strengths and weaknesses.",
66
+ "bbox": [
67
+ 124,
68
+ 526,
69
+ 870,
70
+ 703
71
+ ],
72
+ "page_idx": 0
73
+ },
74
+ {
75
+ "type": "text",
76
+ "text": "Keywords: multi-criteria optimization; Pareto optimization; weighted sum; cascaded weighted sum; global optimization; population based optimization; evolutionary algorithm",
77
+ "bbox": [
78
+ 124,
79
+ 721,
80
+ 870,
81
+ 760
82
+ ],
83
+ "page_idx": 0
84
+ },
85
+ {
86
+ "type": "text",
87
+ "text": "1. Introduction",
88
+ "text_level": 1,
89
+ "bbox": [
90
+ 78,
91
+ 813,
92
+ 213,
93
+ 829
94
+ ],
95
+ "page_idx": 0
96
+ },
97
+ {
98
+ "type": "text",
99
+ "text": "Most nonlinear real-world optimization problems require the optimization of several objectives and usually at least some of them are contradictory. A simple example of two conflicting criteria is the payload and the traveling distance with a given amount of fuel, which cannot be maximized both at the same time. The typical solution of such a problem is a compromise. A good compromise is one",
100
+ "bbox": [
101
+ 75,
102
+ 847,
103
+ 917,
104
+ 926
105
+ ],
106
+ "page_idx": 0
107
+ },
108
+ {
109
+ "type": "header",
110
+ "text": "Algorithms 2014, 7, 166-185; doi:10.3390/a7010166",
111
+ "bbox": [
112
+ 77,
113
+ 61,
114
+ 509,
115
+ 80
116
+ ],
117
+ "page_idx": 0
118
+ },
119
+ {
120
+ "type": "header",
121
+ "text": "OPEN ACCESS",
122
+ "bbox": [
123
+ 783,
124
+ 77,
125
+ 917,
126
+ 91
127
+ ],
128
+ "page_idx": 0
129
+ },
130
+ {
131
+ "type": "header",
132
+ "text": "algorithms",
133
+ "bbox": [
134
+ 702,
135
+ 97,
136
+ 914,
137
+ 130
138
+ ],
139
+ "page_idx": 0
140
+ },
141
+ {
142
+ "type": "header",
143
+ "text": "ISSN 1999-4893",
144
+ "bbox": [
145
+ 776,
146
+ 131,
147
+ 915,
148
+ 145
149
+ ],
150
+ "page_idx": 0
151
+ },
152
+ {
153
+ "type": "header",
154
+ "text": "www.mdpi.com/journal/algorithms",
155
+ "bbox": [
156
+ 628,
157
+ 148,
158
+ 915,
159
+ 164
160
+ ],
161
+ "page_idx": 0
162
+ },
163
+ {
164
+ "type": "header",
165
+ "text": "Article",
166
+ "bbox": [
167
+ 77,
168
+ 167,
169
+ 139,
170
+ 181
171
+ ],
172
+ "page_idx": 0
173
+ },
174
+ {
175
+ "type": "text",
176
+ "text": "where one of the criteria can be improved only by worsening at least one of the others. This approach is called Pareto optimization [1], and the set of all good compromises is called Pareto optimal solutions or non-dominated solutions. In practice, usually only one solution is required. Thus, multi-objective optimization based on Pareto optimality is divided into two phases: At first, the set of Pareto optimal solutions is determined, out of which one must be chosen as the final result by a human decision maker according to more or less subjective preferences. This is in contrast to single-objective optimization tasks, where no second selection step is required.",
177
+ "bbox": [
178
+ 80,
179
+ 93,
180
+ 917,
181
+ 231
182
+ ],
183
+ "page_idx": 1
184
+ },
185
+ {
186
+ "type": "text",
187
+ "text": "Most population-based search procedures, like evolutionary algorithms, particle swarm or ant colony optimization, require a single quality value called e.g., fitness in the context of evolutionary algorithms. This may be one reason for the frequent aggregation of different optimization criteria to a single quality value. Two methods, the frequently used weighted sum and the $\\varepsilon$ -constrained method, are described briefly. Another commonly used method is to express everything in costs. On closer inspection, it becomes apparent that this is equal to the weighted sum approach using suitable weights. Additionally, the conversion into costs requires an artificial redefinition of the original goals and this is often not really appropriate. Thus, most multi-objective optimization problems have meanwhile been solved based on Pareto optimization, at least in academia.",
188
+ "bbox": [
189
+ 80,
190
+ 235,
191
+ 917,
192
+ 413
193
+ ],
194
+ "page_idx": 1
195
+ },
196
+ {
197
+ "type": "text",
198
+ "text": "The computational effort to determine all or at least most of the Pareto front increases significantly with the number of conflicting objectives, as will be shown later in the paper. However, what if the complete Pareto front is not needed at all, because the area of interest is already known? In this paper we will introduce an aggregation method called the cascaded weighted sum (CWS) and discuss application scenarios, where aggregation methods like the CWS can compete with Pareto-optimality-based approaches. Not to be misunderstood: We agree that in many fields of application, Pareto optimization is the appropriate method for multi-objective problems. Although we will concentrate on evolutionary multi-objective optimization later in the paper, the issues discussed here can be applied to other global optimization procedures as well and especially to those, which optimize a set of solutions simultaneously instead of just one.",
199
+ "bbox": [
200
+ 80,
201
+ 417,
202
+ 917,
203
+ 617
204
+ ],
205
+ "page_idx": 1
206
+ },
207
+ {
208
+ "type": "text",
209
+ "text": "The paper is organized as follows: In Section 2 the basics of Pareto optimization are described, followed by the weighted sum and the $\\varepsilon$ -constrained method, including a brief discussion of their properties. In Section 3, the cascaded weighted sum is introduced. Section 4 starts with a classification of application scenarios, gives some examples, and discusses the question for which scenario which method is suited better or how they can complement each other. The paper closes in Section 5 with a summary and a conclusion.",
210
+ "bbox": [
211
+ 80,
212
+ 619,
213
+ 917,
214
+ 737
215
+ ],
216
+ "page_idx": 1
217
+ },
218
+ {
219
+ "type": "text",
220
+ "text": "2. Short Introduction to Pareto Optimization and Two Aggregation Methods",
221
+ "text_level": 1,
222
+ "bbox": [
223
+ 82,
224
+ 753,
225
+ 741,
226
+ 772
227
+ ],
228
+ "page_idx": 1
229
+ },
230
+ {
231
+ "type": "text",
232
+ "text": "Based on Hoffmeister and Bäck [2], and the notation of Branke et al. [3], a multi-objective optimization problem is the task of maximizing a set of $k (>1)$ usually conflicting objective functions $f_{i}$ simultaneously, denoted by maximize $\\{...\\}$ :",
233
+ "bbox": [
234
+ 82,
235
+ 788,
236
+ 917,
237
+ 844
238
+ ],
239
+ "page_idx": 1
240
+ },
241
+ {
242
+ "type": "equation",
243
+ "text": "\n$$\n\\begin{array}{l} \\text {m a x i m i z e} \\left\\{f _ {1} (x), f _ {2} (x), \\dots , f _ {k} (x) \\right\\}, x \\in S \\\\ f _ {i}: S \\subseteq S _ {1} \\times \\dots \\times S _ {n} \\rightarrow \\Re , S \\neq \\emptyset \\tag {1} \\\\ \\end{array}\n$$\n",
244
+ "text_format": "latex",
245
+ "bbox": [
246
+ 310,
247
+ 854,
248
+ 912,
249
+ 901
250
+ ],
251
+ "page_idx": 1
252
+ },
253
+ {
254
+ "type": "header",
255
+ "text": "Algorithms 2014, 7",
256
+ "bbox": [
257
+ 78,
258
+ 54,
259
+ 238,
260
+ 70
261
+ ],
262
+ "page_idx": 1
263
+ },
264
+ {
265
+ "type": "page_number",
266
+ "text": "167",
267
+ "bbox": [
268
+ 884,
269
+ 54,
270
+ 915,
271
+ 68
272
+ ],
273
+ "page_idx": 1
274
+ },
275
+ {
276
+ "type": "text",
277
+ "text": "The focus on maximization is without loss of generality, because $\\min \\{f(x)\\} = -\\max \\{-f(x)\\}$ . The nonempty set $S$ is called the feasible region and a member of it is called a decision (variable) vector $x = (x_{1}, x_{2}, \\ldots, x_{n})^{T}$ . As it is of no further interest here, we do not describe the constraints forming $S$ in more detail. Frequently, the $S_{i}$ are the set of real or whole numbers or a subset thereof, but they can be any arbitrary set as well. Objective vectors are images of decision vectors, consisting of objective (function) values $z = f(x) = (f_{1}(x), f_{2}(x), \\ldots, f_{k}(x))^{T}$ . Accordingly, the image of the feasible region in the objective space is called the feasible objective region $Z = f(S)$ . Figure 1 illustrates this.",
278
+ "bbox": [
279
+ 75,
280
+ 89,
281
+ 919,
282
+ 235
283
+ ],
284
+ "page_idx": 2
285
+ },
286
+ {
287
+ "type": "image",
288
+ "img_path": "images/0235fe3a5ca0707d49866992d60c0adb72dc14f557ef17efa0789b8ea11194b5.jpg",
289
+ "image_caption": [
290
+ "Figure 1. Feasible region $S$ and its image, the feasible objective region $Z$ for $n = k = 2$ . The set of weakly Pareto optimal solutions is shown as a bold green line in the diagram on the right. The subset of Pareto optimal solutions is the part of the green line between the black circles. The ideal objective vector $z^*$ consists of the upper bounds of the Pareto set."
291
+ ],
292
+ "image_footnote": [],
293
+ "bbox": [
294
+ 132,
295
+ 341,
296
+ 867,
297
+ 546
298
+ ],
299
+ "page_idx": 2
300
+ },
301
+ {
302
+ "type": "text",
303
+ "text": "In the following sections Pareto optimization and two frequently used aggregation methods, which turn a multi-objective problem into a single-objective task, are introduced and compared in the end.",
304
+ "bbox": [
305
+ 75,
306
+ 563,
307
+ 915,
308
+ 602
309
+ ],
310
+ "page_idx": 2
311
+ },
312
+ {
313
+ "type": "text",
314
+ "text": "2.1. Pareto Optimization",
315
+ "text_level": 1,
316
+ "bbox": [
317
+ 77,
318
+ 618,
319
+ 284,
320
+ 634
321
+ ],
322
+ "page_idx": 2
323
+ },
324
+ {
325
+ "type": "text",
326
+ "text": "A decision vector $x \\in S$ dominates another vector $y \\in S$ , if",
327
+ "bbox": [
328
+ 100,
329
+ 650,
330
+ 588,
331
+ 667
332
+ ],
333
+ "page_idx": 2
334
+ },
335
+ {
336
+ "type": "equation",
337
+ "text": "\n$$\n\\begin{array}{l} \\forall i \\in \\{1, 2, \\dots , k \\}: f _ {i} (x) \\geq f _ {i} (y) a n d \\\\ \\exists j \\in \\{1, 2, \\dots , k \\}: f _ {j} (x) > f _ {j} (y) \\tag {2} \\\\ \\end{array}\n$$\n",
338
+ "text_format": "latex",
339
+ "bbox": [
340
+ 349,
341
+ 673,
342
+ 912,
343
+ 718
344
+ ],
345
+ "page_idx": 2
346
+ },
347
+ {
348
+ "type": "text",
349
+ "text": "A decision vector $x' \\in S$ , which is not dominated by any other $x \\in S$ , is called Pareto optimal. The objective vector $z' = f(x')$ is Pareto optimal, if the corresponding decision vector is Pareto optimal and the corresponding sets can be denoted by $P(S)$ and $P(Z)$ . The set of weakly Pareto optimal solutions, which is a superset of the set of Pareto optimal solutions, is formed by decision vectors, for which the following applies: An $x' \\in S$ is called weakly Pareto optimal, if no other $x \\in S$ exists such that $f_i(x) > f_i(x')$ for all $i = 1, \\ldots, k$ . As the set of Pareto optimal solutions consists of decision vectors only, which are not dominated, they can be regarded as the set of good compromises mentioned in the introduction. It follows from the definition that they are located on the border of the feasible objective region, as shown in the right part of Figure 1. The figure also illustrates the concept of weakly Pareto optimal solutions lying on the part of the green line outside of the section bounded by the black circles",
350
+ "bbox": [
351
+ 75,
352
+ 728,
353
+ 919,
354
+ 932
355
+ ],
356
+ "page_idx": 2
357
+ },
358
+ {
359
+ "type": "header",
360
+ "text": "Algorithms 2014, 7",
361
+ "bbox": [
362
+ 77,
363
+ 53,
364
+ 240,
365
+ 70
366
+ ],
367
+ "page_idx": 2
368
+ },
369
+ {
370
+ "type": "page_number",
371
+ "text": "168",
372
+ "bbox": [
373
+ 882,
374
+ 54,
375
+ 915,
376
+ 68
377
+ ],
378
+ "page_idx": 2
379
+ },
380
+ {
381
+ "type": "text",
382
+ "text": "in the given example. It should be stated that the set of Pareto optimal solutions does not need to be as nicely shaped as shown in Figure 1; it may also be non-convex and disconnected.",
383
+ "bbox": [
384
+ 80,
385
+ 93,
386
+ 917,
387
+ 130
388
+ ],
389
+ "page_idx": 3
390
+ },
391
+ {
392
+ "type": "text",
393
+ "text": "The upper bounds of the Pareto optimal set can be obtained by maximizing the $f_{i}$ individually with respect to the feasible region. This results in the ideal objective vector $z^{*}\\in \\Re^{k}$ , an example of which is shown for the two-dimensional case in the right part of Figure 1. The lower bounds are usually hard to determine, see [3]. Although Pareto-based search methods can provide valuable estimations of the ranges of the objectives for practical applications, they are not suited for an exact determination of their lower and upper bounds.",
394
+ "bbox": [
395
+ 78,
396
+ 134,
397
+ 917,
398
+ 253
399
+ ],
400
+ "page_idx": 3
401
+ },
402
+ {
403
+ "type": "text",
404
+ "text": "According to [4], constraints in the objective space are handled as follows: A solution $x$ constrained-dominates a solution $y$ , if any of the three conditions is satisfied:",
405
+ "bbox": [
406
+ 80,
407
+ 255,
408
+ 917,
409
+ 292
410
+ ],
411
+ "page_idx": 3
412
+ },
413
+ {
414
+ "type": "list",
415
+ "sub_type": "text",
416
+ "list_items": [
417
+ "- Solution $x$ is feasible and $y$ is not.",
418
+ "- Both solutions are feasible and $x$ dominates $y$ .",
419
+ "- Both solutions are infeasible, but $x$ has a smaller constrained violation than $y$ . If more than one constraint is violated, the violations are normalized, summed up, and compared."
420
+ ],
421
+ "bbox": [
422
+ 100,
423
+ 303,
424
+ 914,
425
+ 379
426
+ ],
427
+ "page_idx": 3
428
+ },
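To make the dominance relation of Equation (2) and the constrained-domination rule above concrete, here is a minimal Python sketch (illustrative code, not from the paper; maximization and pre-normalized constraint violations are assumed):

```python
from typing import Sequence

def dominates(fx: Sequence[float], fy: Sequence[float]) -> bool:
    """Equation (2): x dominates y if it is no worse in every objective
    and strictly better in at least one (maximization assumed)."""
    return all(a >= b for a, b in zip(fx, fy)) and \
           any(a > b for a, b in zip(fx, fy))

def constrained_dominates(fx, fy, viol_x, viol_y) -> bool:
    """Constrained domination according to [4]; viol_* hold the
    normalized constraint violations (all zero = feasible)."""
    feas_x, feas_y = sum(viol_x) == 0, sum(viol_y) == 0
    if feas_x and not feas_y:       # condition 1: only x is feasible
        return True
    if feas_x and feas_y:           # condition 2: both feasible
        return dominates(fx, fy)
    if not feas_x and not feas_y:   # condition 3: both infeasible
        return sum(viol_x) < sum(viol_y)
    return False
```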
429
+ {
430
+ "type": "text",
431
+ "text": "Hereinafter, the term Pareto optimization is used for an optimization procedure employing Pareto optimality to assess and compare generated solutions.",
432
+ "bbox": [
433
+ 80,
434
+ 390,
435
+ 917,
436
+ 428
437
+ ],
438
+ "page_idx": 3
439
+ },
440
+ {
441
+ "type": "text",
442
+ "text": "2.2. Weighted Sum",
443
+ "text_level": 1,
444
+ "bbox": [
445
+ 80,
446
+ 445,
447
+ 235,
448
+ 462
449
+ ],
450
+ "page_idx": 3
451
+ },
452
+ {
453
+ "type": "text",
454
+ "text": "One of the probably most often used assessment methods besides Pareto optimality is the weighted sum, which aggregates the objective values to a single quality measure. As the objective functions frequently have different scales, they are usually normalized. This can be done for example by using Equations (3) or (4) when minimizing and maximizing the objectives, respectively:",
455
+ "bbox": [
456
+ 80,
457
+ 479,
458
+ 917,
459
+ 558
460
+ ],
461
+ "page_idx": 3
462
+ },
463
+ {
464
+ "type": "equation",
465
+ "text": "\n$$\nf _ {i} ^ {\\text {n o r m}} = \\frac {\\max \\left(f _ {i}\\right) - f _ {i}}{\\max \\left(f _ {i}\\right) - \\min \\left(f _ {i}\\right)} \\text {f o r o b j e c t i v e s t o b e m i n i m i z e d} \\tag {3}\n$$\n",
466
+ "text_format": "latex",
467
+ "bbox": [
468
+ 267,
469
+ 564,
470
+ 912,
471
+ 604
472
+ ],
473
+ "page_idx": 3
474
+ },
475
+ {
476
+ "type": "equation",
477
+ "text": "\n$$\nf _ {i} ^ {\\text {n o r m}} = 1 - \\frac {\\max \\left(f _ {i}\\right) - f _ {i}}{\\max \\left(f _ {i}\\right) - \\min \\left(f _ {i}\\right)} \\text {f o r o b j e c t i v e s t o b e m a x i m i z e d} \\tag {4}\n$$\n",
478
+ "text_format": "latex",
479
+ "bbox": [
480
+ 252,
481
+ 613,
482
+ 912,
483
+ 652
484
+ ],
485
+ "page_idx": 3
486
+ },
487
+ {
488
+ "type": "text",
489
+ "text": "The bounds of the objective function $fi$ can be estimated or are the result of a maximization of each function individually in case of $\\max(f_i)$ . For the calculation of the weighted sum as shown in Equation (5), a weight $w_i$ has to be chosen for every objective:",
490
+ "bbox": [
491
+ 80,
492
+ 665,
493
+ 917,
494
+ 722
495
+ ],
496
+ "page_idx": 3
497
+ },
498
+ {
499
+ "type": "equation",
500
+ "text": "\n$$\n\\text {m a x i m i z e} \\sum_ {i = 1} ^ {k} w _ {i} f _ {i} ^ {\\text {n o r m}} (x), \\quad x \\in S \\text {w h e r e} w _ {i} > 0 \\text {f o r a l l} i = 1, \\dots , k \\text {a n d} \\sum_ {i = 1} ^ {k} w _ {i} = 1 \\tag {5}\n$$\n",
501
+ "text_format": "latex",
502
+ "bbox": [
503
+ 176,
504
+ 731,
505
+ 912,
506
+ 766
507
+ ],
508
+ "page_idx": 3
509
+ },
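The normalization of Equations (3) and (4) and the weighted sum of Equation (5) translate directly into code; a minimal sketch in Python, assuming the bounds of each objective are known or estimated as described above:

```python
def normalize(f: float, f_min: float, f_max: float, minimize: bool = True) -> float:
    """Equations (3)/(4): map an objective value to [0, 1],
    with 1 always denoting the best achievable value."""
    scaled = (f_max - f) / (f_max - f_min)
    return scaled if minimize else 1.0 - scaled

def weighted_sum(f_norm, weights) -> float:
    """Equation (5): all weights must be positive and sum to 1."""
    assert all(w > 0 for w in weights) and abs(sum(weights) - 1.0) < 1e-9
    return sum(w * f for w, f in zip(weights, f_norm))
```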
510
+ {
511
+ "type": "text",
512
+ "text": "By varying the weights, any point of a convex Pareto front can be obtained. Figure 2 illustrates this: The straight line corresponding to the chosen weights $w_{1}$ and $w_{2}$ is moved towards the border of the feasible objective region during the optimization process and becomes a tangent in point P. The solutions found are Pareto optimal, see [5].",
513
+ "bbox": [
514
+ 80,
515
+ 780,
516
+ 917,
517
+ 858
518
+ ],
519
+ "page_idx": 3
520
+ },
521
+ {
522
+ "type": "text",
523
+ "text": "On the other hand, it is possible that parts of the Pareto front cannot be found in case of a non-convex problem. This is illustrated in Figure 3: the part between points A and B of the Pareto front cannot be obtained for any weights. This is a serious drawback.",
524
+ "bbox": [
525
+ 80,
526
+ 860,
527
+ 917,
528
+ 917
529
+ ],
530
+ "page_idx": 3
531
+ },
532
+ {
533
+ "type": "header",
534
+ "text": "Algorithms 2014, 7",
535
+ "bbox": [
536
+ 78,
537
+ 54,
538
+ 238,
539
+ 70
540
+ ],
541
+ "page_idx": 3
542
+ },
543
+ {
544
+ "type": "page_number",
545
+ "text": "169",
546
+ "bbox": [
547
+ 884,
548
+ 54,
549
+ 915,
550
+ 68
551
+ ],
552
+ "page_idx": 3
553
+ },
554
+ {
555
+ "type": "image",
556
+ "img_path": "images/b4f9e3bf301ec55325cc194f2ba779a15611d8fe6a771daa5b8d4fd1469e3ddf.jpg",
557
+ "image_caption": [
558
+ "Figure 2. By using appropriate weights, every point of a convex Pareto front can be achieved by the weighted sum. Here, point $\\mathbf{P}$ can be obtained for the weights $w_{1}$ and $w_{2}$ . The arrows show the movement direction of points where the largest quality gain is obtained."
559
+ ],
560
+ "image_footnote": [],
561
+ "bbox": [
562
+ 329,
563
+ 162,
564
+ 663,
565
+ 370
566
+ ],
567
+ "page_idx": 4
568
+ },
569
+ {
570
+ "type": "image",
571
+ "img_path": "images/6bab592db324588f3c9cf05c876c9f1d43677f1a32cd1ee911d349aad6f40dfe.jpg",
572
+ "image_caption": [
573
+ "Figure 3. For non-convex Pareto fronts, it is possible that parts of the front can not be obtained by the weighted sum. The region between points $\\mathbf{A}$ and $\\mathbf{B}$ is an example of this serious draw back of this aggregation method."
574
+ ],
575
+ "image_footnote": [],
576
+ "bbox": [
577
+ 329,
578
+ 460,
579
+ 663,
580
+ 659
581
+ ],
582
+ "page_idx": 4
583
+ },
584
+ {
585
+ "type": "text",
586
+ "text": "As mentioned above, the weighted sum is often used for practical applications. Reasons are the simplicity of its application and the easy way to integrate restrictions, which are beyond pure limitations of the feasible region. Examples are scheduling tasks, where the jobs to be scheduled have due dates for finalization. Thus, delays can occur and it is not sufficient to tell the search procedure that this constrained violation is an infeasible solution by e.g., rejecting it. Instead, the search must be guided out of the infeasible region by rewarding a reduction of the violation. In the example given, this can be done by counting the number of jobs involved and summing up the amounts of delays, see e.g., [6]. These two key figures can either become new objectives or can be treated as penalty functions. As they do not represent wanted properties and as a low number of objectives is preferable, penalty functions are the method of choice. They can be designed to yield values between zero (maximal violation) and one (no violation). The results of all penalty functions serve as factors by which the weighted sum is multiplied. As a result, the pure weighted sum turns into a raw quality measure, which represents the solution quality of the problem without constraints, while the final",
587
+ "bbox": [
588
+ 80,
589
+ 678,
590
+ 917,
591
+ 938
592
+ ],
593
+ "page_idx": 4
594
+ },
595
+ {
596
+ "type": "header",
597
+ "text": "Algorithms 2014, 7",
598
+ "bbox": [
599
+ 78,
600
+ 54,
601
+ 238,
602
+ 70
603
+ ],
604
+ "page_idx": 4
605
+ },
606
+ {
607
+ "type": "page_number",
608
+ "text": "170",
609
+ "bbox": [
610
+ 884,
611
+ 54,
612
+ 915,
613
+ 68
614
+ ],
615
+ "page_idx": 4
616
+ },
617
+ {
618
+ "type": "text",
619
+ "text": "product represents the solution for the task with its constraints. Figure 4 shows a typical example of such a penalty function. If the maximum value of constraint violation is hard to predict, as it is often the case, an exponential function can be chosen. A value of delay $dp$ for poor solutions can usually be estimated roughly and the exponential function is attributed such that it yields a value of $\\frac{1}{3}$ in this case.",
620
+ "bbox": [
621
+ 75,
622
+ 93,
623
+ 917,
624
+ 173
625
+ ],
626
+ "page_idx": 5
627
+ },
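The exponential penalty just described can be sketched as follows; the calibration uses the stated convention that a roughly estimated delay $dp$ of a poor solution is mapped to 1/3 (hypothetical helper, assuming this decay shape):

```python
import math

def delay_penalty(delay: float, dp: float) -> float:
    """Exponential penalty as sketched in Figure 4: 1 for no violation,
    decaying towards 0; calibrated so that delay == dp yields 1/3."""
    if delay <= 0:
        return 1.0
    k = math.log(3.0) / dp          # ensures exp(-k * dp) == 1/3
    return math.exp(-k * delay)

# The raw weighted sum is then multiplied by all penalty factors, e.g.:
# final_quality = weighted_sum(f_norm, weights) * delay_penalty(total_delay, dp)
```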
628
+ {
629
+ "type": "image",
630
+ "img_path": "images/ab2a667072480f1ad928a31d483490be74c166aa2075ce68a412d48af4ca7f27.jpg",
631
+ "image_caption": [
632
+ "Figure 4. Example of a penalty function. It turns constraint violations into a penalty value between 1 and 0, which serves as a factor for decreasing the weighted sum."
633
+ ],
634
+ "image_footnote": [],
635
+ "bbox": [
636
+ 257,
637
+ 243,
638
+ 742,
639
+ 450
640
+ ],
641
+ "page_idx": 5
642
+ },
643
+ {
644
+ "type": "text",
645
+ "text": "2.3. $\\varepsilon$ -Constrained Method",
646
+ "text_level": 1,
647
+ "bbox": [
648
+ 78,
649
+ 473,
650
+ 302,
651
+ 489
652
+ ],
653
+ "page_idx": 5
654
+ },
655
+ {
656
+ "type": "text",
657
+ "text": "The $\\varepsilon$ -constrained method is based on the optimization of one selected objective function $f_{j}$ and treating the others as constraints [7]. The optimization problem now has the form",
658
+ "bbox": [
659
+ 75,
660
+ 508,
661
+ 917,
662
+ 546
663
+ ],
664
+ "page_idx": 5
665
+ },
666
+ {
667
+ "type": "equation",
668
+ "text": "\n$$\n\\begin{array}{l} \\text {m a x i m i z e} f _ {j} (x), \\quad x \\in S, \\quad j \\in \\{1, \\dots , k \\} \\\\ f _ {i} (x) \\geq \\varepsilon_ {i} \\text {f o r a l l} i = 1, \\dots , k, i \\neq j \\tag {6} \\\\ \\end{array}\n$$\n",
669
+ "text_format": "latex",
670
+ "bbox": [
671
+ 317,
672
+ 551,
673
+ 912,
674
+ 601
675
+ ],
676
+ "page_idx": 5
677
+ },
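As an assessment rule inside a population-based optimizer, Equation (6) can be sketched like this (a minimal illustration; the hard rejection via negative infinity is an assumption, softer penalties are possible):

```python
def eps_constrained_fitness(f_values, j: int, eps) -> float:
    """Equation (6): maximize objective j; every other objective i
    acts as a constraint with lower bound eps[i] (eps[j] is ignored)."""
    for i, (f, e) in enumerate(zip(f_values, eps)):
        if i != j and f < e:
            return float("-inf")    # violates an epsilon bound
    return f_values[j]
```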
678
+ {
679
+ "type": "text",
680
+ "text": "The $\\varepsilon_{j}$ are the lower bounds for those objective functions that are treated as constraints. For practical applications, the appropriate bounds must be selected carefully. In particular, it must be ensured that the bounds are within the feasible objective region, because otherwise the resulting problem would have no solutions. Ośczyka gives suggestions for a systematic selection of values for the $\\varepsilon_{j}$ and illustrates them with sample applications [8].",
681
+ "bbox": [
682
+ 75,
683
+ 613,
684
+ 917,
685
+ 712
686
+ ],
687
+ "page_idx": 5
688
+ },
689
+ {
690
+ "type": "text",
691
+ "text": "Figure 5 gives an example based on the feasible region shown in Figure 3. In Figure $5f_{2}$ is treated as a constraint with the lower bound $\\varepsilon_{2}$ . Thus, the remaining Pareto front is the section between the points F1 and F2. The figure also shows the main movement direction of solutions in $Z$ that have exceeded the threshold $\\varepsilon_{2}$ . The main movement direction results from the optimization. Move components up- and downwards are also possible, but are not considered by the assessment procedure of this method as long as they do not drop below $\\varepsilon_{2}$ . A too large value of the constraint like $\\varepsilon_{bad}$ would make the problem unsolvable.",
692
+ "bbox": [
693
+ 75,
694
+ 714,
695
+ 917,
696
+ 853
697
+ ],
698
+ "page_idx": 5
699
+ },
700
+ {
701
+ "type": "text",
702
+ "text": "A decision vector $x' \\in S$ is Pareto-optimal, if and only if it solves Equation (6) for every $j = 1, \\dots, k$ , where $\\varepsilon_i = f_i(x')$ for $i = 1, \\dots, k, i \\neq j$ , see [9]. This means that $k$ different problems must be solved for every member of the Pareto front, which can be expected to be computationally costly. If the task can be relaxed to weak Pareto optimality, only one solution of Equation (6) per member of",
703
+ "bbox": [
704
+ 77,
705
+ 854,
706
+ 919,
707
+ 936
708
+ ],
709
+ "page_idx": 5
710
+ },
711
+ {
712
+ "type": "header",
713
+ "text": "Algorithms 2014, 7",
714
+ "bbox": [
715
+ 78,
716
+ 53,
717
+ 240,
718
+ 71
719
+ ],
720
+ "page_idx": 5
721
+ },
722
+ {
723
+ "type": "page_number",
724
+ "text": "171",
725
+ "bbox": [
726
+ 882,
727
+ 54,
728
+ 915,
729
+ 68
730
+ ],
731
+ "page_idx": 5
732
+ },
733
+ {
734
+ "type": "text",
735
+ "text": "the front is required [9]. On the other hand, the method does not require convexity for finding any Pareto-optimal solution.",
736
+ "bbox": [
737
+ 80,
738
+ 93,
739
+ 917,
740
+ 130
741
+ ],
742
+ "page_idx": 6
743
+ },
744
+ {
745
+ "type": "image",
746
+ "img_path": "images/50165a7bfbef8f28979176d94d7757146949ad85567743afeac10e6955a98b83.jpg",
747
+ "image_caption": [
748
+ "Figure 5. Restricted objective region using the $\\varepsilon$ -constrained method. The hatched region is excluded due to the lower bound $\\varepsilon_{2}$ . The remaining Pareto front is limited by F1 and F2. For too large bounds like $\\varepsilon_{bad}$ , the problem becomes unsolvable."
749
+ ],
750
+ "image_footnote": [],
751
+ "bbox": [
752
+ 327,
753
+ 219,
754
+ 673,
755
+ 419
756
+ ],
757
+ "page_idx": 6
758
+ },
759
+ {
760
+ "type": "text",
761
+ "text": "2.4. Summary",
762
+ "text_level": 1,
763
+ "bbox": [
764
+ 80,
765
+ 439,
766
+ 194,
767
+ 456
768
+ ],
769
+ "page_idx": 6
770
+ },
771
+ {
772
+ "type": "text",
773
+ "text": "The two aggregation procedures (More aggregation methods can be found in [9].) can both find any Pareto-optimal solution for convex problems and the $\\varepsilon$ -constrained method can do that for non-convex tasks, too. This advantage of the $\\varepsilon$ -constrained method goes at the expense of higher computational costs: For finding a Pareto-optimal solution, the $\\varepsilon$ -constrained method needs to solve $k$ different problems, whereas the weighted sum needs to solve just one per element of the Pareto set.",
774
+ "bbox": [
775
+ 80,
776
+ 473,
777
+ 917,
778
+ 571
779
+ ],
780
+ "page_idx": 6
781
+ },
782
+ {
783
+ "type": "text",
784
+ "text": "Another important issue is the manageability. Depending on the problem, it can be assumed that it is easier for experts in the application area to estimate lower bounds for objectives than weights, which are more abstract. Experts are usually familiar with objective values and they are intelligible for them as such.",
785
+ "bbox": [
786
+ 80,
787
+ 574,
788
+ 917,
789
+ 652
790
+ ],
791
+ "page_idx": 6
792
+ },
793
+ {
794
+ "type": "text",
795
+ "text": "For optimization procedures working with a set of solutions, frequently called population, like Evolutionary Algorithms (EAs), Ant Colony (ACO) or Particle Swarm Optimization (PSO), or the like, the advantage for multi-objective optimization is the ability to determine, in principle, the entire Pareto front at once. Of course, an optimization procedure must be adapted such that it spreads the population to the Pareto front as best as possible instead of concentrating on some areas of it. Examples of such adapted procedures in the EA field are the non-dominated sorting Genetic Algorithm (NSGA-II) [10], the Strength Pareto EA (SPEA-2) [11], or the S-metric selection evolutionary multi-objective optimization algorithm (SMS-EMOA) [12], to mention only a few. These combinations of population-based optimization procedures and Pareto optimality in general will estimate the Pareto front roughly at the minimum, with less computational effort than it can be done using the aggregation methods introduced for the assessment of the individuals of the same population-based methods. In the latter case, about as many runs would be required as solutions should occupy the Pareto front. Thus, these algorithms specialized for finding as much of the Pareto front as possible in one run are the",
796
+ "bbox": [
797
+ 80,
798
+ 656,
799
+ 919,
800
+ 915
801
+ ],
802
+ "page_idx": 6
803
+ },
804
+ {
805
+ "type": "header",
806
+ "text": "Algorithms 2014, 7",
807
+ "bbox": [
808
+ 78,
809
+ 54,
810
+ 238,
811
+ 70
812
+ ],
813
+ "page_idx": 6
814
+ },
815
+ {
816
+ "type": "page_number",
817
+ "text": "172",
818
+ "bbox": [
819
+ 884,
820
+ 54,
821
+ 915,
822
+ 68
823
+ ],
824
+ "page_idx": 6
825
+ },
826
+ {
827
+ "type": "text",
828
+ "text": "methods of choice for new problems, where little is known in advance and where a human decision maker is available to make the final choice.",
829
+ "bbox": [
830
+ 80,
831
+ 93,
832
+ 917,
833
+ 129
834
+ ],
835
+ "page_idx": 7
836
+ },
837
+ {
838
+ "type": "text",
839
+ "text": "3. Cascaded Weighted Sum",
840
+ "text_level": 1,
841
+ "bbox": [
842
+ 80,
843
+ 147,
844
+ 317,
845
+ 165
846
+ ],
847
+ "page_idx": 7
848
+ },
849
+ {
850
+ "type": "text",
851
+ "text": "We have been using this extended version of the weighted sum since the inception of our Evolutionary Algorithm GLEAM (General Learning and Evolutionary Algorithm and Method) in 1990 [13], as we considered this assessment method a convenient way to steer the evolutionary search process for evolving improved collision-free robot move trajectories [13,14]. As we did not regard it as something special, we did not publish it in English (A detailed description in German of the cascaded weighted sum and all variants of the application of GLEAM to industrial robot path planning for several industrial robots can be found in [15]). until comments of reviewers of other publications on GLEAM changed our mind and revealed the need for a general discussion. Some impacts of the robot application on the evaluation of solutions are discussed later in Sections 4.2 and 4.3.3. Before the cascaded weighted sum is described, we will shortly introduce Evolutionary Algorithms and GLEAM in particular to the extent necessary for a better understanding of the following sections.",
852
+ "bbox": [
853
+ 80,
854
+ 181,
855
+ 917,
856
+ 401
857
+ ],
858
+ "page_idx": 7
859
+ },
860
+ {
861
+ "type": "text",
862
+ "text": "3.1. Short Introduction to Evolutionary Algorithms and GLEAM",
863
+ "text_level": 1,
864
+ "bbox": [
865
+ 80,
866
+ 418,
867
+ 601,
868
+ 435
869
+ ],
870
+ "page_idx": 7
871
+ },
872
+ {
873
+ "type": "text",
874
+ "text": "Evolutionary Algorithms (EA) typically conduct a search in the feasible region, with this search being guided by a quality function usually called fitness function. The search is done in parallel by a set of solution candidates, called individuals, forming a population. If an individual is outside of the feasible region, it will be guided back by one or more penalty functions or comparable techniques. The fitness function may be one of the aggregation methods described above. Alternatively, it is guided by Pareto optimality and is therefore based on the amount of dominated solutions and possibly some other measure, which rewards a good spread of the non-dominated solutions along the Pareto front, see e.g., [10,12]. New solutions are generated by stochastic algorithmic counterparts of the two biological archetypes mutation and recombination, for which an individual selects a partner frequently influenced by the fitness. Thus, the generated offspring inherits properties from both parents. The third principle of evolution, the survival of the fittest, occurs when deciding about who is included in the next iteration. In elitist forms of EAs, the best individual survives always and therefore, the quality of the population can increase only. On the other hand, convergence cannot be ensured within limited time. To avoid premature convergence, various attempts have been made to maintain genotypic diversity for a longer period of time by establishing niches within the population, see e.g., [16-18]. GLEAM uses one of these methods [16] (An actual description of the diffusion model and its integration into GLEAM can be found in [19].) and, thus, frequently yields some distinct solutions of more or less comparable quality. Iterative, stochastic, and population-based optimization procedures in general tend to produce some variants of the best solution. How much they differ in properties and quality depends on the algorithm, the actions taken for maintaining diversity, and the length of the run.",
875
+ "bbox": [
876
+ 80,
877
+ 454,
878
+ 919,
879
+ 854
880
+ ],
881
+ "page_idx": 7
882
+ },
883
+ {
884
+ "type": "header",
885
+ "text": "Algorithms 2014, 7",
886
+ "bbox": [
887
+ 80,
888
+ 54,
889
+ 238,
890
+ 70
891
+ ],
892
+ "page_idx": 7
893
+ },
894
+ {
895
+ "type": "page_number",
896
+ "text": "173",
897
+ "bbox": [
898
+ 884,
899
+ 54,
900
+ 915,
901
+ 68
902
+ ],
903
+ "page_idx": 7
904
+ },
905
+ {
906
+ "type": "text",
907
+ "text": "3.2. Definition of the Cascaded Weighted Sum",
908
+ "text_level": 1,
909
+ "bbox": [
910
+ 78,
911
+ 93,
912
+ 455,
913
+ 111
914
+ ],
915
+ "page_idx": 8
916
+ },
917
+ {
918
+ "type": "text",
919
+ "text": "In the cascaded weighted sum (CWS) each objective is assigned a weight $w_{i}$ as with the pure weighted sum and a priority starting with 1 as the highest one. If desired, some objectives may have the same priority. All objectives but those with the lowest priority receive a user-given threshold $\\varepsilon_{i}$ . In the beginning, only the objectives of the highest priority are active and contribute to the weighted sum. The others are activated according to the following priority rule:",
920
+ "bbox": [
921
+ 75,
922
+ 128,
923
+ 917,
924
+ 227
925
+ ],
926
+ "page_idx": 8
927
+ },
928
+ {
929
+ "type": "text",
930
+ "text": "If all objectives with the same priority exceed their threshold, the objectives of the next lower priority are activated and their values are added to the sum.",
931
+ "bbox": [
932
+ 121,
933
+ 237,
934
+ 875,
935
+ 278
936
+ ],
937
+ "page_idx": 8
938
+ },
939
+ {
940
+ "type": "text",
941
+ "text": "As the objectives are grouped by the priorities and the groups are considered one after the other, the method is called cascaded weighted sum. A group whose members exceed their threshold is called a satisfied group. If at least one objective of a satisfied group drops below its threshold, the group is not satisfied anymore and consequently, all groups with lower priorities are deactivated, which will significantly reduce the resulting weighted sum.",
942
+ "bbox": [
943
+ 75,
944
+ 294,
945
+ 917,
946
+ 393
947
+ ],
948
+ "page_idx": 8
949
+ },
950
+ {
951
+ "type": "text",
952
+ "text": "For the formal definition of the CWS given in Equation (7), the original $f_{i}(x)$ are used for the threshold value checks rather than their normalized counterparts, as we assume that this is more convenient for experts of the application. The $k$ objectives are sorted according to their priorities and we have $g$ objective groups, where $1 < g \\leq k$ . For $g = 1$ , the CWS would be identical with the weighted sum. Each group consists of $m_{j}$ objectives, the sum of which is $k$ . As with the original weighted sum, each $w_{i} > 0$ and $\\sum_{i=1}^{k} w_{i} = 1$ .",
953
+ "bbox": [
954
+ 75,
955
+ 395,
956
+ 917,
957
+ 532
958
+ ],
959
+ "page_idx": 8
960
+ },
961
+ {
962
+ "type": "text",
963
+ "text": "As there are differences for the first and the last priority group, Equation (7) shows the objectives contributing to the weighted sum for the highest priority 1, the general case of priority $j$ , and the lowest priority $g$ .",
964
+ "bbox": [
965
+ 75,
966
+ 537,
967
+ 917,
968
+ 596
969
+ ],
970
+ "page_idx": 8
971
+ },
972
+ {
973
+ "type": "text",
974
+ "text": "Priority 1: if not all $f_{i}(x) \\geq \\varepsilon_{i} \\forall i = 1, \\ldots, m_{1}$",
975
+ "bbox": [
976
+ 77,
977
+ 602,
978
+ 512,
979
+ 621
980
+ ],
981
+ "page_idx": 8
982
+ },
983
+ {
984
+ "type": "text",
985
+ "text": "(highest priority) maximize $\\sum_{i = 1}^{m}w_{i}f_{i}^{\\text{norm}}(x), x\\in S, m = m_{1}$",
986
+ "bbox": [
987
+ 80,
988
+ 625,
989
+ 623,
990
+ 661
991
+ ],
992
+ "page_idx": 8
993
+ },
994
+ {
995
+ "type": "text",
996
+ "text": "Priority $j$ : if all $f_{i}(x)\\geq \\varepsilon_{i}\\quad \\forall i = 1,\\ldots ,l_{j}$ and $l_{j} = \\sum_{l = 1}^{j - 1}m_{l}$ (satisfied groups)",
997
+ "bbox": [
998
+ 78,
999
+ 665,
1000
+ 779,
1001
+ 703
1002
+ ],
1003
+ "page_idx": 8
1004
+ },
1005
+ {
1006
+ "type": "text",
1007
+ "text": "not all $f_{i}(x)\\geq \\varepsilon_{i}\\quad \\forall i = l_{j} + 1,\\ldots ,l_{j} + m_{j}$ maximize $\\sum_{i = 1}^{m}w_{i}f_{i}^{\\mathrm{norm}}(x),\\quad x\\in S,\\quad m = l_{j} + m_{j}$ (7)",
1008
+ "bbox": [
1009
+ 80,
1010
+ 703,
1011
+ 902,
1012
+ 763
1013
+ ],
1014
+ "page_idx": 8
1015
+ },
1016
+ {
1017
+ "type": "text",
1018
+ "text": "Priority $g$ : if all $f_{i}(x)\\geq \\varepsilon_{i}\\quad \\forall i = 1,\\ldots ,l_{g},\\quad l_{g} = \\sum_{l = 1}^{g - 1}m_{l}$ (satisfied groups) (lowest priority)",
1019
+ "bbox": [
1020
+ 78,
1021
+ 768,
1022
+ 779,
1023
+ 807
1024
+ ],
1025
+ "page_idx": 8
1026
+ },
1027
+ {
1028
+ "type": "equation",
1029
+ "text": "\n$$\n\\text {m a x i m i z e} \\sum_ {i = 1} ^ {k} w _ {i} f _ {i} ^ {\\text {n o r m}} (x), x \\in S\n$$\n",
1030
+ "text_format": "latex",
1031
+ "bbox": [
1032
+ 280,
1033
+ 809,
1034
+ 544,
1035
+ 847
1036
+ ],
1037
+ "page_idx": 8
1038
+ },
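A compact sketch of Equation (7) in Python; groups are assumed to be ordered from highest to lowest priority, and, as stated above, the thresholds are checked on the original objective values (illustrative code, not taken from GLEAM):

```python
def cascaded_weighted_sum(f_raw, f_norm, weights, groups, eps) -> float:
    """groups: lists of objective indices, ordered from priority 1
    downwards; eps[i] is the threshold for objective i, checked on the
    raw values f_raw and ignored for the lowest-priority group."""
    total = 0.0
    for g, idx in enumerate(groups):
        total += sum(weights[i] * f_norm[i] for i in idx)
        if g == len(groups) - 1:
            break                   # lowest priority: no thresholds
        if not all(f_raw[i] >= eps[i] for i in idx):
            break                   # group not satisfied: lower groups stay inactive
    return total
```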
1039
+ {
1040
+ "type": "text",
1041
+ "text": "Once a group is satisfied, the total quality value is increased abruptly by the values from the next activated group, which they can lose, if only one objective of a group with higher priority undergoes its $\\varepsilon_{i}$ . This makes it very unlikely for more successful solutions that the once gained values of the already contributing objectives drop below their thresholds in the course of further search.",
1042
+ "bbox": [
1043
+ 75,
1044
+ 859,
1045
+ 917,
1046
+ 938
1047
+ ],
1048
+ "page_idx": 8
1049
+ },
1050
+ {
1051
+ "type": "header",
1052
+ "text": "Algorithms 2014, 7",
1053
+ "bbox": [
1054
+ 78,
1055
+ 53,
1056
+ 240,
1057
+ 71
1058
+ ],
1059
+ "page_idx": 8
1060
+ },
1061
+ {
1062
+ "type": "page_number",
1063
+ "text": "174",
1064
+ "bbox": [
1065
+ 882,
1066
+ 54,
1067
+ 915,
1068
+ 68
1069
+ ],
1070
+ "page_idx": 8
1071
+ },
1072
+ {
1073
+ "type": "text",
1074
+ "text": "The selection of appropriate weights and threshold values requires some knowledge about the problem at hand, including one or more preparative Pareto optimization runs as illustrated in the next section. Thus, neither the original weighted sum nor the CWS are a priori methods. We will come back to this later.",
1075
+ "bbox": [
1076
+ 80,
1077
+ 93,
1078
+ 917,
1079
+ 170
1080
+ ],
1081
+ "page_idx": 9
1082
+ },
1083
+ {
1084
+ "type": "text",
1085
+ "text": "3.3. Example of the CWS",
1086
+ "text_level": 1,
1087
+ "bbox": [
1088
+ 82,
1089
+ 187,
1090
+ 285,
1091
+ 206
1092
+ ],
1093
+ "page_idx": 9
1094
+ },
1095
+ {
1096
+ "type": "text",
1097
+ "text": "Table 1 gives an example of the usage of weights and thresholds for a problem of scheduling jobs organized in workflows of elementary operations to heterogeneous equipment comparable to the one described in [6]. All operations can be assigned to alternatively usable equipment at different costs and processing times. The task is to produce schedules, where the job processing is as cheap and as fast as possible and each job observes a given budget and a given due date. The rate of utilization of the equipment should be as high and the total makespan of all jobs as low as possible. Additionally, the schedules must be updated frequently, because e.g., new jobs arrive or waiting jobs are cancelled. As described in Section 2.2, all objectives are normalized according to Equation (3). The required limits are obtained as follows: The bounds of job time and costs are calculated by determination of the critical path of the workflow of that job and by the assignment of the fastest/slowest or costliest/cheapest equipment suited for the operations of a job. The user-given due dates and cost limits are checked against these bounds so that the subsequent scheduling is based on goals which are achievable in principle. The lower bound of the makespan is the duration of the longest lasting job using the fastest equipment and the upper bound is the sum of the duration of all jobs using the slowest equipment divided by the smallest number of alternatively usable equipment. As the rate of utilization already yields a value to be maximized between zero and one, there is no need for bounds.",
1098
+ "bbox": [
1099
+ 80,
1100
+ 223,
1101
+ 917,
1102
+ 542
1103
+ ],
1104
+ "page_idx": 9
1105
+ },
1106
+ {
1107
+ "type": "table",
1108
+ "img_path": "images/376598e1d44207217b1adf5eea053dda4cedc4bab00d4759bb1c2e95452b1729.jpg",
1109
+ "table_caption": [
1110
+ "Table 1. Example of the use of the cascaded weighted sum (CWS) and the effect of objective group weights. The objectives with the highest priority are always active and contribute to the weighted sum. They are marked here by a light green background."
1111
+ ],
1112
+ "table_footnote": [],
1113
+ "table_body": "<table><tr><td>Priority</td><td>Objective</td><td>Weight [%]</td><td>Threshold εi</td></tr><tr><td>1</td><td>job time</td><td>30</td><td>0.4</td></tr><tr><td>1</td><td>job costs</td><td>40</td><td>0.25</td></tr><tr><td>2</td><td>makespan</td><td>20</td><td>-</td></tr><tr><td>2</td><td>utilization rate</td><td>10</td><td>-</td></tr></table>",
1114
+ "bbox": [
1115
+ 221,
1116
+ 621,
1117
+ 773,
1118
+ 718
1119
+ ],
1120
+ "page_idx": 9
1121
+ },
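Using the sketch given after Equation (7), the configuration of Table 1 could look as follows (the objective values are made up for illustration, and for simplicity the thresholds are checked on the normalized scale; Equation (8) below gives the conversion to absolute values):

```python
# Table 1: job time, job costs (priority 1); makespan, utilization (priority 2)
weights = [0.30, 0.40, 0.20, 0.10]
groups  = [[0, 1], [2, 3]]
eps     = [0.40, 0.25, 0.0, 0.0]     # thresholds of the priority-1 objectives

f_norm = [0.55, 0.30, 0.70, 0.65]    # hypothetical normalized objective values
quality = cascaded_weighted_sum(f_norm, f_norm, weights, groups, eps)
# Both priority-1 thresholds are exceeded (0.55 >= 0.40 and 0.30 >= 0.25),
# so makespan and utilization rate contribute to the sum as well.
```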
1122
+ {
1123
+ "type": "text",
1124
+ "text": "Job time and costs are most conflicting, while short processing times support a short makespan and tend to increase the utilization rate. Faster equipment typically is more expensive than slower, and the ratio between costs and duration of the use of equipment will play an important role. Thus, lower costs require the usage of equipment with a lower ratio of costs and duration. This tends to increase the duration and to decrease the workload of less cost-effective equipment. Additionally, shorter job times are also rewarded to some extent by the makespan and possibly by the utilization rate, but costs are not. These considerations are supported by the observation that the processing times are easier to reduce than costs. Thus, job time and costs should compete from the beginning and both should receive a larger portion of the weights. Therefore, they are grouped together at the highest priority so that they always contribute to the weighted sum, as shown in Table 1. As a rule of thumb, further objectives,",
1125
+ "bbox": [
1126
+ 80,
1127
+ 734,
1128
+ 919,
1129
+ 934
1130
+ ],
1131
+ "page_idx": 9
1132
+ },
1133
+ {
1134
+ "type": "header",
1135
+ "text": "Algorithms 2014, 7",
1136
+ "bbox": [
1137
+ 80,
1138
+ 55,
1139
+ 238,
1140
+ 70
1141
+ ],
1142
+ "page_idx": 9
1143
+ },
1144
+ {
1145
+ "type": "page_number",
1146
+ "text": "175",
1147
+ "bbox": [
1148
+ 884,
1149
+ 54,
1150
+ 915,
1151
+ 68
1152
+ ],
1153
+ "page_idx": 9
1154
+ },
1155
+ {
1156
+ "type": "text",
1157
+ "text": "which are less conflicting with each other and those of higher priorities, can go into the same group. After having determined the priority and grouping structure based on experience and considerations about the relationships between the objectives, appropriate weights and thresholds must be chosen.",
1158
+ "bbox": [
1159
+ 80,
1160
+ 93,
1161
+ 917,
1162
+ 151
1163
+ ],
1164
+ "page_idx": 10
1165
+ },
1166
+ {
1167
+ "type": "text",
1168
+ "text": "Based on these considerations and a representative scheduling task, a schedule can be produced based on Pareto optimality for the identification of the region of interest, from which thresholds and weights can be derived. In the given example, this can be done in a first step for a reduced set of objectives by omitting a less conflicting one, e.g., the utilization rate. The Pareto fronts for good makespans of the two remaining objectives can be plotted. From it, the thresholds and the ratio of the weights between them can be derived, see Figure 2 for the relationship between Pareto front and the weights and Figure 5 for the usage of thresholds. This results in a ratio of 3:4 between the averaged job times and costs in the given example. The threshold values $\\varepsilon_{i}$ are used as percentage values related to the available scale between $\\min(f_i)$ and $\\max(f_i)$ , as shown in Equation (8):",
1169
+ "bbox": [
1170
+ 80,
1171
+ 154,
1172
+ 917,
1173
+ 332
1174
+ ],
1175
+ "page_idx": 10
1176
+ },
1177
+ {
1178
+ "type": "equation",
1179
+ "text": "\n$$\nf _ {i, \\varepsilon} = \\min \\left(f _ {i}\\right) + \\varepsilon_ {i} \\left(\\max \\left(f _ {i}\\right) - \\min \\left(f _ {i}\\right)\\right) \\tag {8}\n$$\n",
1180
+ "text_format": "latex",
1181
+ "bbox": [
1182
+ 347,
1183
+ 341,
1184
+ 912,
1185
+ 360
1186
+ ],
1187
+ "page_idx": 10
1188
+ },
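A small sketch of Equation (8), together with the worked numbers from the surrounding text (the helper name is an assumption):

```python
def absolute_threshold(eps_rel: float, f_min: float, f_max: float) -> float:
    """Equation (8): convert a relative threshold into an absolute
    objective value on the scale [f_min, f_max]."""
    return f_min + eps_rel * (f_max - f_min)

# For minimized objectives normalized by Equation (3), f_norm >= eps holds
# once the raw value drops below the (1 - eps) point of its scale:
#   eps = 0.25 (job costs) -> costs below 75% of the available scale
#   eps = 0.40 (job time)  -> times below 60% of the spendable time frame
```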
1189
+ {
1190
+ "type": "text",
1191
+ "text": "In this case, the objectives of the next group are activated for schedules where the costs are below $75\\%$ of their available scales on the average and the finishing times are below $60\\%$ of their spendable time frames on the average. This approach can be repeated for the rest of the objectives or the remaining weights are assigned according to experience. For the given example, it was decided based on previous observations that about $70\\%$ of the weight should go to the first two objectives and the rest should go mainly to the makespan, as its reduction tends to increase the utilization rate. Table 1 shows the resulting weights. The suitability of the settings can be verified by the generation of a Pareto optimal schedule using all objectives and the comparison with the results obtained when using the CWS instead. Depending on the task at hand and the first setting of weights and thresholds, this may result in an iterative refinement.",
1192
+ "bbox": [
1193
+ 80,
1194
+ 372,
1195
+ 917,
1196
+ 571
1197
+ ],
1198
+ "page_idx": 10
1199
+ },
1200
+ {
1201
+ "type": "text",
1202
+ "text": "To sum up, weights and thresholds are derived from experience and/or from previous estimations of the Pareto front of a representative task. In Section 4 we will discuss the range of meaningful applications of the CWS.",
1203
+ "bbox": [
1204
+ 80,
1205
+ 574,
1206
+ 917,
1207
+ 632
1208
+ ],
1209
+ "page_idx": 10
1210
+ },
1211
+ {
1212
+ "type": "text",
1213
+ "text": "3.4. The Effect of the CWS on the Search",
1214
+ "text_level": 1,
1215
+ "bbox": [
1216
+ 82,
1217
+ 649,
1218
+ 413,
1219
+ 667
1220
+ ],
1221
+ "page_idx": 10
1222
+ },
1223
+ {
1224
+ "type": "text",
1225
+ "text": "The effect of the cascaded assessment on population-based search procedures like EAs, PSO or ACO is illustrated in Figure 6 for two objectives and the example of the feasible objective region used in Figure 2. Based on previous knowledge, the sources of which are discussed in Section 4.3, a region of interest is defined for every objective group and the weights are set accordingly. Additionally, threshold values $\\varepsilon_{i}$ are defined for all objectives but those of the group with the lowest priority. Care must be taken for the accessibility of the region of interest being not affected by these thresholds. In the example of Figure 6, objective two has a higher priority than objective one and a threshold value $\\varepsilon_{2}$ . In the beginning of the search, a quality gain can be achieved for upward moves only. For those solutions that have surpassed $\\varepsilon_{2}$ , the result of the first objective is added according to the weights changing the average movement direction towards the tangent and the region of interest. If the search runs long enough to come more or less close to convergence, most solutions will be found in the region of interest. The best of them will be at the intersection of the tangent and the Pareto front or at least",
1226
+ "bbox": [
1227
+ 80,
1228
+ 684,
1229
+ 919,
1230
+ 923
1231
+ ],
1232
+ "page_idx": 10
1233
+ },
1234
+ {
1235
+ "type": "header",
1236
+ "text": "Algorithms 2014, 7",
1237
+ "bbox": [
1238
+ 78,
1239
+ 54,
1240
+ 238,
1241
+ 70
1242
+ ],
1243
+ "page_idx": 10
1244
+ },
1245
+ {
1246
+ "type": "page_number",
1247
+ "text": "176",
1248
+ "bbox": [
1249
+ 884,
1250
+ 54,
1251
+ 915,
1252
+ 68
1253
+ ],
1254
+ "page_idx": 10
1255
+ },
1256
+ {
1257
+ "type": "text",
1258
+ "text": "close to it. Especially for EAs, which preserve genotypic diversity to some extent, good but suboptimal solutions close to the best one covering at least parts of the area of interest are very likely to be found. This means that a run is stopped when stagnation occurs over a longer period of time and not when the entire population has (nearly) converged.",
1259
+ "bbox": [
1260
+ 75,
1261
+ 93,
1262
+ 917,
1263
+ 173
1264
+ ],
1265
+ "page_idx": 11
1266
+ },
1267
+ {
1268
+ "type": "image",
1269
+ "img_path": "images/bc4466dbc4d4b1cde5c14e6ac3a6ad1de60037ecbe7155acc7a896c415e14c49.jpg",
1270
+ "image_caption": [
1271
+ "Figure 6. Cascaded weighted sum for $k = 2$ and objective two having a higher priority than objective one. Thus, solutions in the hatched area are bettered according to $f_{2}$ only and will find the largest quality gain in upward moves (red arrow). This changes, if $\\varepsilon_{2}$ is exceeded and $f_{1}$ starts to contribute to the resulting sum, as shown by the black arrows."
1272
+ ],
1273
+ "image_footnote": [],
1274
+ "bbox": [
1275
+ 329,
1276
+ 278,
1277
+ 668,
1278
+ 488
1279
+ ],
1280
+ "page_idx": 11
1281
+ },
1282
+ {
1283
+ "type": "text",
1284
+ "text": "An example of a Pareto front with a non-convex section is shown in Figure 7 using the objective region and the threshold value of Figure 5. As the part between F2 and the rightmost end of the Pareto front is effectively excluded, it is now possible to obtain solutions in the marked area of interest. This would not be the case for the original weighted sum. On the other hand, if the region of interest were located between the magenta dot and F2, most of the Pareto front would still be missed.",
1285
+ "bbox": [
1286
+ 75,
1287
+ 506,
1288
+ 917,
1289
+ 605
1290
+ ],
1291
+ "page_idx": 11
1292
+ },
1293
+ {
1294
+ "type": "image",
1295
+ "img_path": "images/1dbfe304cae12f98c4baf10cd2f1ca0452a93817e33d3f6c72e1477bc15bd78b.jpg",
1296
+ "image_caption": [
1297
+ "Figure 7. Cascaded weighted sum and region of interest for the example with a non-convex Pareto front given in Figure 5."
1298
+ ],
1299
+ "image_footnote": [],
1300
+ "bbox": [
1301
+ 334,
1302
+ 671,
1303
+ 663,
1304
+ 878
1305
+ ],
1306
+ "page_idx": 11
1307
+ },
1308
+ {
1309
+ "type": "header",
1310
+ "text": "Algorithms 2014, 7",
1311
+ "bbox": [
1312
+ 78,
1313
+ 54,
1314
+ 240,
1315
+ 70
1316
+ ],
1317
+ "page_idx": 11
1318
+ },
1319
+ {
1320
+ "type": "page_number",
1321
+ "text": "177",
1322
+ "bbox": [
1323
+ 882,
1324
+ 54,
1325
+ 915,
1326
+ 68
1327
+ ],
1328
+ "page_idx": 11
1329
+ },
1330
+ {
1331
+ "type": "text",
1332
+ "text": "Normalization according to Equations (3) or (4) is done linearly with the same slope over the entire interval $[\\min(f_i), \\max(f_i)]$. If previous knowledge is available for defining an area of interest for the Pareto front, the corresponding subintervals are also known for the single objectives. This information can be used to tune the normalization function, as shown by the example in Figure 8. More normalization functions can be found in [15].",
1333
+ "bbox": [
1334
+ 75,
1335
+ 93,
1336
+ 917,
1337
+ 192
1338
+ ],
1339
+ "page_idx": 12
1340
+ },
1341
+ {
1342
+ "type": "image",
1343
+ "img_path": "images/89768f1ec4562c6ae09740de3232f4a36b9f8c4dc34b67b5c287ee832d694a25.jpg",
1344
+ "image_caption": [
1345
+ "Figure 8. Tuning the normalization of Equation (3) (blue straight line) to the interval of interest of one objective $f_{i}$ . The decline outside of this interval is reduced drastically to allow for a strong increase inside, as shown by the green graph."
1346
+ ],
1347
+ "image_footnote": [],
1348
+ "bbox": [
1349
+ 218,
1350
+ 279,
1351
+ 779,
1352
+ 488
1353
+ ],
1354
+ "page_idx": 12
1355
+ },
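+ A possible piecewise-linear realization of the tuned normalization of Figure 8 is sketched below (ours, not the paper's); it spends most of the [0, 1] output range on the interval of interest and only a little on the flanks:
+
+ ```python
+ def tuned_norm(f, f_min, f_max, lo, hi, inside=0.9):
+     """Normalization for a minimization objective f with bounds
+     [f_min, f_max] and interval of interest [lo, hi]; higher = better."""
+     f = min(max(f, f_min), f_max)        # clamp to the known bounds
+     flank = (1.0 - inside) / 2.0         # output share left for each flank
+     if f <= lo:                          # better than the interval of interest
+         return 1.0 - flank * (f - f_min) / (lo - f_min)
+     if f <= hi:                          # inside the interval: steep slope
+         return 1.0 - flank - inside * (f - lo) / (hi - lo)
+     return flank * (f_max - f) / (f_max - hi)   # worse: shallow slope again
+ ```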
1356
+ {
1357
+ "type": "text",
1358
+ "text": "3.5. Summary",
1359
+ "text_level": 1,
1360
+ "bbox": [
1361
+ 80,
1362
+ 506,
1363
+ 194,
1364
+ 523
1365
+ ],
1366
+ "page_idx": 12
1367
+ },
1368
+ {
1369
+ "type": "text",
1370
+ "text": "The grouping of the CWS reduces the number of objectives considered at once and makes weighting easier. The CWS integrates objective thresholds comparable to those of the $\\varepsilon$ -constrained method, which are easier to handle for experts in the application field than weights, which now play a relatively minor role. The CWS allows for obtaining parts of a non-convex Pareto front that are unreachable for the original weighted sum. However, it is still possible that some of these parts remain unattainable. These arguments underline the superiority of the CWS over the pure weighted sum. None of the aggregation methods is suited as an a priori approach on its own, as they all require some previous knowledge for their parameterization.",
1371
+ "bbox": [
1372
+ 75,
1373
+ 539,
1374
+ 919,
1375
+ 700
1376
+ ],
1377
+ "page_idx": 12
1378
+ },
1379
+ {
1380
+ "type": "text",
1381
+ "text": "4. Cascaded Weighted Sum and Its Field of Application",
1382
+ "text_level": 1,
1383
+ "bbox": [
1384
+ 77,
1385
+ 715,
1386
+ 559,
1387
+ 734
1388
+ ],
1389
+ "page_idx": 12
1390
+ },
1391
+ {
1392
+ "type": "text",
1393
+ "text": "Optimization problems can be classified according to different criteria, such as the number of decision variables or objectives, the nature of the search space, where the number of (expected) suboptima and its continuity play an important role, or the type of project to which the optimization task belongs. The latter is often ignored in the scientific literature, although it plays a significant role in real-world applications. Thus, we will take a closer look at that issue in the next sections. We will also consider the number of objectives, as both properties are well suited for comparing the two assessment methods.",
1394
+ "bbox": [
1395
+ 75,
1396
+ 750,
1397
+ 919,
1398
+ 888
1399
+ ],
1400
+ "page_idx": 12
1401
+ },
1402
+ {
1403
+ "type": "header",
1404
+ "text": "Algorithms 2014, 7",
1405
+ "bbox": [
1406
+ 78,
1407
+ 54,
1408
+ 238,
1409
+ 70
1410
+ ],
1411
+ "page_idx": 12
1412
+ },
1413
+ {
1414
+ "type": "page_number",
1415
+ "text": "178",
1416
+ "bbox": [
1417
+ 884,
1418
+ 54,
1419
+ 915,
1420
+ 68
1421
+ ],
1422
+ "page_idx": 12
1423
+ },
1424
+ {
1425
+ "type": "text",
1426
+ "text": "4.1. Number of Objectives",
1427
+ "text_level": 1,
1428
+ "bbox": [
1429
+ 80,
1430
+ 93,
1431
+ 294,
1432
+ 111
1433
+ ],
1434
+ "page_idx": 13
1435
+ },
1436
+ {
1437
+ "type": "text",
1438
+ "text": "As already discussed in Section 3.3, objectives can conflict to a greater or lesser extent. We consider here only those objectives that the decision maker regards as conflicting in the sense that they shall be part of Pareto optimality. The number of these objectives plays an important role for the practical applicability of the Pareto method. The Pareto front of up to three objectives can be visualized easily. For up to four or five objectives, decision maps, polyhedral approximation, or other visualization techniques can be used, see [20]. Interactive visualization techniques may support perception for more than three objectives, \"but this requires more cognitive effort if the number of objectives increases\", as Lotov and Miettinen summarize their chapter on visualization of the Pareto frontier in [20]. Thus, we can conclude that from five criteria onwards, and even more so for larger numbers, the perception and comprehension of the Pareto front become increasingly difficult and turn into a business for experienced experts.",
1439
+ "bbox": [
1440
+ 80,
1441
+ 127,
1442
+ 919,
1443
+ 326
1444
+ ],
1445
+ "page_idx": 13
1446
+ },
1447
+ {
1448
+ "type": "image",
1449
+ "img_path": "images/0450b4aba2b2c319bfbc17b937ade1132c3fcb8c2077e942162cf78b7f215051.jpg",
1450
+ "image_caption": [
1451
+ "Figure 9. The number of required data points (Pareto-optimal solutions) of an approximation of a Pareto front increases exponentially with a growing number of conflicting objectives. The green line is based on a resolution of 7 data points per additional objective (axis), while the blue one uses 5 only."
1452
+ ],
1453
+ "image_footnote": [],
1454
+ "bbox": [
1455
+ 211,
1456
+ 435,
1457
+ 784,
1458
+ 694
1459
+ ],
1460
+ "page_idx": 13
1461
+ },
1462
+ {
1463
+ "type": "text",
1464
+ "text": "Another question is the effort to determine the Pareto front. For an acceptable visualization of the Pareto front, approximations like the one described in [21] may be used. Depending on the desired quality of approximation, a number of 5 to 7 (in general, $s$) Pareto-optimal solutions may be sufficient for two objectives. Assuming that the same quality of interpolation and granularity of support points shall be maintained when further objectives are added, $s^{(k - 1)}$ support points are required for the interpolation of the hyperplane of the Pareto front, provided that all areas shall be examined. With interactive approaches, this can be reduced to some extent, but at the risk of missing promising regions. Figure 9 illustrates this growth of required Pareto-optimal solutions. For 5 objectives, for example, 625 solutions are required to interpolate the entire hyperplane with 5 data points per axis. For a better interpolation quality obtained from 7 data points per axis, 2401 solutions are needed. It should be noted that every data point requires several evaluations of solutions according to the optimization",
1465
+ "bbox": [
1466
+ 80,
1467
+ 714,
1468
+ 919,
1469
+ 935
1470
+ ],
1471
+ "page_idx": 13
1472
+ },
1473
+ {
1474
+ "type": "header",
1475
+ "text": "Algorithms 2014, 7",
1476
+ "bbox": [
1477
+ 78,
1478
+ 54,
1479
+ 238,
1480
+ 68
1481
+ ],
1482
+ "page_idx": 13
1483
+ },
1484
+ {
1485
+ "type": "page_number",
1486
+ "text": "179",
1487
+ "bbox": [
1488
+ 884,
1489
+ 54,
1490
+ 915,
1491
+ 67
1492
+ ],
1493
+ "page_idx": 13
1494
+ },
1495
+ {
1496
+ "type": "text",
1497
+ "text": "or approximation procedure used. Depending on the application, evaluations may be based on time-consuming simulations and last several seconds or minutes each. This clearly limits the practical applicability of the Pareto method for growing numbers of conflicting objectives. One solution is to reduce the number of objectives by aggregating less conflicting objectives into one.",
1498
+ "bbox": [
1499
+ 80,
1500
+ 93,
1501
+ 917,
1502
+ 171
1503
+ ],
1504
+ "page_idx": 14
1505
+ },
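+ The growth quoted above can be checked with a few lines (a sketch, not from the paper):
+
+ ```python
+ # Support points s**(k-1) per Figure 9: s data points per axis, k objectives.
+ for s in (5, 7):
+     for k in (2, 3, 4, 5):
+         print(f"s={s}, k={k}: {s ** (k - 1)} Pareto-optimal solutions")
+ # k=5 yields 5**4 = 625 and 7**4 = 2401, the figures quoted in the text.
+ ```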
1506
+ {
1507
+ "type": "text",
1508
+ "text": "4.2. Classification of Application Scenarios and Examples",
1509
+ "text_level": 1,
1510
+ "bbox": [
1511
+ 80,
1512
+ 189,
1513
+ 552,
1514
+ 206
1515
+ ],
1516
+ "page_idx": 14
1517
+ },
1518
+ {
1519
+ "type": "text",
1520
+ "text": "Optimization projects can be classified into three different types:",
1521
+ "bbox": [
1522
+ 104,
1523
+ 223,
1524
+ 628,
1525
+ 240
1526
+ ],
1527
+ "page_idx": 14
1528
+ },
1529
+ {
1530
+ "type": "list",
1531
+ "sub_type": "text",
1532
+ "list_items": [
1533
+ "I. The nonrecurring type, which is performed once with little or no prior knowledge of, e.g., the impact and relevance of decision variables or the behavior of objectives. This type requires many decisions regarding, e.g., the number and ranges of decision variables, the number and kind of objectives and restrictions, and more.",
1534
+ "II. The extended nonrecurring project, where some variants of the first optimization task are handled as well. Frequently, the modifications of the original project are motivated by the experience gained in the first optimization runs. As in the first type, decisions are usually made by humans.",
1535
+ "III. The recurring type, usually based on experience gained from a predecessor project and frequently part of an automated process without or with minor human interaction only."
1536
+ ],
1537
+ "bbox": [
1538
+ 100,
1539
+ 250,
1540
+ 917,
1541
+ 450
1542
+ ],
1543
+ "page_idx": 14
1544
+ },
1545
+ {
1546
+ "type": "text",
1547
+ "text": "Examples of Types I and II are design optimization tasks like the design of micro-optical devices as described in [22] or problems from such a challenging field like aerodynamic design optimization, see e.g., [23]. A typical example of type III is the task of scheduling jobs to be processed on a computational grid as introduced in the last section and described in detail in [6]. Normally, nobody is interested in the details of the actual schedule, which usually will be replaced soon by a new one due to a replanning event like a new job to be planned or the introduction of new resources. Another example is the planning of collision-free paths for movements of industrial robots, as described in detail and for different industrial robot types in [14,15,24,25]. This example also shows that in some cases human judgment is possible in addition to a pure consideration of the achieved objective figures of a generated robot move path. The decision maker can take a look at the resulting movement using a robot simulator or the real device. Assessing this movement is much more impressive and illustrative than reading objective figures. On the other hand, such a well-fitting visualization is not always available.",
1548
+ "bbox": [
1549
+ 80,
1550
+ 460,
1551
+ 917,
1552
+ 699
1553
+ ],
1554
+ "page_idx": 14
1555
+ },
1556
+ {
1557
+ "type": "text",
1558
+ "text": "4.3. Comparison of Pareto Optimization and CWS in Different Application Scenarios",
1559
+ "text_level": 1,
1560
+ "bbox": [
1561
+ 80,
1562
+ 715,
1563
+ 769,
1564
+ 733
1565
+ ],
1566
+ "page_idx": 14
1567
+ },
1568
+ {
1569
+ "type": "text",
1570
+ "text": "4.3.1. Individual Optimization Project",
1571
+ "text_level": 1,
1572
+ "bbox": [
1573
+ 80,
1574
+ 750,
1575
+ 389,
1576
+ 766
1577
+ ],
1578
+ "page_idx": 14
1579
+ },
1580
+ {
1581
+ "type": "text",
1582
+ "text": "For the first project type, the ranges of possibly achievable objective values usually are not known in advance. In this case, an estimation of them can be obtained from a Pareto optimization. From these data and the resulting Pareto front, a human decision maker can opt for additional and modified optimization or select the final solution. This type of optimization project clearly belongs to the domain of Pareto-based optimization.",
1583
+ "bbox": [
1584
+ 80,
1585
+ 785,
1586
+ 917,
1587
+ 883
1588
+ ],
1589
+ "page_idx": 14
1590
+ },
1591
+ {
1592
+ "type": "header",
1593
+ "text": "Algorithms 2014, 7",
1594
+ "bbox": [
1595
+ 78,
1596
+ 54,
1597
+ 238,
1598
+ 70
1599
+ ],
1600
+ "page_idx": 14
1601
+ },
1602
+ {
1603
+ "type": "page_number",
1604
+ "text": "180",
1605
+ "bbox": [
1606
+ 884,
1607
+ 55,
1608
+ 915,
1609
+ 68
1610
+ ],
1611
+ "page_idx": 14
1612
+ },
1613
+ {
1614
+ "type": "text",
1615
+ "text": "4.3.2. Optimization Project with Some Task Variants",
1616
+ "text_level": 1,
1617
+ "bbox": [
1618
+ 77,
1619
+ 93,
1620
+ 512,
1621
+ 111
1622
+ ],
1623
+ "page_idx": 15
1624
+ },
1625
+ {
1626
+ "type": "text",
1627
+ "text": "In many cases, the above statement also applies to the second project type, as experience is still limited and there must be good reasons to change the assessment method. Such a reason may be that there are more than five objectives and that one or a few areas of interest can be identified. In such cases, the computational effort can be reduced significantly. As mentioned before, an assessment of one solution in real-world applications is frequently done by a simulation run, the duration of which strongly depends on the application at hand. One simulation may require seconds or even minutes or more. In such cases, the reduction of the number of evaluations is critical, and an early concentration on the area of interest by using the CWS can be essential for the success of the project. On the other hand, the impact of the optimization is another important and application-dependent issue: If the savings expected from optimization justify the computational effort, Pareto optimization should be used until the areas of interest are reliably identified. Based on that, these areas can be explored in greater detail by optimization runs using the CWS, as illustrated in Figure 10. These considerations show that, depending on the project conditions, both methods may complement each other.",
1628
+ "bbox": [
1629
+ 75,
1630
+ 128,
1631
+ 922,
1632
+ 388
1633
+ ],
1634
+ "page_idx": 15
1635
+ },
1636
+ {
1637
+ "type": "image",
1638
+ "img_path": "images/3f12a0652514fdf41070930abd560c4cb3e319f17d304332fb21977608bbf040.jpg",
1639
+ "image_caption": [
1640
+ "Figure 10. Both diagrams show a sample population of an advanced search shortly before convergence. The CWS concentrates the best individuals (black dots) largely on the region of interest, as shown in the left diagram. In contrast, Pareto-based optimization procedures attempt to distribute their solutions along the Pareto front as well as they can, see the right diagram. Thus, fewer solutions will be found in the area of interest."
1641
+ ],
1642
+ "image_footnote": [],
1643
+ "bbox": [
1644
+ 154,
1645
+ 535,
1646
+ 845,
1647
+ 741
1648
+ ],
1649
+ "page_idx": 15
1650
+ },
1651
+ {
1652
+ "type": "text",
1653
+ "text": "4.3.3. Repeated Optimization, also as Part of an Automated Process",
1654
+ "text_level": 1,
1655
+ "bbox": [
1656
+ 77,
1657
+ 760,
1658
+ 631,
1659
+ 778
1660
+ ],
1661
+ "page_idx": 15
1662
+ },
1663
+ {
1664
+ "type": "text",
1665
+ "text": "Another domain of CWS-based optimization or planning is project type III, where the same task is executed repeatedly with minor modifications and, thus, known areas of interest. If a major change occurs, the area of interest can be adapted by using Pareto optimization. A typical example is the scheduling of jobs to the heterogeneous resources of a computational grid, as was introduced before and described in detail in [6]. It is a permanent replanning process, because events demanding a change of the schedule may occur long before a plan is completed. Examples are the introduction of new jobs or new resources, unexpected deactivations of resources, changes of the cost profiles of",
1666
+ "bbox": [
1667
+ 75,
1668
+ 796,
1669
+ 922,
1670
+ 936
1671
+ ],
1672
+ "page_idx": 15
1673
+ },
1674
+ {
1675
+ "type": "header",
1676
+ "text": "Algorithms 2014, 7",
1677
+ "bbox": [
1678
+ 78,
1679
+ 54,
1680
+ 240,
1681
+ 70
1682
+ ],
1683
+ "page_idx": 15
1684
+ },
1685
+ {
1686
+ "type": "page_number",
1687
+ "text": "181",
1688
+ "bbox": [
1689
+ 884,
1690
+ 54,
1691
+ 915,
1692
+ 68
1693
+ ],
1694
+ "page_idx": 15
1695
+ },
1696
+ {
1697
+ "type": "text",
1698
+ "text": "resources, early completion of jobs, or the like. As described in [6], five objectives are optimized and four penalty functions are used to handle the restrictions. Because planning time is limited and thousands of jobs and hundreds of resources must be handled, the planning must be stopped (long) before the Evolutionary Algorithm used converges. Thus, it is important to explore the region of interest as well as possible, see Figure 10. Additionally, there is no human expert to check the results several times per hour. For this automated scheduling process, the determination of the Pareto front makes no sense and the CWS is a meaningful alternative. These considerations also apply to many other scheduling tasks like the ones described in [15,19].",
1699
+ "bbox": [
1700
+ 80,
1701
+ 93,
1702
+ 917,
1703
+ 252
1704
+ ],
1705
+ "page_idx": 16
1706
+ },
1707
+ {
1708
+ "type": "text",
1709
+ "text": "Another example already mentioned is the planning of collision-free movement paths for an industrial robot [14,15,24,25]. Depending on the task at hand, we have four or five objectives and at least one penalty function to handle collisions. As robot movements can be simulated and visualized, the results are checked by a human expert mostly on the level of robot movements rather than on that of objective figures. As the areas of interest are usually known in advance and new solutions should be generated fast, the CWS is suited here as well, for the same reasons as with the previous task.",
1710
+ "bbox": [
1711
+ 80,
1712
+ 255,
1713
+ 917,
1714
+ 373
1715
+ ],
1716
+ "page_idx": 16
1717
+ },
1718
+ {
1719
+ "type": "text",
1720
+ "text": "5. Conclusions",
1721
+ "text_level": 1,
1722
+ "bbox": [
1723
+ 82,
1724
+ 390,
1725
+ 206,
1726
+ 407
1727
+ ],
1728
+ "page_idx": 16
1729
+ },
1730
+ {
1731
+ "type": "text",
1732
+ "text": "In Section 4.1 it was shown that the number of solutions required to approximate a Pareto front increases exponentially with a growing number of conflicting objectives. As illustrated in Figure 9, the number of evaluations increases considerably for more than five objectives. This limits the applicability of the Pareto approach for real-world applications, which frequently require time-consuming evaluations, especially when they are based on simulation.",
1733
+ "bbox": [
1734
+ 80,
1735
+ 425,
1736
+ 917,
1737
+ 523
1738
+ ],
1739
+ "page_idx": 16
1740
+ },
1741
+ {
1742
+ "type": "text",
1743
+ "text": "We have introduced the cascaded weighted sum (CWS), which can be described roughly as a combination of the weighted sum and the $\\varepsilon$ -constrained method. The major drawback of the pure weighted sum, the inaccessibility of parts of the Pareto front in non-convex cases, can be reduced to some extent by the CWS, see Section 3 and Figure 7. Like the pure weighted sum, the CWS is not an a priori method. The major advantage of the CWS is its ability to concentrate solutions on the region of interest with less computational effort than Pareto optimization, and this difference in effort grows immensely with an increasing number of objectives, in particular for more than five. The region of interest can be a result of previous experience or knowledge, of a first Pareto-based optimization, or of a combination thereof.",
1744
+ "bbox": [
1745
+ 80,
1746
+ 526,
1747
+ 917,
1748
+ 703
1749
+ ],
1750
+ "page_idx": 16
1751
+ },
1752
+ {
1753
+ "type": "text",
1754
+ "text": "In Section 4.2, optimization projects were divided into three types: the individual project type, projects treating some variants of the task, and the type of repeated optimization of the same task with more or less small variations. The unquestioned domain of Pareto optimization is the first type of optimization project. For the second type and five or more objectives, a combination of both methods can be advantageous, as described in Section 4.3.2. For the third project type of repeated optimization of task variants with no or only minor human interaction, the Pareto front is not required, as the regions of interest are already known from previous solutions or an initial Pareto optimization. The concentration of the CWS on that region is beneficial, as the computational effort can be reduced significantly. This is especially important in those cases where a fast solution is required or the number of evaluations is limited due to long evaluation times.",
1755
+ "bbox": [
1756
+ 80,
1757
+ 708,
1758
+ 917,
1759
+ 906
1760
+ ],
1761
+ "page_idx": 16
1762
+ },
1763
+ {
1764
+ "type": "header",
1765
+ "text": "Algorithms 2014, 7",
1766
+ "bbox": [
1767
+ 80,
1768
+ 55,
1769
+ 238,
1770
+ 68
1771
+ ],
1772
+ "page_idx": 16
1773
+ },
1774
+ {
1775
+ "type": "page_number",
1776
+ "text": "182",
1777
+ "bbox": [
1778
+ 885,
1779
+ 55,
1780
+ 915,
1781
+ 68
1782
+ ],
1783
+ "page_idx": 16
1784
+ },
1785
+ {
1786
+ "type": "text",
1787
+ "text": "Thus, we can conclude that both methods have their place and their field of application. Additionally, they can complement each other.",
1788
+ "bbox": [
1789
+ 80,
1790
+ 93,
1791
+ 915,
1792
+ 130
1793
+ ],
1794
+ "page_idx": 17
1795
+ },
1796
+ {
1797
+ "type": "text",
1798
+ "text": "Acknowledgments",
1799
+ "text_level": 1,
1800
+ "bbox": [
1801
+ 80,
1802
+ 148,
1803
+ 240,
1804
+ 165
1805
+ ],
1806
+ "page_idx": 17
1807
+ },
1808
+ {
1809
+ "type": "text",
1810
+ "text": "We acknowledge support by the Deutsche Forschungsgemeinschaft and Open Access Publishing Fund of Karlsruhe Institute of Technology.",
1811
+ "bbox": [
1812
+ 80,
1813
+ 181,
1814
+ 915,
1815
+ 219
1816
+ ],
1817
+ "page_idx": 17
1818
+ },
1819
+ {
1820
+ "type": "text",
1821
+ "text": "Conflicts of Interest",
1822
+ "text_level": 1,
1823
+ "bbox": [
1824
+ 80,
1825
+ 237,
1826
+ 253,
1827
+ 253
1828
+ ],
1829
+ "page_idx": 17
1830
+ },
1831
+ {
1832
+ "type": "text",
1833
+ "text": "The authors declare no conflict of interest.",
1834
+ "bbox": [
1835
+ 104,
1836
+ 271,
1837
+ 448,
1838
+ 288
1839
+ ],
1840
+ "page_idx": 17
1841
+ },
1842
+ {
1843
+ "type": "text",
1844
+ "text": "References",
1845
+ "text_level": 1,
1846
+ "bbox": [
1847
+ 80,
1848
+ 306,
1849
+ 176,
1850
+ 322
1851
+ ],
1852
+ "page_idx": 17
1853
+ },
1854
+ {
1855
+ "type": "list",
1856
+ "sub_type": "ref_text",
1857
+ "list_items": [
1858
+ "1. Pareto, V. Cours d'Économie Politique, (in French); F. Rouge: Lausanne, Switzerland, 1896.",
1859
+ "2. Hoffmeister, F.; Bäck, T. Genetic Algorithms and Evolution Strategies: Similarities and Differences; Technical Report SYS-1/92; FB Informatik, University of Dortmund: Dortmund, Germany, 1992.",
1860
+ "3. Multiobjective Optimization: Interactive and Evolutionary Approaches; Lecture notes in computer science 5252; Branke, J., Deb, K., Miettinen, K., Słowiński, R., Eds.; Springer: Berlin, Germany, 2008.",
1861
+ "4. Deb, K. Introduction to evolutionary multiobjective optimization. In Multiobjective Optimization: Interactive and Evolutionary Approaches; Branke, J., Deb, K., Miettinen, K., Słowiński, R., Eds.; Lecture notes in computer science 5252; Springer: Berlin, Germany, 2008; pp. 58-96.",
1862
+ "5. Miettinen, K. Nonlinear Multiobjective Optimization; International series in operations research & management science 12; Kluwer Academic Publishers: Boston, MA, USA, 1999.",
1863
+ "6. Jakob, W.; Strack, S.; Quinte, A.; Bengel, G.; Stucky, K.-U.; Süß, W. Fast rescheduling of multiple workflows to constrained heterogeneous resources using multi-criteria memetic computing. Algorithms 2013, 2, 245-277.",
1864
+ "7. Haimes, Y.Y.; Lasdon, L.S.; Wismer, D.A. On a bicriterion formulation of the problems of integrated system identification and system optimization. IEEE Trans. Syst. Man Cybern. 1971, 3, 296-297.",
1865
+ "8. Osyczka, A. Multicriterion Optimization in Engineering with FORTRAN Programs; Ellis Horwood series in mechanical engineering; E. Horwood: London, UK, 1984.",
1866
+ "9. Miettinen, K. Introduction to multiobjective optimization: Noninteractive approaches. In Multiobjective Optimization: Interactive and Evolutionary Approaches; Branke, J., Deb, K., Miettinen, K., Słowiński, R., Eds.; Lecture notes in computer science 5252; Springer: Berlin, Germany, 2008; pp. 1-26.",
1867
+ "10. Deb, K.; Pratap, A.; Agarwal, S.; Meyarivan, T. A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Trans. Evol. Comput. 2002, 2, 182-197.",
1868
+ "11. Zitzler, E.; Laumanns, M.; Thiele, L. SPEA2: Improving the strength pareto evolutionary algorithm for multiobjective optimization. In Evolutionary Methods for Design Optimization and Control with Applications to Industrial Problems, Proceedings of the EUROGEN'2001 Conference, Athens, Greece, 19-21 September 2001; Giannakoglou, K.C., Tsahalis, D.T., Périaux,"
1869
+ ],
1870
+ "bbox": [
1871
+ 78,
1872
+ 340,
1873
+ 922,
1874
+ 923
1875
+ ],
1876
+ "page_idx": 17
1877
+ },
1878
+ {
1879
+ "type": "header",
1880
+ "text": "Algorithms 2014, 7",
1881
+ "bbox": [
1882
+ 78,
1883
+ 54,
1884
+ 238,
1885
+ 71
1886
+ ],
1887
+ "page_idx": 17
1888
+ },
1889
+ {
1890
+ "type": "page_number",
1891
+ "text": "183",
1892
+ "bbox": [
1893
+ 882,
1894
+ 54,
1895
+ 915,
1896
+ 68
1897
+ ],
1898
+ "page_idx": 17
1899
+ },
1900
+ {
1901
+ "type": "list",
1902
+ "sub_type": "ref_text",
1903
+ "list_items": [
1904
+ "J., Papailiou, K.D., Fogarty, T., Eds.; International Center for Numerical Methods in Engineering: Athens, Greece: 2001; pp. 95-100.",
1905
+ "12. Beume, N.; Naujoks, B.; Emmerich, M. SMS-EMOA: Multiobjective selection based on dominated hypervolume. Eur. J. Oper. Res. 2007, 3, 1653–1669.",
1906
+ "13. Blume, C. GLEAM—A system for simulated \"intuitive learning\". In Parallel Problem Solving from Nature: Proceedings of the 1st Workshop, Dortmund, Germany, 1-3 October 1990; Schwefel, H.-P., Männer, R., Eds.; Lecture notes in computer science 496; Springer: Berlin, Germany, 1991; pp. 48-54.",
1907
+ "14. Blume, C.; Jakob, W.; Krisch, S. Robot trajectory planning with collision avoidance using genetic algorithms and simulation. In Proceedings of the 25th International Symposium on Industrial Robots (ISIR), Hanover, Germany, 25-27 April, 1994; pp. 169-175.",
1908
+ "15. Blume, C.; Jakob, W. GLEAM—General Learning Evolutionary Algorithm and Method. Ein evolutionärer Algorithmus und seine Anwendungen, (in German); Schriftenreihe des Instituts für Angewandte Informatik, Automatisierungstechnik am Karlsruhe Institut für Technologie 32; KIT Scientific Publishing: Karlsruhe, Germany, 2009.",
1909
+ "16. Gorges-Schleuter, M. Explicit parallelism of genetic algorithms through population structures. In Proceedings of the 1st Workshop on Parallel Problem Solving from Nature (PPSN I), Dortmund, Germany, 1-3 October 1990; Schwefel, H.-P., Männer, R., Eds.; Lecture notes in computer science 496; Springer: Berlin, Germany, 1991; pp. 150-159.",
1910
+ "17. Sarma, K.; de Jong, K. An analysis of the effects of neighborhood size and shape on local selection algorithms. In Proceedings of the 4th International Conference on Parallel Problem Solving from Nature (PPSN IV), Berlin, Germany, 22-26 September 1996; Voigt, H.-M., Ebeling, W., Rechenberg, I., Schwefel, H.-P., Eds.; Lecture notes in computer science 1141; Springer: Berlin, Germany, 1996; pp. 236-244.",
1911
+ "18. Nguyen, Q.H.; Ong, Y.-S.; Lim, M.H.; Krasnogor, N. Adaptive cellular memetic algorithms. Evol. Comput. 2009, 17, 231-256.",
1912
+ "19. Jakob, W. A general cost-benefit-based adaptation framework for multimeme algorithms. Memet. Comput. 2010, 3, 201-218.",
1913
+ "20. Lotov, A.V.; Miettinen, K. Visualizing the Pareto Frontier. In Multiobjective Optimization: Interactive and Evolutionary Approaches; Branke, J., Deb, K., Miettinen, K., Słowiński, R., Eds.; Lecture notes in computer science 5252; Springer: Berlin, Germany, 2008; pp. 213-243.",
1914
+ "21. Klamroth, K.; Tind, J.; Wiecek, M.M. Unbiased approximation in multicriteria optimization. Math. Method Oper. Res. 2003, 3, 413-437.",
1915
+ "22. Jakob, W.; Gorges-Schleuter, M.; Sieber, I.; Süß, W.; Eggert, H. Solving a highly multi-modal design optimization problem using the extended genetic algorithm GLEAM. In Computer Aided Optimum Design of Structures VI: Conf. Proc. OPTI 99; Hernandez, S., Kassab, A.J., Brebbia, C.A., Eds.; WIT Press: Southampton, UK, 1999; pp. 205-214.",
1916
+ ],
1917
+ "bbox": [
1918
+ 80,
1919
+ 92,
1920
+ 919,
1921
+ 839
1922
+ ],
1923
+ "page_idx": 18
1924
+ },
1925
+ {
1926
+ "type": "header",
1927
+ "text": "Algorithms 2014, 7",
1928
+ "bbox": [
1929
+ 78,
1930
+ 54,
1931
+ 240,
1932
+ 71
1933
+ ],
1934
+ "page_idx": 18
1935
+ },
1936
+ {
1937
+ "type": "page_number",
1938
+ "text": "184",
1939
+ "bbox": [
1940
+ 882,
1941
+ 54,
1942
+ 915,
1943
+ 68
1944
+ ],
1945
+ "page_idx": 18
1946
+ },
1947
+ {
1948
+ "type": "list",
1949
+ "sub_type": "ref_text",
1950
+ "list_items": [
1951
+ "23. Stewart, T.; Bandte, O.; Braun, H.; Chakraborti, N.; Ehrgott, M.; Göbelt, M.; Jin, Y.; Nakayama, H.; Poles, S.; di Stefano, D. Real-world applications of multiobjective optimization. In Multiobjective Optimization: Interactive and Evolutionary Approaches; Branke, J., Deb, K., Miettinen, K., Słowiński, R., Eds.; Lecture notes in computer science 5252; Springer-Verlag: Berlin, Germany, 2008; pp. 285-327.",
1952
+ "24. Blume, C. Automatic Generation of Collision Free Moves for the ABB Industrial Robot Control. In Proceedings of the 1997 First International Conference on Knowledge-Based Intelligent Electronic Systems (KES '97), Adelaide, SA, Australia, 21-23 May 1997; Volume 2, pp. 672-683.",
1953
+ "25. Blume, C. Optimized Collision Free Robot Move Statement Generation by the Evolutionary Software GLEAM. In Real World Applications of Evolutionary Computing: Proceedings; Cagnoni, S., Ed.; Lecture notes in computer science 1803; Springer: Berlin, Germany, 2000; pp. 327-338.",
1954
+ "© 2014 by the authors; licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/3.0/)."
1955
+ ],
1956
+ "bbox": [
1957
+ 78,
1958
+ 93,
1959
+ 917,
1960
+ 409
1961
+ ],
1962
+ "page_idx": 19
1963
+ },
1964
+ {
1965
+ "type": "header",
1966
+ "text": "Algorithms 2014, 7",
1967
+ "bbox": [
1968
+ 78,
1969
+ 54,
1970
+ 240,
1971
+ 71
1972
+ ],
1973
+ "page_idx": 19
1974
+ },
1975
+ {
1976
+ "type": "page_number",
1977
+ "text": "185",
1978
+ "bbox": [
1979
+ 882,
1980
+ 54,
1981
+ 915,
1982
+ 68
1983
+ ],
1984
+ "page_idx": 19
1985
+ }
1986
+ ]
2203.02xxx/2203.02697/9f953933-84bb-40cc-b218-d16d7f5c68c3_model.json ADDED
The diff for this file is too large to render. See raw diff
 
2203.02xxx/2203.02697/9f953933-84bb-40cc-b218-d16d7f5c68c3_origin.pdf ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:528663c9747a0fc44acc11a2bd1b9fbb62c598024f91f821ac781615a9f325bc
3
+ size 546361
2203.02xxx/2203.02697/full.md ADDED
@@ -0,0 +1,307 @@
1
+ # Pareto Optimization or Cascaded Weighted Sum: A Comparison of Concepts
2
+
3
+ Wilfried Jakob $^{1, *}$ and Christian Blume
4
+
5
+ $^{1}$ Karlsruhe Institute of Technology (KIT), Institute of Applied Computer Science (IAI), P.O. Box 3640, Karlsruhe 76021, Germany
6
+ $^{2}$ Cologne University of Applied Sciences, Institute of Automation and Industrial IT, Steinmüllerallee 1, Gummersbach 51643, Germany; E-Mail: blume@gm.fh-koeln.de
7
+ * Author to whom correspondence should be addressed; E-Mail: wilfried.jakob@kit.edu; Tel.: +49-721-608-24663; Fax: +49-721-608-22602.
8
+
9
+ Received: 22 January 2014; in revised form: 3 March 2014 / Accepted: 14 March 2014 /
10
+
11
+ Published: 21 March 2014
12
+
13
+ Abstract: Looking at articles or conference papers published since the turn of the century, Pareto optimization is the dominating assessment method for multi-objective nonlinear optimization problems. However, is it always the method of choice for real-world applications, where either more than four objectives have to be considered, or the same type of task is repeated again and again with only minor modifications, in an automated optimization or planning process? This paper presents a classification of application scenarios and compares the Pareto approach with an extended version of the weighted sum, called cascaded weighted sum, for the different scenarios. Its range of application within the field of multi-objective optimization is discussed as well as its strengths and weaknesses.
14
+
15
+ Keywords: multi-criteria optimization; Pareto optimization; weighted sum; cascaded weighted sum; global optimization; population based optimization; evolutionary algorithm
16
+
17
+ # 1. Introduction
18
+
19
+ Most nonlinear real-world optimization problems require the optimization of several objectives and usually at least some of them are contradictory. A simple example of two conflicting criteria is the payload and the traveling distance with a given amount of fuel, which cannot be maximized both at the same time. The typical solution of such a problem is a compromise. A good compromise is one
20
+
21
+ where one of the criteria can be improved only by worsening at least one of the others. This approach is called Pareto optimization [1], and the set of all good compromises is called Pareto optimal solutions or non-dominated solutions. In practice, usually only one solution is required. Thus, multi-objective optimization based on Pareto optimality is divided into two phases: At first, the set of Pareto optimal solutions is determined, out of which one must be chosen as the final result by a human decision maker according to more or less subjective preferences. This is in contrast to single-objective optimization tasks, where no second selection step is required.
22
+
23
+ Most population-based search procedures, like evolutionary algorithms, particle swarm or ant colony optimization, require a single quality value called e.g., fitness in the context of evolutionary algorithms. This may be one reason for the frequent aggregation of different optimization criteria to a single quality value. Two methods, the frequently used weighted sum and the $\varepsilon$ -constrained method, are described briefly. Another commonly used method is to express everything in costs. On closer inspection, it becomes apparent that this is equal to the weighted sum approach using suitable weights. Additionally, the conversion into costs requires an artificial redefinition of the original goals and this is often not really appropriate. Thus, most multi-objective optimization problems have meanwhile been solved based on Pareto optimization, at least in academia.
24
+
25
+ The computational effort to determine all or at least most of the Pareto front increases significantly with the number of conflicting objectives, as will be shown later in the paper. However, what if the complete Pareto front is not needed at all, because the area of interest is already known? In this paper we will introduce an aggregation method called the cascaded weighted sum (CWS) and discuss application scenarios, where aggregation methods like the CWS can compete with Pareto-optimality-based approaches. Not to be misunderstood: We agree that in many fields of application, Pareto optimization is the appropriate method for multi-objective problems. Although we will concentrate on evolutionary multi-objective optimization later in the paper, the issues discussed here can be applied to other global optimization procedures as well and especially to those, which optimize a set of solutions simultaneously instead of just one.
26
+
27
+ The paper is organized as follows: In Section 2 the basics of Pareto optimization are described, followed by the weighted sum and the $\varepsilon$ -constrained method, including a brief discussion of their properties. In Section 3, the cascaded weighted sum is introduced. Section 4 starts with a classification of application scenarios, gives some examples, and discusses the question for which scenario which method is suited better or how they can complement each other. The paper closes in Section 5 with a summary and a conclusion.
28
+
29
+ # 2. Short Introduction to Pareto Optimization and Two Aggregation Methods
30
+
31
+ Based on Hoffmeister and Bäck [2], and the notation of Branke et al. [3], a multi-objective optimization problem is the task of maximizing a set of $k (>1)$ usually conflicting objective functions $f_{i}$ simultaneously, denoted by maximize $\{...\}$ :
32
+
33
+ $$
34
+ \begin{array}{l} \text{maximize} \left\{ f_{1}(x), f_{2}(x), \dots, f_{k}(x) \right\}, \quad x \in S \\ f_{i}: S \subseteq S_{1} \times \dots \times S_{n} \rightarrow \Re, \quad S \neq \emptyset \tag{1} \end{array}
35
+ $$
36
+
37
+ The focus on maximization is without loss of generality, because $\min \{f(x)\} = -\max \{-f(x)\}$. The nonempty set $S$ is called the feasible region and a member of it is called a decision (variable) vector $x = (x_{1}, x_{2}, \ldots, x_{n})^{T}$. As it is of no further interest here, we do not describe the constraints forming $S$ in more detail. Frequently, the $S_{i}$ are the set of real numbers or integers or a subset thereof, but they can be any arbitrary set as well. Objective vectors are images of decision vectors, consisting of objective (function) values $z = f(x) = (f_{1}(x), f_{2}(x), \ldots, f_{k}(x))^{T}$. Accordingly, the image of the feasible region in the objective space is called the feasible objective region $Z = f(S)$. Figure 1 illustrates this.
38
+
39
+ ![](images/0235fe3a5ca0707d49866992d60c0adb72dc14f557ef17efa0789b8ea11194b5.jpg)
40
+ Figure 1. Feasible region $S$ and its image, the feasible objective region $Z$ for $n = k = 2$ . The set of weakly Pareto optimal solutions is shown as a bold green line in the diagram on the right. The subset of Pareto optimal solutions is the part of the green line between the black circles. The ideal objective vector $z^*$ consists of the upper bounds of the Pareto set.
41
+
42
+ In the following sections Pareto optimization and two frequently used aggregation methods, which turn a multi-objective problem into a single-objective task, are introduced and compared in the end.
43
+
44
+ # 2.1. Pareto Optimization
45
+
46
+ A decision vector $x \in S$ dominates another vector $y \in S$ , if
47
+
48
+ $$
49
+ \begin{array}{l} \forall i \in \{1, 2, \dots, k\}: f_{i}(x) \geq f_{i}(y) \ \text{and} \\ \exists j \in \{1, 2, \dots, k\}: f_{j}(x) > f_{j}(y) \tag{2} \end{array}
50
+ $$
51
+
52
+ A decision vector $x' \in S$, which is not dominated by any other $x \in S$, is called Pareto optimal. The objective vector $z' = f(x')$ is Pareto optimal if the corresponding decision vector is Pareto optimal; the corresponding sets are denoted by $P(S)$ and $P(Z)$. The set of weakly Pareto optimal solutions, a superset of the set of Pareto optimal solutions, is formed by decision vectors for which the following applies: an $x' \in S$ is called weakly Pareto optimal if no other $x \in S$ exists such that $f_i(x) > f_i(x')$ for all $i = 1, \ldots, k$. As the set of Pareto optimal solutions consists only of non-dominated decision vectors, they can be regarded as the set of good compromises mentioned in the introduction. It follows from the definition that they are located on the border of the feasible objective region, as shown in the right part of Figure 1. The figure also illustrates the concept of weakly Pareto optimal solutions lying on the part of the green line outside of the section bounded by the black circles
53
+
54
+ in the given example. It should be stated that the set of Pareto optimal solutions does not need to be as nicely shaped as shown in Figure 1; it may also be non-convex and disconnected.
55
+
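+ A dominance check per Equation (2) takes only a few lines. The following Python sketch is ours, not the paper's, and assumes that all objectives are to be maximized:
+
+ ```python
+ def dominates(fx, fy):
+     """True if objective vector fx dominates fy (Equation (2), maximization)."""
+     return all(a >= b for a, b in zip(fx, fy)) and \
+            any(a > b for a, b in zip(fx, fy))
+
+ # Example: (3, 5) is at least as good as (3, 4) everywhere and better once.
+ assert dominates((3, 5), (3, 4)) and not dominates((3, 4), (3, 5))
+ ```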
56
+ The upper bounds of the Pareto optimal set can be obtained by maximizing the $f_{i}$ individually with respect to the feasible region. This results in the ideal objective vector $z^{*}\in \Re^{k}$ , an example of which is shown for the two-dimensional case in the right part of Figure 1. The lower bounds are usually hard to determine, see [3]. Although Pareto-based search methods can provide valuable estimations of the ranges of the objectives for practical applications, they are not suited for an exact determination of their lower and upper bounds.
57
+
58
+ According to [4], constraints in the objective space are handled as follows: A solution $x$ constrained-dominates a solution $y$ , if any of the three conditions is satisfied:
59
+
60
+ - Solution $x$ is feasible and $y$ is not.
61
+ - Both solutions are feasible and $x$ dominates $y$ .
62
+ - Both solutions are infeasible, but $x$ has a smaller constraint violation than $y$. If more than one constraint is violated, the violations are normalized, summed up, and compared (see the sketch after this list).
63
+
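+ A direct transcription of these three rules might look as follows; this sketch (ours) reuses `dominates` from above and assumes that each solution carries its objective vector and a list of normalized constraint violations:
+
+ ```python
+ def constrained_dominates(x, y):
+     """Constrained-domination in the sense of [4]; x and y provide
+     .objectives (to be maximized) and .violations (normalized, >= 0)."""
+     vx, vy = sum(x.violations), sum(y.violations)
+     if vx == 0 and vy > 0:                  # rule 1: only x is feasible
+         return True
+     if vx == 0 and vy == 0:                 # rule 2: both feasible
+         return dominates(x.objectives, y.objectives)
+     if vx > 0 and vy > 0:                   # rule 3: both infeasible
+         return vx < vy
+     return False                            # only y is feasible
+ ```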
64
+ Hereinafter, the term Pareto optimization is used for an optimization procedure employing Pareto optimality to assess and compare generated solutions.
65
+
66
+ # 2.2. Weighted Sum
67
+
68
+ Probably one of the most frequently used assessment methods besides Pareto optimality is the weighted sum, which aggregates the objective values into a single quality measure. As the objective functions frequently have different scales, they are usually normalized. This can be done, for example, by using Equation (3) or (4) for objectives to be minimized or maximized, respectively:
69
+
70
+ $$
71
+ f_{i}^{\text{norm}} = \frac{\max(f_{i}) - f_{i}}{\max(f_{i}) - \min(f_{i})} \quad \text{for objectives to be minimized} \tag{3}
72
+ $$
73
+
74
+ $$
75
+ f_{i}^{\text{norm}} = 1 - \frac{\max(f_{i}) - f_{i}}{\max(f_{i}) - \min(f_{i})} \quad \text{for objectives to be maximized} \tag{4}
76
+ $$
77
+
78
+ The bounds of the objective function $f_i$ can be estimated or, in the case of $\max(f_i)$, obtained by maximizing each function individually. For the calculation of the weighted sum as shown in Equation (5), a weight $w_i$ has to be chosen for every objective:
79
+
80
+ $$
81
+ \text{maximize} \ \sum_{i=1}^{k} w_{i} f_{i}^{\text{norm}}(x), \quad x \in S, \quad \text{where} \ w_{i} > 0 \ \text{for all} \ i = 1, \dots, k \ \text{and} \ \sum_{i=1}^{k} w_{i} = 1 \tag{5}
82
+ $$
83
+
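+ Taken together, Equations (3)-(5) amount to only a few lines of code. This sketch (ours, with illustrative names) normalizes each objective and forms the weighted sum:
+
+ ```python
+ def normalize(f, f_min, f_max, minimize=True):
+     """Equations (3)/(4): map an objective value into [0, 1], higher = better."""
+     scaled = (f_max - f) / (f_max - f_min)
+     return scaled if minimize else 1.0 - scaled
+
+ def weighted_sum(norm_values, weights):
+     """Equation (5); the weights are positive and sum up to 1."""
+     return sum(w * f for w, f in zip(weights, norm_values))
+ ```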
84
+ By varying the weights, any point of a convex Pareto front can be obtained. Figure 2 illustrates this: The straight line corresponding to the chosen weights $w_{1}$ and $w_{2}$ is moved towards the border of the feasible objective region during the optimization process and becomes a tangent in point P. The solutions found are Pareto optimal, see [5].
85
+
86
+ On the other hand, it is possible that parts of the Pareto front cannot be found in case of a non-convex problem. This is illustrated in Figure 3: the part between points A and B of the Pareto front cannot be obtained for any weights. This is a serious drawback.
87
+
88
+ ![](images/b4f9e3bf301ec55325cc194f2ba779a15611d8fe6a771daa5b8d4fd1469e3ddf.jpg)
89
+ Figure 2. By using appropriate weights, every point of a convex Pareto front can be achieved by the weighted sum. Here, point $\mathbf{P}$ can be obtained for the weights $w_{1}$ and $w_{2}$ . The arrows show the movement direction of points where the largest quality gain is obtained.
90
+
91
+ ![](images/6bab592db324588f3c9cf05c876c9f1d43677f1a32cd1ee911d349aad6f40dfe.jpg)
92
+ Figure 3. For non-convex Pareto fronts, it is possible that parts of the front cannot be obtained by the weighted sum. The region between points $\mathbf{A}$ and $\mathbf{B}$ is an example of this serious drawback of this aggregation method.
93
+
94
+ As mentioned above, the weighted sum is often used for practical applications. Reasons are the simplicity of its application and the easy way to integrate restrictions that go beyond pure limitations of the feasible region. Examples are scheduling tasks, where the jobs to be scheduled have due dates for finalization. Thus, delays can occur, and it is not sufficient to tell the search procedure that a solution violating such a constraint is infeasible by, e.g., rejecting it. Instead, the search must be guided out of the infeasible region by rewarding a reduction of the violation. In the example given, this can be done by counting the number of jobs involved and summing up the amounts of delay, see e.g., [6]. These two key figures can either become new objectives or can be treated as penalty functions. As they do not represent wanted properties and as a low number of objectives is preferable, penalty functions are the method of choice. They can be designed to yield values between zero (maximal violation) and one (no violation). The results of all penalty functions serve as factors by which the weighted sum is multiplied. As a result, the pure weighted sum turns into a raw quality measure, which represents the solution quality of the problem without constraints, while the final
95
+
96
+ product represents the solution quality for the task with its constraints. Figure 4 shows a typical example of such a penalty function. If the maximum value of constraint violation is hard to predict, as is often the case, an exponential function can be chosen. A value of delay $dp$ for poor solutions can usually be estimated roughly, and the exponential function is parameterized such that it yields a value of $\frac{1}{3}$ at this delay.
97
+
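+ One way to realize the parameterization described above is the following sketch; the exponential form and the estimated delay $dp$ follow the text, while the concrete formula is our assumption:
+
+ ```python
+ import math
+
+ def delay_penalty(delay, dp):
+     """Exponential penalty factor in (0, 1]: 1 for no delay and,
+     by construction, exactly 1/3 at the roughly estimated delay dp."""
+     return math.exp(-math.log(3.0) * delay / dp)
+
+ # raw_quality = weighted_sum(...); final = raw_quality * delay_penalty(d, dp) * ...
+ ```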
98
+ ![](images/ab2a667072480f1ad928a31d483490be74c166aa2075ce68a412d48af4ca7f27.jpg)
99
+ Figure 4. Example of a penalty function. It turns constraint violations into a penalty value between 1 and 0, which serves as a factor for decreasing the weighted sum.
100
+
101
+ # 2.3. $\varepsilon$ -Constrained Method
102
+
103
+ The $\varepsilon$ -constrained method is based on the optimization of one selected objective function $f_{j}$ and treating the others as constraints [7]. The optimization problem now has the form
104
+
105
+ $$
106
+ \begin{array}{l} \text{maximize} \ f_{j}(x), \quad x \in S, \quad j \in \{1, \dots, k\} \\ f_{i}(x) \geq \varepsilon_{i} \ \text{for all} \ i = 1, \dots, k, \ i \neq j \tag{6} \end{array}
107
+ $$
108
+
109
+ The $\varepsilon_{i}$ are the lower bounds for those objective functions that are treated as constraints. For practical applications, the appropriate bounds must be selected carefully. In particular, it must be ensured that the bounds are within the feasible objective region, because otherwise the resulting problem would have no solutions. Osyczka gives suggestions for a systematic selection of values for the $\varepsilon_{i}$ and illustrates them with sample applications [8].
110
+
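+ As an assessment routine for a search procedure, Equation (6) can be sketched as follows (our illustration; returning minus infinity for violated bounds is one simple choice among several):
+
+ ```python
+ def eps_constrained_value(fx, j, eps):
+     """Equation (6): maximize objective j; all other objectives must
+     stay above their lower bounds eps[i] (eps[j] is ignored)."""
+     feasible = all(f >= e for i, (f, e) in enumerate(zip(fx, eps)) if i != j)
+     return fx[j] if feasible else float("-inf")
+ ```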
111
+ Figure 5 gives an example based on the feasible region shown in Figure 3. In Figure 5, $f_{2}$ is treated as a constraint with the lower bound $\varepsilon_{2}$. Thus, the remaining Pareto front is the section between the points F1 and F2. The figure also shows the main movement direction of solutions in $Z$ that have exceeded the threshold $\varepsilon_{2}$; this direction results from the optimization. Upward and downward movement components are also possible, but are not considered by the assessment procedure of this method as long as solutions do not drop below $\varepsilon_{2}$. Too large a constraint value, such as $\varepsilon_{bad}$, would make the problem unsolvable.
112
+
113
+ A decision vector $x' \in S$ is Pareto-optimal, if and only if it solves Equation (6) for every $j = 1, \dots, k$ , where $\varepsilon_i = f_i(x')$ for $i = 1, \dots, k, i \neq j$ , see [9]. This means that $k$ different problems must be solved for every member of the Pareto front, which can be expected to be computationally costly. If the task can be relaxed to weak Pareto optimality, only one solution of Equation (6) per member of
114
+
115
+ the front is required [9]. On the other hand, the method does not require convexity for finding any Pareto-optimal solution.
116
+
117
+ ![](images/50165a7bfbef8f28979176d94d7757146949ad85567743afeac10e6955a98b83.jpg)
118
+ Figure 5. Restricted objective region using the $\varepsilon$ -constrained method. The hatched region is excluded due to the lower bound $\varepsilon_{2}$ . The remaining Pareto front is limited by F1 and F2. For too large bounds like $\varepsilon_{bad}$ , the problem becomes unsolvable.
119
+
120
+ # 2.4. Summary
121
+
122
+ The two aggregation procedures (more aggregation methods can be found in [9]) can both find any Pareto-optimal solution for convex problems, and the $\varepsilon$ -constrained method can do so for non-convex tasks, too. This advantage of the $\varepsilon$ -constrained method comes at the expense of higher computational costs: For finding a Pareto-optimal solution, the $\varepsilon$ -constrained method needs to solve $k$ different problems, whereas the weighted sum needs to solve just one per element of the Pareto set.
123
+
124
+ Another important issue is the manageability. Depending on the problem, it can be assumed that it is easier for experts in the application area to estimate lower bounds for objectives than weights, which are more abstract. Experts are usually familiar with objective values and they are intelligible for them as such.
125
+
126
+ For optimization procedures working with a set of solutions, frequently called a population, like Evolutionary Algorithms (EAs), Ant Colony (ACO) or Particle Swarm Optimization (PSO), the advantage for multi-objective optimization is the ability to determine, in principle, the entire Pareto front at once. Of course, an optimization procedure must be adapted such that it spreads the population over the Pareto front as well as possible instead of concentrating on some areas of it. Examples of such adapted procedures in the EA field are the non-dominated sorting Genetic Algorithm (NSGA-II) [10], the Strength Pareto EA (SPEA-2) [11], or the S-metric selection evolutionary multi-objective optimization algorithm (SMS-EMOA) [12], to mention only a few. These combinations of population-based optimization procedures and Pareto optimality will in general estimate the Pareto front at least roughly, and with less computational effort than the same population-based methods using the aggregation methods introduced above for assessment; in the latter case, about as many runs would be required as there should be solutions on the Pareto front. Thus, these algorithms, specialized in finding as much of the Pareto front as possible in one run, are the
127
+
128
+ methods of choice for new problems, where little is known in advance and where a human decision maker is available to make the final choice.
129
+
130
+ # 3. Cascaded Weighted Sum
131
+
132
+ We have been using this extended version of the weighted sum since the inception of our Evolutionary Algorithm GLEAM (General Learning and Evolutionary Algorithm and Method) in 1990 [13], as we considered this assessment method a convenient way to steer the evolutionary search process for evolving improved collision-free robot move trajectories [13,14]. As we did not regard it as something special, we did not publish it in English (a detailed description in German of the cascaded weighted sum and of all variants of the application of GLEAM to path planning for several industrial robots can be found in [15]) until comments of reviewers of other publications on GLEAM changed our mind and revealed the need for a general discussion. Some impacts of the robot application on the evaluation of solutions are discussed later in Sections 4.2 and 4.3.3. Before the cascaded weighted sum is described, we will briefly introduce Evolutionary Algorithms, and GLEAM in particular, to the extent necessary for a better understanding of the following sections.
133
+
134
+ # 3.1. Short Introduction to Evolutionary Algorithms and GLEAM
135
+
136
+ Evolutionary Algorithms (EAs) typically conduct a search in the feasible region, with this search being guided by a quality function usually called the fitness function. The search is done in parallel by a set of solution candidates, called individuals, forming a population. If an individual is outside of the feasible region, it will be guided back by one or more penalty functions or comparable techniques. The fitness function may be one of the aggregation methods described above. Alternatively, the assessment is guided by Pareto optimality and is then based on the number of dominated solutions and possibly some other measure that rewards a good spread of the non-dominated solutions along the Pareto front, see e.g., [10,12]. New solutions are generated by stochastic algorithmic counterparts of the two biological archetypes mutation and recombination, for which an individual selects a partner, frequently influenced by the fitness. Thus, the generated offspring inherits properties from both parents. The third principle of evolution, the survival of the fittest, takes effect when deciding which individuals are included in the next iteration. In elitist forms of EAs, the best individual always survives and, therefore, the quality of the population can only increase. On the other hand, convergence cannot be ensured within limited time. To avoid premature convergence, various attempts have been made to maintain genotypic diversity for a longer period of time by establishing niches within the population, see e.g., [16-18]. GLEAM uses one of these methods [16] (a current description of the diffusion model and its integration into GLEAM can be found in [19]) and, thus, frequently yields some distinct solutions of more or less comparable quality. Iterative, stochastic, and population-based optimization procedures in general tend to produce some variants of the best solution. How much they differ in properties and quality depends on the algorithm, the actions taken for maintaining diversity, and the length of the run.
137
+
138
+ # 3.2. Definition of the Cascaded Weighted Sum
139
+
140
+ In the cascaded weighted sum (CWS), each objective is assigned a weight $w_{i}$, as with the pure weighted sum, and a priority, with 1 being the highest. If desired, some objectives may have the same priority. All objectives but those with the lowest priority receive a user-given threshold $\varepsilon_{i}$. In the beginning, only the objectives of the highest priority are active and contribute to the weighted sum. The others are activated according to the following priority rule:
141
+
142
+ If all objectives with the same priority exceed their thresholds, the objectives of the next lower priority are activated and their values are added to the sum.
143
+
144
+ As the objectives are grouped by their priorities and the groups are considered one after the other, the method is called the cascaded weighted sum. A group all of whose members exceed their thresholds is called a satisfied group. If at least one objective of a satisfied group drops below its threshold, the group is no longer satisfied and, consequently, all groups with lower priorities are deactivated, which significantly reduces the resulting weighted sum.
145
+
146
+ For the formal definition of the CWS given in Equation (7), the original $f_{i}(x)$ are used for the threshold value checks rather than their normalized counterparts, as we assume that this is more convenient for experts in the application domain. The $k$ objectives are sorted according to their priorities, and we have $g$ objective groups, where $1 < g \leq k$; for $g = 1$, the CWS would be identical to the original weighted sum. Each group $j$ consists of $m_{j}$ objectives, with $\sum_{j=1}^{g} m_j = k$. As with the original weighted sum, each $w_{i} > 0$ and $\sum_{i=1}^{k} w_{i} = 1$.
147
+
148
+ As the first and the last priority group are treated differently, Equation (7) shows the objectives contributing to the weighted sum for the highest priority 1, for the general case of a priority $j$, and for the lowest priority $g$.
149
+
150
+ $$
+ \begin{array}{ll}
+ \text{Priority 1:} & \text{if not all } f_i(x) \geq \varepsilon_i \quad \forall\, i = 1, \ldots, m_1 \text{:} \\
+ \text{(highest priority)} & \text{maximize } \sum_{i=1}^{m} w_i f_i^{\text{norm}}(x), \quad x \in S, \quad m = m_1 \\[6pt]
+ \text{Priority } j\text{:} & \text{if all } f_i(x) \geq \varepsilon_i \quad \forall\, i = 1, \ldots, l_j, \quad l_j = \sum_{l=1}^{j-1} m_l \quad \text{(satisfied groups)} \\
+ & \text{and not all } f_i(x) \geq \varepsilon_i \quad \forall\, i = l_j + 1, \ldots, l_j + m_j \text{:} \\
+ & \text{maximize } \sum_{i=1}^{m} w_i f_i^{\text{norm}}(x), \quad x \in S, \quad m = l_j + m_j \\[6pt]
+ \text{Priority } g\text{:} & \text{if all } f_i(x) \geq \varepsilon_i \quad \forall\, i = 1, \ldots, l_g, \quad l_g = \sum_{l=1}^{g-1} m_l \quad \text{(satisfied groups):} \\
+ \text{(lowest priority)} & \text{maximize } \sum_{i=1}^{k} w_i f_i^{\text{norm}}(x), \quad x \in S
+ \end{array} \tag{7}
+ $$
163
+
164
+ Once a group is satisfied, the total quality value increases abruptly by the contributions of the newly activated group. This gain is lost again if only one objective of a group with higher priority falls below its $\varepsilon_{i}$. For successful solutions, this makes it very unlikely that the values of the already contributing objectives drop below their thresholds again in the course of further search.
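+ 
+ As a minimal illustration, the priority rule and Equation (7) can be evaluated as in the following Python sketch. The data layout (one list of index/weight/threshold triples per priority group) is our own illustrative choice, not taken from GLEAM, and all objectives are assumed to be stated in maximization form, as in Equation (7):
+ 
+ ```python
+ # Minimal sketch of the cascaded weighted sum (CWS) of Equation (7).
+ def cws_fitness(f, f_norm, groups):
+     """f[i]: raw objective value; f_norm[i]: normalized value from Eq. (3)/(4).
+     groups: priority groups ordered from highest to lowest priority; each group
+     is a list of (index, weight, threshold) triples, where the threshold is the
+     user-given epsilon translated to the raw scale (cf. Equation (8) in
+     Section 3.3), or None for the lowest-priority group."""
+     total = 0.0
+     for group in groups:
+         # Objectives of every active group contribute to the weighted sum.
+         total += sum(w * f_norm[i] for i, w, _ in group)
+         # The next group is activated only if all members exceed their
+         # thresholds; otherwise, all groups of lower priority stay deactivated.
+         if not all(eps is not None and f[i] >= eps for i, _, eps in group):
+             break
+     return total
+ ```
+ 
+ With the settings of Table 1 in Section 3.3, `groups` would contain two entries: the job time/job costs pair (priority 1, with thresholds converted to the raw scale) and the makespan/utilization pair (priority 2, without thresholds).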
165
+
166
+ The selection of appropriate weights and threshold values requires some knowledge about the problem at hand, which may be gained from one or more preparatory Pareto optimization runs, as illustrated in the next section. Thus, neither the original weighted sum nor the CWS is an a priori method. We will come back to this later.
167
+
168
+ # 3.3. Example of the CWS
169
+
170
+ Table 1 gives an example of the usage of weights and thresholds for a problem of scheduling jobs, organized as workflows of elementary operations, to heterogeneous equipment, comparable to the one described in [6]. All operations can be assigned to alternatively usable equipment at different costs and processing times. The task is to produce schedules in which job processing is as cheap and as fast as possible, while each job observes a given budget and a given due date. The rate of utilization of the equipment should be as high as possible and the total makespan of all jobs as low as possible. Additionally, the schedules must be updated frequently, because, e.g., new jobs arrive or waiting jobs are cancelled. As described in Section 2.2, all objectives are normalized according to Equation (3). The required limits are obtained as follows: the bounds of job time and costs are calculated by determining the critical path of the workflow of that job and by assigning the fastest/slowest or costliest/cheapest equipment suited for the operations of a job. The user-given due dates and cost limits are checked against these bounds so that the subsequent scheduling is based on goals which are achievable in principle. The lower bound of the makespan is the duration of the longest-lasting job using the fastest equipment, and the upper bound is the sum of the durations of all jobs using the slowest equipment divided by the smallest number of alternatively usable pieces of equipment. As the rate of utilization already yields a value to be maximized between zero and one, there is no need for bounds.
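+ 
+ The makespan bound calculation just described can be sketched as follows; the job data structure and field names are hypothetical stand-ins for the results of the critical-path analysis, not taken from [6]:
+ 
+ ```python
+ # Illustrative makespan bounds (field names are assumptions, not from [6]).
+ def makespan_bounds(jobs, min_alternative_units):
+     """jobs: per-job critical-path durations on the fastest and the slowest
+     suitable equipment; min_alternative_units: the smallest number of
+     alternatively usable pieces of equipment."""
+     lower = max(job["fastest_duration"] for job in jobs)
+     upper = sum(job["slowest_duration"] for job in jobs) / min_alternative_units
+     return lower, upper
+ ```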
171
+
172
+ Table 1. Example of the use of the cascaded weighted sum (CWS) and the effect of objective group weights. The objectives with the highest priority are always active and contribute to the weighted sum; they are shown in the first two rows of the table.
173
+
174
+ | Priority | Objective        | Weight [%] | Threshold $\varepsilon_i$ |
+ |----------|------------------|------------|---------------------------|
+ | 1        | job time         | 30         | 0.4                       |
+ | 1        | job costs        | 40         | 0.25                      |
+ | 2        | makespan         | 20         | -                         |
+ | 2        | utilization rate | 10         | -                         |
175
+
176
+ Job time and costs are the most conflicting objectives, while short processing times support a short makespan and tend to increase the utilization rate. Faster equipment typically is more expensive than slower equipment, and the ratio between the costs and the duration of the use of equipment plays an important role. Thus, lower costs require the usage of equipment with a lower cost-to-duration ratio. This tends to increase the duration and to decrease the workload of less cost-effective equipment. Additionally, shorter job times are also rewarded to some extent by the makespan objective and possibly by the utilization rate, but costs are not. These considerations are supported by the observation that processing times are easier to reduce than costs. Thus, job time and costs should compete from the beginning and both should receive a larger portion of the weights. Therefore, they are grouped together at the highest priority so that they always contribute to the weighted sum, as shown in Table 1. As a rule of thumb, further objectives, which are less conflicting with each other and with those of higher priorities, can go into the same group. After the priority and grouping structure has been determined based on experience and considerations about the relationships between the objectives, appropriate weights and thresholds must be chosen.
177
+ 
178
+ 
179
+
180
+ Based on these considerations and a representative scheduling task, a schedule can be produced using Pareto optimality in order to identify the region of interest, from which thresholds and weights can be derived. In the given example, this can be done in a first step for a reduced set of objectives by omitting a less conflicting one, e.g., the utilization rate. The Pareto fronts of the two remaining objectives can be plotted for good makespans. From them, the thresholds and the ratio of the weights between the two objectives can be derived; see Figure 2 for the relationship between the Pareto front and the weights and Figure 5 for the usage of thresholds. This results in a ratio of 3:4 between the averaged job times and costs in the given example. The threshold values $\varepsilon_{i}$ are interpreted as fractions of the available scale between $\min(f_i)$ and $\max(f_i)$, as shown in Equation (8):
181
+
182
+ $$
183
+ f_{i,\varepsilon} = \min(f_i) + \varepsilon_i \left( \max(f_i) - \min(f_i) \right) \tag{8}
184
+ $$
185
+
186
+ In this case, the objectives of the next group are activated for schedules where the costs are, on average, below $75\%$ of their available scales and the finishing times are, on average, below $60\%$ of their spendable time frames. This approach can be repeated for the rest of the objectives, or the remaining weights can be assigned according to experience. For the given example, it was decided based on previous observations that about $70\%$ of the weight should go to the first two objectives and that the rest should go mainly to the makespan, as its reduction tends to increase the utilization rate. Table 1 shows the resulting weights. The suitability of the settings can be verified by generating a Pareto-optimal schedule using all objectives and comparing it with the results obtained when using the CWS instead. Depending on the task at hand and the first setting of weights and thresholds, this may result in an iterative refinement.
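+ 
+ As a worked instance of Equation (8) (the numeric scales below are made up for illustration; in the example above they would come from the critical-path analysis):
+ 
+ ```python
+ # Equation (8): translate a relative threshold into the raw objective scale.
+ def raw_threshold(f_min, f_max, eps):
+     return f_min + eps * (f_max - f_min)
+ 
+ print(raw_threshold(10.0, 60.0, 0.4))     # job time,  eps = 0.40 -> 30.0
+ print(raw_threshold(100.0, 500.0, 0.25))  # job costs, eps = 0.25 -> 200.0
+ ```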
187
+
188
+ To sum up, weights and thresholds are derived from experience and/or from previous estimations of the Pareto front of a representative task. In Section 4 we will discuss the range of meaningful applications of the CWS.
189
+
190
+ # 3.4. The Effect of the CWS on the Search
191
+
192
+ The effect of the cascaded assessment on population-based search procedures like EAs, PSO, or ACO is illustrated in Figure 6 for two objectives and the example of the feasible objective region used in Figure 2. Based on previous knowledge, the sources of which are discussed in Section 4.3, a region of interest is defined for every objective group and the weights are set accordingly. Additionally, threshold values $\varepsilon_{i}$ are defined for all objectives but those of the group with the lowest priority. Care must be taken that the accessibility of the region of interest is not impaired by these thresholds. In the example of Figure 6, objective two has a higher priority than objective one and a threshold value $\varepsilon_{2}$. At the beginning of the search, a quality gain can be achieved for upward moves only. For those solutions that have surpassed $\varepsilon_{2}$, the result of the first objective is added according to the weights, changing the average movement direction towards the tangent and the region of interest. If the search runs long enough to come more or less close to convergence, most solutions will be found in the region of interest. The best of them will be at the intersection of the tangent and the Pareto front, or at least close to it. Especially for EAs, which preserve genotypic diversity to some extent, good but suboptimal solutions close to the best one, covering at least parts of the area of interest, are very likely to be found. This means that a run is stopped when stagnation occurs over a longer period of time and not when the entire population has (nearly) converged.
193
+ 
194
+ 
195
+
196
+ ![](images/bc4466dbc4d4b1cde5c14e6ac3a6ad1de60037ecbe7155acc7a896c415e14c49.jpg)
197
+ Figure 6. Cascaded weighted sum for $k = 2$ with objective two having a higher priority than objective one. Thus, solutions in the hatched area are rated according to $f_{2}$ only and will find the largest quality gain in upward moves (red arrow). This changes if $\varepsilon_{2}$ is exceeded and $f_{1}$ starts to contribute to the resulting sum, as shown by the black arrows.
198
+
199
+ An example of a Pareto front with a non-convex section is shown in Figure 7, using the objective region and the threshold value of Figure 5. As the part between F2 and the rightmost end of the Pareto front is effectively excluded, it is now possible to obtain solutions in the marked area of interest. This would not be the case for the original weighted sum. On the other hand, if the region of interest were located between the magenta dot and F2, most of the Pareto front would still be missed.
200
+
201
+ ![](images/1dbfe304cae12f98c4baf10cd2f1ca0452a93817e33d3f6c72e1477bc15bd78b.jpg)
202
+ Figure 7. Cascaded weighted sum and region of interest for the example with a non-convex Pareto front given in Figure 5.
203
+
204
+ Normalization according to Equation (3) or (4) is done linearly with the same slope over the entire interval $[\min(f_i), \max(f_i)]$. If previous knowledge is available for defining an area of interest on the Pareto front, corresponding subintervals are also known for the individual objectives. This information can be used to tune the normalization function, as shown by way of example in Figure 8. More normalization functions can be found in [15].
205
+
206
+ ![](images/89768f1ec4562c6ae09740de3232f4a36b9f8c4dc34b67b5c287ee832d694a25.jpg)
207
+ Figure 8. Tuning the normalization of Equation (3) (blue straight line) to the interval of interest of one objective $f_{i}$. The slope outside of this interval is reduced drastically to allow for a steep increase inside, as shown by the green graph.
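+ 
+ A piecewise linear variant of such a tuned normalization might look as follows; the concrete split of the output range is our own assumption, as Figure 8 only prescribes a small slope outside and a steep one inside the interval of interest:
+ 
+ ```python
+ # Piecewise linear normalization concentrating most of [0, 1] on the interval
+ # of interest [lo, hi]; illustrative counterpart of the green graph in Figure 8.
+ def tuned_norm(f, f_min, f_max, lo, hi, inner_share=0.8):
+     """Assumes f_min < lo < hi < f_max; inner_share is the portion of the
+     output range spent inside the interval of interest."""
+     outer = (1.0 - inner_share) / 2.0  # output range left for each outer part
+     if f <= lo:
+         return outer * (f - f_min) / (lo - f_min)
+     if f >= hi:
+         return outer + inner_share + outer * (f - hi) / (f_max - hi)
+     return outer + inner_share * (f - lo) / (hi - lo)
+ ```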
208
+
209
+ # 3.5. Summary
210
+
211
+ The grouping of the CWS reduces the number of objectives considered at once and makes weighting easier. The CWS integrates objective thresholds comparable to those of the $\varepsilon$-constrained method; these are easier to handle for experts in the application field than weights, which now play a relatively minor role. The CWS allows for obtaining parts of a non-convex Pareto front that are unreachable for the original weighted sum, although it is still possible that some of these parts remain unattainable. These arguments underline the superiority of the CWS over the pure weighted sum. However, no aggregation method alone is suited as an a priori approach, as all of them require some previous knowledge to be parameterized.
212
+
213
+ # 4. Cascaded Weighted Sum and Its Field of Application
214
+
215
+ Optimization problems can be classified according to different criteria, such as the number of decision variables or objectives, the nature of the search space, where the number of (expected) suboptima or continuity plays an important role, or the type of project to which the optimization task belongs. The latter is often ignored in the scientific literature, although it plays a significant role in real-world applications. Thus, we will take a closer look at that issue in the next sections. We will also consider the number of objectives, as both properties are well suited for comparing the two assessment methods.
216
+
217
+ # 4.1. Number of Objectives
218
+
219
+ As already discussed in Section 3.3, objectives can conflict to a greater or lesser extent. We consider here only those objectives the decision maker regards as conflicting in the sense that they shall be part of Pareto optimality. The number of these objectives plays an important role for the practical applicability of the Pareto method. The Pareto front of up to three objectives can be visualized easily. For up to four or five objectives, decision maps, polyhedral approximation, or other visualization techniques can be used, see [20]. Interactive visualization techniques may support perception for more than three objectives, "but this requires more cognitive effort if the number of objectives increases", as Lotov and Miettinen summarize their chapter on the visualization of the Pareto frontier in [20]. Thus, we can conclude that from five criteria onward, and in particular for more, the perception and the comprehension of the Pareto front become increasingly difficult and turn into a business for experienced experts.
220
+
221
+ ![](images/0450b4aba2b2c319bfbc17b937ade1132c3fcb8c2077e942162cf78b7f215051.jpg)
222
+ Figure 9. The number of required data points (Pareto-optimal solutions) of an approximation of a Pareto front increases exponentially with a growing number of conflicting objectives. The green line is based on a resolution of 7 data points per additional objective (axis), while the blue one uses only 5.
223
+
224
+ Another question is the effort needed to determine the Pareto front. For an acceptable visualization of the Pareto front, approximations like the one described in [21] may be used. Depending on the desired quality of approximation, a number of 5 to 7 (in general, $s$) Pareto-optimal solutions may be sufficient for two objectives. Assuming that the same quality of interpolation and granularity of support points shall be maintained when further objectives are added, $s^{(k - 1)}$ support points are required for the interpolation of the hyperplane of the Pareto front, provided that all areas shall be examined. With interactive approaches, this can be reduced to some extent, but at the risk of missing promising regions. Figure 9 illustrates this growth of required Pareto-optimal solutions. For 5 objectives, for example, 625 solutions are required to interpolate the entire hyperplane with 5 data points per axis; for a better interpolation quality obtained from 7 data points per axis, 2401 solutions are needed. It should be noted that every data point requires several evaluations of solutions according to the optimization or approximation procedure used. Depending on the application, evaluations may be based on time-consuming simulations, each lasting several seconds or minutes. This clearly limits the practical applicability of the Pareto method for growing numbers of conflicting objectives. One remedy is to reduce the number of objectives by aggregating less conflicting objectives into one.
225
+ 
226
+ 
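+ The growth shown in Figure 9 can be reproduced directly:
+ 
+ ```python
+ # Support points s**(k-1) for k objectives and s data points per axis (Figure 9).
+ for k in range(2, 8):
+     print(k, 5 ** (k - 1), 7 ** (k - 1))  # k = 5 -> 625 and 2401, as in the text
+ ```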
227
+
228
+ # 4.2. Classification of Application Scenarios and Examples
229
+
230
+ Optimization projects can be classified into three different types:
231
+
232
+ I. The nonrecurring type, which is performed once with little or no prior knowledge of, e.g., the impact and relevance of decision variables or the behavior of objectives. This type requires many decisions regarding, e.g., the number and ranges of decision variables, the number and kind of objectives and restrictions, and more.
233
+ II. The extended nonrecurring project, where some variants of the first optimization task are handled as well. Frequently, the modifications of the original project are motivated by the experience gained in the first optimization runs. As in the first type, decisions are usually made by humans.
234
+ III. The recurring type, usually based on experience gained from a predecessor project and frequently part of an automated process with no or only minor human interaction.
235
+
236
+ Examples of Types I and II are design optimization tasks like the design of micro-optical devices as described in [22], or problems from such a challenging field as aerodynamic design optimization, see e.g., [23]. A typical example of Type III is the task of scheduling jobs to be processed on a computational grid, as introduced in the last section and described in detail in [6]. Normally, nobody is interested in the details of the actual schedule, which will usually be replaced soon by a new one due to a replanning event like a new job to be planned or the introduction of new resources. Another example is the planning of collision-free paths for movements of industrial robots, as described in detail and for different industrial robot types in [14,15,24,25]. This example also shows that in some cases human judgment is possible in addition to a pure consideration of the achieved objective figures of a generated robot move path: the decision maker can take a look at the resulting movement using a robot simulator or the real device. Assessing this movement is much more impressive and illustrative than reading objective figures. On the other hand, such a well-fitting visualization is not always available.
237
+
238
+ # 4.3. Comparison of Pareto Optimization and CWS in Different Application Scenarios
239
+
240
+ # 4.3.1. Individual Optimization Project
241
+
242
+ For the first project type, the ranges of possibly achievable objective values usually are not known in advance. In this case, an estimation of them can be obtained from a Pareto optimization. From these data and the resulting Pareto front, a human decision maker can opt for additional and modified optimization or select the final solution. This type of optimization project clearly belongs to the domain of Pareto-based optimization.
243
+
244
+ # 4.3.2. Optimization Project with Some Task Variants
245
+
246
+ In many cases, the above statement also applies to the second project type, as experience is still limited and there must be good reasons to change the assessment method. Such reasons may be a number of more than five objectives or the fact that one or a few areas of interest can be identified. In such cases, the computational effort can be reduced significantly. As mentioned before, the assessment of one solution in real-world applications is frequently done by a simulation run, the duration of which strongly depends on the application at hand; one simulation may require seconds or even minutes and more. Then, the reduction of the number of evaluations is critical, and an early concentration on the area of interest by using the CWS can be essential for the success of the project. On the other hand, the impact of the optimization is another important and application-dependent issue: if the savings expected from optimization justify the computational effort, Pareto optimization should be used until the areas of interest are reliably identified. Based on that, these areas can be explored in greater detail by optimization runs using the CWS, as illustrated in Figure 10. These considerations show that, according to the project conditions, both methods may complement each other.
247
+
248
+ ![](images/3f12a0652514fdf41070930abd560c4cb3e319f17d304332fb21977608bbf040.jpg)
249
+ Figure 10. Both diagrams show a sample population of an advanced search shortly before convergence. The CWS concentrates the best individuals (black dots) more or less on the region of interest, as shown in the left diagram. In contrast to that, Pareto-based optimization procedures attempt to distribute their solutions along the Pareto front as well as they can, see the right diagram. Thus, fewer solutions will be found in the area of interest.
250
+
251
+ # 4.3.3. Repeated Optimization, also as Part of an Automated Process
252
+
253
+ Another domain of CWS-based optimization or planning is project type III, where the same task is executed repeatedly with minor modifications and, thus, known areas of interest. If a major change occurs, the area of interest can be adapted by using Pareto optimization. A typical example is the scheduling of jobs to the heterogeneous resources of a computational grid, as was introduced before and described in detail in [6]. It is a permanent replanning process, because events demanding a change of the schedule may occur long before a plan is completed. Examples are the introduction of new jobs or new resources, unexpected deactivations of resources, changes of the cost profiles of resources, early completion of jobs, or the like. As described in [6], five objectives are optimized and four penalty functions are used to handle the restrictions. Because planning time is limited and thousands of jobs and hundreds of resources must be handled, the planning must be stopped (long) before the Evolutionary Algorithm used converges. Thus, it is important to explore the region of interest as well as possible, see Figure 10. Additionally, there is no human expert to check the results several times per hour. For this automated scheduling process, the determination of the Pareto front makes no sense and the CWS is a meaningful alternative. These considerations also apply to many other scheduling tasks like the ones described in [15,19].
254
+ 
255
+ 
256
+
257
+ Another example already mentioned is the planning of collision-free movement paths for an industrial robot [14,15,24,25]. Depending on the task at hand, there are four or five objectives and at least one penalty function to handle collisions. As robot movements can be simulated and visualized, the results are checked by a human expert mostly on the level of robot movements rather than of objective figures. As the areas of interest usually are also known in advance and new solutions should be generated fast, the CWS is well suited here too, for the same reasons as with the previous task.
258
+
259
+ # 5. Conclusions
260
+
261
+ In Section 4.1 it was shown that the number of solutions required to approximate a Pareto front increases exponentially with a growing number of conflicting objectives. As illustrated in Figure 9, the number of evaluations increases considerably for more than five objectives. This limits the applicability of the Pareto approach for real-world applications, which frequently require time-consuming evaluations, especially when based on simulation.
262
+
263
+ We have introduced the cascaded weighted sum (CWS), which can be described roughly as a combination of the weighted sum and the $\varepsilon$-constrained method. The major drawback of the pure weighted sum, the inaccessibility of parts of the Pareto front in non-convex cases, can be reduced to some extent by the CWS, see Section 3 and Figure 7. Like the pure weighted sum, the CWS is not an a priori method. The major advantage of the CWS is its ability to concentrate solutions on the region of interest with less computational effort than Pareto optimization. This difference in effort grows immensely with an increasing number of objectives, in particular for more than five. The region of interest can be the result of previous experience or knowledge, of a first Pareto-based optimization, or of a combination thereof.
264
+
265
+ In Section 4.2, optimization projects were divided into three types: the individual project type, projects treating some variants of the task, and the type of repeated optimization of the same task with more or less small variations. The unquestioned domain of Pareto optimization is the first type of optimization project. For the second type and five or more objectives, a combination of both methods can be advantageous, as was described in Section 4.3.2. For the third project type of repeated optimization of task variants with no or only minor human interaction, the Pareto front is not required, as the regions of interest are already known from previous solutions or an initial Pareto optimization. The concentration of the CWS on that region is beneficial, as the computational effort can be reduced significantly. This is of particular importance in those cases where a fast solution is required or the number of evaluations is limited due to long run times.
266
+
267
+ Thus, we can conclude that both methods have their place and their field of application. Additionally, they can complement each other.
268
+
269
+ # Acknowledgments
270
+
271
+ We acknowledge support by the Deutsche Forschungsgemeinschaft and Open Access Publishing Fund of Karlsruhe Institute of Technology.
272
+
273
+ # Conflicts of Interest
274
+
275
+ The authors declare no conflict of interest.
276
+
277
+ # References
278
+
279
+ 1. Pareto, V. Cours d'Économie Politique, (in French); F. Rouge: Lausanne, Switzerland, 1896.
280
+ 2. Hoffmeister, F.; Bäck, T. Genetic Algorithms and Evolution Strategies: Similarities and Differences; Technical Report SYS-1/92; FB Informatik, University of Dortmund: Dortmund, Germany, 1992.
281
+ 3. Multiobjective Optimization: Interactive and Evolutionary Approaches; Branke, J., Deb, K., Miettinen, K., Słowiński, R., Eds.; Lecture notes in computer science 5252; Springer: Berlin, Germany, 2008.
282
+ 4. Deb, K. Introduction to evolutionary multiobjective optimization. In Multiobjective Optimization: Interactive and Evolutionary Approaches; Branke, J., Deb, K., Miettinen, K., Słowiński, R., Eds.; Lecture notes in computer science 5252; Springer: Berlin, Germany, 2008; pp. 58-96.
283
+ 5. Miettinen, K. Nonlinear Multiobjective Optimization; International series in operations research & management science 12; Kluwer Academic Publishers: Boston, MA, USA, 1999.
284
+ 6. Jakob, W.; Strack, S.; Quinte, A.; Bengel, G.; Stucky, K.-U.; Süß, W. Fast rescheduling of multiple workflows to constrained heterogeneous resources using multi-criteria memetic computing. Algorithms 2013, 2, 245-277.
285
+ 7. Haimes, Y.Y.; Lasdon, L.S.; Wismer, D.A. On a bicriterion formulation of the problems of integrated system identification and system optimization. IEEE Trans. Syst. Man Cybern. 1971, 3, 296-297.
286
+ 8. Osyczka, A. Multicriterion Optimization in Engineering with FORTRAN Programs; Ellis Horwood series in mechanical engineering; E. Horwood: London, UK, 1984.
287
+ 9. Miettinen, K. Introduction to multiobjective optimization: Noninteractive approaches. In Multiobjective Optimization: Interactive and Evolutionary Approaches; Branke, J., Deb, K., Miettinen, K., Słowiński, R., Eds.; Lecture notes in computer science 5252; Springer: Berlin, Germany, 2008; pp. 1-26.
288
+ 10. Deb, K.; Pratap, A.; Agarwal, S.; Meyarivan, T. A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Trans. Evol. Comput. 2002, 2, 182-197.
289
+ 11. Zitzler, E.; Laumanns, M.; Thiele, L. SPEA2: Improving the strength Pareto evolutionary algorithm for multiobjective optimization. In Evolutionary Methods for Design Optimization and Control with Applications to Industrial Problems, Proceedings of the EUROGEN'2001 Conference, Athens, Greece, 19-21 September 2001; Giannakoglou, K.C., Tsahalis, D.T., Périaux, J., Papailiou, K.D., Fogarty, T., Eds.; International Center for Numerical Methods in Engineering: Athens, Greece, 2001; pp. 95-100.
292
+ 12. Beume, N.; Naujoks, B.; Emmerich, M. SMS-EMOA: Multiobjective selection based on dominated hypervolume. Eur. J. Oper. Res. 2007, 3, 1653–1669.
293
+ 13. Blume, C. GLEAM—A system for simulated "intuitive learning". In Parallel Problem Solving from Nature: Proceedings of the 1st Workshop, Dortmund, Germany, 1-3 October 1990; Schwefel, H.-P., Männer, R., Eds.; Lecture notes in computer science 496; Springer: Berlin, Germany, 1991; pp. 48-54.
294
+ 14. Blume, C.; Jakob, W.; Krisch, S. Robot trajectory planning with collision avoidance using genetic algorithms and simulation. In Proceedings of the 25th International Symposium on Industrial Robots (ISIR), Hanover, Germany, 25-27 April, 1994; pp. 169-175.
295
+ 15. Blume, C.; Jakob, W. GLEAM—General Learning Evolutionary Algorithm and Method. Ein evolutionärer Algorithmus und seine Anwendungen, (in German); Schriftenreihe des Instituts für Angewandte Informatik, Automatisierungstechnik am Karlsruhe Institut für Technologie 32; KIT Scientific Publishing: Karlsruhe, Germany, 2009.
296
+ 16. Gorges-Schleuter, M. Explicit parallelism of genetic algorithms through population structures. In Proceedings of the 1st Workshop on Parallel Problem Solving from Nature (PPSN I), Dortmund, Germany, 1-3 October 1990; Schwefel, H.-P., Männer, R., Eds.; Lecture notes in computer science 496; Springer: Berlin, Germany, 1991; pp. 150-159.
297
+ 17. Sarma, J.; de Jong, K. An analysis of the effects of neighborhood size and shape on local selection algorithms. In Proceedings of the 4th International Conference on Parallel Problem Solving from Nature (PPSN IV), Berlin, Germany, 22-26 September 1996; Voigt, H.-M., Ebeling, W., Rechenberg, I., Schwefel, H.-P., Eds.; Lecture notes in computer science 1141; Springer: Berlin, Germany, 1996; pp. 236-244.
298
+ 18. Nguyen, Q.H.; Ong, Y.-S.; Lim, M.H.; Krasnogor, N. Adaptive cellular memetic algorithms. Evol. Comput. 2009, 17, 231-256.
299
+ 19. Jakob, W. A general cost-benefit-based adaptation framework for multimeme algorithms. Memet. Comput. 2010, 3, 201-218.
300
+ 20. Lotov, A.V.; Miettinen, K. Visualizing the Pareto frontier. In Multiobjective Optimization: Interactive and Evolutionary Approaches; Branke, J., Deb, K., Miettinen, K., Słowiński, R., Eds.; Lecture notes in computer science 5252; Springer: Berlin, Germany, 2008; pp. 213-243.
301
+ 21. Klamroth, K.; Tind, J.; Wiecek, M.M. Unbiased approximation in multicriteria optimization. Math. Method Oper. Res. 2003, 3, 413-437.
302
+ 22. Jakob, W.; Gorges-Schleuter, M.; Sieber, I.; Süß, W.; Eggert, H. Solving a highly multi-modal design optimization problem using the extended genetic algorithm GLEAM. In Computer Aided Optimum Design of Structures VI: Conf. Proc. OPTI 99; Hernandez, S., Kassab, A.J., Brebbia, C.A., Eds.; WIT Press: Southampton, UK, 1999; pp. 205-214.
303
+
304
+ 23. Stewart, T.; Bandte, O.; Braun, H.; Chakraborti, N.; Ehrgott, M.; Göbelt, M.; Jin, Y.; Nakayama, H.; Poles, S.; di Stefano, D. Real-world applications of multiobjective optimization. In Multiobjective Optimization: Interactive and Evolutionary Approaches; Branke, J., Deb, K., Miettinen, K., Słowiński, R., Eds.; Lecture notes in computer science 5252; Springer: Berlin, Germany, 2008; pp. 285-327.
305
+ 24. Blume, C. Automatic Generation of Collision Free Moves for the ABB Industrial Robot Control. In Proceedings of the 1997 First International Conference on Knowledge-Based Intelligent Electronic Systems (KES '97), Adelaide, SA, Australia, 21-23 May 1997; Volume 2, pp. 672-683.
306
+ 25. Blume, C. Optimized Collision Free Robot Move Statement Generation by the Evolutionary Software GLEAM. In Real World Applications of Evolutionary Computing: Proceedings; Cagnoni, S., Ed.; Lecture notes in computer science 1803; Springer: Berlin, Germany, 2000; pp. 327-338.
307
+ © 2014 by the authors; licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/3.0/).
2203.02xxx/2203.02697/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:09d5347591ab0fd367ff009141c216b662bfe4c08fcd3753afb991d9d44bb7b7
3
+ size 321706
2203.02xxx/2203.02697/layout.json ADDED
The diff for this file is too large to render. See raw diff
 
2203.02xxx/2203.02700/3d69ca7f-39a5-4c99-b9df-bcb921fe9d04_content_list.json ADDED
@@ -0,0 +1,1543 @@
1
+ [
2
+ {
3
+ "type": "text",
4
+ "text": "RACE: Retrieval-Augmented Commit Message Generation",
5
+ "text_level": 1,
6
+ "bbox": [
7
+ 193,
8
+ 79,
9
+ 803,
10
+ 99
11
+ ],
12
+ "page_idx": 0
13
+ },
14
+ {
15
+ "type": "text",
16
+ "text": "Ensheng Shi $^{a}$ Yanlin Wang $^{b,\\S,\\dagger}$ Wei Tao $^{c}$ Lun Du $^{d}$",
17
+ "bbox": [
18
+ 292,
19
+ 105,
20
+ 712,
21
+ 123
22
+ ],
23
+ "page_idx": 0
24
+ },
25
+ {
26
+ "type": "text",
27
+ "text": "Hongyu Zhang<sup>e</sup> Shi Hand Dongmei Zhang<sup>d</sup> Hongbin Sun<sup>a,§</sup>",
28
+ "bbox": [
29
+ 247,
30
+ 123,
31
+ 754,
32
+ 140
33
+ ],
34
+ "page_idx": 0
35
+ },
36
+ {
37
+ "type": "text",
38
+ "text": "$^{a}$ Xi'an Jiaotong University $^{b}$ School of Software Engineering, Sun Yat-sen University",
39
+ "bbox": [
40
+ 149,
41
+ 140,
42
+ 852,
43
+ 156
44
+ ],
45
+ "page_idx": 0
46
+ },
47
+ {
48
+ "type": "text",
49
+ "text": "$^{c}$ Fudan University $^{d}$ Microsoft Research $^{e}$ The University of Newcastle",
50
+ "bbox": [
51
+ 200,
52
+ 156,
53
+ 801,
54
+ 173
55
+ ],
56
+ "page_idx": 0
57
+ },
58
+ {
59
+ "type": "text",
60
+ "text": "s1530129650@stu.xjtu.edu.cn, hsun@mail.xjtu.edu.cn",
61
+ "bbox": [
62
+ 250,
63
+ 175,
64
+ 752,
65
+ 189
66
+ ],
67
+ "page_idx": 0
68
+ },
69
+ {
70
+ "type": "text",
71
+ "text": "wangylin36@mail.sysu.edu.cn,wtao18@fudan.edu.cn",
72
+ "bbox": [
73
+ 257,
74
+ 191,
75
+ 742,
76
+ 206
77
+ ],
78
+ "page_idx": 0
79
+ },
80
+ {
81
+ "type": "text",
82
+ "text": "{lun.du, shihan, dongmeiz}@microsoft.com",
83
+ "bbox": [
84
+ 300,
85
+ 208,
86
+ 704,
87
+ 222
88
+ ],
89
+ "page_idx": 0
90
+ },
91
+ {
92
+ "type": "text",
93
+ "text": "hongyu.zhang@newcastle.edu.au",
94
+ "bbox": [
95
+ 354,
96
+ 224,
97
+ 648,
98
+ 239
99
+ ],
100
+ "page_idx": 0
101
+ },
102
+ {
103
+ "type": "text",
104
+ "text": "Abstract",
105
+ "text_level": 1,
106
+ "bbox": [
107
+ 260,
108
+ 252,
109
+ 339,
110
+ 266
111
+ ],
112
+ "page_idx": 0
113
+ },
114
+ {
115
+ "type": "text",
116
+ "text": "Commit messages are important for software development and maintenance. Many neural network-based approaches have been proposed and shown promising results on automatic commit message generation. However, the generated commit messages could be repetitive or redundant. In this paper, we propose RACE, a new retrieval-augmented neural commit message generation method, which treats the retrieved similar commit as an exemplar and leverages it to generate an accurate commit message. As the retrieved commit message may not always accurately describe the content/intent of the current code diff, we also propose an exemplar guider, which learns the semantic similarity between the retrieved and current code diff and then guides the generation of commit message based on the similarity. We conduct extensive experiments on a large public dataset with five programming languages. Experimental results show that RACE can outperform all baselines. Furthermore, RACE can boost the performance of existing Seq2Seq models in commit message generation. Our data and source code are available at https://github.com/DeepSoftwareAnalytics/RACE.",
117
+ "bbox": [
118
+ 144,
119
+ 279,
120
+ 460,
121
+ 661
122
+ ],
123
+ "page_idx": 0
124
+ },
125
+ {
126
+ "type": "text",
127
+ "text": "1 Introduction",
128
+ "text_level": 1,
129
+ "bbox": [
130
+ 114,
131
+ 674,
132
+ 258,
133
+ 688
134
+ ],
135
+ "page_idx": 0
136
+ },
137
+ {
138
+ "type": "text",
139
+ "text": "In software development and maintenance, source code is frequently changed. In practice, code changes are often documented as natural language commit messages, which summarize what (content) the code changes are or why (intent) the code is changed (Buse and Weimer, 2010; Cortes-Coy et al., 2014). High-quality commit messages are essential to help developers understand the evolution of software without diving into implementation details, which can save a large amount of",
140
+ "bbox": [
141
+ 112,
142
+ 699,
143
+ 489,
144
+ 859
145
+ ],
146
+ "page_idx": 0
147
+ },
148
+ {
149
+ "type": "text",
150
+ "text": "time and effort in software development and maintenance (Dias et al., 2015; Barnett et al., 2015). However, it is difficult to write high-quality commit messages due to lack of time, clear motivation, or experienced skills. Even for seasoned developers, it still poses a considerable amount of extra workload to write a concise and informative commit message for massive code changes (Nie et al., 2021). It is also reported that around $14\\%$ of commit messages over 23,000 projects in SourceForge are left empty (Dyer et al., 2013). Thus, automatically generating commit messages becomes an important task.",
151
+ "bbox": [
152
+ 507,
153
+ 253,
154
+ 884,
155
+ 462
156
+ ],
157
+ "page_idx": 0
158
+ },
159
+ {
160
+ "type": "text",
161
+ "text": "Over the years, many approaches have been proposed to automatically generate commit messages. Early studies (Shen et al., 2016; Cortes-Coy et al., 2014) are mainly based on predefined rules or templates, which may not cover all situations or comprehensively infer the intentions behind code changes. Later, some studies (Liu et al., 2018; Huang et al., 2017, 2020) adopt information retrieval (IR) techniques to reuse commit messages of similar code changes. They can take advantage of similar examples, but the reused commit messages might not correctly describe the content/intent of the current code change. Recently, some Seq2Seq-based neural network models (Loyola et al., 2017; Jiang et al., 2017; Xu et al., 2019; Liu et al., 2019; Jung, 2021) have been proposed to understand code diffs and generate the high-quality commit messages. These approaches show promising performance, but they tend to generate high-frequency and repetitive tokens and the generated commit messages have the problem of insufficient information and poor readability (Wang et al., 2021a; Liu et al., 2018). Some studies (Liu et al., 2020; Wang et al., 2021a) also explore the combination of neural-based and IR-based techniques. Liu et al. (2020) propose an approach to rank the retrieved commit message (obtained by a simple IR-based model) and the generated commit message (ob-",
162
+ "bbox": [
163
+ 507,
164
+ 468,
165
+ 884,
166
+ 919
167
+ ],
168
+ "page_idx": 0
169
+ },
170
+ {
171
+ "type": "aside_text",
172
+ "text": "arXiv:2203.02700v3 [cs.SE] 22 Oct 2022",
173
+ "bbox": [
174
+ 21,
175
+ 312,
176
+ 60,
177
+ 725
178
+ ],
179
+ "page_idx": 0
180
+ },
181
+ {
182
+ "type": "page_footnote",
183
+ "text": "$^{\\S}$ Yanlin Wang and Hongbin Sun are the corresponding authors.",
184
+ "bbox": [
185
+ 112,
186
+ 866,
187
+ 487,
188
+ 891
189
+ ],
190
+ "page_idx": 0
191
+ },
192
+ {
193
+ "type": "page_footnote",
194
+ "text": "$^{\\dagger}$ Work done during the author's employment at Microsoft Research Asia",
195
+ "bbox": [
196
+ 112,
197
+ 892,
198
+ 485,
199
+ 917
200
+ ],
201
+ "page_idx": 0
202
+ },
203
+ {
204
+ "type": "text",
205
+ "text": "tained by a neural network model). Wang et al. (2021a) propose to use the similar code diff as auxiliary information in the inference stage, while the model is not trained to learn how to effectively utilize the information of retrieval results. Therefore, both of them fail to take advantage of the information of retrieved similar results well.",
206
+ "bbox": [
207
+ 112,
208
+ 84,
209
+ 489,
210
+ 197
211
+ ],
212
+ "page_idx": 1
213
+ },
214
+ {
215
+ "type": "text",
216
+ "text": "In this paper, we propose a novel model RACE (Retrieval-Augmented Commit mEssay generation), which retrieves a similar commit message as an exemplar, guides the neural model to learn the content of the code diff and the intent behind the code diff, and generates the readable and informative commit message. The key idea of our approach is retrieval and augmentation. Specifically, we first train a code diff encoder to learn the semantics of code diffs and encode the code diff into high-dimensional semantic space. Then, we retrieve the semantically similar code diff paired with the commit message on a large parallel corpus based on the similarity measured by vectors' distance. Next, we treat the similar commit message as an exemplar and leverage it to guide the neural-based models to generate an accurate commit message. However, the retrieved commit messages may not accurately describe the content/intent of current code diffs and may even contain wrong or irrelevant information. To avoid the retrieved samples dominating the processing of commit message generation, we propose an exemplar guider, which first learns the semantic similarity between the retrieved and current code diff and then leverages the information of the exemplar based on the learned similarity to guide the commit message generation.",
217
+ "bbox": [
218
+ 115,
219
+ 204,
220
+ 490,
221
+ 639
222
+ ],
223
+ "page_idx": 1
224
+ },
225
+ {
226
+ "type": "text",
227
+ "text": "To evaluate the effectiveness of RACE, we conduct experiments on a large-scale dataset MCMD (Tao et al., 2021) with five programming language (Java, C#, C++, Python and JavaScript) and compare RACE with 11 state-of-the-art approaches. Experimental results show that: (1) RACE significantly outperforms existing state-of-the-art approaches in terms of four metrics (BLUE, Meteor, Rouge-L and Cider) on the commit message generation. (2) RACE can boost the performance of existing Seq2Seq models in commit message generation. For example, it can improve the performance of NMTGen (Loyola et al., 2017), CommitBERT (Jung, 2021), CodeT5-small (Wang et al., 2021b) and CodeT5-base (Wang et al., 2021b) by $43\\%$ , $11\\%$ , $15\\%$ , and $16\\%$ on average in terms of BLEU, respectively. In addition,",
228
+ "bbox": [
229
+ 112,
230
+ 645,
231
+ 490,
232
+ 920
233
+ ],
234
+ "page_idx": 1
235
+ },
236
+ {
237
+ "type": "text",
238
+ "text": "we also conduct human evaluation to confirm the effectiveness of RACE.",
239
+ "bbox": [
240
+ 507,
241
+ 84,
242
+ 880,
243
+ 115
244
+ ],
245
+ "page_idx": 1
246
+ },
247
+ {
248
+ "type": "text",
249
+ "text": "We summarize the main contributions of this paper as follows:",
250
+ "bbox": [
251
+ 507,
252
+ 116,
253
+ 880,
254
+ 148
255
+ ],
256
+ "page_idx": 1
257
+ },
258
+ {
259
+ "type": "list",
260
+ "sub_type": "text",
261
+ "list_items": [
262
+ "- We propose a retrieval-augmented neural commit message generation model, which treats the retrieved similar commit as an exemplar and leverages it to guide neural network model to generate informative and readable commit messages.",
263
+ "- We apply our retrieval-augmented framework to four existing neural network-based approaches (NMTGen, CommitBERT, CodeT5-small, and CodeT5-base) and greatly boost their performance.",
264
+ "- We perform extensive experiments including human evaluation on a large multi-programming-language dataset and the results confirm the effectiveness of our approach over state-of-the-art approaches."
265
+ ],
266
+ "bbox": [
267
+ 531,
268
+ 158,
269
+ 884,
270
+ 434
271
+ ],
272
+ "page_idx": 1
273
+ },
274
+ {
275
+ "type": "text",
276
+ "text": "2 Related Work",
277
+ "text_level": 1,
278
+ "bbox": [
279
+ 509,
280
+ 443,
281
+ 665,
282
+ 458
283
+ ],
284
+ "page_idx": 1
285
+ },
286
+ {
287
+ "type": "text",
288
+ "text": "Code intelligence, which leverages machine learning especially deep learning-based method to understand source code, is an emerging topic and has obtained the promising results in many software engineering tasks, such as code summarization (Zhang et al., 2020; Shi et al., 2021a, 2022b; Wang et al., 2020) and code search (Gu et al., 2018; Du et al., 2021; Shi et al., 2022a). Among them, commit message generation plays an important role in the software evolution.",
289
+ "bbox": [
290
+ 507,
291
+ 468,
292
+ 884,
293
+ 627
294
+ ],
295
+ "page_idx": 1
296
+ },
297
+ {
298
+ "type": "text",
299
+ "text": "In early work, information retrieval techniques are introduced to commit message generation (Liu et al., 2018; Huang et al., 2017, 2020). For instance, ChangeDoc (Huang et al., 2020) retrieves the most similar commits according to the syntax or semantics in the code diff and reuses commit messages of similar code diffs. NNGen (Liu et al., 2018) is a simple yet effective retrieval-based method using the nearest neighbor algorithm. It firstly recalls the top-k similar code diffs in the parallel corpus based on cosine similarity between bag-of-words vectors of code diffs. Then select the most similar result based on BLEU scores between each of them (topk results) and the input code diff. These approaches can reuse similar examples and the reused commit messages are usually readable and understandable.",
300
+ "bbox": [
301
+ 507,
302
+ 629,
303
+ 882,
304
+ 885
305
+ ],
306
+ "page_idx": 1
307
+ },
308
+ {
309
+ "type": "text",
310
+ "text": "Recently, many neural-based approaches (Loyola et al., 2017; Jiang et al., 2017; Xu et al., 2019;",
311
+ "bbox": [
312
+ 507,
313
+ 887,
314
+ 882,
315
+ 917
316
+ ],
317
+ "page_idx": 1
318
+ },
319
+ {
320
+ "type": "text",
321
+ "text": "Liu et al., 2019, 2020; Jung, 2021; Dong et al., 2022; Nie et al., 2021; Wang et al., 2021a) have been used to learn the semantic of code diffs and translate them into commit messages. For example, NMTGen (Loyola et al., 2017) and CommitGen (Jiang et al., 2017) treat the code diffs as plain texts and adopt the Seq2Seq neural network with different attention mechanisms to translate them into commit messages. CoDiSum (Xu et al., 2019) extracts both code structure and code semantics from code diffs and jointly models them with a multi-layer bidirectional GRU to better learn the representations of code diffs. PtrGNCMsg (Liu et al., 2019) incorporates the pointer-generator network into the Seq2Seq model to handle out-of-vocabulary (OOV) words. CommitBERT leverage CodeBERT (Feng et al., 2020), a pre-trained language model for source code, to learn the semantic representations of code diffs and adopt a Transformer-based (Vaswani et al., 2017) decoder to generate the commit message. These approaches show promising results on the generation of commit messages.",
322
+ "bbox": [
323
+ 115,
324
+ 84,
325
+ 489,
326
+ 454
327
+ ],
328
+ "page_idx": 2
329
+ },
330
+ {
331
+ "type": "text",
332
+ "text": "Recently, introducing retrieved relevant results into the training process has been found useful in most generation tasks (Lewis et al., 2020; Yu et al., 2021; Wei et al., 2020). Some studies (Liu et al., 2020; Wang et al., 2021a) also explore the combination of neural-based models and IR-based techniques to generate commit messages. ATOM (Liu et al., 2020) ensembles the neural-based model and the IR-based technique through the hybrid ranking. Specifically, it uses BiLSTM to encode ASTs paths extracted from ASTs of code diffs and adopt a decoder to generate commit messages. It also uses TF-IDF technique to represent code diffs as vectors and retrieves the most similar commit message based on cosine similarity. The generated and retrieved commit messages are finally prioritized by a hybrid ranking module. CoRec (Wang et al., 2021a) is also a hybrid model and only considers the retrieved result during the inference. Specifically, at the training stage, they use an encoder-decoder neural model to encode the input code diffs by an encoder and generate commit messages by a decoder. At the inference stage, they first use the trained encoder to retrieve the most similar code diff from the training set. Then they reuse a trained encoder-decoder to encode the input and retrieved code diff, combine the probability distributions (obtained by two decoders) of each word, and generate",
333
+ "bbox": [
334
+ 115,
335
+ 468,
336
+ 487,
337
+ 917
338
+ ],
339
+ "page_idx": 2
340
+ },
341
+ {
342
+ "type": "text",
343
+ "text": "the final commit message step by step. In summary, ATOM does not learn to refine the retrieved results or the generated results, and CoRec is not trained to utilize the information of retrieval results. Therefore, both of them fail to take full advantage of the retrieved similar results. In this paper, we treat the retrieved similar commit as an exemplar and train the model to leverage the exemplar to enhance commit message generation.",
344
+ "bbox": [
345
+ 510,
346
+ 84,
347
+ 884,
348
+ 229
349
+ ],
350
+ "page_idx": 2
351
+ },
352
+ {
353
+ "type": "text",
354
+ "text": "3 Proposed Approach",
355
+ "text_level": 1,
356
+ "bbox": [
357
+ 510,
358
+ 243,
359
+ 714,
360
+ 260
361
+ ],
362
+ "page_idx": 2
363
+ },
364
+ {
365
+ "type": "text",
366
+ "text": "The overview of RACE is shown in Figure 1. It includes two modules: retrieval module and generation module. Specifically, RACE firstly retrieves the most semantically similar code diff paired with the commit message from the large parallel training corpus. The semantic similarity between two code diffs is measured by the cosine similarity of vectors obtained by a code diff encoder. Next, RACE treats the retrieved commit message as an example and uses it to guide the neural network to generate an understandable and concise commit message.",
367
+ "bbox": [
368
+ 510,
369
+ 269,
370
+ 884,
371
+ 447
372
+ ],
373
+ "page_idx": 2
374
+ },
375
+ {
376
+ "type": "text",
377
+ "text": "3.1 Retrieval module",
378
+ "text_level": 1,
379
+ "bbox": [
380
+ 510,
381
+ 460,
382
+ 690,
383
+ 475
384
+ ],
385
+ "page_idx": 2
386
+ },
387
+ {
388
+ "type": "text",
389
+ "text": "In this module, we aim to retrieve the most semantically similar result. Specifically, we first train an encoder-decoder neural network on the large commit message generation dataset. The encoder is used to learn the semantics of code diffs and encode code diffs into a high-dimension semantic space. Then we retrieve the most semantically similar code diff paired with the commit message from the large parallel training corpus. The semantic similarity between two code diffs is measured by the cosine similarity of vectors obtained by a well-trained code diff encoder.",
390
+ "bbox": [
391
+ 510,
392
+ 483,
393
+ 884,
394
+ 674
395
+ ],
396
+ "page_idx": 2
397
+ },
398
+ {
399
+ "type": "text",
400
+ "text": "Recently, encoder-decoder neural network models (Loyola et al., 2017; Jiang et al., 2017; Jung, 2021), which leverage an encoder to learn the semantic of code diff and employ a decoder to generate the commit message, have shown their superiority in the understanding of code offs and commit messages generation. To enable the code diff encoder to understand the semantics of code offs, we train it with a commit message generator on a large commit message generation dataset, which consists of about 0.9 million <code diff, commit message> pairs.",
401
+ "bbox": [
402
+ 510,
403
+ 677,
404
+ 884,
405
+ 869
406
+ ],
407
+ "page_idx": 2
408
+ },
409
+ {
410
+ "type": "text",
411
+ "text": "To capture long-range dependencies (e.g. a variable is initialized before the changed line) and more contextual information of code diffs, we em",
412
+ "bbox": [
413
+ 510,
414
+ 871,
415
+ 882,
416
+ 917
417
+ ],
418
+ "page_idx": 2
419
+ },
420
+ {
421
+ "type": "image",
422
+ "img_path": "images/fd4aef83d51ff54aa0f9e77f50d30f3d1820f9418368e54b218e777453a75071.jpg",
423
+ "image_caption": [],
424
+ "image_footnote": [],
425
+ "bbox": [
426
+ 196,
427
+ 84,
428
+ 803,
429
+ 186
430
+ ],
431
+ "page_idx": 3
432
+ },
433
+ {
434
+ "type": "image",
435
+ "img_path": "images/1f23e86417c1a02989374e6f2ce6197fd5f838385a9bb232fd5ef16290277b89.jpg",
436
+ "image_caption": [
437
+ "Figure 1: The architecture of RACE. It includes two modules: retrieval module and generation module. The retrieval module is used to retrieve the most similar code diff and commit message. The generation module leverages the retrieved result to enhance the performance of neural network models."
438
+ ],
439
+ "image_footnote": [],
440
+ "bbox": [
441
+ 196,
442
+ 189,
443
+ 803,
444
+ 313
445
+ ],
446
+ "page_idx": 3
447
+ },
448
+ {
449
+ "type": "text",
450
+ "text": "ploy a Transformer-based encoder to learn the semantic representations of input code diffs. As shown in Figure 1, a Transformer-based encoder is stacked with multiple encoder layers. Each layer consists of four parts, namely, a multi-head self-attention module, a relative position embedding module, a feed forward network (FFN) and an add & norm module. In $b$ -th attention head, the input $\\mathbf{X}^{\\mathrm{b}} = (\\mathbf{x}_1^{\\mathrm{b}},\\mathbf{x}_2^{\\mathrm{b}},\\dots,\\mathbf{x}_1^{\\mathrm{b}})$ (where $\\mathbf{X}^{\\mathrm{b}} = \\mathbf{X}[(b - 1)*head_{dim}:b*head_{dim}]$ , $\\mathbf{X}$ is the sequence of code diff embedding, $head_{dim}$ is the dimension of each head and $l$ is the input sequence length.) is transformed to $(\\mathbf{Head}^{b} = \\mathbf{head}_{1}^{\\mathrm{b}},\\mathbf{head}_{2}^{\\mathrm{b}},\\dots,\\mathbf{head}_{l}^{\\mathrm{b}})$ by:",
451
+ "bbox": [
452
+ 112,
453
+ 393,
454
+ 489,
455
+ 619
456
+ ],
457
+ "page_idx": 3
458
+ },
459
+ {
460
+ "type": "equation",
461
+ "text": "\n$$\n\\mathbf {h e a d} _ {\\mathrm {i}} ^ {\\mathrm {b}} = \\sum_ {j = 1} ^ {l} \\alpha_ {i j} \\left(\\mathbf {W} _ {\\mathbf {V}} \\mathbf {x} _ {\\mathrm {j}} ^ {\\mathrm {b}} + \\mathbf {p} _ {\\mathrm {i j}} ^ {\\mathbf {V}}\\right) \\tag {1}\n$$\n",
462
+ "text_format": "latex",
463
+ "bbox": [
464
+ 174,
465
+ 626,
466
+ 485,
467
+ 665
468
+ ],
469
+ "page_idx": 3
470
+ },
471
+ {
472
+ "type": "equation",
473
+ "text": "\n$$\ne _ {i j} = \\frac {\\left(\\mathbf {W _ {Q}} \\mathbf {x _ {i} ^ {b}}\\right) ^ {T} \\left(\\mathbf {W _ {K}} \\mathbf {x _ {j} ^ {b}} + \\mathbf {p _ {i j} ^ {K}}\\right)}{\\sqrt {d _ {k}}}\n$$\n",
474
+ "text_format": "latex",
475
+ "bbox": [
476
+ 203,
477
+ 664,
478
+ 423,
479
+ 695
480
+ ],
481
+ "page_idx": 3
482
+ },
483
+ {
484
+ "type": "text",
485
+ "text": "where $\\alpha_{ij} = \\frac{\\exp e_{ij}}{\\sum_{k=1}^{n}\\exp e_{ik}}$ , $\\mathbf{W}_{\\mathbf{Q}}$ , $\\mathbf{W}_{\\mathbf{K}}$ and $\\mathbf{W}_{\\mathbf{V}}$ are learnable matrix for queries, keys and values. $d_k$ is the dimension of queries and keys; $\\mathbf{p}_{\\mathbf{ij}}^{\\mathbf{K}}$ and $\\mathbf{p}_{\\mathbf{ij}}^{\\mathbf{V}}$ are relative positional representations for positions $i$ and $j$ .",
486
+ "bbox": [
487
+ 112,
488
+ 700,
489
+ 487,
490
+ 784
491
+ ],
492
+ "page_idx": 3
493
+ },
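As a concrete illustration of Equation 1, here is a minimal PyTorch sketch of a single attention head with relative position terms. This is our illustration, not the released RACE code; the tensor names mirror the symbols above and all shapes are assumptions.

```python
import torch

def rel_attention_head(X, W_Q, W_K, W_V, p_K, p_V):
    """One self-attention head with relative positions (Equation 1).

    X:        (l, head_dim) slice of the code diff embeddings for this head
    W_Q/K/V:  (d_k, head_dim) learnable projections for queries/keys/values
    p_K, p_V: (l, l, d_k) relative position representations for pairs (i, j)
    """
    d_k = W_Q.size(0)
    q, k, v = X @ W_Q.T, X @ W_K.T, X @ W_V.T          # each (l, d_k)
    # e_ij = (W_Q x_i)^T (W_K x_j + p_ij^K) / sqrt(d_k)
    e = (q.unsqueeze(1) * (k.unsqueeze(0) + p_K)).sum(-1) / d_k ** 0.5
    alpha = torch.softmax(e, dim=-1)                   # alpha_ij sums to 1 over j
    # head_i = sum_j alpha_ij (W_V x_j + p_ij^V)
    return (alpha.unsqueeze(-1) * (v.unsqueeze(0) + p_V)).sum(dim=1)
```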
494
+ {
495
+ "type": "text",
496
+ "text": "The outputs of all heads are concatenated and then fed to the FFN modules which is a multi-layer perception. The add & norm operation are employed after the multi-head attention and FFN modules. The calculations are as follows:",
497
+ "bbox": [
498
+ 112,
499
+ 785,
500
+ 489,
501
+ 863
502
+ ],
503
+ "page_idx": 3
504
+ },
505
+ {
506
+ "type": "equation",
507
+ "text": "\n$$\n\\begin{array}{l} \\mathbf {H e a d} = C o n c a t \\left(\\mathbf {H e a d} ^ {\\mathbf {1}}, \\mathbf {H e a d} ^ {\\mathbf {d}}, \\mathbf {H e a d} ^ {\\mathbf {B}}\\right) \\\\ \\mathbf {H i d} = a d d \\& n o r m (\\mathbf {H e a d}, \\mathbf {X}) \\end{array} \\tag {2}\n$$\n",
508
+ "text_format": "latex",
509
+ "bbox": [
510
+ 149,
511
+ 869,
512
+ 487,
513
+ 904
514
+ ],
515
+ "page_idx": 3
516
+ },
517
+ {
518
+ "type": "equation",
519
+ "text": "\n$$\n\\mathbf {E n c} = a d d \\& n o r m (\\mathbf {F F N} (\\mathbf {H i d}), \\mathbf {H i d})\n$$\n",
520
+ "text_format": "latex",
521
+ "bbox": [
522
+ 161,
523
+ 907,
524
+ 431,
525
+ 921
526
+ ],
527
+ "page_idx": 3
528
+ },
529
+ {
530
+ "type": "text",
531
+ "text": "where $add\\&norm(\\mathbf{A_1},\\mathbf{A_2}) = LN(\\mathbf{A_1} + \\mathbf{A_2})$ $B$ is the number of heads and $LN$ is layer normalization. The final output of encoder is sent to Transformer-based decoder to generate the commit message step by step. We use cross-entropy as loss function and adopt AdamW (Loshchilov and Hutter, 2019) to optimize the parameters of the code diff encoder and the decoder at the top of Figure 1.",
532
+ "bbox": [
533
+ 507,
534
+ 393,
535
+ 884,
536
+ 521
537
+ ],
538
+ "page_idx": 3
539
+ },
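Read together, Equation 2 and the add & norm definition describe the tail of one encoder layer. A hedged sketch in the same PyTorch style (layer sizes are placeholders):

```python
import torch
import torch.nn as nn

class EncoderLayerTail(nn.Module):
    """Combine the B heads and apply add & norm + FFN (Equation 2, a sketch)."""

    def __init__(self, d_model, d_ff):
        super().__init__()
        self.ffn = nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(),
                                 nn.Linear(d_ff, d_model))
        self.norm1, self.norm2 = nn.LayerNorm(d_model), nn.LayerNorm(d_model)

    def forward(self, heads, X):
        head = torch.cat(heads, dim=-1)          # Head = Concat(Head^1, ..., Head^B)
        hid = self.norm1(head + X)               # Hid = add&norm(Head, X) = LN(Head + X)
        return self.norm2(self.ffn(hid) + hid)   # Enc = add&norm(FFN(Hid), Hid)
```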
540
+ {
541
+ "type": "text",
542
+ "text": "Next, the retrieval module is used to retrieve the most similar result from a large parallel training corpus. We firstly use the above code diff encoder to map code diffs into a high-dimensional latent space and retrieve the most similar example based on cosine similarity.",
543
+ "bbox": [
544
+ 507,
545
+ 523,
546
+ 882,
547
+ 620
548
+ ],
549
+ "page_idx": 3
550
+ },
551
+ {
552
+ "type": "text",
553
+ "text": "Specifically, after being trained in the commit message generation dataset, the code diff encoder can capture the semantic of code diff well. We use well-trained code diff encoder following a mean-pooling operation to map the code diff into a high dimensional space. Mathematically, given the input code diff embedding $\\mathbf{X} = (\\mathbf{x}_1,\\mathbf{x}_2,\\dots,\\mathbf{x}_l)$ , the code diff encoder can transformed them to $\\mathbf{Enc} = (\\mathbf{enc}_1,\\mathbf{enc}_2,\\dots,\\mathbf{enc}_l)$ . Then we obtain the semantic vector of the code diff by pooling operation:",
554
+ "bbox": [
555
+ 507,
556
+ 621,
557
+ 882,
558
+ 797
559
+ ],
560
+ "page_idx": 3
561
+ },
562
+ {
563
+ "type": "equation",
564
+ "text": "\n$$\n\\operatorname {v e c} = \\text {p o o l i n g} (\\mathbf {E n c}) = \\text {m e a n} \\left(\\mathbf {e n c} _ {1}, \\mathbf {e n c} _ {2}, \\dots , \\mathbf {e n c} _ {1}\\right) \\tag {3}\n$$\n",
565
+ "text_format": "latex",
566
+ "bbox": [
567
+ 519,
568
+ 812,
569
+ 880,
570
+ 837
571
+ ],
572
+ "page_idx": 3
573
+ },
574
+ {
575
+ "type": "text",
576
+ "text": "where mean is a dimension-wise average operation. We measure the similarity of two code diffs by cosine similarity of their semantic vectors and retrieve the most similar code diff paired with the commit message from the parallel training corpus. For each",
577
+ "bbox": [
578
+ 507,
579
+ 839,
580
+ 882,
581
+ 919
582
+ ],
583
+ "page_idx": 3
584
+ },
585
+ {
586
+ "type": "text",
587
+ "text": "code diff, we return the first-ranked similar result. But, for the code diff in the training dataset, we return the second-ranked similar result because the first-ranked result is itself.",
588
+ "bbox": [
589
+ 112,
590
+ 84,
591
+ 489,
592
+ 148
593
+ ],
594
+ "page_idx": 4
595
+ },
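Putting Equation 3 and this ranking rule together, the retrieval step can be sketched as below; `encoder`, `tokenizer`, and the precomputed `corpus_vecs` are assumed stand-ins for the trained code diff encoder and the indexed training corpus, not names from the paper.

```python
import torch
import torch.nn.functional as F

def diff_vector(encoder, tokenizer, code_diff):
    # Equation 3: mean-pool the encoder states into one semantic vector.
    inputs = tokenizer(code_diff, return_tensors="pt",
                       truncation=True, max_length=200)
    with torch.no_grad():
        hidden = encoder(**inputs).last_hidden_state   # (1, l, d)
    return hidden.mean(dim=1).squeeze(0)               # (d,)

def retrieve(query_vec, corpus_vecs, from_training_set=False):
    # Cosine similarity against every indexed <code diff, commit message> pair.
    sims = F.cosine_similarity(query_vec.unsqueeze(0), corpus_vecs, dim=1)
    ranked = sims.argsort(descending=True)
    # A training-set diff retrieves itself first, so fall back to rank 2.
    return ranked[1].item() if from_training_set else ranked[0].item()
```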
596
+ {
597
+ "type": "text",
598
+ "text": "3.2 Generation module",
599
+ "text_level": 1,
600
+ "bbox": [
601
+ 112,
602
+ 162,
603
+ 312,
604
+ 175
605
+ ],
606
+ "page_idx": 4
607
+ },
608
+ {
609
+ "type": "text",
610
+ "text": "As shown at the bottom of Figure 1, in the generation module, we treat the retrieved commit message as an exemplar and leverage it to guide the neural network model to generate an accurate commit message. Our generation module consists of three components: three encoders, an exemplar guider, and a decoder.",
611
+ "bbox": [
612
+ 112,
613
+ 183,
614
+ 489,
615
+ 294
616
+ ],
617
+ "page_idx": 4
618
+ },
619
+ {
620
+ "type": "text",
621
+ "text": "First, following Equation 1, 2, three Transformer-based encoders are adopted to obtain the representations of the input code diff $(\\mathbf{Enc}^{\\mathbf{d}} = \\mathbf{enc}_1^d,\\mathbf{enc}_2^d,\\dots,\\mathbf{enc}_l^d)$ , the similar code diff $(\\mathbf{Enc}^{\\mathbf{s}} = \\mathbf{enc}_1^s,\\mathbf{enc}_2^s,\\dots,\\mathbf{enc}_m^s)$ , and similar commit message $(\\mathbf{Enc}^{\\mathbf{m}} = \\mathbf{enc}_1^m,\\mathbf{enc}_2^m,\\dots,\\mathbf{enc}_n^m)$ (step ① in Figure 1), where subscripts $l,m,n$ are the length of the input code diff, the similar code diff, and the similar commit message, respectively.",
622
+ "bbox": [
623
+ 112,
624
+ 297,
625
+ 489,
626
+ 441
627
+ ],
628
+ "page_idx": 4
629
+ },
630
+ {
631
+ "type": "text",
632
+ "text": "Second, since the retrieved similar commit messages may not always accurately describe the content/ intent of the input code diffs even express totally wrong or irrelevant semantics. Therefore, we propose an exemplar guider which first learns the semantic similarity between the retrieved and input code diff and then leverages the information of the similar commit messages based on the learned similarity to guide the commit message generation (step ②). Mathematically, exemplar guider calculate the semantic similarity $(\\lambda)$ between the input code diff and the similar code diff based on their representation $\\mathbf{Enc}_l^d$ and $\\mathbf{Enc}_m^s$ (step ② and ③):",
633
+ "bbox": [
634
+ 112,
635
+ 443,
636
+ 489,
637
+ 652
638
+ ],
639
+ "page_idx": 4
640
+ },
641
+ {
642
+ "type": "equation",
643
+ "text": "\n$$\n\\lambda = \\sigma \\left(\\mathbf {W} _ {\\mathbf {s}} \\left[ m e a n \\left(\\mathbf {E} \\mathbf {n c} ^ {d}\\right), m e a n \\left(\\mathbf {E} \\mathbf {n c} ^ {s}\\right) \\right]\\right) \\tag {4}\n$$\n",
644
+ "text_format": "latex",
645
+ "bbox": [
646
+ 164,
647
+ 665,
648
+ 487,
649
+ 682
650
+ ],
651
+ "page_idx": 4
652
+ },
653
+ {
654
+ "type": "text",
655
+ "text": "where $\\sigma$ is the sigmoid activation function, $\\mathbf{W}_{\\mathrm{s}}$ is a learnable matrix, and mean is a dimension-wise average operation.",
656
+ "bbox": [
657
+ 112,
658
+ 696,
659
+ 487,
660
+ 744
661
+ ],
662
+ "page_idx": 4
663
+ },
664
+ {
665
+ "type": "text",
666
+ "text": "Third, we weight representations of code diff and similar commit message by $1 - \\lambda$ and $\\lambda$ , respectively and then concatenate them to obtain the final input encoding.",
667
+ "bbox": [
668
+ 112,
669
+ 746,
670
+ 489,
671
+ 810
672
+ ],
673
+ "page_idx": 4
674
+ },
675
+ {
676
+ "type": "equation",
677
+ "text": "\n$$\n\\mathbf {E n c} ^ {\\mathrm {d m}} = \\left[ (1 - \\lambda) * \\mathbf {E n c} ^ {\\mathrm {d}}: \\lambda * \\mathbf {E n c} ^ {\\mathrm {s}} \\right] \\tag {5}\n$$\n",
678
+ "text_format": "latex",
679
+ "bbox": [
680
+ 171,
681
+ 822,
682
+ 487,
683
+ 840
684
+ ],
685
+ "page_idx": 4
686
+ },
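Equations 4 and 5 together amount to a small gating module. A minimal sketch, assuming `enc_d` and `enc_s` are the (length, dim) outputs of the two encoders and following the superscripts exactly as Equation 5 writes them:

```python
import torch
import torch.nn as nn

class ExemplarGuider(nn.Module):
    """Gate the retrieved exemplar by its learned similarity (Eqs. 4-5, a sketch)."""

    def __init__(self, d_model):
        super().__init__()
        self.W_s = nn.Linear(2 * d_model, 1)  # W_s applied to [mean(Enc^d), mean(Enc^s)]

    def forward(self, enc_d, enc_s):
        summary = torch.cat([enc_d.mean(dim=0), enc_s.mean(dim=0)], dim=-1)
        lam = torch.sigmoid(self.W_s(summary))            # Equation 4: lambda in (0, 1)
        # Equation 5: weight, then concatenate along the length axis.
        return torch.cat([(1 - lam) * enc_d, lam * enc_s], dim=0)  # (l + m, d)
```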
687
+ {
688
+ "type": "text",
689
+ "text": "Finally, we use a Transformer-based decoder to generate the commit message. The decoder consists of multiply decoder layer and each layers includes a masked multi-head self-attention, a",
690
+ "bbox": [
691
+ 112,
692
+ 854,
693
+ 489,
694
+ 917
695
+ ],
696
+ "page_idx": 4
697
+ },
698
+ {
699
+ "type": "table",
700
+ "img_path": "images/61acb78ade90321bc370083c319f67214cff2d3ee3a341486c82dba6a545ec22.jpg",
701
+ "table_caption": [],
702
+ "table_footnote": [],
703
+ "table_body": "<table><tr><td>Language</td><td>Training</td><td>Validation</td><td>Test</td></tr><tr><td>Java</td><td>160,018</td><td>19,825</td><td>20,159</td></tr><tr><td>C#</td><td>149,907</td><td>18,688</td><td>18,702</td></tr><tr><td>C++</td><td>160,948</td><td>20,000</td><td>20,141</td></tr><tr><td>Python</td><td>206,777</td><td>25,912</td><td>25,837</td></tr><tr><td>JavaScript</td><td>197,529</td><td>24,899</td><td>24,773</td></tr></table>",
704
+ "bbox": [
705
+ 547,
706
+ 80,
707
+ 840,
708
+ 171
709
+ ],
710
+ "page_idx": 4
711
+ },
712
+ {
713
+ "type": "text",
714
+ "text": "Table 1: Statistics of the evaluation dataset.",
715
+ "bbox": [
716
+ 547,
717
+ 180,
718
+ 842,
719
+ 193
720
+ ],
721
+ "page_idx": 4
722
+ },
723
+ {
724
+ "type": "text",
725
+ "text": "multi-head cross-attention module, a FFN module and an add & norm module. Different from multi-head self-attention module in the encoder, in terms of one token, masked multi-head self-attention in the decoder can only attend to the previous tokens rather than the before and after context. In $b$ -th cross-attention layer, the input encoding $(\\mathbf{Enc}^{\\mathrm{dm}} = (\\mathbf{enc}_1^{\\mathrm{dm}}, \\mathbf{enc}_2^{\\mathrm{dm}}, \\dots, \\mathbf{enc}_{\\mathrm{l + m}}^{\\mathrm{dm}}))$ is queried by the output of the preceding commit message representations $\\mathbf{Msg} = (\\mathbf{msg}_1, \\dots, \\mathbf{msg}_t)$ obtained by masked multi-head self-attention module.",
726
+ "bbox": [
727
+ 507,
728
+ 219,
729
+ 884,
730
+ 411
731
+ ],
732
+ "page_idx": 4
733
+ },
734
+ {
735
+ "type": "equation",
736
+ "text": "\n$$\nD e c _ {\\text {h e a d} _ {i} ^ {b}} = \\sum_ {j = 1} ^ {l + m} \\alpha_ {i j} \\left(\\mathbf {W} _ {\\mathbf {V}} ^ {\\mathbf {D e c}} \\mathbf {e n c} _ {\\mathbf {j}} ^ {\\mathbf {b}}\\right) \\tag {6}\n$$\n",
737
+ "text_format": "latex",
738
+ "bbox": [
739
+ 552,
740
+ 419,
741
+ 880,
742
+ 458
743
+ ],
744
+ "page_idx": 4
745
+ },
746
+ {
747
+ "type": "equation",
748
+ "text": "\n$$\nD e c _ {e _ {i j}} = \\frac {\\left(\\mathbf {W} _ {\\mathbf {Q}} ^ {\\mathbf {D e c}} \\mathbf {m s g} _ {\\mathbf {j}} ^ {\\mathbf {b}}\\right) ^ {T} \\left(\\mathbf {W} _ {\\mathbf {K}} ^ {\\mathbf {D e c}} \\mathbf {e n c} _ {\\mathbf {i}} ^ {\\mathbf {b}}\\right)}{\\sqrt {d _ {k}}}\n$$\n",
749
+ "text_format": "latex",
750
+ "bbox": [
751
+ 569,
752
+ 457,
753
+ 836,
754
+ 488
755
+ ],
756
+ "page_idx": 4
757
+ },
758
+ {
759
+ "type": "text",
760
+ "text": "where $\\alpha_{ij} = \\frac{\\exp\\text{Dec}_{ij}}{\\sum_{k=1}^{n}\\exp\\text{Dec}_{ik}}$ , $\\mathbf{W}_{\\mathbf{Q}}^{\\mathbf{Dec}}$ , $\\mathbf{W}_{\\mathbf{K}}^{\\mathbf{Dec}}$ and $\\mathbf{W}_{\\mathbf{V}}^{\\mathbf{Dec}}$ are trainable projection matrices for queries, keys and values of the decoder layer. t is the length of preceding commit message.",
761
+ "bbox": [
762
+ 507,
763
+ 498,
764
+ 882,
765
+ 571
766
+ ],
767
+ "page_idx": 4
768
+ },
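For concreteness, one cross-attention head of Equation 6 can be sketched as follows (our illustration; `msg` stands for the masked self-attention outputs and `enc` for the fused encoding Enc^dm):

```python
import torch

def cross_attention_head(msg, enc, W_Q, W_K, W_V):
    """Message states attend over the fused encoding (Equation 6, a sketch).

    msg: (t, head_dim) decoder states for the tokens generated so far
    enc: (l + m, head_dim) slice of Enc^dm for this head
    """
    d_k = W_Q.size(0)
    q = msg @ W_Q.T                    # (t, d_k) queries from the message
    k = enc @ W_K.T                    # (l + m, d_k) keys from the encoding
    v = enc @ W_V.T                    # (l + m, d_k) values from the encoding
    alpha = torch.softmax(q @ k.T / d_k ** 0.5, dim=-1)  # Dec_e scores, row-softmaxed
    return alpha @ v                   # (t, d_k) one cross-attention head
```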
769
+ {
770
+ "type": "text",
771
+ "text": "Next, we use Equation 2 to obtain the hidden states of each decoder layer. In the last decoder layers, we employ a MLP and softmax operator to obtain the generation probability of each commit message token on the vocabulary. Then we use the cross-entropy as the loss function and apply AdamW for optimization.",
772
+ "bbox": [
773
+ 507,
774
+ 571,
775
+ 882,
776
+ 684
777
+ ],
778
+ "page_idx": 4
779
+ },
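A hedged sketch of this generation head and one optimization step; the dimensions follow Section 4.3, and in the real setup the optimizer would cover all model parameters, not just the head:

```python
import torch
import torch.nn as nn

d_model, vocab_size = 768, 32109                  # CodeT5-base width, extended vocab
lm_head = nn.Linear(d_model, vocab_size)          # MLP projecting to token logits
loss_fn = nn.CrossEntropyLoss()                   # softmax is folded into this loss
optimizer = torch.optim.AdamW(lm_head.parameters(), lr=2e-5)

def train_step(decoder_states, target_ids):
    # decoder_states: (batch, t, d_model); target_ids: (batch, t)
    logits = lm_head(decoder_states)
    loss = loss_fn(logits.view(-1, vocab_size), target_ids.view(-1))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```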
780
+ {
781
+ "type": "text",
782
+ "text": "4 Experimental Setup",
783
+ "text_level": 1,
784
+ "bbox": [
785
+ 507,
786
+ 696,
787
+ 717,
788
+ 713
789
+ ],
790
+ "page_idx": 4
791
+ },
792
+ {
793
+ "type": "text",
794
+ "text": "4.1 Dataset",
795
+ "text_level": 1,
796
+ "bbox": [
797
+ 507,
798
+ 721,
799
+ 616,
800
+ 734
801
+ ],
802
+ "page_idx": 4
803
+ },
804
+ {
805
+ "type": "text",
806
+ "text": "In our experiment, we use a large-scale dataset MCMD (Tao et al., 2021) with five programming languages (PLs): Java, C#, C++, Python and JavaScript. For each PL, MCMD collects commits from the top-100 starred repositories on GitHub and then filters the redundant messages (such as rollback commits) and noisy messages defined in Liu et al. (2018). Finally, to balance the size of data, they randomly sample and retain 450,000 commits for each PL. Each commit contains the code diff, the commit message, the name of the repository,",
807
+ "bbox": [
808
+ 505,
809
+ 741,
810
+ 884,
811
+ 919
812
+ ],
813
+ "page_idx": 4
814
+ },
815
+ {
816
+ "type": "text",
817
+ "text": "and the timestamp of commit, etc. To reduce the noise data in the dataset, we further filter out commits that contain multiple files or files that cannot be parsed (such as .jar, .ddl, .mp3, and .apk).",
818
+ "bbox": [
819
+ 112,
820
+ 84,
821
+ 489,
822
+ 149
823
+ ],
824
+ "page_idx": 5
825
+ },
826
+ {
827
+ "type": "text",
828
+ "text": "4.2 Data pre-processing",
829
+ "text_level": 1,
830
+ "bbox": [
831
+ 112,
832
+ 159,
833
+ 319,
834
+ 174
835
+ ],
836
+ "page_idx": 5
837
+ },
838
+ {
839
+ "type": "text",
840
+ "text": "The code diff in MCMD are based on line-level code change. To obtain more fine-grained code change, following previous study (Panthaplackel et al., 2020), we use a sequence of span of token-level change actions to represent the code diff. Each action is structured as <action> span of tokens <action end>. There are four <action> types, namely, <keep>, <insert>, <delete>, and <replace>. <keep> means that the span of tokens are unchanged. <insert> means that adding span of tokens. <delete> means that deleting span of tokens. <replace> means that the span of tokens in the old version that will be replaced with different span of tokens in the new version. Thus, we extend <replace> to <replace old> and <replace new> to indicate the span of old and new tokens, respectively. We use difflib<sup>1</sup> to extract the sequence of code change actions.",
841
+ "bbox": [
842
+ 112,
843
+ 179,
844
+ 489,
845
+ 485
846
+ ],
847
+ "page_idx": 5
848
+ },
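Since the paper names difflib but not the extraction routine, here is one way the action sequence of Section 4.2 could be produced with `difflib.SequenceMatcher` (the function itself is our illustration):

```python
import difflib

def diff_actions(old_tokens, new_tokens):
    """Render a token-level edit script with the markers from Section 4.2."""
    actions = []
    sm = difflib.SequenceMatcher(a=old_tokens, b=new_tokens)
    for tag, i1, i2, j1, j2 in sm.get_opcodes():
        if tag == "equal":
            actions += ["<keep>"] + old_tokens[i1:i2] + ["<keep_end>"]
        elif tag == "insert":
            actions += ["<insert>"] + new_tokens[j1:j2] + ["<insert_end>"]
        elif tag == "delete":
            actions += ["<delete>"] + old_tokens[i1:i2] + ["<delete_end>"]
        elif tag == "replace":
            actions += (["<replace_old>"] + old_tokens[i1:i2]
                        + ["<replace_new>"] + new_tokens[j1:j2] + ["<replace_end>"])
    return actions

# Replacing one token yields:
# ['<keep>', 'x', '=', '<keep_end>', '<replace_old>', '1', '<replace_new>', '2', '<replace_end>']
print(diff_actions(["x", "=", "1"], ["x", "=", "2"]))
```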
849
+ {
850
+ "type": "text",
851
+ "text": "4.3 Hyperparameters",
852
+ "text_level": 1,
853
+ "bbox": [
854
+ 112,
855
+ 495,
856
+ 302,
857
+ 511
858
+ ],
859
+ "page_idx": 5
860
+ },
861
+ {
862
+ "type": "text",
863
+ "text": "We follow (Tao et al., 2021) to set the maximum lengths of code diff and commit message to 200 and 50, respectively. We use the weight of the encoder of CodeT5-base (Wang et al., 2021b) to initialize the code diff encoders and use the decoder of CodeT5-base to initialize the decoder in Figure 1. The original vocabulary sizes of CodeT5 is 32,100. We add nine special tokens (<keep>, <keep_end>, <insert>, <insert_end>, <delete>, <delete_end>, <replace_old>, <replace_new>, and <replace_end>) and the vocabulary sizes of code and queries become 32109. For the optimizer, we use AdamW with the learning rate 2e-5. The batch size is 32. The max epoch is 20. In addition, we run the experiments 3 times with random seeds 0,1,2 and display the mean value in the paper. The experiments are conducted on a server with 4 GPUs of NVIDIA Tesla V100 and it takes about 1.2 hours each epoch.",
864
+ "bbox": [
865
+ 112,
866
+ 516,
867
+ 489,
868
+ 821
869
+ ],
870
+ "page_idx": 5
871
+ },
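Assuming the public HuggingFace CodeT5 checkpoints (an assumption; the paper does not show this step), the nine special tokens would be added roughly like this:

```python
from transformers import RobertaTokenizer, T5ForConditionalGeneration

tokenizer = RobertaTokenizer.from_pretrained("Salesforce/codet5-base")
model = T5ForConditionalGeneration.from_pretrained("Salesforce/codet5-base")

special = ["<keep>", "<keep_end>", "<insert>", "<insert_end>", "<delete>",
           "<delete_end>", "<replace_old>", "<replace_new>", "<replace_end>"]
tokenizer.add_tokens(special)                  # 9 new entries: 32,100 -> 32,109
model.resize_token_embeddings(len(tokenizer))  # grow the embedding matrix to match
print(len(tokenizer))                          # 32109
```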
872
+ {
873
+ "type": "text",
874
+ "text": "4.4 Evaluation metrics",
875
+ "text_level": 1,
876
+ "bbox": [
877
+ 112,
878
+ 832,
879
+ 310,
880
+ 847
881
+ ],
882
+ "page_idx": 5
883
+ },
884
+ {
885
+ "type": "text",
886
+ "text": "We evaluate the quality of the generated messages using four metrics: BLEU (Papineni et al.,",
887
+ "bbox": [
888
+ 112,
889
+ 853,
890
+ 489,
891
+ 885
892
+ ],
893
+ "page_idx": 5
894
+ },
895
+ {
896
+ "type": "text",
897
+ "text": "2002), Meteor (Banerjee and Lavie, 2005), Rouge-L (Lin, 2004), and Cider (Vedantam et al., 2015). These metrics are prevalent metrics in machine translation, text summarization, and image captioning. There are many variants of BLEU being used to measure the generated message, We choose B-Norm (the BLEU result in this paper is B-Norm), which correlates with human perception the most (Tao et al., 2021). The detailed metrics calculation can be found in Appendix.",
898
+ "bbox": [
899
+ 507,
900
+ 84,
901
+ 884,
902
+ 244
903
+ ],
904
+ "page_idx": 5
905
+ },
906
+ {
907
+ "type": "text",
908
+ "text": "4.5 Baselines",
909
+ "text_level": 1,
910
+ "bbox": [
911
+ 507,
912
+ 256,
913
+ 628,
914
+ 271
915
+ ],
916
+ "page_idx": 5
917
+ },
918
+ {
919
+ "type": "text",
920
+ "text": "We compare RACE with four end-to-end neural-based models, two IR-based methods, two hybrid approaches which combine IR-based techniques and end-to-end neural-based methods, and three pre-trained-based models. Four end-to-end neural-based models include CommitGen (Jiang et al., 2017), CoDiSum (Xu et al., 2019), NMTGen (Loyola et al., 2017), PtrGNCMsg (Liu et al., 2019) and ATOM (Liu et al., 2020). They all train models from scratch. Two IR-based methods are NNGen (Liu et al., 2018) and Lucene (Apache, 2011), they retrieve the similar code diff based on different similarity measurements and reuse the commit message of the similar code diff as the final result. CoRec and ATOM are all hybrid models which combine the neural-based models and IR-based techniques. Three pre-trained models are CommitBERT, CodeT5-small, and CodeT5-base. They are pre-trained on the large parallel code and natural language corpus and fine-tuned on the commit message generation dataset. All baselines except Lucene, CodeT5-small and CodeT5-base are introduced in Section 2. Lucene is a traditional IR baseline, which uses TF-IDF to represent a code diff as a vector and searches the similar code diff based on the cosine similarity between two vectors. CodeT5-small and CodeT5-base are source code pre-trained models and have achieved promising results in many code-related tasks (Wang et al., 2021b). We fine-tune them on MCMD as strong baselines. In addition, we only evaluate ATOM on Java dataset as the current implementation of ATOM only supports Java.",
921
+ "bbox": [
922
+ 507,
923
+ 278,
924
+ 884,
925
+ 810
926
+ ],
927
+ "page_idx": 5
928
+ },
929
+ {
930
+ "type": "text",
931
+ "text": "5 Experimental Results",
932
+ "text_level": 1,
933
+ "bbox": [
934
+ 507,
935
+ 822,
936
+ 732,
937
+ 839
938
+ ],
939
+ "page_idx": 5
940
+ },
941
+ {
942
+ "type": "text",
943
+ "text": "5.1 How does RACE perform compared with baseline approaches?",
944
+ "text_level": 1,
945
+ "bbox": [
946
+ 507,
947
+ 848,
948
+ 878,
949
+ 881
950
+ ],
951
+ "page_idx": 5
952
+ },
953
+ {
954
+ "type": "text",
955
+ "text": "To evaluate the effectiveness of RACE, we conduct the experiment by comparing it with the 11",
956
+ "bbox": [
957
+ 507,
958
+ 887,
959
+ 884,
960
+ 919
961
+ ],
962
+ "page_idx": 5
963
+ },
964
+ {
965
+ "type": "page_footnote",
966
+ "text": "1https://docs.python.org/3/library/difflib. html",
967
+ "bbox": [
968
+ 112,
969
+ 891,
970
+ 462,
971
+ 917
972
+ ],
973
+ "page_idx": 5
974
+ },
975
+ {
976
+ "type": "table",
977
+ "img_path": "images/f136987fd0ff106574374e311b19890e76c2d01820c6a1ddb6fac2535649df1e.jpg",
978
+ "table_caption": [],
979
+ "table_footnote": [],
980
+ "table_body": "<table><tr><td rowspan=\"2\" colspan=\"2\">Model</td><td colspan=\"4\">Java</td><td colspan=\"4\">C#</td><td colspan=\"4\">C++</td><td colspan=\"4\">Python</td><td colspan=\"4\">JavaScript</td></tr><tr><td>BLEU</td><td>Met.</td><td>Rou.</td><td>Cid.</td><td>BLEU</td><td>Met.</td><td>Rou.</td><td>Cid.</td><td>BLEU</td><td>Met.</td><td>Rou.</td><td>Cid.</td><td>BLEU</td><td>Met.</td><td>Rou.</td><td>Cid.</td><td>BLEU</td><td>Met.</td><td>Rou.</td><td>Cid.</td></tr><tr><td rowspan=\"2\">IR-based</td><td>NNGen</td><td>19.41</td><td>12.40</td><td>25.15</td><td>1.23</td><td>22.15</td><td>14.77</td><td>26.46</td><td>1.55</td><td>13.61</td><td>9.39</td><td>18.21</td><td>0.73</td><td>16.06</td><td>10.91</td><td>21.69</td><td>0.92</td><td>18.65</td><td>12.50</td><td>24.45</td><td>1.21</td></tr><tr><td>Lucene</td><td>15.61</td><td>10.56</td><td>19.43</td><td>0.94</td><td>20.68</td><td>13.34</td><td>23.02</td><td>1.36</td><td>13.43</td><td>8.81</td><td>16.78</td><td>0.67</td><td>15.16</td><td>9.63</td><td>18.85</td><td>0.85</td><td>17.66</td><td>11.25</td><td>21.75</td><td>1.02</td></tr><tr><td rowspan=\"4\">End-to-end</td><td>CommitGen</td><td>14.07</td><td>7.52</td><td>18.78</td><td>0.66</td><td>13.38</td><td>8.31</td><td>17.44</td><td>0.63</td><td>11.52</td><td>6.98</td><td>16.75</td><td>0.45</td><td>11.02</td><td>6.43</td><td>16.64</td><td>0.42</td><td>18.67</td><td>11.88</td><td>24.10</td><td>1.08</td></tr><tr><td>CoDiSum</td><td>13.97</td><td>6.02</td><td>16.12</td><td>0.39</td><td>12.71</td><td>5.56</td><td>14.40</td><td>0.36</td><td>12.44</td><td>6.00</td><td>14.39</td><td>0.42</td><td>14.61</td><td>8.59</td><td>17.02</td><td>0.42</td><td>11.22</td><td>5.32</td><td>13.26</td><td>0.28</td></tr><tr><td>NMTGen</td><td>15.52</td><td>8.91</td><td>21.13</td><td>0.86</td><td>12.71</td><td>8.11</td><td>17.16</td><td>0.62</td><td>11.57</td><td>7.06</td><td>17.46</td><td>0.51</td><td>11.41</td><td>7.18</td><td>18.43</td><td>0.48</td><td>18.22</td><td>12.07</td><td>24.43</td><td>1.12</td></tr><tr><td>PtrGNCMsg</td><td>17.71</td><td>11.33</td><td>24.32</td><td>0.99</td><td>15.98</td><td>10.18</td><td>21.16</td><td>0.83</td><td>14.06</td><td>9.63</td><td>20.17</td><td>0.63</td><td>15.89</td><td>11.36</td><td>23.49</td><td>0.76</td><td>20.78</td><td>14.52</td><td>27.87</td><td>1.29</td></tr><tr><td rowspan=\"2\">Hybrid</td><td>ATOM</td><td>16.42</td><td>11.66</td><td>22.67</td><td>0.91</td><td>/</td><td>/</td><td>/</td><td>/</td><td>/</td><td>/</td><td>/</td><td>/</td><td>/</td><td>/</td><td>/</td><td>/</td><td>/</td><td>/</td><td>/</td><td>/</td></tr><tr><td>CoRec</td><td>18.51</td><td>11.26</td><td>24.78</td><td>1.13</td><td>18.41</td><td>11.70</td><td>23.73</td><td>1.12</td><td>14.02</td><td>8.63</td><td>20.10</td><td>0.72</td><td>15.09</td><td>9.60</td><td>22.35</td><td>0.80</td><td>21.30</td><td>13.84</td><td>27.53</td><td>1.40</td></tr><tr><td 
rowspan=\"3\">Pre-trained</td><td>CommitBERT</td><td>22.32</td><td>12.63</td><td>28.03</td><td>1.42</td><td>20.67</td><td>12.31</td><td>25.76</td><td>1.25</td><td>16.16</td><td>10.05</td><td>19.90</td><td>0.94</td><td>17.29</td><td>11.31</td><td>22.36</td><td>1.01</td><td>23.40</td><td>15.64</td><td>30.51</td><td>1.54</td></tr><tr><td>CodeT5-small</td><td>22.28</td><td>14.16</td><td>29.71</td><td>1.37</td><td>18.92</td><td>11.71</td><td>24.95</td><td>1.05</td><td>16.08</td><td>11.19</td><td>21.60</td><td>0.79</td><td>17.49</td><td>12.46</td><td>24.65</td><td>0.90</td><td>21.97</td><td>14.48</td><td>28.65</td><td>1.42</td></tr><tr><td>CodeT5-base</td><td>22.76</td><td>14.57</td><td>30.23</td><td>1.43</td><td>22.21</td><td>14.51</td><td>29.08</td><td>1.33</td><td>16.73</td><td>11.69</td><td>22.86</td><td>0.85</td><td>17.99</td><td>12.74</td><td>25.27</td><td>0.96</td><td>22.87</td><td>15.12</td><td>29.81</td><td>1.50</td></tr><tr><td rowspan=\"2\">Ours</td><td rowspan=\"2\">RACE</td><td>25.66</td><td>15.46</td><td>32.02</td><td>1.76</td><td>26.33</td><td>16.37</td><td>31.31</td><td>1.84</td><td>19.13</td><td>12.55</td><td>24.52</td><td>1.14</td><td>21.79</td><td>14.68</td><td>28.35</td><td>1.40</td><td>25.55</td><td>16.31</td><td>31.79</td><td>1.84</td></tr><tr><td>↑13%</td><td>↑6%</td><td>↑6%</td><td>↑23%</td><td>↑19%</td><td>↑13%</td><td>↑8%</td><td>↑38%</td><td>↑14%</td><td>↑7%</td><td>↑7%</td><td>↑34%</td><td>↑21%</td><td>↑15%</td><td>↑12%</td><td>↑46%</td><td>↑12%</td><td>↑8%</td><td>↑7%</td><td>↑23%</td></tr><tr><td>Ablation</td><td>RACE -Guider</td><td>23.37</td><td>13.98</td><td>30.01</td><td>1.53</td><td>21.33</td><td>13.56</td><td>27.33</td><td>1.31</td><td>17.43</td><td>12.10</td><td>22.03</td><td>0.95</td><td>19.44</td><td>13.89</td><td>26.4</td><td>1.01</td><td>23.39</td><td>15.64</td><td>30.51</td><td>1.54</td></tr></table>",
981
+ "bbox": [
982
+ 114,
983
+ 80,
984
+ 912,
985
+ 292
986
+ ],
987
+ "page_idx": 6
988
+ },
989
+ {
990
+ "type": "text",
991
+ "text": "Table 2: Comparison of RACE with baselines under four metrics on five programming languages. Met., Rou., and Cide. are short for Meteor, Rouge-L, and Cider, respectively. All results are statistically significant (with $p < 0.01$ ).",
992
+ "bbox": [
993
+ 112,
994
+ 300,
995
+ 882,
996
+ 344
997
+ ],
998
+ "page_idx": 6
999
+ },
1000
+ {
1001
+ "type": "text",
1002
+ "text": "basielines including two IR-based approaches, four end-to-end neural-based approaches, two hybrid approaches, and three pre-train-based approaches in terms of four evaluation metrics. The experimental results are shown in Table 2.",
1003
+ "bbox": [
1004
+ 112,
1005
+ 369,
1006
+ 487,
1007
+ 448
1008
+ ],
1009
+ "page_idx": 6
1010
+ },
1011
+ {
1012
+ "type": "text",
1013
+ "text": "We can see that IR-based models NNGen and Lucene generally outperform end-to-end neural models on average in terms of four metrics. It indicates that retrieved similar results can provide important information for commit message generation. CoRec, which combines the IR-based method and neural method, performs better than NNGen on $\\mathrm{C + + }$ and JavaScript dataset but lower than NNGen on Java, C# and Python. This is because CoRec only leverages the information similar code diff at the inference stage. ATOM, which priorities the generated result of the neural-based model and retrieved result of the IR-based method, also outperforms the IR-based approach Lucene and three neural-based models CommitGen, CoDiSum, and NMTGen. Three pre-trained-based approaches outperform other baselines in terms of four metrics on average. CodeT5-base performs best among them on average. Our approach performs the best among all approaches on 5 programming languages in terms of four metrics. This is because RACE treats the retrieved similar commit message as an exemplar and leverages it to guide the neural network model to generate an accurate commit message.",
1014
+ "bbox": [
1015
+ 115,
1016
+ 451,
1017
+ 487,
1018
+ 851
1019
+ ],
1020
+ "page_idx": 6
1021
+ },
1022
+ {
1023
+ "type": "text",
1024
+ "text": "We also give an example of commit messages generated by our approach and the baselines in Figure 2. IR-based methods NNGen and Lucene can retrieve semantically similar but not completely",
1025
+ "bbox": [
1026
+ 112,
1027
+ 854,
1028
+ 489,
1029
+ 920
1030
+ ],
1031
+ "page_idx": 6
1032
+ },
1033
+ {
1034
+ "type": "text",
1035
+ "text": "correct commit message. Specifically, retrieved commit messages contain not only the important semantic (\"Filter out unavailable databases\") of the current code diff but also the extra information (\"Revert\"). Neural network models generally capture the action of \"add\" but fail to further understand the intend of the code diff. The hybrid model CoRec cannot generate the correct commit message either. Our model treats the retrieved result (Revert \"Filter out unavailable databases\") as an exemplar, and guides the neural network model to generate the correct commit message.",
1036
+ "bbox": [
1037
+ 507,
1038
+ 369,
1039
+ 885,
1040
+ 562
1041
+ ],
1042
+ "page_idx": 6
1043
+ },
1044
+ {
1045
+ "type": "text",
1046
+ "text": "5.2 What is the effectiveness of exemplar guider?",
1047
+ "text_level": 1,
1048
+ "bbox": [
1049
+ 507,
1050
+ 574,
1051
+ 847,
1052
+ 606
1053
+ ],
1054
+ "page_idx": 6
1055
+ },
1056
+ {
1057
+ "type": "text",
1058
+ "text": "We conduct the ablation study to verify the effectiveness of exemplar guider module. Specifically, as shown at the bottom of Figure 1, we directly concatenated the representations of retrieved results and fed them to the decoder to generate commit messages without using the exemplar guider. As shown at the bottom of the Table 2, we can see that the performance of the ablated model (RACE-Guide) degrades in all programming languages in terms of four metrics. It demonstrates the effectiveness of our exemplar guider.",
1059
+ "bbox": [
1060
+ 505,
1061
+ 611,
1062
+ 885,
1063
+ 789
1064
+ ],
1065
+ "page_idx": 6
1066
+ },
1067
+ {
1068
+ "type": "text",
1069
+ "text": "5.3 What is the performance when we retrieve $k$ relevant commits?",
1070
+ "text_level": 1,
1071
+ "bbox": [
1072
+ 507,
1073
+ 801,
1074
+ 823,
1075
+ 832
1076
+ ],
1077
+ "page_idx": 6
1078
+ },
1079
+ {
1080
+ "type": "text",
1081
+ "text": "We also conduct experiments to recall $k$ ( $k = 1, 3, 5, 7, 9$ ) most relevant commits to augment the generation model. Specifically, as shown in Figure 1 the relevance of the code diff is measured by the cosine similarity their semantic vectors obtained by",
1082
+ "bbox": [
1083
+ 507,
1084
+ 839,
1085
+ 884,
1086
+ 919
1087
+ ],
1088
+ "page_idx": 6
1089
+ },
1090
+ {
1091
+ "type": "image",
1092
+ "img_path": "images/df3a27eb2a9292c61967484dfc545d7580e17fa9b238f8e959d09153ba050ce1.jpg",
1093
+ "image_caption": [
1094
+ "Reference Filter out unavailable databases"
1095
+ ],
1096
+ "image_footnote": [],
1097
+ "bbox": [
1098
+ 127,
1099
+ 85,
1100
+ 450,
1101
+ 211
1102
+ ],
1103
+ "page_idx": 7
1104
+ },
1105
+ {
1106
+ "type": "table",
1107
+ "img_path": "images/86d7ea444c3437930db4ea717de8ac1c91148c4ac2641c9b179b752f45bf4d67.jpg",
1108
+ "table_caption": [],
1109
+ "table_footnote": [],
1110
+ "table_body": "<table><tr><td colspan=\"2\">Baselines</td></tr><tr><td>NNGen</td><td>Revert “ Filter out unavailable databases”</td></tr><tr><td>Lucene</td><td>Revert “ filter out unavailable databases ”</td></tr><tr><td>CommitGen</td><td>Merge pull request from mistecrunch / UNK</td></tr><tr><td>NMTGen</td><td>Add &lt;unk&gt; to &lt;unk&gt;</td></tr><tr><td>PtrGNCMsg</td><td>Add support for dashboards in database</td></tr><tr><td>CoRec</td><td>Remove &lt;unk&gt;</td></tr><tr><td>CommitBERT</td><td>Add DatabaseFilter ( )</td></tr><tr><td>CodeT5-small</td><td>[database] Add databasefilter to filter all users</td></tr><tr><td>CodeT5-base</td><td>[hotfix] Adding databasefilter to core.py</td></tr><tr><td>RACE</td><td>Stage I : Revert “ Filter out unavailable databases ”Stage II : Filter out unavailable databases</td></tr></table>",
1111
+ "bbox": [
1112
+ 127,
1113
+ 229,
1114
+ 492,
1115
+ 376
1116
+ ],
1117
+ "page_idx": 7
1118
+ },
1119
+ {
1120
+ "type": "text",
1121
+ "text": "Equation 3. Then retrieved $k$ relevant commits are encoded and fed to the exemplar guider to obtain semantic similarities by Equation 4, respectively. Finally, we weight representations of code diff and similar commit messages according to the semantic similarities and feed them to the decoder to generate commit messages step by step. The experimental results are shown in Figure 3. We can see that the performance is generally stable on different $k$ . In our future work, we will continue to study alternatives on leveraging the information of the retrieved results, e.g., how many commits to retrieve and how to model the corresponding information.",
1122
+ "bbox": [
1123
+ 112,
1124
+ 485,
1125
+ 489,
1126
+ 694
1127
+ ],
1128
+ "page_idx": 7
1129
+ },
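The paper leaves the exact multi-exemplar weighting to Equations 4-5; one plausible reading (our assumption, not a stated design), reusing the `ExemplarGuider` sketch from Section 3.2 and averaging the similarities to down-weight the input diff, is:

```python
import torch

def fuse_k_exemplars(guider, enc_d, exemplar_encs):
    """Fuse k retrieved exemplars with the input diff (a hypothetical variant)."""
    lams = [torch.sigmoid(guider.W_s(torch.cat(
        [enc_d.mean(0), enc_s.mean(0)], dim=-1))) for enc_s in exemplar_encs]
    weighted = [lam * enc_s for lam, enc_s in zip(lams, exemplar_encs)]
    lam_bar = torch.stack(lams).mean()     # assumption: average similarity gates the diff
    return torch.cat([(1 - lam_bar) * enc_d, *weighted], dim=0)
```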
1130
+ {
1131
+ "type": "text",
1132
+ "text": "5.4 Can our framework boost the performance of existing models?",
1133
+ "text_level": 1,
1134
+ "bbox": [
1135
+ 112,
1136
+ 705,
1137
+ 413,
1138
+ 737
1139
+ ],
1140
+ "page_idx": 7
1141
+ },
1142
+ {
1143
+ "type": "text",
1144
+ "text": "We further study whether our framework can enhance the performance of the existing Seq2Seq neural network model in commit message generation. Therefore, we adapt our framework to four Seq2Seq-based models, namely NMTGen (M1), CommitBERT (M2), CodeT5-small (M3) and CodeT5-base (M4). Specifically, we use the encoder of these models as our code diff encoder and obtain the high-dimensional semantic vectors in the retrieval module (Figure 1). In the generation module, we use the encoder of their models",
1145
+ "bbox": [
1146
+ 112,
1147
+ 741,
1148
+ 489,
1149
+ 917
1150
+ ],
1151
+ "page_idx": 7
1152
+ },
1153
+ {
1154
+ "type": "image",
1155
+ "img_path": "images/b4d214e23a6fc1e743eadac9e9166b2bfd94da310cb8e92fa792f7ee376b5cb3.jpg",
1156
+ "image_caption": [
1157
+ "Figure 3: Performance of models augmented with $k$ retrieved relevant commits."
1158
+ ],
1159
+ "image_footnote": [],
1160
+ "bbox": [
1161
+ 515,
1162
+ 85,
1163
+ 884,
1164
+ 223
1165
+ ],
1166
+ "page_idx": 7
1167
+ },
1168
+ {
1169
+ "type": "image",
1170
+ "img_path": "images/1ddc39bfbd5f0f31491e972522e759c290281c890cdf3cad3b1005a0b64b862e.jpg",
1171
+ "image_caption": [
1172
+ "Figure 2: An example of generated commit messages. Reference is the developer-written commit message. The results of our approach in stage I and II are returned by the retrieved module and generation module, respectively.",
1173
+ "Figure 4: Performance gains on four models. The original performance of the models are in yellow and gains from our framework are in green. The percentage value in each bar is the rate of improvement."
1174
+ ],
1175
+ "image_footnote": [],
1176
+ "bbox": [
1177
+ 514,
1178
+ 284,
1179
+ 878,
1180
+ 456
1181
+ ],
1182
+ "page_idx": 7
1183
+ },
1184
+ {
1185
+ "type": "text",
1186
+ "text": "to encode input code diffs, similar code diffs, and similar commit messages. We also use the decoder of their models to generate commit messages.",
1187
+ "bbox": [
1188
+ 507,
1189
+ 552,
1190
+ 882,
1191
+ 601
1192
+ ],
1193
+ "page_idx": 7
1194
+ },
1195
+ {
1196
+ "type": "text",
1197
+ "text": "The experimental results are shown in Figure 4, we present the performance of four original models (yellow) and gains (green) from our framework on five programming languages in terms of $\\mathrm{BLEU}^2$ score. Overall, we can see that our framework can improve the performance of all four neural models in all programming languages. Our framework can improve the performance of the original model from $7\\%$ to $73\\%$ . Especially, after applying our framework, the performance of NMTGen has more than $20\\%$ improvement on all programming languages. In addition, Our framework can boost the performance of NMTGen on BLUE, Meteor, Rouge-L, and Cider by $43\\%$ , $49\\%$ , $33\\%$ , and $61\\%$ on average, boost CommitBERT by $11\\%$ , $9\\%$ , $11\\%$ , and $12\\%$ , boost CodeT5-small by $15\\%$ , $14\\%$ , $11\\%$ , and $26\\%$ , and boost CodeT5-base by $16\\%$ , $10\\%$ ,",
1198
+ "bbox": [
1199
+ 507,
1200
+ 602,
1201
+ 884,
1202
+ 876
1203
+ ],
1204
+ "page_idx": 7
1205
+ },
1206
+ {
1207
+ "type": "page_footnote",
1208
+ "text": "2We show results of other three metrics in Appendix due to space limitation. Our conclusions also hold.",
1209
+ "bbox": [
1210
+ 507,
1211
+ 892,
1212
+ 882,
1213
+ 917
1214
+ ],
1215
+ "page_idx": 7
1216
+ },
1217
+ {
1218
+ "type": "table",
1219
+ "img_path": "images/ebc2cd954cae688528282d3e7f004a8bf9630cd9d2c676f4e0bfc930d03b87f6.jpg",
1220
+ "table_caption": [],
1221
+ "table_footnote": [],
1222
+ "table_body": "<table><tr><td>Model</td><td>Informativeness</td><td>Conciseness</td><td>Expressiveness</td></tr><tr><td>CommitBERT</td><td>1.22 (±1.02)</td><td>2.03 (±1.04)</td><td>2.46 (±0.99)</td></tr><tr><td>NNGen</td><td>1.03 (±1.00)</td><td>1.74 (±1.01)</td><td>2.36 (±0.95)</td></tr><tr><td>NMTGen</td><td>0.74 (±0.92)</td><td>1.56 (±0.93)</td><td>2.11 (±0.94)</td></tr><tr><td>CoRec</td><td>1.05 (±1.09)</td><td>1.80 (±1.05)</td><td>2.43 (±0.88)</td></tr><tr><td>RACE</td><td>2.49 (±1.10)</td><td>3.08 (±0.96)</td><td>2.85 (±0.84)</td></tr></table>",
1223
+ "bbox": [
1224
+ 114,
1225
+ 80,
1226
+ 490,
1227
+ 177
1228
+ ],
1229
+ "page_idx": 8
1230
+ },
1231
+ {
1232
+ "type": "text",
1233
+ "text": "Table 3: Results of human evaluation (standard deviation in parentheses).",
1234
+ "bbox": [
1235
+ 112,
1236
+ 186,
1237
+ 489,
1238
+ 216
1239
+ ],
1240
+ "page_idx": 8
1241
+ },
1242
+ {
1243
+ "type": "text",
1244
+ "text": "$8\\%$ , and $32\\%$",
1245
+ "bbox": [
1246
+ 112,
1247
+ 241,
1248
+ 233,
1249
+ 258
1250
+ ],
1251
+ "page_idx": 8
1252
+ },
1253
+ {
1254
+ "type": "text",
1255
+ "text": "5.5 Human evaluation",
1256
+ "text_level": 1,
1257
+ "bbox": [
1258
+ 112,
1259
+ 273,
1260
+ 305,
1261
+ 288
1262
+ ],
1263
+ "page_idx": 8
1264
+ },
1265
+ {
1266
+ "type": "text",
1267
+ "text": "We also conduct a human evaluation by following the previous works (Moreno et al., 2013; Panichella et al., 2016; Shi et al., 2021b) to evaluate the semantic similarity of the commit message generated by RACE and four baselines NNGen, NMTGen, CommitBERT, and CoRec. The four baselines are IR-based, end-to-end neural network-based, hybrid, and pre-trained-based approaches, respectively. We randomly choose 50 code diff from the testing sets and their commit message generated by four approaches. Finally, we sample $250 < \\text{code diff}$ , commit message> pairs to score. Specifically, we invite 4 volunteers with excellent English ability and more than three years of software development experience. Each volunteer is asked to assign scores from 0 to 4 (the higher the better) to the generated commit message from the three aspects: Informativeness (the amount of important information about the code diff reflected in the commit message), Conciseness (the extend of extraneous information included in the commit message), and Expressiveness (grammaticality and fluency). Each pair is evaluated by four volunteers, and the final score is the average of them.",
1268
+ "bbox": [
1269
+ 112,
1270
+ 294,
1271
+ 489,
1272
+ 680
1273
+ ],
1274
+ "page_idx": 8
1275
+ },
1276
+ {
1277
+ "type": "text",
1278
+ "text": "To verify the agreement among the volunteers, we calculate the Krippendorff's alpha (Hayes and Krippendorff, 2007) and Kendall rank correlation coefficient (Kendall's Tau) values (Kendall, 1945). The value of Krippendorff's alpha is 0.90 and the values of pairwise Kendall's Tau range from 0.73 to 0.95, which indicates that there is a high degree of agreement between the 4 volunteers and that scores are reliable. Table 3 shows the result of human evaluation. RACE is better than other approaches in Informative, Conciseness, and Expressiveness, which means that our approach tends to generate concise and readable commit messages with more",
1279
+ "bbox": [
1280
+ 112,
1281
+ 683,
1282
+ 489,
1283
+ 891
1284
+ ],
1285
+ "page_idx": 8
1286
+ },
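The pairwise Kendall's Tau values among the four volunteers can be computed with scipy; this is an illustrative sketch, and the score layout is hypothetical:

```python
from itertools import combinations
from scipy.stats import kendalltau

def pairwise_kendall(scores):
    # scores[v][i]: rating volunteer v gave to sample i (hypothetical layout)
    taus = []
    for a, b in combinations(range(len(scores)), 2):
        tau, _ = kendalltau(scores[a], scores[b])
        taus.append(tau)
    return min(taus), max(taus)  # the paper reports a 0.73-0.95 range
```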
1287
+ {
1288
+ "type": "text",
1289
+ "text": "comprehensive semantics. In addition, we confirm the superiority of our approach using Wilcoxon signed-rank tests (Wilcoxon et al., 1970) for the human evaluation. Results show that the improvement of RACE over other approaches is statistically significant with all p-values smaller than 0.05 at $95\\%$ confidence level.",
1290
+ "bbox": [
1291
+ 507,
1292
+ 84,
1293
+ 884,
1294
+ 197
1295
+ ],
1296
+ "page_idx": 8
1297
+ },
1298
+ {
1299
+ "type": "text",
1300
+ "text": "6 Conclusion",
1301
+ "text_level": 1,
1302
+ "bbox": [
1303
+ 507,
1304
+ 210,
1305
+ 640,
1306
+ 225
1307
+ ],
1308
+ "page_idx": 8
1309
+ },
1310
+ {
1311
+ "type": "text",
1312
+ "text": "This paper proposes a new retrieval-augmented neural commit message generation method, which treats the retrieved similar commit message as an exemplar and uses it to guide the neural network model to generate an accurate and readable commit message. Extensive experimental results demonstrate that our approach outperforms recent baselines and our framework can significantly boost the performance of four neural network models. Our data, source code and Appendix are available at https://github.com/DeepSoftwareAnalytics/RACE.",
1313
+ "bbox": [
1314
+ 507,
1315
+ 237,
1316
+ 884,
1317
+ 430
1318
+ ],
1319
+ "page_idx": 8
1320
+ },
1321
+ {
1322
+ "type": "text",
1323
+ "text": "Limitations",
1324
+ "text_level": 1,
1325
+ "bbox": [
1326
+ 509,
1327
+ 445,
1328
+ 615,
1329
+ 460
1330
+ ],
1331
+ "page_idx": 8
1332
+ },
1333
+ {
1334
+ "type": "text",
1335
+ "text": "We have identified the following main limitations:",
1336
+ "bbox": [
1337
+ 507,
1338
+ 472,
1339
+ 880,
1340
+ 488
1341
+ ],
1342
+ "page_idx": 8
1343
+ },
1344
+ {
1345
+ "type": "text",
1346
+ "text": "Programming Languages. We only conduct experiments on five programming languages. Although in principle, our framework is not specifically designed for certain languages, models perform differently in different programming languages. Therefore, more experiments are needed to confirm the generality of our framework. In the future, we will extend our study to other programming languages.",
1347
+ "bbox": [
1348
+ 507,
1349
+ 489,
1350
+ 882,
1351
+ 633
1352
+ ],
1353
+ "page_idx": 8
1354
+ },
1355
+ {
1356
+ "type": "text",
1357
+ "text": "Code base. Compared with purely neural network-based models, our method needs a code base to retrieve the most similar example from that. This limitation is inherited from IR-based techniques.",
1358
+ "bbox": [
1359
+ 507,
1360
+ 634,
1361
+ 880,
1362
+ 715
1363
+ ],
1364
+ "page_idx": 8
1365
+ },
1366
+ {
1367
+ "type": "text",
1368
+ "text": "Training Time. In addition to modeling the information of input code diffs, our model needs to retrieve similar diffs and encode them. Thus, our model takes a long time to train (about 35 hours to train the model).",
1369
+ "bbox": [
1370
+ 507,
1371
+ 715,
1372
+ 880,
1373
+ 795
1374
+ ],
1375
+ "page_idx": 8
1376
+ },
1377
+ {
1378
+ "type": "text",
1379
+ "text": "Long Code Diffs. Longer code diffs may contain more complex semantics or behaviors. Long diffs (over 512 tokens) are truncated in our approach and some information would be lost. In our future work, we will design mechanisms to better handle long diffs.",
1380
+ "bbox": [
1381
+ 507,
1382
+ 797,
1383
+ 880,
1384
+ 892
1385
+ ],
1386
+ "page_idx": 8
1387
+ },
1388
+ {
1389
+ "type": "page_footnote",
1390
+ "text": "3The result can be found in 1-4 of Appendix",
1391
+ "bbox": [
1392
+ 134,
1393
+ 903,
1394
+ 408,
1395
+ 917
1396
+ ],
1397
+ "page_idx": 8
1398
+ },
1399
+ {
1400
+ "type": "page_footnote",
1401
+ "text": "Available in Appendix",
1402
+ "bbox": [
1403
+ 529,
1404
+ 903,
1405
+ 678,
1406
+ 917
1407
+ ],
1408
+ "page_idx": 8
1409
+ },
1410
+ {
1411
+ "type": "text",
1412
+ "text": "Acknowledgement",
1413
+ "text_level": 1,
1414
+ "bbox": [
1415
+ 114,
1416
+ 84,
1417
+ 278,
1418
+ 99
1419
+ ],
1420
+ "page_idx": 9
1421
+ },
1422
+ {
1423
+ "type": "text",
1424
+ "text": "We thank reviewers for their valuable comments on this work. This research was supported by National Key R&D Program of China (No. 2017YFA0700800). We would like to thank Jiaqi Guo and Wenchao Gu for their valuable suggestions and feedback during the work discussion process. We also thank the participants of our human evaluation for their time.",
1425
+ "bbox": [
1426
+ 112,
1427
+ 109,
1428
+ 489,
1429
+ 237
1430
+ ],
1431
+ "page_idx": 9
1432
+ },
1433
+ {
1434
+ "type": "text",
1435
+ "text": "References",
1436
+ "text_level": 1,
1437
+ "bbox": [
1438
+ 114,
1439
+ 263,
1440
+ 213,
1441
+ 279
1442
+ ],
1443
+ "page_idx": 9
1444
+ },
1445
+ {
1446
+ "type": "list",
1447
+ "sub_type": "ref_text",
1448
+ "list_items": [
1449
+ "Apache. 2011. Apache lucene.",
1450
+ "Satanjeev Banerjee and Alon Lavie. 2005. METEOR: an automatic metric for MT evaluation with improved correlation with human judgments. In IEEvaluation@ACL.",
1451
+ "Mike Barnett, Christian Bird, João Brunet, and Shuvendu K. Lahiri. 2015. Helping developers help themselves: Automatic decomposition of code review changesets. In ICSE (1), pages 134-144. IEEE Computer Society.",
1452
+ "Raymond P. L. Buse and Westley Weimer. 2010. Automatically documenting program changes. In ASE, pages 33-42. ACM.",
1453
+ "Luis Fernando Cortes-Coy, Mario Linares Vásquez, Jairo Aponte, and Denys Poshyvanyk. 2014. On automatically generating commit messages via summarization of source code changes. In SCAM, pages 275-284. IEEE Computer Society.",
1454
+ "Martin Dias, Alberto Bacchelli, Georgios Gousios, Damien Cassou, and Stephane Ducasse. 2015. Untangling fine-grained code changes. In SANER, pages 341-350. IEEE Computer Society.",
1455
+ "Jinhao Dong, Yiling Lou, Qihao Zhu, Zeyu Sun, Zhilin Li, Wenjie Zhang, and Dan Hao. 2022. Fira: Fine-grained graph-based code change representation for automated commit message generation.",
1456
+ "Lun Du, Xiaozhou Shi, Yanlin Wang, Ensheng Shi, Shi Han, and Dongmei Zhang. 2021. Is a single model enough? mucos: A multi-model ensemble learning approach for semantic code search. In Proceedings of the 30th ACM International Conference on Information & Knowledge Management, pages 2994-2998.",
1457
+ "Robert Dyer, Hoan Anh Nguyen, Hridesh Rajan, and Tien N. Nguyen. 2013. Boa: a language and infrastructure for analyzing ultra-large-scale software repositories. In ICSE, pages 422-431. IEEE Computer Society.",
1458
+ "Zhangyin Feng, Daya Guo, Duyu Tang, Nan Duan, Xiaocheng Feng, Ming Gong, Linjun Shou, Bing Qin, Ting Liu, Daxin Jiang, and Ming Zhou. 2020."
1459
+ ],
1460
+ "bbox": [
1461
+ 115,
1462
+ 287,
1463
+ 489,
1464
+ 917
1465
+ ],
1466
+ "page_idx": 9
1467
+ },
1468
+ {
1469
+ "type": "list",
1470
+ "sub_type": "ref_text",
1471
+ "list_items": [
1472
+ "Codebert: A pre-trained model for programming and natural languages. In EMNLP (Findings), volume EMNLP 2020 of Findings of ACL, pages 1536-1547. Association for Computational Linguistics.",
1473
+ "Xiaodong Gu, Hongyu Zhang, and Sunghun Kim. 2018. Deep code search. In ICSE, pages 933-944. ACM.",
1474
+ "Andrew F Hayes and Klaus Krippendorff. 2007. Answering the call for a standard reliability measure for coding data. Communication methods and measures, 1(1):77-89.",
1475
+ "Yuan Huang, Nan Jia, Hao-Jie Zhou, Xiangping Chen, Zibin Zheng, and Mingdong Tang. 2020. Learning human-written commit messages to document code changes. J. Comput. Sci. Technol., 35(6):1258-1277.",
1476
+ "Yuan Huang, Qiaoyang Zheng, Xiangping Chen, Yingfei Xiong, Zhiyong Liu, and Xiaonan Luo. 2017. Mining version control system for automatically generating commit comment. In ESEM, pages 414-423. IEEE Computer Society.",
1477
+ "Siyuan Jiang, Ameer Armaly, and Collin McMillan. 2017. Automatically generating commit messages from diffs using neural machine translation. In ASE.",
1478
+ "Tae Hwan Jung. 2021. Commitbert: Commit message generation using pre-trained programming language model. In Proceedings of the 1st Workshop on Natural Language Processing for Programming (NLP4Prog 2021), pages 26-33.",
1479
+ "Maurice G Kendall. 1945. The treatment of ties in ranking problems. Biometrika, 33(3):239-251.",
1480
+ "Patrick S. H. Lewis, Ethan Perez, Aleksandra Pik-tus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Kuttler, Mike Lewis, Wen-tau Yih, Tim Roektaschel, Sebastian Riedel, and Douwe Kiela. 2020. Retrieval-augmented generation for knowledge-intensive NLP tasks. In NeurIPS.",
1481
+ "Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out.",
1482
+ "Qin Liu, Zihe Liu, Hongming Zhu, Hongfei Fan, Bowen Du, and Yu Qian. 2019. Generating commit messages from diffs using pointer-generator network. In MSR, pages 299-309. IEEE / ACM.",
1483
+ "Shangqing Liu, Cuiyun Gao, Sen Chen, Lun Yiu Nie, and Yang Liu. 2020. ATOM: commit message generation based on abstract syntax tree and hybrid ranking. TSE, PP:1-1.",
1484
+ "Zhongxin Liu, Xin Xia, Ahmed E. Hassan, David Lo, Zhenchang Xing, and Xinyu Wang. 2018. Neural-machine-translation-based commit message generation: how far are we? In ASE, pages 373-384. ACM.",
1485
+ "Ilya Loshchilov and Frank Hutter. 2019. Decoupled weight decay regularization. In ICLR."
1486
+ ],
1487
+ "bbox": [
1488
+ 510,
1489
+ 85,
1490
+ 884,
1491
+ 917
1492
+ ],
1493
+ "page_idx": 9
1494
+ },
1495
+ {
1496
+ "type": "list",
1497
+ "sub_type": "ref_text",
1498
+ "list_items": [
1499
+ "Pablo Loyola, Edison Marrese-Taylor, and Yutaka Matsuo. 2017. A neural architecture for generating natural language descriptions from source code changes. In ACL (2), pages 287-292. Association for Computational Linguistics.",
1500
+ "Laura Moreno, Jairo Aponte, Giriprasad Sridhara, Andrian Marcus, Lori L. Pollock, and K. Vijay-Shanker. 2013. Automatic generation of natural language summaries for java classes. In ICPC, pages 23-32. IEEE Computer Society.",
1501
+ "Lun Yiu Nie, Cuiyun Gao, Zhicong Zhong, Wai Lam, Yang Liu, and Zenglin Xu. 2021. Coregen: Contextualized code representation learning for commit message generation. Neurocomputing, 459:97-107.",
1502
+ "Sebastiano Panichella, Annibale Panichella, Moritz Beller, Andy Zaidman, and Harald C. Gall. 2016. The impact of test case summaries on bug fixing performance: an empirical investigation. In ICSE, pages 547-558. ACM.",
1503
+ "Sheena Panthaplackel, Pengyu Nie, Milos Gligoric, Junyi Jessy Li, and Raymond J. Mooney. 2020. Learning to update natural language comments based on code changes. In ACL, pages 1853-1868. Association for Computational Linguistics.",
1504
+ "Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In ACL.",
1505
+ "Jinfeng Shen, Xiaobing Sun, Bin Li, Hui Yang, and Jiajun Hu. 2016. On automatic summarization of what and why information in source code changes. In COMPSAC, pages 103-112. IEEE Computer Society.",
1506
+ "Ensheng Shi, Wenchao Gub, Yanlin Wang, Lun Du, Hongyu Zhang, Shi Han, Dongmei Zhang, and Hongbin Sun. 2022a. Enhancing semantic code search with multimodal contrastive learning and soft data augmentation. arXiv preprint arXiv:2204.03293.",
1507
+ "Ensheng Shi, Yanlin Wang, Lun Du, Junjie Chen, Shi Han, Hongyu Zhang, Dongmei Zhang, and Hongbin Sun. 2022b. On the evaluation of neural code summarization. In ICSE.",
1508
+ "Ensheng Shi, Yanlin Wang, Lun Du, Hongyu Zhang, Shi Han, Dongmei Zhang, and Hongbin Sun. 2021a. Cast: Enhancing code summarization with hierarchical splitting and reconstruction of abstract syntax trees. In EMNLP.",
1509
+ "Ensheng Shi, Yanlin Wang, Lun Du, Hongyu Zhang, Shi Han, Dongmei Zhang, and Hongbin Sun. 2021b. CAST: enhancing code summarization with hierarchical splitting and reconstruction of abstract syntax trees. In EMNLP (1), pages 4053-4062. Association for Computational Linguistics."
1510
+ ],
1511
+ "bbox": [
1512
+ 115,
1513
+ 85,
1514
+ 489,
1515
+ 917
1516
+ ],
1517
+ "page_idx": 10
1518
+ },
1519
+ {
1520
+ "type": "list",
1521
+ "sub_type": "ref_text",
1522
+ "list_items": [
1523
+ "Wei Tao, Yanlin Wang, Ensheng Shi, Lun Du, Shi Han, Hongyu Zhang, Dongmei Zhang, and Wenqiang Zhang. 2021. On the evaluation of commit message generation models: An experimental study. In ICSME.",
1524
+ "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In NIPS, pages 5998-6008.",
1525
+ "Ramakrishna Vedantam, C. Lawrence Zitnick, and Devi Parikh. 2015. Cider: Consensus-based image description evaluation. In CVPR.",
1526
+ "Haoye Wang, Xin Xia, David Lo, Qiang He, Xinyu Wang, and John Grundy. 2021a. Context-aware retrieval-based deep commit message generation. ACM Trans. Softw. Eng. Methodol., 30(4):56:1-56:30.",
1527
+ "Yanlin Wang, Lun Du, Ensheng Shi, Yuxuan Hu, Shi Han, and Dongmei Zhang. 2020. Cocogum: Contextual code summarization with multi-relational gnn on ums. Technical report, Microsoft, MSR-TR-2020-16. [Online].",
1528
+ "Yue Wang, Weishi Wang, Shafiq R. Joty, and Steven C. H. Hoi. 2021b. Codet5: Identifier-aware unified pre-trained encoder-decoder models for code understanding and generation. In EMNLP (1), pages 8696-8708. Association for Computational Linguistics.",
1529
+ "Bolin Wei, Yongmin Li, Ge Li, Xin Xia, and Zhi Jin. 2020. Retrieve and refine: exemplar-based neural comment generation. In 2020 35th IEEE/ACM International Conference on Automated Software Engineering (ASE), pages 349-360. IEEE.",
1530
+ "Frank Wilcoxon, SK Katti, and Roberta A Wilcox. 1970. Critical values and probability levels for the wilcoxon rank sum test and the wilcoxon signed rank test. Selected tables in mathematical statistics, 1:171-259.",
1531
+ "Shengbin Xu, Yuan Yao, Feng Xu, Tianxiao Gu, Hanghang Tong, and Jian Lu. 2019. Commit message generation for source code changes. In *IJCAI*, pages 3975-3981. ijcai.org.",
1532
+ "HongChien Yu, Chenyan Xiong, and Jamie Callan. 2021. Improving query representations for dense retrieval with pseudo relevance feedback. In CIKM, pages 3592-3596. ACM.",
1533
+ "Jian Zhang, Xu Wang, Hongyu Zhang, Hailong Sun, and Xudong Liu. 2020. Retrieval-based neural source code summarization. In ICSE."
1534
+ ],
1535
+ "bbox": [
1536
+ 510,
1537
+ 85,
1538
+ 880,
1539
+ 829
1540
+ ],
1541
+ "page_idx": 10
1542
+ }
1543
+ ]
2203.02xxx/2203.02700/3d69ca7f-39a5-4c99-b9df-bcb921fe9d04_model.json ADDED
@@ -0,0 +1,2048 @@
1
+ [
2
+ [
3
+ {
4
+ "type": "aside_text",
5
+ "bbox": [
6
+ 0.023,
7
+ 0.313,
8
+ 0.061,
9
+ 0.726
10
+ ],
11
+ "angle": 270,
12
+ "content": "arXiv:2203.02700v3 [cs.SE] 22 Oct 2022"
13
+ },
14
+ {
15
+ "type": "title",
16
+ "bbox": [
17
+ 0.194,
18
+ 0.08,
19
+ 0.804,
20
+ 0.1
21
+ ],
22
+ "angle": 0,
23
+ "content": "RACE: Retrieval-Augmented Commit Message Generation"
24
+ },
25
+ {
26
+ "type": "text",
27
+ "bbox": [
28
+ 0.293,
29
+ 0.107,
30
+ 0.714,
31
+ 0.124
32
+ ],
33
+ "angle": 0,
34
+ "content": "Ensheng Shi\\(^{a}\\) Yanlin Wang\\(^{b,\\S,\\dagger}\\) Wei Tao\\(^{c}\\) Lun Du\\(^{d}\\)"
35
+ },
36
+ {
37
+ "type": "text",
38
+ "bbox": [
39
+ 0.248,
40
+ 0.124,
41
+ 0.755,
42
+ 0.141
43
+ ],
44
+ "angle": 0,
45
+ "content": "Hongyu Zhang<sup>e</sup> Shi Hand Dongmei Zhang<sup>d</sup> Hongbin Sun<sup>a,§</sup>"
46
+ },
47
+ {
48
+ "type": "text",
49
+ "bbox": [
50
+ 0.15,
51
+ 0.141,
52
+ 0.853,
53
+ 0.158
54
+ ],
55
+ "angle": 0,
56
+ "content": "\\(^{a}\\)Xi'an Jiaotong University \\(^{b}\\)School of Software Engineering, Sun Yat-sen University"
57
+ },
58
+ {
59
+ "type": "text",
60
+ "bbox": [
61
+ 0.201,
62
+ 0.158,
63
+ 0.802,
64
+ 0.174
65
+ ],
66
+ "angle": 0,
67
+ "content": "\\(^{c}\\)Fudan University \\(^{d}\\)Microsoft Research \\(^{e}\\)The University of Newcastle"
68
+ },
69
+ {
70
+ "type": "text",
71
+ "bbox": [
72
+ 0.251,
73
+ 0.176,
74
+ 0.753,
75
+ 0.19
76
+ ],
77
+ "angle": 0,
78
+ "content": "s1530129650@stu.xjtu.edu.cn, hsun@mail.xjtu.edu.cn"
79
+ },
80
+ {
81
+ "type": "text",
82
+ "bbox": [
83
+ 0.258,
84
+ 0.192,
85
+ 0.744,
86
+ 0.207
87
+ ],
88
+ "angle": 0,
89
+ "content": "wangylin36@mail.sysu.edu.cn,wtao18@fudan.edu.cn"
90
+ },
91
+ {
92
+ "type": "text",
93
+ "bbox": [
94
+ 0.301,
95
+ 0.209,
96
+ 0.705,
97
+ 0.223
98
+ ],
99
+ "angle": 0,
100
+ "content": "{lun.du, shihan, dongmeiz}@microsoft.com"
101
+ },
102
+ {
103
+ "type": "text",
104
+ "bbox": [
105
+ 0.355,
106
+ 0.225,
107
+ 0.649,
108
+ 0.24
109
+ ],
110
+ "angle": 0,
111
+ "content": "hongyu.zhang@newcastle.edu.au"
112
+ },
113
+ {
114
+ "type": "title",
115
+ "bbox": [
116
+ 0.261,
117
+ 0.253,
118
+ 0.341,
119
+ 0.267
120
+ ],
121
+ "angle": 0,
122
+ "content": "Abstract"
123
+ },
124
+ {
125
+ "type": "text",
126
+ "bbox": [
127
+ 0.145,
128
+ 0.28,
129
+ 0.461,
130
+ 0.662
131
+ ],
132
+ "angle": 0,
133
+ "content": "Commit messages are important for software development and maintenance. Many neural network-based approaches have been proposed and shown promising results on automatic commit message generation. However, the generated commit messages could be repetitive or redundant. In this paper, we propose RACE, a new retrieval-augmented neural commit message generation method, which treats the retrieved similar commit as an exemplar and leverages it to generate an accurate commit message. As the retrieved commit message may not always accurately describe the content/intent of the current code diff, we also propose an exemplar guider, which learns the semantic similarity between the retrieved and current code diff and then guides the generation of commit message based on the similarity. We conduct extensive experiments on a large public dataset with five programming languages. Experimental results show that RACE can outperform all baselines. Furthermore, RACE can boost the performance of existing Seq2Seq models in commit message generation. Our data and source code are available at https://github.com/DeepSoftwareAnalytics/RACE."
134
+ },
135
+ {
136
+ "type": "title",
137
+ "bbox": [
138
+ 0.115,
139
+ 0.675,
140
+ 0.26,
141
+ 0.689
142
+ ],
143
+ "angle": 0,
144
+ "content": "1 Introduction"
145
+ },
146
+ {
147
+ "type": "text",
148
+ "bbox": [
149
+ 0.113,
150
+ 0.7,
151
+ 0.49,
152
+ 0.86
153
+ ],
154
+ "angle": 0,
155
+ "content": "In software development and maintenance, source code is frequently changed. In practice, code changes are often documented as natural language commit messages, which summarize what (content) the code changes are or why (intent) the code is changed (Buse and Weimer, 2010; Cortes-Coy et al., 2014). High-quality commit messages are essential to help developers understand the evolution of software without diving into implementation details, which can save a large amount of"
156
+ },
157
+ {
158
+ "type": "text",
159
+ "bbox": [
160
+ 0.508,
161
+ 0.254,
162
+ 0.885,
163
+ 0.463
164
+ ],
165
+ "angle": 0,
166
+ "content": "time and effort in software development and maintenance (Dias et al., 2015; Barnett et al., 2015). However, it is difficult to write high-quality commit messages due to lack of time, clear motivation, or experienced skills. Even for seasoned developers, it still poses a considerable amount of extra workload to write a concise and informative commit message for massive code changes (Nie et al., 2021). It is also reported that around \\(14\\%\\) of commit messages over 23,000 projects in SourceForge are left empty (Dyer et al., 2013). Thus, automatically generating commit messages becomes an important task."
167
+ },
168
+ {
169
+ "type": "text",
170
+ "bbox": [
171
+ 0.508,
172
+ 0.469,
173
+ 0.885,
174
+ 0.92
175
+ ],
176
+ "angle": 0,
177
+ "content": "Over the years, many approaches have been proposed to automatically generate commit messages. Early studies (Shen et al., 2016; Cortes-Coy et al., 2014) are mainly based on predefined rules or templates, which may not cover all situations or comprehensively infer the intentions behind code changes. Later, some studies (Liu et al., 2018; Huang et al., 2017, 2020) adopt information retrieval (IR) techniques to reuse commit messages of similar code changes. They can take advantage of similar examples, but the reused commit messages might not correctly describe the content/intent of the current code change. Recently, some Seq2Seq-based neural network models (Loyola et al., 2017; Jiang et al., 2017; Xu et al., 2019; Liu et al., 2019; Jung, 2021) have been proposed to understand code diffs and generate the high-quality commit messages. These approaches show promising performance, but they tend to generate high-frequency and repetitive tokens and the generated commit messages have the problem of insufficient information and poor readability (Wang et al., 2021a; Liu et al., 2018). Some studies (Liu et al., 2020; Wang et al., 2021a) also explore the combination of neural-based and IR-based techniques. Liu et al. (2020) propose an approach to rank the retrieved commit message (obtained by a simple IR-based model) and the generated commit message (ob-"
178
+ },
179
+ {
180
+ "type": "page_footnote",
181
+ "bbox": [
182
+ 0.114,
183
+ 0.868,
184
+ 0.488,
185
+ 0.892
186
+ ],
187
+ "angle": 0,
188
+ "content": "\\(^{\\S}\\) Yanlin Wang and Hongbin Sun are the corresponding authors."
189
+ },
190
+ {
191
+ "type": "page_footnote",
192
+ "bbox": [
193
+ 0.114,
194
+ 0.893,
195
+ 0.487,
196
+ 0.918
197
+ ],
198
+ "angle": 0,
199
+ "content": "\\(^{\\dagger}\\)Work done during the author's employment at Microsoft Research Asia"
200
+ },
201
+ {
202
+ "type": "list",
203
+ "bbox": [
204
+ 0.114,
205
+ 0.868,
206
+ 0.488,
207
+ 0.918
208
+ ],
209
+ "angle": 0,
210
+ "content": null
211
+ }
212
+ ],
213
+ [
214
+ {
215
+ "type": "text",
216
+ "bbox": [
217
+ 0.113,
218
+ 0.085,
219
+ 0.49,
220
+ 0.198
221
+ ],
222
+ "angle": 0,
223
+ "content": "tained by a neural network model). Wang et al. (2021a) propose to use the similar code diff as auxiliary information in the inference stage, while the model is not trained to learn how to effectively utilize the information of retrieval results. Therefore, both of them fail to take advantage of the information of retrieved similar results well."
224
+ },
225
+ {
226
+ "type": "text",
227
+ "bbox": [
228
+ 0.117,
229
+ 0.205,
230
+ 0.492,
231
+ 0.64
232
+ ],
233
+ "angle": 0,
234
+ "content": "In this paper, we propose a novel model RACE (Retrieval-Augmented Commit mEssay generation), which retrieves a similar commit message as an exemplar, guides the neural model to learn the content of the code diff and the intent behind the code diff, and generates the readable and informative commit message. The key idea of our approach is retrieval and augmentation. Specifically, we first train a code diff encoder to learn the semantics of code diffs and encode the code diff into high-dimensional semantic space. Then, we retrieve the semantically similar code diff paired with the commit message on a large parallel corpus based on the similarity measured by vectors' distance. Next, we treat the similar commit message as an exemplar and leverage it to guide the neural-based models to generate an accurate commit message. However, the retrieved commit messages may not accurately describe the content/intent of current code diffs and may even contain wrong or irrelevant information. To avoid the retrieved samples dominating the processing of commit message generation, we propose an exemplar guider, which first learns the semantic similarity between the retrieved and current code diff and then leverages the information of the exemplar based on the learned similarity to guide the commit message generation."
235
+ },
236
+ {
237
+ "type": "text",
238
+ "bbox": [
239
+ 0.113,
240
+ 0.646,
241
+ 0.492,
242
+ 0.921
243
+ ],
244
+ "angle": 0,
245
+ "content": "To evaluate the effectiveness of RACE, we conduct experiments on a large-scale dataset MCMD (Tao et al., 2021) with five programming language (Java, C#, C++, Python and JavaScript) and compare RACE with 11 state-of-the-art approaches. Experimental results show that: (1) RACE significantly outperforms existing state-of-the-art approaches in terms of four metrics (BLUE, Meteor, Rouge-L and Cider) on the commit message generation. (2) RACE can boost the performance of existing Seq2Seq models in commit message generation. For example, it can improve the performance of NMTGen (Loyola et al., 2017), CommitBERT (Jung, 2021), CodeT5-small (Wang et al., 2021b) and CodeT5-base (Wang et al., 2021b) by \\(43\\%\\), \\(11\\%\\), \\(15\\%\\), and \\(16\\%\\) on average in terms of BLEU, respectively. In addition,"
246
+ },
247
+ {
248
+ "type": "text",
249
+ "bbox": [
250
+ 0.509,
251
+ 0.085,
252
+ 0.882,
253
+ 0.116
254
+ ],
255
+ "angle": 0,
256
+ "content": "we also conduct human evaluation to confirm the effectiveness of RACE."
257
+ },
258
+ {
259
+ "type": "text",
260
+ "bbox": [
261
+ 0.509,
262
+ 0.117,
263
+ 0.882,
264
+ 0.149
265
+ ],
266
+ "angle": 0,
267
+ "content": "We summarize the main contributions of this paper as follows:"
268
+ },
269
+ {
270
+ "type": "text",
271
+ "bbox": [
272
+ 0.532,
273
+ 0.159,
274
+ 0.884,
275
+ 0.255
276
+ ],
277
+ "angle": 0,
278
+ "content": "- We propose a retrieval-augmented neural commit message generation model, which treats the retrieved similar commit as an exemplar and leverages it to guide neural network model to generate informative and readable commit messages."
279
+ },
280
+ {
281
+ "type": "text",
282
+ "bbox": [
283
+ 0.532,
284
+ 0.265,
285
+ 0.885,
286
+ 0.344
287
+ ],
288
+ "angle": 0,
289
+ "content": "- We apply our retrieval-augmented framework to four existing neural network-based approaches (NMTGen, CommitBERT, CodeT5-small, and CodeT5-base) and greatly boost their performance."
290
+ },
291
+ {
292
+ "type": "text",
293
+ "bbox": [
294
+ 0.532,
295
+ 0.355,
296
+ 0.884,
297
+ 0.435
298
+ ],
299
+ "angle": 0,
300
+ "content": "- We perform extensive experiments including human evaluation on a large multi-programming-language dataset and the results confirm the effectiveness of our approach over state-of-the-art approaches."
301
+ },
302
+ {
303
+ "type": "list",
304
+ "bbox": [
305
+ 0.532,
306
+ 0.159,
307
+ 0.885,
308
+ 0.435
309
+ ],
310
+ "angle": 0,
311
+ "content": null
312
+ },
313
+ {
314
+ "type": "title",
315
+ "bbox": [
316
+ 0.51,
317
+ 0.444,
318
+ 0.667,
319
+ 0.459
320
+ ],
321
+ "angle": 0,
322
+ "content": "2 Related Work"
323
+ },
324
+ {
325
+ "type": "text",
326
+ "bbox": [
327
+ 0.508,
328
+ 0.469,
329
+ 0.885,
330
+ 0.629
331
+ ],
332
+ "angle": 0,
333
+ "content": "Code intelligence, which leverages machine learning especially deep learning-based method to understand source code, is an emerging topic and has obtained the promising results in many software engineering tasks, such as code summarization (Zhang et al., 2020; Shi et al., 2021a, 2022b; Wang et al., 2020) and code search (Gu et al., 2018; Du et al., 2021; Shi et al., 2022a). Among them, commit message generation plays an important role in the software evolution."
334
+ },
335
+ {
336
+ "type": "text",
337
+ "bbox": [
338
+ 0.508,
339
+ 0.63,
340
+ 0.884,
341
+ 0.887
342
+ ],
343
+ "angle": 0,
344
+ "content": "In early work, information retrieval techniques are introduced to commit message generation (Liu et al., 2018; Huang et al., 2017, 2020). For instance, ChangeDoc (Huang et al., 2020) retrieves the most similar commits according to the syntax or semantics in the code diff and reuses commit messages of similar code diffs. NNGen (Liu et al., 2018) is a simple yet effective retrieval-based method using the nearest neighbor algorithm. It firstly recalls the top-k similar code diffs in the parallel corpus based on cosine similarity between bag-of-words vectors of code diffs. Then select the most similar result based on BLEU scores between each of them (topk results) and the input code diff. These approaches can reuse similar examples and the reused commit messages are usually readable and understandable."
345
+ },
346
+ {
347
+ "type": "text",
348
+ "bbox": [
349
+ 0.509,
350
+ 0.888,
351
+ 0.884,
352
+ 0.919
353
+ ],
354
+ "angle": 0,
355
+ "content": "Recently, many neural-based approaches (Loyola et al., 2017; Jiang et al., 2017; Xu et al., 2019;"
356
+ }
357
+ ],
358
+ [
359
+ {
360
+ "type": "text",
361
+ "bbox": [
362
+ 0.117,
363
+ 0.085,
364
+ 0.49,
365
+ 0.455
366
+ ],
367
+ "angle": 0,
368
+ "content": "Liu et al., 2019, 2020; Jung, 2021; Dong et al., 2022; Nie et al., 2021; Wang et al., 2021a) have been used to learn the semantic of code diffs and translate them into commit messages. For example, NMTGen (Loyola et al., 2017) and CommitGen (Jiang et al., 2017) treat the code diffs as plain texts and adopt the Seq2Seq neural network with different attention mechanisms to translate them into commit messages. CoDiSum (Xu et al., 2019) extracts both code structure and code semantics from code diffs and jointly models them with a multi-layer bidirectional GRU to better learn the representations of code diffs. PtrGNCMsg (Liu et al., 2019) incorporates the pointer-generator network into the Seq2Seq model to handle out-of-vocabulary (OOV) words. CommitBERT leverage CodeBERT (Feng et al., 2020), a pre-trained language model for source code, to learn the semantic representations of code diffs and adopt a Transformer-based (Vaswani et al., 2017) decoder to generate the commit message. These approaches show promising results on the generation of commit messages."
369
+ },
370
+ {
371
+ "type": "text",
372
+ "bbox": [
373
+ 0.117,
374
+ 0.469,
375
+ 0.488,
376
+ 0.919
377
+ ],
378
+ "angle": 0,
379
+ "content": "Recently, introducing retrieved relevant results into the training process has been found useful in most generation tasks (Lewis et al., 2020; Yu et al., 2021; Wei et al., 2020). Some studies (Liu et al., 2020; Wang et al., 2021a) also explore the combination of neural-based models and IR-based techniques to generate commit messages. ATOM (Liu et al., 2020) ensembles the neural-based model and the IR-based technique through the hybrid ranking. Specifically, it uses BiLSTM to encode ASTs paths extracted from ASTs of code diffs and adopt a decoder to generate commit messages. It also uses TF-IDF technique to represent code diffs as vectors and retrieves the most similar commit message based on cosine similarity. The generated and retrieved commit messages are finally prioritized by a hybrid ranking module. CoRec (Wang et al., 2021a) is also a hybrid model and only considers the retrieved result during the inference. Specifically, at the training stage, they use an encoder-decoder neural model to encode the input code diffs by an encoder and generate commit messages by a decoder. At the inference stage, they first use the trained encoder to retrieve the most similar code diff from the training set. Then they reuse a trained encoder-decoder to encode the input and retrieved code diff, combine the probability distributions (obtained by two decoders) of each word, and generate"
380
+ },
381
+ {
382
+ "type": "text",
383
+ "bbox": [
384
+ 0.512,
385
+ 0.085,
386
+ 0.885,
387
+ 0.23
388
+ ],
389
+ "angle": 0,
390
+ "content": "the final commit message step by step. In summary, ATOM does not learn to refine the retrieved results or the generated results, and CoRec is not trained to utilize the information of retrieval results. Therefore, both of them fail to take full advantage of the retrieved similar results. In this paper, we treat the retrieved similar commit as an exemplar and train the model to leverage the exemplar to enhance commit message generation."
391
+ },
392
+ {
393
+ "type": "title",
394
+ "bbox": [
395
+ 0.512,
396
+ 0.244,
397
+ 0.715,
398
+ 0.261
399
+ ],
400
+ "angle": 0,
401
+ "content": "3 Proposed Approach"
402
+ },
403
+ {
404
+ "type": "text",
405
+ "bbox": [
406
+ 0.512,
407
+ 0.271,
408
+ 0.885,
409
+ 0.448
410
+ ],
411
+ "angle": 0,
412
+ "content": "The overview of RACE is shown in Figure 1. It includes two modules: retrieval module and generation module. Specifically, RACE firstly retrieves the most semantically similar code diff paired with the commit message from the large parallel training corpus. The semantic similarity between two code diffs is measured by the cosine similarity of vectors obtained by a code diff encoder. Next, RACE treats the retrieved commit message as an example and uses it to guide the neural network to generate an understandable and concise commit message."
413
+ },
414
+ {
415
+ "type": "title",
416
+ "bbox": [
417
+ 0.512,
418
+ 0.461,
419
+ 0.691,
420
+ 0.476
421
+ ],
422
+ "angle": 0,
423
+ "content": "3.1 Retrieval module"
424
+ },
425
+ {
426
+ "type": "text",
427
+ "bbox": [
428
+ 0.512,
429
+ 0.484,
430
+ 0.885,
431
+ 0.675
432
+ ],
433
+ "angle": 0,
434
+ "content": "In this module, we aim to retrieve the most semantically similar result. Specifically, we first train an encoder-decoder neural network on the large commit message generation dataset. The encoder is used to learn the semantics of code diffs and encode code diffs into a high-dimension semantic space. Then we retrieve the most semantically similar code diff paired with the commit message from the large parallel training corpus. The semantic similarity between two code diffs is measured by the cosine similarity of vectors obtained by a well-trained code diff encoder."
435
+ },
436
+ {
437
+ "type": "text",
438
+ "bbox": [
439
+ 0.512,
440
+ 0.678,
441
+ 0.885,
442
+ 0.87
443
+ ],
444
+ "angle": 0,
445
+ "content": "Recently, encoder-decoder neural network models (Loyola et al., 2017; Jiang et al., 2017; Jung, 2021), which leverage an encoder to learn the semantic of code diff and employ a decoder to generate the commit message, have shown their superiority in the understanding of code offs and commit messages generation. To enable the code diff encoder to understand the semantics of code offs, we train it with a commit message generator on a large commit message generation dataset, which consists of about 0.9 million <code diff, commit message> pairs."
446
+ },
447
+ {
448
+ "type": "text",
449
+ "bbox": [
450
+ 0.512,
451
+ 0.872,
452
+ 0.884,
453
+ 0.919
454
+ ],
455
+ "angle": 0,
456
+ "content": "To capture long-range dependencies (e.g. a variable is initialized before the changed line) and more contextual information of code diffs, we em"
457
+ }
458
+ ],
459
+ [
460
+ {
461
+ "type": "image",
462
+ "bbox": [
463
+ 0.197,
464
+ 0.085,
465
+ 0.805,
466
+ 0.187
467
+ ],
468
+ "angle": 0,
469
+ "content": null
470
+ },
471
+ {
472
+ "type": "image",
473
+ "bbox": [
474
+ 0.198,
475
+ 0.19,
476
+ 0.805,
477
+ 0.315
478
+ ],
479
+ "angle": 0,
480
+ "content": null
481
+ },
482
+ {
483
+ "type": "image_caption",
484
+ "bbox": [
485
+ 0.113,
486
+ 0.326,
487
+ 0.885,
488
+ 0.37
489
+ ],
490
+ "angle": 0,
491
+ "content": "Figure 1: The architecture of RACE. It includes two modules: retrieval module and generation module. The retrieval module is used to retrieve the most similar code diff and commit message. The generation module leverages the retrieved result to enhance the performance of neural network models."
492
+ },
493
+ {
494
+ "type": "text",
495
+ "bbox": [
496
+ 0.113,
497
+ 0.394,
498
+ 0.49,
499
+ 0.62
500
+ ],
501
+ "angle": 0,
502
+ "content": "ploy a Transformer-based encoder to learn the semantic representations of input code diffs. As shown in Figure 1, a Transformer-based encoder is stacked with multiple encoder layers. Each layer consists of four parts, namely, a multi-head self-attention module, a relative position embedding module, a feed forward network (FFN) and an add & norm module. In \\( b \\)-th attention head, the input \\( \\mathbf{X}^{\\mathrm{b}} = (\\mathbf{x}_1^{\\mathrm{b}},\\mathbf{x}_2^{\\mathrm{b}},\\dots,\\mathbf{x}_1^{\\mathrm{b}}) \\) (where \\( \\mathbf{X}^{\\mathrm{b}} = \\mathbf{X}[(b - 1)*head_{dim}:b*head_{dim}] \\), \\( \\mathbf{X} \\) is the sequence of code diff embedding, \\( head_{dim} \\) is the dimension of each head and \\( l \\) is the input sequence length.) is transformed to \\( (\\mathbf{Head}^{b} = \\mathbf{head}_{1}^{\\mathrm{b}},\\mathbf{head}_{2}^{\\mathrm{b}},\\dots,\\mathbf{head}_{l}^{\\mathrm{b}}) \\) by:"
503
+ },
504
+ {
505
+ "type": "equation",
506
+ "bbox": [
507
+ 0.175,
508
+ 0.627,
509
+ 0.486,
510
+ 0.667
511
+ ],
512
+ "angle": 0,
513
+ "content": "\\[\n\\mathbf {h e a d} _ {\\mathrm {i}} ^ {\\mathrm {b}} = \\sum_ {j = 1} ^ {l} \\alpha_ {i j} \\left(\\mathbf {W} _ {\\mathbf {V}} \\mathbf {x} _ {\\mathrm {j}} ^ {\\mathrm {b}} + \\mathbf {p} _ {\\mathrm {i j}} ^ {\\mathbf {V}}\\right) \\tag {1}\n\\]"
514
+ },
515
+ {
516
+ "type": "equation",
517
+ "bbox": [
518
+ 0.205,
519
+ 0.665,
520
+ 0.425,
521
+ 0.696
522
+ ],
523
+ "angle": 0,
524
+ "content": "\\[\ne _ {i j} = \\frac {\\left(\\mathbf {W _ {Q}} \\mathbf {x _ {i} ^ {b}}\\right) ^ {T} \\left(\\mathbf {W _ {K}} \\mathbf {x _ {j} ^ {b}} + \\mathbf {p _ {i j} ^ {K}}\\right)}{\\sqrt {d _ {k}}}\n\\]"
525
+ },
526
+ {
527
+ "type": "text",
528
+ "bbox": [
529
+ 0.113,
530
+ 0.701,
531
+ 0.489,
532
+ 0.785
533
+ ],
534
+ "angle": 0,
535
+ "content": "where \\(\\alpha_{ij} = \\frac{\\exp e_{ij}}{\\sum_{k=1}^{n}\\exp e_{ik}}\\), \\(\\mathbf{W}_{\\mathbf{Q}}\\), \\(\\mathbf{W}_{\\mathbf{K}}\\) and \\(\\mathbf{W}_{\\mathbf{V}}\\) are learnable matrix for queries, keys and values. \\(d_k\\) is the dimension of queries and keys; \\(\\mathbf{p}_{\\mathbf{ij}}^{\\mathbf{K}}\\) and \\(\\mathbf{p}_{\\mathbf{ij}}^{\\mathbf{V}}\\) are relative positional representations for positions \\(i\\) and \\(j\\)."
536
+ },
537
+ {
538
+ "type": "text",
539
+ "bbox": [
540
+ 0.113,
541
+ 0.786,
542
+ 0.49,
543
+ 0.864
544
+ ],
545
+ "angle": 0,
546
+ "content": "The outputs of all heads are concatenated and then fed to the FFN modules which is a multi-layer perception. The add & norm operation are employed after the multi-head attention and FFN modules. The calculations are as follows:"
547
+ },
548
+ {
549
+ "type": "equation",
550
+ "bbox": [
551
+ 0.15,
552
+ 0.87,
553
+ 0.488,
554
+ 0.906
555
+ ],
556
+ "angle": 0,
557
+ "content": "\\[\n\\begin{array}{l} \\mathbf {H e a d} = C o n c a t \\left(\\mathbf {H e a d} ^ {\\mathbf {1}}, \\mathbf {H e a d} ^ {\\mathbf {d}}, \\mathbf {H e a d} ^ {\\mathbf {B}}\\right) \\\\ \\mathbf {H i d} = a d d \\& n o r m (\\mathbf {H e a d}, \\mathbf {X}) \\end{array} \\tag {2}\n\\]"
558
+ },
559
+ {
560
+ "type": "equation",
561
+ "bbox": [
562
+ 0.163,
563
+ 0.908,
564
+ 0.432,
565
+ 0.922
566
+ ],
567
+ "angle": 0,
568
+ "content": "\\[\n\\mathbf {E n c} = a d d \\& n o r m (\\mathbf {F F N} (\\mathbf {H i d}), \\mathbf {H i d})\n\\]"
569
+ },
570
+ {
571
+ "type": "text",
572
+ "bbox": [
573
+ 0.508,
574
+ 0.394,
575
+ 0.885,
576
+ 0.523
577
+ ],
578
+ "angle": 0,
579
+ "content": "where \\(add\\&norm(\\mathbf{A_1},\\mathbf{A_2}) = LN(\\mathbf{A_1} + \\mathbf{A_2})\\) \\(B\\) is the number of heads and \\(LN\\) is layer normalization. The final output of encoder is sent to Transformer-based decoder to generate the commit message step by step. We use cross-entropy as loss function and adopt AdamW (Loshchilov and Hutter, 2019) to optimize the parameters of the code diff encoder and the decoder at the top of Figure 1."
580
+ },
581
+ {
582
+ "type": "text",
583
+ "bbox": [
584
+ 0.508,
585
+ 0.524,
586
+ 0.884,
587
+ 0.621
588
+ ],
589
+ "angle": 0,
590
+ "content": "Next, the retrieval module is used to retrieve the most similar result from a large parallel training corpus. We firstly use the above code diff encoder to map code diffs into a high-dimensional latent space and retrieve the most similar example based on cosine similarity."
591
+ },
592
+ {
593
+ "type": "text",
594
+ "bbox": [
595
+ 0.508,
596
+ 0.623,
597
+ 0.884,
598
+ 0.799
599
+ ],
600
+ "angle": 0,
601
+ "content": "Specifically, after being trained in the commit message generation dataset, the code diff encoder can capture the semantic of code diff well. We use well-trained code diff encoder following a mean-pooling operation to map the code diff into a high dimensional space. Mathematically, given the input code diff embedding \\(\\mathbf{X} = (\\mathbf{x}_1,\\mathbf{x}_2,\\dots,\\mathbf{x}_l)\\), the code diff encoder can transformed them to \\(\\mathbf{Enc} = (\\mathbf{enc}_1,\\mathbf{enc}_2,\\dots,\\mathbf{enc}_l)\\). Then we obtain the semantic vector of the code diff by pooling operation:"
602
+ },
603
+ {
604
+ "type": "equation",
605
+ "bbox": [
606
+ 0.52,
607
+ 0.813,
608
+ 0.881,
609
+ 0.838
610
+ ],
611
+ "angle": 0,
612
+ "content": "\\[\n\\operatorname {v e c} = \\text {p o o l i n g} (\\mathbf {E n c}) = \\text {m e a n} \\left(\\mathbf {e n c} _ {1}, \\mathbf {e n c} _ {2}, \\dots , \\mathbf {e n c} _ {1}\\right) \\tag {3}\n\\]"
613
+ },
614
+ {
615
+ "type": "text",
616
+ "bbox": [
617
+ 0.508,
618
+ 0.84,
619
+ 0.884,
620
+ 0.92
621
+ ],
622
+ "angle": 0,
623
+ "content": "where mean is a dimension-wise average operation. We measure the similarity of two code diffs by cosine similarity of their semantic vectors and retrieve the most similar code diff paired with the commit message from the parallel training corpus. For each"
624
+ }
625
+ ],
626
+ [
627
+ {
628
+ "type": "text",
629
+ "bbox": [
630
+ 0.113,
631
+ 0.085,
632
+ 0.49,
633
+ 0.149
634
+ ],
635
+ "angle": 0,
636
+ "content": "code diff, we return the first-ranked similar result. But, for the code diff in the training dataset, we return the second-ranked similar result because the first-ranked result is itself."
637
+ },
638
+ {
639
+ "type": "title",
640
+ "bbox": [
641
+ 0.114,
642
+ 0.163,
643
+ 0.314,
644
+ 0.177
645
+ ],
646
+ "angle": 0,
647
+ "content": "3.2 Generation module"
648
+ },
649
+ {
650
+ "type": "text",
651
+ "bbox": [
652
+ 0.113,
653
+ 0.184,
654
+ 0.49,
655
+ 0.295
656
+ ],
657
+ "angle": 0,
658
+ "content": "As shown at the bottom of Figure 1, in the generation module, we treat the retrieved commit message as an exemplar and leverage it to guide the neural network model to generate an accurate commit message. Our generation module consists of three components: three encoders, an exemplar guider, and a decoder."
659
+ },
660
+ {
661
+ "type": "text",
662
+ "bbox": [
663
+ 0.113,
664
+ 0.298,
665
+ 0.49,
666
+ 0.442
667
+ ],
668
+ "angle": 0,
669
+ "content": "First, following Equation 1, 2, three Transformer-based encoders are adopted to obtain the representations of the input code diff \\((\\mathbf{Enc}^{\\mathbf{d}} = \\mathbf{enc}_1^d,\\mathbf{enc}_2^d,\\dots,\\mathbf{enc}_l^d)\\), the similar code diff \\((\\mathbf{Enc}^{\\mathbf{s}} = \\mathbf{enc}_1^s,\\mathbf{enc}_2^s,\\dots,\\mathbf{enc}_m^s)\\), and similar commit message \\((\\mathbf{Enc}^{\\mathbf{m}} = \\mathbf{enc}_1^m,\\mathbf{enc}_2^m,\\dots,\\mathbf{enc}_n^m)\\) (step ① in Figure 1), where subscripts \\(l,m,n\\) are the length of the input code diff, the similar code diff, and the similar commit message, respectively."
670
+ },
671
+ {
672
+ "type": "text",
673
+ "bbox": [
674
+ 0.113,
675
+ 0.444,
676
+ 0.49,
677
+ 0.653
678
+ ],
679
+ "angle": 0,
680
+ "content": "Second, since the retrieved similar commit messages may not always accurately describe the content/ intent of the input code diffs even express totally wrong or irrelevant semantics. Therefore, we propose an exemplar guider which first learns the semantic similarity between the retrieved and input code diff and then leverages the information of the similar commit messages based on the learned similarity to guide the commit message generation (step ②). Mathematically, exemplar guider calculate the semantic similarity \\((\\lambda)\\) between the input code diff and the similar code diff based on their representation \\(\\mathbf{Enc}_l^d\\) and \\(\\mathbf{Enc}_m^s\\) (step ② and ③):"
681
+ },
682
+ {
683
+ "type": "equation",
684
+ "bbox": [
685
+ 0.165,
686
+ 0.667,
687
+ 0.488,
688
+ 0.683
689
+ ],
690
+ "angle": 0,
691
+ "content": "\\[\n\\lambda = \\sigma \\left(\\mathbf {W} _ {\\mathbf {s}} \\left[ m e a n \\left(\\mathbf {E} \\mathbf {n c} ^ {d}\\right), m e a n \\left(\\mathbf {E} \\mathbf {n c} ^ {s}\\right) \\right]\\right) \\tag {4}\n\\]"
692
+ },
693
+ {
694
+ "type": "text",
695
+ "bbox": [
696
+ 0.113,
697
+ 0.697,
698
+ 0.489,
699
+ 0.745
700
+ ],
701
+ "angle": 0,
702
+ "content": "where \\(\\sigma\\) is the sigmoid activation function, \\(\\mathbf{W}_{\\mathrm{s}}\\) is a learnable matrix, and mean is a dimension-wise average operation."
703
+ },
704
+ {
705
+ "type": "text",
706
+ "bbox": [
707
+ 0.113,
708
+ 0.747,
709
+ 0.49,
710
+ 0.811
711
+ ],
712
+ "angle": 0,
713
+ "content": "Third, we weight representations of code diff and similar commit message by \\(1 - \\lambda\\) and \\(\\lambda\\), respectively and then concatenate them to obtain the final input encoding."
714
+ },
715
+ {
716
+ "type": "equation",
717
+ "bbox": [
718
+ 0.172,
719
+ 0.823,
720
+ 0.488,
721
+ 0.841
722
+ ],
723
+ "angle": 0,
724
+ "content": "\\[\n\\mathbf {E n c} ^ {\\mathrm {d m}} = \\left[ (1 - \\lambda) * \\mathbf {E n c} ^ {\\mathrm {d}}: \\lambda * \\mathbf {E n c} ^ {\\mathrm {s}} \\right] \\tag {5}\n\\]"
725
+ },
726
+ {
727
+ "type": "text",
728
+ "bbox": [
729
+ 0.113,
730
+ 0.855,
731
+ 0.49,
732
+ 0.919
733
+ ],
734
+ "angle": 0,
735
+ "content": "Finally, we use a Transformer-based decoder to generate the commit message. The decoder consists of multiply decoder layer and each layers includes a masked multi-head self-attention, a"
736
+ },
737
+ {
738
+ "type": "table",
739
+ "bbox": [
740
+ 0.549,
741
+ 0.082,
742
+ 0.842,
743
+ 0.172
744
+ ],
745
+ "angle": 0,
746
+ "content": "<table><tr><td>Language</td><td>Training</td><td>Validation</td><td>Test</td></tr><tr><td>Java</td><td>160,018</td><td>19,825</td><td>20,159</td></tr><tr><td>C#</td><td>149,907</td><td>18,688</td><td>18,702</td></tr><tr><td>C++</td><td>160,948</td><td>20,000</td><td>20,141</td></tr><tr><td>Python</td><td>206,777</td><td>25,912</td><td>25,837</td></tr><tr><td>JavaScript</td><td>197,529</td><td>24,899</td><td>24,773</td></tr></table>"
747
+ },
748
+ {
749
+ "type": "table_caption",
750
+ "bbox": [
751
+ 0.548,
752
+ 0.181,
753
+ 0.843,
754
+ 0.194
755
+ ],
756
+ "angle": 0,
757
+ "content": "Table 1: Statistics of the evaluation dataset."
758
+ },
759
+ {
760
+ "type": "text",
761
+ "bbox": [
762
+ 0.508,
763
+ 0.22,
764
+ 0.885,
765
+ 0.412
766
+ ],
767
+ "angle": 0,
768
+ "content": "multi-head cross-attention module, a FFN module and an add & norm module. Different from multi-head self-attention module in the encoder, in terms of one token, masked multi-head self-attention in the decoder can only attend to the previous tokens rather than the before and after context. In \\( b \\)-th cross-attention layer, the input encoding \\( (\\mathbf{Enc}^{\\mathrm{dm}} = (\\mathbf{enc}_1^{\\mathrm{dm}}, \\mathbf{enc}_2^{\\mathrm{dm}}, \\dots, \\mathbf{enc}_{\\mathrm{l + m}}^{\\mathrm{dm}})) \\) is queried by the output of the preceding commit message representations \\( \\mathbf{Msg} = (\\mathbf{msg}_1, \\dots, \\mathbf{msg}_t) \\) obtained by masked multi-head self-attention module."
769
+ },
770
+ {
771
+ "type": "equation",
772
+ "bbox": [
773
+ 0.553,
774
+ 0.42,
775
+ 0.882,
776
+ 0.459
777
+ ],
778
+ "angle": 0,
779
+ "content": "\\[\nD e c _ {\\text {h e a d} _ {i} ^ {b}} = \\sum_ {j = 1} ^ {l + m} \\alpha_ {i j} \\left(\\mathbf {W} _ {\\mathbf {V}} ^ {\\mathbf {D e c}} \\mathbf {e n c} _ {\\mathbf {j}} ^ {\\mathbf {b}}\\right) \\tag {6}\n\\]"
780
+ },
781
+ {
782
+ "type": "equation",
783
+ "bbox": [
784
+ 0.571,
785
+ 0.458,
786
+ 0.837,
787
+ 0.489
788
+ ],
789
+ "angle": 0,
790
+ "content": "\\[\nD e c _ {e _ {i j}} = \\frac {\\left(\\mathbf {W} _ {\\mathbf {Q}} ^ {\\mathbf {D e c}} \\mathbf {m s g} _ {\\mathbf {j}} ^ {\\mathbf {b}}\\right) ^ {T} \\left(\\mathbf {W} _ {\\mathbf {K}} ^ {\\mathbf {D e c}} \\mathbf {e n c} _ {\\mathbf {i}} ^ {\\mathbf {b}}\\right)}{\\sqrt {d _ {k}}}\n\\]"
791
+ },
792
+ {
793
+ "type": "text",
794
+ "bbox": [
795
+ 0.508,
796
+ 0.499,
797
+ 0.884,
798
+ 0.572
799
+ ],
800
+ "angle": 0,
801
+ "content": "where \\(\\alpha_{ij} = \\frac{\\exp\\text{Dec}_{ij}}{\\sum_{k=1}^{n}\\exp\\text{Dec}_{ik}}\\), \\(\\mathbf{W}_{\\mathbf{Q}}^{\\mathbf{Dec}}\\), \\(\\mathbf{W}_{\\mathbf{K}}^{\\mathbf{Dec}}\\) and \\(\\mathbf{W}_{\\mathbf{V}}^{\\mathbf{Dec}}\\) are trainable projection matrices for queries, keys and values of the decoder layer. t is the length of preceding commit message."
802
+ },
803
+ {
804
+ "type": "text",
805
+ "bbox": [
806
+ 0.508,
807
+ 0.573,
808
+ 0.884,
809
+ 0.685
810
+ ],
811
+ "angle": 0,
812
+ "content": "Next, we use Equation 2 to obtain the hidden states of each decoder layer. In the last decoder layers, we employ a MLP and softmax operator to obtain the generation probability of each commit message token on the vocabulary. Then we use the cross-entropy as the loss function and apply AdamW for optimization."
813
+ },
814
+ {
815
+ "type": "title",
816
+ "bbox": [
817
+ 0.509,
818
+ 0.697,
819
+ 0.719,
820
+ 0.714
821
+ ],
822
+ "angle": 0,
823
+ "content": "4 Experimental Setup"
824
+ },
825
+ {
826
+ "type": "title",
827
+ "bbox": [
828
+ 0.509,
829
+ 0.722,
830
+ 0.617,
831
+ 0.736
832
+ ],
833
+ "angle": 0,
834
+ "content": "4.1 Dataset"
835
+ },
836
+ {
837
+ "type": "text",
838
+ "bbox": [
839
+ 0.507,
840
+ 0.743,
841
+ 0.885,
842
+ 0.92
843
+ ],
844
+ "angle": 0,
845
+ "content": "In our experiment, we use a large-scale dataset MCMD (Tao et al., 2021) with five programming languages (PLs): Java, C#, C++, Python and JavaScript. For each PL, MCMD collects commits from the top-100 starred repositories on GitHub and then filters the redundant messages (such as rollback commits) and noisy messages defined in Liu et al. (2018). Finally, to balance the size of data, they randomly sample and retain 450,000 commits for each PL. Each commit contains the code diff, the commit message, the name of the repository,"
846
+ }
847
+ ],
848
+ [
849
+ {
850
+ "type": "text",
851
+ "bbox": [
852
+ 0.113,
853
+ 0.085,
854
+ 0.49,
855
+ 0.15
856
+ ],
857
+ "angle": 0,
858
+ "content": "and the timestamp of commit, etc. To reduce the noise data in the dataset, we further filter out commits that contain multiple files or files that cannot be parsed (such as .jar, .ddl, .mp3, and .apk)."
859
+ },
860
+ {
861
+ "type": "title",
862
+ "bbox": [
863
+ 0.114,
864
+ 0.16,
865
+ 0.321,
866
+ 0.175
867
+ ],
868
+ "angle": 0,
869
+ "content": "4.2 Data pre-processing"
870
+ },
871
+ {
872
+ "type": "text",
873
+ "bbox": [
874
+ 0.113,
875
+ 0.18,
876
+ 0.49,
877
+ 0.486
878
+ ],
879
+ "angle": 0,
880
+ "content": "The code diff in MCMD are based on line-level code change. To obtain more fine-grained code change, following previous study (Panthaplackel et al., 2020), we use a sequence of span of token-level change actions to represent the code diff. Each action is structured as <action> span of tokens <action end>. There are four <action> types, namely, <keep>, <insert>, <delete>, and <replace>. <keep> means that the span of tokens are unchanged. <insert> means that adding span of tokens. <delete> means that deleting span of tokens. <replace> means that the span of tokens in the old version that will be replaced with different span of tokens in the new version. Thus, we extend <replace> to <replace old> and <replace new> to indicate the span of old and new tokens, respectively. We use difflib<sup>1</sup> to extract the sequence of code change actions."
881
+ },
882
+ {
883
+ "type": "title",
884
+ "bbox": [
885
+ 0.114,
886
+ 0.497,
887
+ 0.303,
888
+ 0.512
889
+ ],
890
+ "angle": 0,
891
+ "content": "4.3 Hyperparameters"
892
+ },
893
+ {
894
+ "type": "text",
895
+ "bbox": [
896
+ 0.113,
897
+ 0.517,
898
+ 0.49,
899
+ 0.822
900
+ ],
901
+ "angle": 0,
902
+ "content": "We follow (Tao et al., 2021) to set the maximum lengths of code diff and commit message to 200 and 50, respectively. We use the weight of the encoder of CodeT5-base (Wang et al., 2021b) to initialize the code diff encoders and use the decoder of CodeT5-base to initialize the decoder in Figure 1. The original vocabulary sizes of CodeT5 is 32,100. We add nine special tokens (<keep>, <keep_end>, <insert>, <insert_end>, <delete>, <delete_end>, <replace_old>, <replace_new>, and <replace_end>) and the vocabulary sizes of code and queries become 32109. For the optimizer, we use AdamW with the learning rate 2e-5. The batch size is 32. The max epoch is 20. In addition, we run the experiments 3 times with random seeds 0,1,2 and display the mean value in the paper. The experiments are conducted on a server with 4 GPUs of NVIDIA Tesla V100 and it takes about 1.2 hours each epoch."
903
+ },
904
+ {
905
+ "type": "title",
906
+ "bbox": [
907
+ 0.114,
908
+ 0.833,
909
+ 0.312,
910
+ 0.848
911
+ ],
912
+ "angle": 0,
913
+ "content": "4.4 Evaluation metrics"
914
+ },
915
+ {
916
+ "type": "text",
917
+ "bbox": [
918
+ 0.113,
919
+ 0.854,
920
+ 0.49,
921
+ 0.886
922
+ ],
923
+ "angle": 0,
924
+ "content": "We evaluate the quality of the generated messages using four metrics: BLEU (Papineni et al.,"
925
+ },
926
+ {
927
+ "type": "text",
928
+ "bbox": [
929
+ 0.508,
930
+ 0.085,
931
+ 0.885,
932
+ 0.246
933
+ ],
934
+ "angle": 0,
935
+ "content": "2002), Meteor (Banerjee and Lavie, 2005), Rouge-L (Lin, 2004), and Cider (Vedantam et al., 2015). These metrics are prevalent metrics in machine translation, text summarization, and image captioning. There are many variants of BLEU being used to measure the generated message, We choose B-Norm (the BLEU result in this paper is B-Norm), which correlates with human perception the most (Tao et al., 2021). The detailed metrics calculation can be found in Appendix."
936
+ },
937
+ {
938
+ "type": "title",
939
+ "bbox": [
940
+ 0.509,
941
+ 0.258,
942
+ 0.63,
943
+ 0.272
944
+ ],
945
+ "angle": 0,
946
+ "content": "4.5 Baselines"
947
+ },
948
+ {
949
+ "type": "text",
950
+ "bbox": [
951
+ 0.508,
952
+ 0.279,
953
+ 0.885,
954
+ 0.812
955
+ ],
956
+ "angle": 0,
957
+ "content": "We compare RACE with four end-to-end neural-based models, two IR-based methods, two hybrid approaches which combine IR-based techniques and end-to-end neural-based methods, and three pre-trained-based models. Four end-to-end neural-based models include CommitGen (Jiang et al., 2017), CoDiSum (Xu et al., 2019), NMTGen (Loyola et al., 2017), PtrGNCMsg (Liu et al., 2019) and ATOM (Liu et al., 2020). They all train models from scratch. Two IR-based methods are NNGen (Liu et al., 2018) and Lucene (Apache, 2011), they retrieve the similar code diff based on different similarity measurements and reuse the commit message of the similar code diff as the final result. CoRec and ATOM are all hybrid models which combine the neural-based models and IR-based techniques. Three pre-trained models are CommitBERT, CodeT5-small, and CodeT5-base. They are pre-trained on the large parallel code and natural language corpus and fine-tuned on the commit message generation dataset. All baselines except Lucene, CodeT5-small and CodeT5-base are introduced in Section 2. Lucene is a traditional IR baseline, which uses TF-IDF to represent a code diff as a vector and searches the similar code diff based on the cosine similarity between two vectors. CodeT5-small and CodeT5-base are source code pre-trained models and have achieved promising results in many code-related tasks (Wang et al., 2021b). We fine-tune them on MCMD as strong baselines. In addition, we only evaluate ATOM on Java dataset as the current implementation of ATOM only supports Java."
958
+ },
959
+ {
960
+ "type": "title",
961
+ "bbox": [
962
+ 0.509,
963
+ 0.823,
964
+ 0.733,
965
+ 0.84
966
+ ],
967
+ "angle": 0,
968
+ "content": "5 Experimental Results"
969
+ },
970
+ {
971
+ "type": "title",
972
+ "bbox": [
973
+ 0.509,
974
+ 0.849,
975
+ 0.88,
976
+ 0.882
977
+ ],
978
+ "angle": 0,
979
+ "content": "5.1 How does RACE perform compared with baseline approaches?"
980
+ },
981
+ {
982
+ "type": "text",
983
+ "bbox": [
984
+ 0.508,
985
+ 0.888,
986
+ 0.885,
987
+ 0.92
988
+ ],
989
+ "angle": 0,
990
+ "content": "To evaluate the effectiveness of RACE, we conduct the experiment by comparing it with the 11"
991
+ },
992
+ {
993
+ "type": "page_footnote",
994
+ "bbox": [
995
+ 0.114,
996
+ 0.892,
997
+ 0.463,
998
+ 0.918
999
+ ],
1000
+ "angle": 0,
1001
+ "content": "1https://docs.python.org/3/library/difflib. html"
1002
+ }
1003
+ ],
1004
+ [
1005
+ {
1006
+ "type": "table",
1007
+ "bbox": [
1008
+ 0.115,
1009
+ 0.082,
1010
+ 0.913,
1011
+ 0.293
1012
+ ],
1013
+ "angle": 0,
1014
+ "content": "<table><tr><td rowspan=\"2\" colspan=\"2\">Model</td><td colspan=\"4\">Java</td><td colspan=\"4\">C#</td><td colspan=\"4\">C++</td><td colspan=\"4\">Python</td><td colspan=\"4\">JavaScript</td></tr><tr><td>BLEU</td><td>Met.</td><td>Rou.</td><td>Cid.</td><td>BLEU</td><td>Met.</td><td>Rou.</td><td>Cid.</td><td>BLEU</td><td>Met.</td><td>Rou.</td><td>Cid.</td><td>BLEU</td><td>Met.</td><td>Rou.</td><td>Cid.</td><td>BLEU</td><td>Met.</td><td>Rou.</td><td>Cid.</td></tr><tr><td rowspan=\"2\">IR-based</td><td>NNGen</td><td>19.41</td><td>12.40</td><td>25.15</td><td>1.23</td><td>22.15</td><td>14.77</td><td>26.46</td><td>1.55</td><td>13.61</td><td>9.39</td><td>18.21</td><td>0.73</td><td>16.06</td><td>10.91</td><td>21.69</td><td>0.92</td><td>18.65</td><td>12.50</td><td>24.45</td><td>1.21</td></tr><tr><td>Lucene</td><td>15.61</td><td>10.56</td><td>19.43</td><td>0.94</td><td>20.68</td><td>13.34</td><td>23.02</td><td>1.36</td><td>13.43</td><td>8.81</td><td>16.78</td><td>0.67</td><td>15.16</td><td>9.63</td><td>18.85</td><td>0.85</td><td>17.66</td><td>11.25</td><td>21.75</td><td>1.02</td></tr><tr><td rowspan=\"4\">End-to-end</td><td>CommitGen</td><td>14.07</td><td>7.52</td><td>18.78</td><td>0.66</td><td>13.38</td><td>8.31</td><td>17.44</td><td>0.63</td><td>11.52</td><td>6.98</td><td>16.75</td><td>0.45</td><td>11.02</td><td>6.43</td><td>16.64</td><td>0.42</td><td>18.67</td><td>11.88</td><td>24.10</td><td>1.08</td></tr><tr><td>CoDiSum</td><td>13.97</td><td>6.02</td><td>16.12</td><td>0.39</td><td>12.71</td><td>5.56</td><td>14.40</td><td>0.36</td><td>12.44</td><td>6.00</td><td>14.39</td><td>0.42</td><td>14.61</td><td>8.59</td><td>17.02</td><td>0.42</td><td>11.22</td><td>5.32</td><td>13.26</td><td>0.28</td></tr><tr><td>NMTGen</td><td>15.52</td><td>8.91</td><td>21.13</td><td>0.86</td><td>12.71</td><td>8.11</td><td>17.16</td><td>0.62</td><td>11.57</td><td>7.06</td><td>17.46</td><td>0.51</td><td>11.41</td><td>7.18</td><td>18.43</td><td>0.48</td><td>18.22</td><td>12.07</td><td>24.43</td><td>1.12</td></tr><tr><td>PtrGNCMsg</td><td>17.71</td><td>11.33</td><td>24.32</td><td>0.99</td><td>15.98</td><td>10.18</td><td>21.16</td><td>0.83</td><td>14.06</td><td>9.63</td><td>20.17</td><td>0.63</td><td>15.89</td><td>11.36</td><td>23.49</td><td>0.76</td><td>20.78</td><td>14.52</td><td>27.87</td><td>1.29</td></tr><tr><td rowspan=\"2\">Hybrid</td><td>ATOM</td><td>16.42</td><td>11.66</td><td>22.67</td><td>0.91</td><td>/</td><td>/</td><td>/</td><td>/</td><td>/</td><td>/</td><td>/</td><td>/</td><td>/</td><td>/</td><td>/</td><td>/</td><td>/</td><td>/</td><td>/</td><td>/</td></tr><tr><td>CoRec</td><td>18.51</td><td>11.26</td><td>24.78</td><td>1.13</td><td>18.41</td><td>11.70</td><td>23.73</td><td>1.12</td><td>14.02</td><td>8.63</td><td>20.10</td><td>0.72</td><td>15.09</td><td>9.60</td><td>22.35</td><td>0.80</td><td>21.30</td><td>13.84</td><td>27.53</td><td>1.40</td></tr><tr><td 
rowspan=\"3\">Pre-trained</td><td>CommitBERT</td><td>22.32</td><td>12.63</td><td>28.03</td><td>1.42</td><td>20.67</td><td>12.31</td><td>25.76</td><td>1.25</td><td>16.16</td><td>10.05</td><td>19.90</td><td>0.94</td><td>17.29</td><td>11.31</td><td>22.36</td><td>1.01</td><td>23.40</td><td>15.64</td><td>30.51</td><td>1.54</td></tr><tr><td>CodeT5-small</td><td>22.28</td><td>14.16</td><td>29.71</td><td>1.37</td><td>18.92</td><td>11.71</td><td>24.95</td><td>1.05</td><td>16.08</td><td>11.19</td><td>21.60</td><td>0.79</td><td>17.49</td><td>12.46</td><td>24.65</td><td>0.90</td><td>21.97</td><td>14.48</td><td>28.65</td><td>1.42</td></tr><tr><td>CodeT5-base</td><td>22.76</td><td>14.57</td><td>30.23</td><td>1.43</td><td>22.21</td><td>14.51</td><td>29.08</td><td>1.33</td><td>16.73</td><td>11.69</td><td>22.86</td><td>0.85</td><td>17.99</td><td>12.74</td><td>25.27</td><td>0.96</td><td>22.87</td><td>15.12</td><td>29.81</td><td>1.50</td></tr><tr><td rowspan=\"2\">Ours</td><td rowspan=\"2\">RACE</td><td>25.66</td><td>15.46</td><td>32.02</td><td>1.76</td><td>26.33</td><td>16.37</td><td>31.31</td><td>1.84</td><td>19.13</td><td>12.55</td><td>24.52</td><td>1.14</td><td>21.79</td><td>14.68</td><td>28.35</td><td>1.40</td><td>25.55</td><td>16.31</td><td>31.79</td><td>1.84</td></tr><tr><td>↑13%</td><td>↑6%</td><td>↑6%</td><td>↑23%</td><td>↑19%</td><td>↑13%</td><td>↑8%</td><td>↑38%</td><td>↑14%</td><td>↑7%</td><td>↑7%</td><td>↑34%</td><td>↑21%</td><td>↑15%</td><td>↑12%</td><td>↑46%</td><td>↑12%</td><td>↑8%</td><td>↑7%</td><td>↑23%</td></tr><tr><td>Ablation</td><td>RACE -Guider</td><td>23.37</td><td>13.98</td><td>30.01</td><td>1.53</td><td>21.33</td><td>13.56</td><td>27.33</td><td>1.31</td><td>17.43</td><td>12.10</td><td>22.03</td><td>0.95</td><td>19.44</td><td>13.89</td><td>26.4</td><td>1.01</td><td>23.39</td><td>15.64</td><td>30.51</td><td>1.54</td></tr></table>"
1015
+ },
1016
+ {
1017
+ "type": "table_caption",
1018
+ "bbox": [
1019
+ 0.113,
1020
+ 0.302,
1021
+ 0.884,
1022
+ 0.345
1023
+ ],
1024
+ "angle": 0,
1025
+ "content": "Table 2: Comparison of RACE with baselines under four metrics on five programming languages. Met., Rou., and Cide. are short for Meteor, Rouge-L, and Cider, respectively. All results are statistically significant (with \\( p < 0.01 \\))."
1026
+ },
1027
+ {
1028
+ "type": "text",
1029
+ "bbox": [
1030
+ 0.113,
1031
+ 0.37,
1032
+ 0.489,
1033
+ 0.449
1034
+ ],
1035
+ "angle": 0,
1036
+ "content": "basielines including two IR-based approaches, four end-to-end neural-based approaches, two hybrid approaches, and three pre-train-based approaches in terms of four evaluation metrics. The experimental results are shown in Table 2."
1037
+ },
1038
+ {
1039
+ "type": "text",
1040
+ "bbox": [
1041
+ 0.117,
1042
+ 0.452,
1043
+ 0.489,
1044
+ 0.852
1045
+ ],
1046
+ "angle": 0,
1047
+ "content": "We can see that IR-based models NNGen and Lucene generally outperform end-to-end neural models on average in terms of four metrics. It indicates that retrieved similar results can provide important information for commit message generation. CoRec, which combines the IR-based method and neural method, performs better than NNGen on \\(\\mathrm{C + + }\\) and JavaScript dataset but lower than NNGen on Java, C# and Python. This is because CoRec only leverages the information similar code diff at the inference stage. ATOM, which priorities the generated result of the neural-based model and retrieved result of the IR-based method, also outperforms the IR-based approach Lucene and three neural-based models CommitGen, CoDiSum, and NMTGen. Three pre-trained-based approaches outperform other baselines in terms of four metrics on average. CodeT5-base performs best among them on average. Our approach performs the best among all approaches on 5 programming languages in terms of four metrics. This is because RACE treats the retrieved similar commit message as an exemplar and leverages it to guide the neural network model to generate an accurate commit message."
1048
+ },
1049
+ {
1050
+ "type": "text",
1051
+ "bbox": [
1052
+ 0.113,
1053
+ 0.855,
1054
+ 0.49,
1055
+ 0.921
1056
+ ],
1057
+ "angle": 0,
1058
+ "content": "We also give an example of commit messages generated by our approach and the baselines in Figure 2. IR-based methods NNGen and Lucene can retrieve semantically similar but not completely"
1059
+ },
1060
+ {
1061
+ "type": "text",
1062
+ "bbox": [
1063
+ 0.508,
1064
+ 0.37,
1065
+ 0.886,
1066
+ 0.563
1067
+ ],
1068
+ "angle": 0,
1069
+ "content": "correct commit message. Specifically, retrieved commit messages contain not only the important semantic (\"Filter out unavailable databases\") of the current code diff but also the extra information (\"Revert\"). Neural network models generally capture the action of \"add\" but fail to further understand the intend of the code diff. The hybrid model CoRec cannot generate the correct commit message either. Our model treats the retrieved result (Revert \"Filter out unavailable databases\") as an exemplar, and guides the neural network model to generate the correct commit message."
1070
+ },
1071
+ {
1072
+ "type": "title",
1073
+ "bbox": [
1074
+ 0.509,
1075
+ 0.575,
1076
+ 0.848,
1077
+ 0.607
1078
+ ],
1079
+ "angle": 0,
1080
+ "content": "5.2 What is the effectiveness of exemplar guider?"
1081
+ },
1082
+ {
1083
+ "type": "text",
1084
+ "bbox": [
1085
+ 0.507,
1086
+ 0.612,
1087
+ 0.886,
1088
+ 0.79
1089
+ ],
1090
+ "angle": 0,
1091
+ "content": "We conduct the ablation study to verify the effectiveness of exemplar guider module. Specifically, as shown at the bottom of Figure 1, we directly concatenated the representations of retrieved results and fed them to the decoder to generate commit messages without using the exemplar guider. As shown at the bottom of the Table 2, we can see that the performance of the ablated model (RACE-Guide) degrades in all programming languages in terms of four metrics. It demonstrates the effectiveness of our exemplar guider."
1092
+ },
1093
+ {
1094
+ "type": "title",
1095
+ "bbox": [
1096
+ 0.509,
1097
+ 0.802,
1098
+ 0.825,
1099
+ 0.833
1100
+ ],
1101
+ "angle": 0,
1102
+ "content": "5.3 What is the performance when we retrieve \\( k \\) relevant commits?"
1103
+ },
1104
+ {
1105
+ "type": "text",
1106
+ "bbox": [
1107
+ 0.508,
1108
+ 0.84,
1109
+ 0.885,
1110
+ 0.92
1111
+ ],
1112
+ "angle": 0,
1113
+ "content": "We also conduct experiments to recall \\( k \\) (\\( k = 1, 3, 5, 7, 9 \\)) most relevant commits to augment the generation model. Specifically, as shown in Figure 1 the relevance of the code diff is measured by the cosine similarity their semantic vectors obtained by"
1114
+ }
1115
+ ],
1116
+ [
1117
+ {
1118
+ "type": "image",
1119
+ "bbox": [
1120
+ 0.129,
1121
+ 0.086,
1122
+ 0.451,
1123
+ 0.212
1124
+ ],
1125
+ "angle": 0,
1126
+ "content": null
1127
+ },
1128
+ {
1129
+ "type": "image_caption",
1130
+ "bbox": [
1131
+ 0.133,
1132
+ 0.215,
1133
+ 0.404,
1134
+ 0.226
1135
+ ],
1136
+ "angle": 0,
1137
+ "content": "Reference Filter out unavailable databases"
1138
+ },
1139
+ {
1140
+ "type": "table",
1141
+ "bbox": [
1142
+ 0.128,
1143
+ 0.23,
1144
+ 0.493,
1145
+ 0.377
1146
+ ],
1147
+ "angle": 0,
1148
+ "content": "<table><tr><td colspan=\"2\">Baselines</td></tr><tr><td>NNGen</td><td>Revert “ Filter out unavailable databases”</td></tr><tr><td>Lucene</td><td>Revert “ filter out unavailable databases ”</td></tr><tr><td>CommitGen</td><td>Merge pull request from mistecrunch / UNK</td></tr><tr><td>NMTGen</td><td>Add &lt;unk&gt; to &lt;unk&gt;</td></tr><tr><td>PtrGNCMsg</td><td>Add support for dashboards in database</td></tr><tr><td>CoRec</td><td>Remove &lt;unk&gt;</td></tr><tr><td>CommitBERT</td><td>Add DatabaseFilter ( )</td></tr><tr><td>CodeT5-small</td><td>[database] Add databasefilter to filter all users</td></tr><tr><td>CodeT5-base</td><td>[hotfix] Adding databasefilter to core.py</td></tr><tr><td>RACE</td><td>Stage I : Revert “ Filter out unavailable databases ”Stage II : Filter out unavailable databases</td></tr></table>"
1149
+ },
1150
+ {
1151
+ "type": "image_caption",
1152
+ "bbox": [
1153
+ 0.113,
1154
+ 0.388,
1155
+ 0.49,
1156
+ 0.459
1157
+ ],
1158
+ "angle": 0,
1159
+ "content": "Figure 2: An example of generated commit messages. Reference is the developer-written commit message. The results of our approach in stage I and II are returned by the retrieved module and generation module, respectively."
1160
+ },
1161
+ {
1162
+ "type": "text",
1163
+ "bbox": [
1164
+ 0.113,
1165
+ 0.486,
1166
+ 0.49,
1167
+ 0.695
1168
+ ],
1169
+ "angle": 0,
1170
+ "content": "Equation 3. Then retrieved \\( k \\) relevant commits are encoded and fed to the exemplar guider to obtain semantic similarities by Equation 4, respectively. Finally, we weight representations of code diff and similar commit messages according to the semantic similarities and feed them to the decoder to generate commit messages step by step. The experimental results are shown in Figure 3. We can see that the performance is generally stable on different \\( k \\). In our future work, we will continue to study alternatives on leveraging the information of the retrieved results, e.g., how many commits to retrieve and how to model the corresponding information."
1171
+ },
1172
+ {
1173
+ "type": "title",
1174
+ "bbox": [
1175
+ 0.114,
1176
+ 0.706,
1177
+ 0.415,
1178
+ 0.738
1179
+ ],
1180
+ "angle": 0,
1181
+ "content": "5.4 Can our framework boost the performance of existing models?"
1182
+ },
1183
+ {
1184
+ "type": "text",
1185
+ "bbox": [
1186
+ 0.113,
1187
+ 0.743,
1188
+ 0.49,
1189
+ 0.919
1190
+ ],
1191
+ "angle": 0,
1192
+ "content": "We further study whether our framework can enhance the performance of the existing Seq2Seq neural network model in commit message generation. Therefore, we adapt our framework to four Seq2Seq-based models, namely NMTGen (M1), CommitBERT (M2), CodeT5-small (M3) and CodeT5-base (M4). Specifically, we use the encoder of these models as our code diff encoder and obtain the high-dimensional semantic vectors in the retrieval module (Figure 1). In the generation module, we use the encoder of their models"
1193
+ },
1194
+ {
1195
+ "type": "image",
1196
+ "bbox": [
1197
+ 0.516,
1198
+ 0.086,
1199
+ 0.885,
1200
+ 0.224
1201
+ ],
1202
+ "angle": 0,
1203
+ "content": null
1204
+ },
1205
+ {
1206
+ "type": "image_caption",
1207
+ "bbox": [
1208
+ 0.509,
1209
+ 0.238,
1210
+ 0.882,
1211
+ 0.266
1212
+ ],
1213
+ "angle": 0,
1214
+ "content": "Figure 3: Performance of models augmented with \\( k \\) retrieved relevant commits."
1215
+ },
1216
+ {
1217
+ "type": "image",
1218
+ "bbox": [
1219
+ 0.515,
1220
+ 0.285,
1221
+ 0.88,
1222
+ 0.457
1223
+ ],
1224
+ "angle": 0,
1225
+ "content": null
1226
+ },
1227
+ {
1228
+ "type": "image_caption",
1229
+ "bbox": [
1230
+ 0.508,
1231
+ 0.467,
1232
+ 0.884,
1233
+ 0.525
1234
+ ],
1235
+ "angle": 0,
1236
+ "content": "Figure 4: Performance gains on four models. The original performance of the models are in yellow and gains from our framework are in green. The percentage value in each bar is the rate of improvement."
1237
+ },
1238
+ {
1239
+ "type": "text",
1240
+ "bbox": [
1241
+ 0.508,
1242
+ 0.553,
1243
+ 0.883,
1244
+ 0.602
1245
+ ],
1246
+ "angle": 0,
1247
+ "content": "to encode input code diffs, similar code diffs, and similar commit messages. We also use the decoder of their models to generate commit messages."
1248
+ },
1249
+ {
1250
+ "type": "text",
1251
+ "bbox": [
1252
+ 0.508,
1253
+ 0.604,
1254
+ 0.885,
1255
+ 0.877
1256
+ ],
1257
+ "angle": 0,
1258
+ "content": "The experimental results are shown in Figure 4, we present the performance of four original models (yellow) and gains (green) from our framework on five programming languages in terms of \\(\\mathrm{BLEU}^2\\) score. Overall, we can see that our framework can improve the performance of all four neural models in all programming languages. Our framework can improve the performance of the original model from \\(7\\%\\) to \\(73\\%\\). Especially, after applying our framework, the performance of NMTGen has more than \\(20\\%\\) improvement on all programming languages. In addition, Our framework can boost the performance of NMTGen on BLUE, Meteor, Rouge-L, and Cider by \\(43\\%\\), \\(49\\%\\), \\(33\\%\\), and \\(61\\%\\) on average, boost CommitBERT by \\(11\\%\\), \\(9\\%\\), \\(11\\%\\), and \\(12\\%\\), boost CodeT5-small by \\(15\\%\\), \\(14\\%\\), \\(11\\%\\), and \\(26\\%\\), and boost CodeT5-base by \\(16\\%\\), \\(10\\%\\),"
1259
+ },
1260
+ {
1261
+ "type": "page_footnote",
1262
+ "bbox": [
1263
+ 0.509,
1264
+ 0.893,
1265
+ 0.883,
1266
+ 0.919
1267
+ ],
1268
+ "angle": 0,
1269
+ "content": "2We show results of other three metrics in Appendix due to space limitation. Our conclusions also hold."
1270
+ }
1271
+ ],
1272
+ [
1273
+ {
1274
+ "type": "table",
1275
+ "bbox": [
1276
+ 0.115,
1277
+ 0.082,
1278
+ 0.491,
1279
+ 0.178
1280
+ ],
1281
+ "angle": 0,
1282
+ "content": "<table><tr><td>Model</td><td>Informativeness</td><td>Conciseness</td><td>Expressiveness</td></tr><tr><td>CommitBERT</td><td>1.22 (±1.02)</td><td>2.03 (±1.04)</td><td>2.46 (±0.99)</td></tr><tr><td>NNGen</td><td>1.03 (±1.00)</td><td>1.74 (±1.01)</td><td>2.36 (±0.95)</td></tr><tr><td>NMTGen</td><td>0.74 (±0.92)</td><td>1.56 (±0.93)</td><td>2.11 (±0.94)</td></tr><tr><td>CoRec</td><td>1.05 (±1.09)</td><td>1.80 (±1.05)</td><td>2.43 (±0.88)</td></tr><tr><td>RACE</td><td>2.49 (±1.10)</td><td>3.08 (±0.96)</td><td>2.85 (±0.84)</td></tr></table>"
1283
+ },
1284
+ {
1285
+ "type": "table_caption",
1286
+ "bbox": [
1287
+ 0.114,
1288
+ 0.187,
1289
+ 0.49,
1290
+ 0.217
1291
+ ],
1292
+ "angle": 0,
1293
+ "content": "Table 3: Results of human evaluation (standard deviation in parentheses)."
1294
+ },
1295
+ {
1296
+ "type": "text",
1297
+ "bbox": [
1298
+ 0.114,
1299
+ 0.242,
1300
+ 0.235,
1301
+ 0.259
1302
+ ],
1303
+ "angle": 0,
1304
+ "content": "\\(8\\%\\), and \\(32\\%\\)"
1305
+ },
1306
+ {
1307
+ "type": "title",
1308
+ "bbox": [
1309
+ 0.114,
1310
+ 0.274,
1311
+ 0.307,
1312
+ 0.289
1313
+ ],
1314
+ "angle": 0,
1315
+ "content": "5.5 Human evaluation"
1316
+ },
1317
+ {
1318
+ "type": "text",
1319
+ "bbox": [
1320
+ 0.113,
1321
+ 0.296,
1322
+ 0.49,
1323
+ 0.681
1324
+ ],
1325
+ "angle": 0,
1326
+ "content": "We also conduct a human evaluation by following the previous works (Moreno et al., 2013; Panichella et al., 2016; Shi et al., 2021b) to evaluate the semantic similarity of the commit message generated by RACE and four baselines NNGen, NMTGen, CommitBERT, and CoRec. The four baselines are IR-based, end-to-end neural network-based, hybrid, and pre-trained-based approaches, respectively. We randomly choose 50 code diff from the testing sets and their commit message generated by four approaches. Finally, we sample \\(250 < \\text{code diff}\\), commit message> pairs to score. Specifically, we invite 4 volunteers with excellent English ability and more than three years of software development experience. Each volunteer is asked to assign scores from 0 to 4 (the higher the better) to the generated commit message from the three aspects: Informativeness (the amount of important information about the code diff reflected in the commit message), Conciseness (the extend of extraneous information included in the commit message), and Expressiveness (grammaticality and fluency). Each pair is evaluated by four volunteers, and the final score is the average of them."
1327
+ },
1328
+ {
1329
+ "type": "text",
1330
+ "bbox": [
1331
+ 0.113,
1332
+ 0.684,
1333
+ 0.49,
1334
+ 0.892
1335
+ ],
1336
+ "angle": 0,
1337
+ "content": "To verify the agreement among the volunteers, we calculate the Krippendorff's alpha (Hayes and Krippendorff, 2007) and Kendall rank correlation coefficient (Kendall's Tau) values (Kendall, 1945). The value of Krippendorff's alpha is 0.90 and the values of pairwise Kendall's Tau range from 0.73 to 0.95, which indicates that there is a high degree of agreement between the 4 volunteers and that scores are reliable. Table 3 shows the result of human evaluation. RACE is better than other approaches in Informative, Conciseness, and Expressiveness, which means that our approach tends to generate concise and readable commit messages with more"
1338
+ },
1339
+ {
1340
+ "type": "text",
1341
+ "bbox": [
1342
+ 0.508,
1343
+ 0.085,
1344
+ 0.885,
1345
+ 0.198
1346
+ ],
1347
+ "angle": 0,
1348
+ "content": "comprehensive semantics. In addition, we confirm the superiority of our approach using Wilcoxon signed-rank tests (Wilcoxon et al., 1970) for the human evaluation. Results show that the improvement of RACE over other approaches is statistically significant with all p-values smaller than 0.05 at \\(95\\%\\) confidence level."
1349
+ },
1350
+ {
1351
+ "type": "title",
1352
+ "bbox": [
1353
+ 0.509,
1354
+ 0.211,
1355
+ 0.642,
1356
+ 0.227
1357
+ ],
1358
+ "angle": 0,
1359
+ "content": "6 Conclusion"
1360
+ },
1361
+ {
1362
+ "type": "text",
1363
+ "bbox": [
1364
+ 0.508,
1365
+ 0.239,
1366
+ 0.885,
1367
+ 0.431
1368
+ ],
1369
+ "angle": 0,
1370
+ "content": "This paper proposes a new retrieval-augmented neural commit message generation method, which treats the retrieved similar commit message as an exemplar and uses it to guide the neural network model to generate an accurate and readable commit message. Extensive experimental results demonstrate that our approach outperforms recent baselines and our framework can significantly boost the performance of four neural network models. Our data, source code and Appendix are available at https://github.com/DeepSoftwareAnalytics/RACE."
1371
+ },
1372
+ {
1373
+ "type": "title",
1374
+ "bbox": [
1375
+ 0.51,
1376
+ 0.446,
1377
+ 0.616,
1378
+ 0.461
1379
+ ],
1380
+ "angle": 0,
1381
+ "content": "Limitations"
1382
+ },
1383
+ {
1384
+ "type": "text",
1385
+ "bbox": [
1386
+ 0.508,
1387
+ 0.473,
1388
+ 0.882,
1389
+ 0.489
1390
+ ],
1391
+ "angle": 0,
1392
+ "content": "We have identified the following main limitations:"
1393
+ },
1394
+ {
1395
+ "type": "text",
1396
+ "bbox": [
1397
+ 0.508,
1398
+ 0.49,
1399
+ 0.884,
1400
+ 0.634
1401
+ ],
1402
+ "angle": 0,
1403
+ "content": "Programming Languages. We only conduct experiments on five programming languages. Although in principle, our framework is not specifically designed for certain languages, models perform differently in different programming languages. Therefore, more experiments are needed to confirm the generality of our framework. In the future, we will extend our study to other programming languages."
1404
+ },
1405
+ {
1406
+ "type": "text",
1407
+ "bbox": [
1408
+ 0.508,
1409
+ 0.635,
1410
+ 0.882,
1411
+ 0.716
1412
+ ],
1413
+ "angle": 0,
1414
+ "content": "Code base. Compared with purely neural network-based models, our method needs a code base to retrieve the most similar example from that. This limitation is inherited from IR-based techniques."
1415
+ },
1416
+ {
1417
+ "type": "text",
1418
+ "bbox": [
1419
+ 0.508,
1420
+ 0.716,
1421
+ 0.882,
1422
+ 0.796
1423
+ ],
1424
+ "angle": 0,
1425
+ "content": "Training Time. In addition to modeling the information of input code diffs, our model needs to retrieve similar diffs and encode them. Thus, our model takes a long time to train (about 35 hours to train the model)."
1426
+ },
1427
+ {
1428
+ "type": "text",
1429
+ "bbox": [
1430
+ 0.508,
1431
+ 0.798,
1432
+ 0.882,
1433
+ 0.893
1434
+ ],
1435
+ "angle": 0,
1436
+ "content": "Long Code Diffs. Longer code diffs may contain more complex semantics or behaviors. Long diffs (over 512 tokens) are truncated in our approach and some information would be lost. In our future work, we will design mechanisms to better handle long diffs."
1437
+ },
1438
+ {
1439
+ "type": "page_footnote",
1440
+ "bbox": [
1441
+ 0.136,
1442
+ 0.904,
1443
+ 0.409,
1444
+ 0.919
1445
+ ],
1446
+ "angle": 0,
1447
+ "content": "3The result can be found in 1-4 of Appendix"
1448
+ },
1449
+ {
1450
+ "type": "page_footnote",
1451
+ "bbox": [
1452
+ 0.531,
1453
+ 0.904,
1454
+ 0.679,
1455
+ 0.919
1456
+ ],
1457
+ "angle": 0,
1458
+ "content": "Available in Appendix"
1459
+ }
1460
+ ],
1461
+ [
1462
+ {
1463
+ "type": "title",
1464
+ "bbox": [
1465
+ 0.115,
1466
+ 0.085,
1467
+ 0.279,
1468
+ 0.101
1469
+ ],
1470
+ "angle": 0,
1471
+ "content": "Acknowledgement"
1472
+ },
1473
+ {
1474
+ "type": "text",
1475
+ "bbox": [
1476
+ 0.113,
1477
+ 0.11,
1478
+ 0.49,
1479
+ 0.238
1480
+ ],
1481
+ "angle": 0,
1482
+ "content": "We thank reviewers for their valuable comments on this work. This research was supported by National Key R&D Program of China (No. 2017YFA0700800). We would like to thank Jiaqi Guo and Wenchao Gu for their valuable suggestions and feedback during the work discussion process. We also thank the participants of our human evaluation for their time."
1483
+ },
1484
+ {
1485
+ "type": "title",
1486
+ "bbox": [
1487
+ 0.115,
1488
+ 0.265,
1489
+ 0.214,
1490
+ 0.28
1491
+ ],
1492
+ "angle": 0,
1493
+ "content": "References"
1494
+ },
1495
+ {
1496
+ "type": "ref_text",
1497
+ "bbox": [
1498
+ 0.116,
1499
+ 0.288,
1500
+ 0.327,
1501
+ 0.302
1502
+ ],
1503
+ "angle": 0,
1504
+ "content": "Apache. 2011. Apache lucene."
1505
+ },
1506
+ {
1507
+ "type": "ref_text",
1508
+ "bbox": [
1509
+ 0.117,
1510
+ 0.312,
1511
+ 0.49,
1512
+ 0.365
1513
+ ],
1514
+ "angle": 0,
1515
+ "content": "Satanjeev Banerjee and Alon Lavie. 2005. METEOR: an automatic metric for MT evaluation with improved correlation with human judgments. In IEEvaluation@ACL."
1516
+ },
1517
+ {
1518
+ "type": "ref_text",
1519
+ "bbox": [
1520
+ 0.117,
1521
+ 0.375,
1522
+ 0.489,
1523
+ 0.441
1524
+ ],
1525
+ "angle": 0,
1526
+ "content": "Mike Barnett, Christian Bird, João Brunet, and Shuvendu K. Lahiri. 2015. Helping developers help themselves: Automatic decomposition of code review changesets. In ICSE (1), pages 134-144. IEEE Computer Society."
1527
+ },
1528
+ {
1529
+ "type": "ref_text",
1530
+ "bbox": [
1531
+ 0.117,
1532
+ 0.451,
1533
+ 0.489,
1534
+ 0.491
1535
+ ],
1536
+ "angle": 0,
1537
+ "content": "Raymond P. L. Buse and Westley Weimer. 2010. Automatically documenting program changes. In ASE, pages 33-42. ACM."
1538
+ },
1539
+ {
1540
+ "type": "ref_text",
1541
+ "bbox": [
1542
+ 0.117,
1543
+ 0.5,
1544
+ 0.49,
1545
+ 0.566
1546
+ ],
1547
+ "angle": 0,
1548
+ "content": "Luis Fernando Cortes-Coy, Mario Linares Vásquez, Jairo Aponte, and Denys Poshyvanyk. 2014. On automatically generating commit messages via summarization of source code changes. In SCAM, pages 275-284. IEEE Computer Society."
1549
+ },
1550
+ {
1551
+ "type": "ref_text",
1552
+ "bbox": [
1553
+ 0.117,
1554
+ 0.576,
1555
+ 0.49,
1556
+ 0.63
1557
+ ],
1558
+ "angle": 0,
1559
+ "content": "Martin Dias, Alberto Bacchelli, Georgios Gousios, Damien Cassou, and Stephane Ducasse. 2015. Untangling fine-grained code changes. In SANER, pages 341-350. IEEE Computer Society."
1560
+ },
1561
+ {
1562
+ "type": "ref_text",
1563
+ "bbox": [
1564
+ 0.117,
1565
+ 0.639,
1566
+ 0.49,
1567
+ 0.692
1568
+ ],
1569
+ "angle": 0,
1570
+ "content": "Jinhao Dong, Yiling Lou, Qihao Zhu, Zeyu Sun, Zhilin Li, Wenjie Zhang, and Dan Hao. 2022. Fira: Fine-grained graph-based code change representation for automated commit message generation."
1571
+ },
1572
+ {
1573
+ "type": "ref_text",
1574
+ "bbox": [
1575
+ 0.117,
1576
+ 0.702,
1577
+ 0.49,
1578
+ 0.792
1579
+ ],
1580
+ "angle": 0,
1581
+ "content": "Lun Du, Xiaozhou Shi, Yanlin Wang, Ensheng Shi, Shi Han, and Dongmei Zhang. 2021. Is a single model enough? mucos: A multi-model ensemble learning approach for semantic code search. In Proceedings of the 30th ACM International Conference on Information & Knowledge Management, pages 2994-2998."
1582
+ },
1583
+ {
1584
+ "type": "ref_text",
1585
+ "bbox": [
1586
+ 0.117,
1587
+ 0.803,
1588
+ 0.49,
1589
+ 0.87
1590
+ ],
1591
+ "angle": 0,
1592
+ "content": "Robert Dyer, Hoan Anh Nguyen, Hridesh Rajan, and Tien N. Nguyen. 2013. Boa: a language and infrastructure for analyzing ultra-large-scale software repositories. In ICSE, pages 422-431. IEEE Computer Society."
1593
+ },
1594
+ {
1595
+ "type": "ref_text",
1596
+ "bbox": [
1597
+ 0.117,
1598
+ 0.879,
1599
+ 0.49,
1600
+ 0.919
1601
+ ],
1602
+ "angle": 0,
1603
+ "content": "Zhangyin Feng, Daya Guo, Duyu Tang, Nan Duan, Xiaocheng Feng, Ming Gong, Linjun Shou, Bing Qin, Ting Liu, Daxin Jiang, and Ming Zhou. 2020."
1604
+ },
1605
+ {
1606
+ "type": "list",
1607
+ "bbox": [
1608
+ 0.116,
1609
+ 0.288,
1610
+ 0.49,
1611
+ 0.919
1612
+ ],
1613
+ "angle": 0,
1614
+ "content": null
1615
+ },
1616
+ {
1617
+ "type": "ref_text",
1618
+ "bbox": [
1619
+ 0.53,
1620
+ 0.086,
1621
+ 0.885,
1622
+ 0.14
1623
+ ],
1624
+ "angle": 0,
1625
+ "content": "Codebert: A pre-trained model for programming and natural languages. In EMNLP (Findings), volume EMNLP 2020 of Findings of ACL, pages 1536-1547. Association for Computational Linguistics."
1626
+ },
1627
+ {
1628
+ "type": "ref_text",
1629
+ "bbox": [
1630
+ 0.512,
1631
+ 0.148,
1632
+ 0.884,
1633
+ 0.175
1634
+ ],
1635
+ "angle": 0,
1636
+ "content": "Xiaodong Gu, Hongyu Zhang, and Sunghun Kim. 2018. Deep code search. In ICSE, pages 933-944. ACM."
1637
+ },
1638
+ {
1639
+ "type": "ref_text",
1640
+ "bbox": [
1641
+ 0.512,
1642
+ 0.184,
1643
+ 0.885,
1644
+ 0.237
1645
+ ],
1646
+ "angle": 0,
1647
+ "content": "Andrew F Hayes and Klaus Krippendorff. 2007. Answering the call for a standard reliability measure for coding data. Communication methods and measures, 1(1):77-89."
1648
+ },
1649
+ {
1650
+ "type": "ref_text",
1651
+ "bbox": [
1652
+ 0.512,
1653
+ 0.246,
1654
+ 0.885,
1655
+ 0.311
1656
+ ],
1657
+ "angle": 0,
1658
+ "content": "Yuan Huang, Nan Jia, Hao-Jie Zhou, Xiangping Chen, Zibin Zheng, and Mingdong Tang. 2020. Learning human-written commit messages to document code changes. J. Comput. Sci. Technol., 35(6):1258-1277."
1659
+ },
1660
+ {
1661
+ "type": "ref_text",
1662
+ "bbox": [
1663
+ 0.512,
1664
+ 0.322,
1665
+ 0.885,
1666
+ 0.387
1667
+ ],
1668
+ "angle": 0,
1669
+ "content": "Yuan Huang, Qiaoyang Zheng, Xiangping Chen, Yingfei Xiong, Zhiyong Liu, and Xiaonan Luo. 2017. Mining version control system for automatically generating commit comment. In ESEM, pages 414-423. IEEE Computer Society."
1670
+ },
1671
+ {
1672
+ "type": "ref_text",
1673
+ "bbox": [
1674
+ 0.512,
1675
+ 0.396,
1676
+ 0.885,
1677
+ 0.436
1678
+ ],
1679
+ "angle": 0,
1680
+ "content": "Siyuan Jiang, Ameer Armaly, and Collin McMillan. 2017. Automatically generating commit messages from diffs using neural machine translation. In ASE."
1681
+ },
1682
+ {
1683
+ "type": "ref_text",
1684
+ "bbox": [
1685
+ 0.512,
1686
+ 0.445,
1687
+ 0.885,
1688
+ 0.511
1689
+ ],
1690
+ "angle": 0,
1691
+ "content": "Tae Hwan Jung. 2021. Commitbert: Commit message generation using pre-trained programming language model. In Proceedings of the 1st Workshop on Natural Language Processing for Programming (NLP4Prog 2021), pages 26-33."
1692
+ },
1693
+ {
1694
+ "type": "ref_text",
1695
+ "bbox": [
1696
+ 0.512,
1697
+ 0.52,
1698
+ 0.885,
1699
+ 0.547
1700
+ ],
1701
+ "angle": 0,
1702
+ "content": "Maurice G Kendall. 1945. The treatment of ties in ranking problems. Biometrika, 33(3):239-251."
1703
+ },
1704
+ {
1705
+ "type": "ref_text",
1706
+ "bbox": [
1707
+ 0.512,
1708
+ 0.556,
1709
+ 0.885,
1710
+ 0.634
1711
+ ],
1712
+ "angle": 0,
1713
+ "content": "Patrick S. H. Lewis, Ethan Perez, Aleksandra Pik-tus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Kuttler, Mike Lewis, Wen-tau Yih, Tim Roektaschel, Sebastian Riedel, and Douwe Kiela. 2020. Retrieval-augmented generation for knowledge-intensive NLP tasks. In NeurIPS."
1714
+ },
1715
+ {
1716
+ "type": "ref_text",
1717
+ "bbox": [
1718
+ 0.512,
1719
+ 0.644,
1720
+ 0.885,
1721
+ 0.683
1722
+ ],
1723
+ "angle": 0,
1724
+ "content": "Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out."
1725
+ },
1726
+ {
1727
+ "type": "ref_text",
1728
+ "bbox": [
1729
+ 0.512,
1730
+ 0.693,
1731
+ 0.885,
1732
+ 0.746
1733
+ ],
1734
+ "angle": 0,
1735
+ "content": "Qin Liu, Zihe Liu, Hongming Zhu, Hongfei Fan, Bowen Du, and Yu Qian. 2019. Generating commit messages from diffs using pointer-generator network. In MSR, pages 299-309. IEEE / ACM."
1736
+ },
1737
+ {
1738
+ "type": "ref_text",
1739
+ "bbox": [
1740
+ 0.512,
1741
+ 0.755,
1742
+ 0.885,
1743
+ 0.808
1744
+ ],
1745
+ "angle": 0,
1746
+ "content": "Shangqing Liu, Cuiyun Gao, Sen Chen, Lun Yiu Nie, and Yang Liu. 2020. ATOM: commit message generation based on abstract syntax tree and hybrid ranking. TSE, PP:1-1."
1747
+ },
1748
+ {
1749
+ "type": "ref_text",
1750
+ "bbox": [
1751
+ 0.512,
1752
+ 0.817,
1753
+ 0.885,
1754
+ 0.882
1755
+ ],
1756
+ "angle": 0,
1757
+ "content": "Zhongxin Liu, Xin Xia, Ahmed E. Hassan, David Lo, Zhenchang Xing, and Xinyu Wang. 2018. Neural-machine-translation-based commit message generation: how far are we? In ASE, pages 373-384. ACM."
1758
+ },
1759
+ {
1760
+ "type": "ref_text",
1761
+ "bbox": [
1762
+ 0.512,
1763
+ 0.892,
1764
+ 0.885,
1765
+ 0.919
1766
+ ],
1767
+ "angle": 0,
1768
+ "content": "Ilya Loshchilov and Frank Hutter. 2019. Decoupled weight decay regularization. In ICLR."
1769
+ },
1770
+ {
1771
+ "type": "list",
1772
+ "bbox": [
1773
+ 0.512,
1774
+ 0.086,
1775
+ 0.885,
1776
+ 0.919
1777
+ ],
1778
+ "angle": 0,
1779
+ "content": null
1780
+ }
1781
+ ],
1782
+ [
1783
+ {
1784
+ "type": "ref_text",
1785
+ "bbox": [
1786
+ 0.117,
1787
+ 0.086,
1788
+ 0.49,
1789
+ 0.153
1790
+ ],
1791
+ "angle": 0,
1792
+ "content": "Pablo Loyola, Edison Marrese-Taylor, and Yutaka Matsuo. 2017. A neural architecture for generating natural language descriptions from source code changes. In ACL (2), pages 287-292. Association for Computational Linguistics."
1793
+ },
1794
+ {
1795
+ "type": "ref_text",
1796
+ "bbox": [
1797
+ 0.117,
1798
+ 0.166,
1799
+ 0.488,
1800
+ 0.232
1801
+ ],
1802
+ "angle": 0,
1803
+ "content": "Laura Moreno, Jairo Aponte, Giriprasad Sridhara, Andrian Marcus, Lori L. Pollock, and K. Vijay-Shanker. 2013. Automatic generation of natural language summaries for java classes. In ICPC, pages 23-32. IEEE Computer Society."
1804
+ },
1805
+ {
1806
+ "type": "ref_text",
1807
+ "bbox": [
1808
+ 0.117,
1809
+ 0.245,
1810
+ 0.488,
1811
+ 0.298
1812
+ ],
1813
+ "angle": 0,
1814
+ "content": "Lun Yiu Nie, Cuiyun Gao, Zhicong Zhong, Wai Lam, Yang Liu, and Zenglin Xu. 2021. Coregen: Contextualized code representation learning for commit message generation. Neurocomputing, 459:97-107."
1815
+ },
1816
+ {
1817
+ "type": "ref_text",
1818
+ "bbox": [
1819
+ 0.117,
1820
+ 0.311,
1821
+ 0.488,
1822
+ 0.377
1823
+ ],
1824
+ "angle": 0,
1825
+ "content": "Sebastiano Panichella, Annibale Panichella, Moritz Beller, Andy Zaidman, and Harald C. Gall. 2016. The impact of test case summaries on bug fixing performance: an empirical investigation. In ICSE, pages 547-558. ACM."
1826
+ },
1827
+ {
1828
+ "type": "ref_text",
1829
+ "bbox": [
1830
+ 0.117,
1831
+ 0.39,
1832
+ 0.488,
1833
+ 0.456
1834
+ ],
1835
+ "angle": 0,
1836
+ "content": "Sheena Panthaplackel, Pengyu Nie, Milos Gligoric, Junyi Jessy Li, and Raymond J. Mooney. 2020. Learning to update natural language comments based on code changes. In ACL, pages 1853-1868. Association for Computational Linguistics."
1837
+ },
1838
+ {
1839
+ "type": "ref_text",
1840
+ "bbox": [
1841
+ 0.117,
1842
+ 0.469,
1843
+ 0.488,
1844
+ 0.509
1845
+ ],
1846
+ "angle": 0,
1847
+ "content": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In ACL."
1848
+ },
1849
+ {
1850
+ "type": "ref_text",
1851
+ "bbox": [
1852
+ 0.117,
1853
+ 0.522,
1854
+ 0.488,
1855
+ 0.588
1856
+ ],
1857
+ "angle": 0,
1858
+ "content": "Jinfeng Shen, Xiaobing Sun, Bin Li, Hui Yang, and Jiajun Hu. 2016. On automatic summarization of what and why information in source code changes. In COMPSAC, pages 103-112. IEEE Computer Society."
1859
+ },
1860
+ {
1861
+ "type": "ref_text",
1862
+ "bbox": [
1863
+ 0.117,
1864
+ 0.602,
1865
+ 0.488,
1866
+ 0.68
1867
+ ],
1868
+ "angle": 0,
1869
+ "content": "Ensheng Shi, Wenchao Gub, Yanlin Wang, Lun Du, Hongyu Zhang, Shi Han, Dongmei Zhang, and Hongbin Sun. 2022a. Enhancing semantic code search with multimodal contrastive learning and soft data augmentation. arXiv preprint arXiv:2204.03293."
1870
+ },
1871
+ {
1872
+ "type": "ref_text",
1873
+ "bbox": [
1874
+ 0.117,
1875
+ 0.694,
1876
+ 0.488,
1877
+ 0.746
1878
+ ],
1879
+ "angle": 0,
1880
+ "content": "Ensheng Shi, Yanlin Wang, Lun Du, Junjie Chen, Shi Han, Hongyu Zhang, Dongmei Zhang, and Hongbin Sun. 2022b. On the evaluation of neural code summarization. In ICSE."
1881
+ },
1882
+ {
1883
+ "type": "ref_text",
1884
+ "bbox": [
1885
+ 0.117,
1886
+ 0.76,
1887
+ 0.488,
1888
+ 0.825
1889
+ ],
1890
+ "angle": 0,
1891
+ "content": "Ensheng Shi, Yanlin Wang, Lun Du, Hongyu Zhang, Shi Han, Dongmei Zhang, and Hongbin Sun. 2021a. Cast: Enhancing code summarization with hierarchical splitting and reconstruction of abstract syntax trees. In EMNLP."
1892
+ },
1893
+ {
1894
+ "type": "ref_text",
1895
+ "bbox": [
1896
+ 0.117,
1897
+ 0.84,
1898
+ 0.488,
1899
+ 0.919
1900
+ ],
1901
+ "angle": 0,
1902
+ "content": "Ensheng Shi, Yanlin Wang, Lun Du, Hongyu Zhang, Shi Han, Dongmei Zhang, and Hongbin Sun. 2021b. CAST: enhancing code summarization with hierarchical splitting and reconstruction of abstract syntax trees. In EMNLP (1), pages 4053-4062. Association for Computational Linguistics."
1903
+ },
1904
+ {
1905
+ "type": "list",
1906
+ "bbox": [
1907
+ 0.117,
1908
+ 0.086,
1909
+ 0.49,
1910
+ 0.919
1911
+ ],
1912
+ "angle": 0,
1913
+ "content": null
1914
+ },
1915
+ {
1916
+ "type": "ref_text",
1917
+ "bbox": [
1918
+ 0.513,
1919
+ 0.086,
1920
+ 0.882,
1921
+ 0.151
1922
+ ],
1923
+ "angle": 0,
1924
+ "content": "Wei Tao, Yanlin Wang, Ensheng Shi, Lun Du, Shi Han, Hongyu Zhang, Dongmei Zhang, and Wenqiang Zhang. 2021. On the evaluation of commit message generation models: An experimental study. In ICSME."
1925
+ },
1926
+ {
1927
+ "type": "ref_text",
1928
+ "bbox": [
1929
+ 0.512,
1930
+ 0.162,
1931
+ 0.882,
1932
+ 0.215
1933
+ ],
1934
+ "angle": 0,
1935
+ "content": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In NIPS, pages 5998-6008."
1936
+ },
1937
+ {
1938
+ "type": "ref_text",
1939
+ "bbox": [
1940
+ 0.512,
1941
+ 0.225,
1942
+ 0.882,
1943
+ 0.264
1944
+ ],
1945
+ "angle": 0,
1946
+ "content": "Ramakrishna Vedantam, C. Lawrence Zitnick, and Devi Parikh. 2015. Cider: Consensus-based image description evaluation. In CVPR."
1947
+ },
1948
+ {
1949
+ "type": "ref_text",
1950
+ "bbox": [
1951
+ 0.512,
1952
+ 0.274,
1953
+ 0.882,
1954
+ 0.339
1955
+ ],
1956
+ "angle": 0,
1957
+ "content": "Haoye Wang, Xin Xia, David Lo, Qiang He, Xinyu Wang, and John Grundy. 2021a. Context-aware retrieval-based deep commit message generation. ACM Trans. Softw. Eng. Methodol., 30(4):56:1-56:30."
1958
+ },
1959
+ {
1960
+ "type": "ref_text",
1961
+ "bbox": [
1962
+ 0.512,
1963
+ 0.35,
1964
+ 0.882,
1965
+ 0.415
1966
+ ],
1967
+ "angle": 0,
1968
+ "content": "Yanlin Wang, Lun Du, Ensheng Shi, Yuxuan Hu, Shi Han, and Dongmei Zhang. 2020. Cocogum: Contextual code summarization with multi-relational gnn on ums. Technical report, Microsoft, MSR-TR-2020-16. [Online]."
1969
+ },
1970
+ {
1971
+ "type": "ref_text",
1972
+ "bbox": [
1973
+ 0.512,
1974
+ 0.426,
1975
+ 0.882,
1976
+ 0.504
1977
+ ],
1978
+ "angle": 0,
1979
+ "content": "Yue Wang, Weishi Wang, Shafiq R. Joty, and Steven C. H. Hoi. 2021b. Codet5: Identifier-aware unified pre-trained encoder-decoder models for code understanding and generation. In EMNLP (1), pages 8696-8708. Association for Computational Linguistics."
1980
+ },
1981
+ {
1982
+ "type": "ref_text",
1983
+ "bbox": [
1984
+ 0.512,
1985
+ 0.514,
1986
+ 0.882,
1987
+ 0.581
1988
+ ],
1989
+ "angle": 0,
1990
+ "content": "Bolin Wei, Yongmin Li, Ge Li, Xin Xia, and Zhi Jin. 2020. Retrieve and refine: exemplar-based neural comment generation. In 2020 35th IEEE/ACM International Conference on Automated Software Engineering (ASE), pages 349-360. IEEE."
1991
+ },
1992
+ {
1993
+ "type": "ref_text",
1994
+ "bbox": [
1995
+ 0.512,
1996
+ 0.59,
1997
+ 0.882,
1998
+ 0.655
1999
+ ],
2000
+ "angle": 0,
2001
+ "content": "Frank Wilcoxon, SK Katti, and Roberta A Wilcox. 1970. Critical values and probability levels for the wilcoxon rank sum test and the wilcoxon signed rank test. Selected tables in mathematical statistics, 1:171-259."
2002
+ },
2003
+ {
2004
+ "type": "ref_text",
2005
+ "bbox": [
2006
+ 0.512,
2007
+ 0.666,
2008
+ 0.882,
2009
+ 0.72
2010
+ ],
2011
+ "angle": 0,
2012
+ "content": "Shengbin Xu, Yuan Yao, Feng Xu, Tianxiao Gu, Hanghang Tong, and Jian Lu. 2019. Commit message generation for source code changes. In *IJCAI*, pages 3975-3981. ijcai.org."
2013
+ },
2014
+ {
2015
+ "type": "ref_text",
2016
+ "bbox": [
2017
+ 0.512,
2018
+ 0.728,
2019
+ 0.882,
2020
+ 0.782
2021
+ ],
2022
+ "angle": 0,
2023
+ "content": "HongChien Yu, Chenyan Xiong, and Jamie Callan. 2021. Improving query representations for dense retrieval with pseudo relevance feedback. In CIKM, pages 3592-3596. ACM."
2024
+ },
2025
+ {
2026
+ "type": "ref_text",
2027
+ "bbox": [
2028
+ 0.512,
2029
+ 0.791,
2030
+ 0.882,
2031
+ 0.831
2032
+ ],
2033
+ "angle": 0,
2034
+ "content": "Jian Zhang, Xu Wang, Hongyu Zhang, Hailong Sun, and Xudong Liu. 2020. Retrieval-based neural source code summarization. In ICSE."
2035
+ },
2036
+ {
2037
+ "type": "list",
2038
+ "bbox": [
2039
+ 0.512,
2040
+ 0.086,
2041
+ 0.882,
2042
+ 0.831
2043
+ ],
2044
+ "angle": 0,
2045
+ "content": null
2046
+ }
2047
+ ]
2048
+ ]
2203.02xxx/2203.02700/3d69ca7f-39a5-4c99-b9df-bcb921fe9d04_origin.pdf ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:3c8ebd879a8873c60b56b95b8ac0aa4eb3fd0bf94f885520814f4ad38e73a500
3
+ size 653890
2203.02xxx/2203.02700/full.md ADDED
@@ -0,0 +1,310 @@
1
+ # RACE: Retrieval-Augmented Commit Message Generation
2
+
3
+ Ensheng Shi $^{a}$ Yanlin Wang $^{b,\S,\dagger}$ Wei Tao $^{c}$ Lun Du $^{d}$
4
+
5
+ Hongyu Zhang $^{e}$ Shi Han $^{d}$ Dongmei Zhang $^{d}$ Hongbin Sun $^{a,\S}$
6
+
7
+ $^{a}$ Xi'an Jiaotong University $^{b}$ School of Software Engineering, Sun Yat-sen University
8
+
9
+ $^{c}$ Fudan University $^{d}$ Microsoft Research $^{e}$ The University of Newcastle
10
+
11
+ s1530129650@stu.xjtu.edu.cn, hsun@mail.xjtu.edu.cn
12
+
13
+ wangylin36@mail.sysu.edu.cn, wtao18@fudan.edu.cn
14
+
15
+ {lun.du, shihan, dongmeiz}@microsoft.com
16
+
17
+ hongyu.zhang@newcastle.edu.au
18
+
19
+ # Abstract
20
+
21
+ Commit messages are important for software development and maintenance. Many neural network-based approaches have been proposed and shown promising results on automatic commit message generation. However, the generated commit messages could be repetitive or redundant. In this paper, we propose RACE, a new retrieval-augmented neural commit message generation method, which treats the retrieved similar commit as an exemplar and leverages it to generate an accurate commit message. As the retrieved commit message may not always accurately describe the content/intent of the current code diff, we also propose an exemplar guider, which learns the semantic similarity between the retrieved and current code diff and then guides the generation of commit message based on the similarity. We conduct extensive experiments on a large public dataset with five programming languages. Experimental results show that RACE can outperform all baselines. Furthermore, RACE can boost the performance of existing Seq2Seq models in commit message generation. Our data and source code are available at https://github.com/DeepSoftwareAnalytics/RACE.
22
+
23
+ # 1 Introduction
24
+
25
+ In software development and maintenance, source code is frequently changed. In practice, code changes are often documented as natural language commit messages, which summarize what (content) the code changes are or why (intent) the code is changed (Buse and Weimer, 2010; Cortes-Coy et al., 2014). High-quality commit messages are essential to help developers understand the evolution of software without diving into implementation details, which can save a large amount of time and effort in software development and maintenance (Dias et al., 2015; Barnett et al., 2015). However, it is difficult to write high-quality commit messages due to a lack of time, clear motivation, or experience. Even for seasoned developers, it still poses a considerable amount of extra workload to write a concise and informative commit message for massive code changes (Nie et al., 2021). It is also reported that around $14\%$ of commit messages over 23,000 projects in SourceForge are left empty (Dyer et al., 2013). Thus, automatically generating commit messages becomes an important task.
28
+
29
+ Over the years, many approaches have been proposed to automatically generate commit messages. Early studies (Shen et al., 2016; Cortes-Coy et al., 2014) are mainly based on predefined rules or templates, which may not cover all situations or comprehensively infer the intentions behind code changes. Later, some studies (Liu et al., 2018; Huang et al., 2017, 2020) adopt information retrieval (IR) techniques to reuse commit messages of similar code changes. They can take advantage of similar examples, but the reused commit messages might not correctly describe the content/intent of the current code change. Recently, some Seq2Seq-based neural network models (Loyola et al., 2017; Jiang et al., 2017; Xu et al., 2019; Liu et al., 2019; Jung, 2021) have been proposed to understand code diffs and generate high-quality commit messages. These approaches show promising performance, but they tend to generate high-frequency and repetitive tokens, and the generated commit messages suffer from insufficient information and poor readability (Wang et al., 2021a; Liu et al., 2018). Some studies (Liu et al., 2020; Wang et al., 2021a) also explore the combination of neural-based and IR-based techniques. Liu et al. (2020) propose an approach to rank the retrieved commit message (obtained by a simple IR-based model) and the generated commit message (obtained by a neural network model). Wang et al. (2021a) propose to use the similar code diff as auxiliary information in the inference stage, but the model is not trained to learn how to effectively utilize the information of retrieval results. Therefore, both of them fail to take good advantage of the information of retrieved similar results.
32
+
33
+ In this paper, we propose a novel model RACE (Retrieval-Augmented Commit mEssage generation), which retrieves a similar commit message as an exemplar, guides the neural model to learn the content of the code diff and the intent behind it, and generates a readable and informative commit message. The key idea of our approach is retrieval and augmentation. Specifically, we first train a code diff encoder to learn the semantics of code diffs and encode the code diff into a high-dimensional semantic space. Then, we retrieve the semantically most similar code diff paired with its commit message from a large parallel corpus, based on the similarity measured by the distance between their vectors. Next, we treat the similar commit message as an exemplar and leverage it to guide the neural-based model to generate an accurate commit message. However, the retrieved commit messages may not accurately describe the content/intent of the current code diffs and may even contain wrong or irrelevant information. To avoid the retrieved samples dominating the process of commit message generation, we propose an exemplar guider, which first learns the semantic similarity between the retrieved and current code diff and then leverages the information of the exemplar based on the learned similarity to guide the commit message generation.
34
+
35
+ To evaluate the effectiveness of RACE, we conduct experiments on a large-scale dataset MCMD (Tao et al., 2021) with five programming languages (Java, C#, C++, Python and JavaScript) and compare RACE with 11 state-of-the-art approaches. Experimental results show that: (1) RACE significantly outperforms existing state-of-the-art approaches in terms of four metrics (BLEU, Meteor, Rouge-L and Cider) on commit message generation. (2) RACE can boost the performance of existing Seq2Seq models in commit message generation. For example, it can improve the performance of NMTGen (Loyola et al., 2017), CommitBERT (Jung, 2021), CodeT5-small (Wang et al., 2021b) and CodeT5-base (Wang et al., 2021b) by $43\%$, $11\%$, $15\%$, and $16\%$ on average in terms of BLEU, respectively. In addition, we also conduct a human evaluation to confirm the effectiveness of RACE.
38
+
39
+ We summarize the main contributions of this paper as follows:
40
+
41
+ - We propose a retrieval-augmented neural commit message generation model, which treats the retrieved similar commit as an exemplar and leverages it to guide the neural network model to generate informative and readable commit messages.
42
+ - We apply our retrieval-augmented framework to four existing neural network-based approaches (NMTGen, CommitBERT, CodeT5-small, and CodeT5-base) and greatly boost their performance.
43
+ - We perform extensive experiments including human evaluation on a large multi-programming-language dataset and the results confirm the effectiveness of our approach over state-of-the-art approaches.
44
+
45
+ # 2 Related Work
46
+
47
+ Code intelligence, which leverages machine learning, especially deep learning-based methods, to understand source code, is an emerging topic and has obtained promising results in many software engineering tasks, such as code summarization (Zhang et al., 2020; Shi et al., 2021a, 2022b; Wang et al., 2020) and code search (Gu et al., 2018; Du et al., 2021; Shi et al., 2022a). Among them, commit message generation plays an important role in software evolution.
48
+
49
+ In early work, information retrieval techniques were introduced to commit message generation (Liu et al., 2018; Huang et al., 2017, 2020). For instance, ChangeDoc (Huang et al., 2020) retrieves the most similar commits according to the syntax or semantics in the code diff and reuses the commit messages of similar code diffs. NNGen (Liu et al., 2018) is a simple yet effective retrieval-based method using the nearest neighbor algorithm. It first recalls the top-k similar code diffs in the parallel corpus based on cosine similarity between bag-of-words vectors of code diffs, and then selects the most similar result based on the BLEU scores between each of the top-k results and the input code diff. These approaches can reuse similar examples and the reused commit messages are usually readable and understandable.
50
+
51
+ Recently, many neural-based approaches (Loyola et al., 2017; Jiang et al., 2017; Xu et al., 2019; Liu et al., 2019, 2020; Jung, 2021; Dong et al., 2022; Nie et al., 2021; Wang et al., 2021a) have been used to learn the semantics of code diffs and translate them into commit messages. For example, NMTGen (Loyola et al., 2017) and CommitGen (Jiang et al., 2017) treat the code diffs as plain texts and adopt Seq2Seq neural networks with different attention mechanisms to translate them into commit messages. CoDiSum (Xu et al., 2019) extracts both code structure and code semantics from code diffs and jointly models them with a multi-layer bidirectional GRU to better learn the representations of code diffs. PtrGNCMsg (Liu et al., 2019) incorporates the pointer-generator network into the Seq2Seq model to handle out-of-vocabulary (OOV) words. CommitBERT (Jung, 2021) leverages CodeBERT (Feng et al., 2020), a pre-trained language model for source code, to learn the semantic representations of code diffs and adopts a Transformer-based (Vaswani et al., 2017) decoder to generate the commit message. These approaches show promising results on the generation of commit messages.
54
+
55
+ Recently, introducing retrieved relevant results into the training process has been found useful in many generation tasks (Lewis et al., 2020; Yu et al., 2021; Wei et al., 2020). Some studies (Liu et al., 2020; Wang et al., 2021a) also explore the combination of neural-based models and IR-based techniques to generate commit messages. ATOM (Liu et al., 2020) ensembles the neural-based model and the IR-based technique through hybrid ranking. Specifically, it uses a BiLSTM to encode AST paths extracted from the ASTs of code diffs and adopts a decoder to generate commit messages. It also uses the TF-IDF technique to represent code diffs as vectors and retrieves the most similar commit message based on cosine similarity. The generated and retrieved commit messages are finally prioritized by a hybrid ranking module. CoRec (Wang et al., 2021a) is also a hybrid model and only considers the retrieved result during inference. Specifically, at the training stage, they use an encoder-decoder neural model to encode the input code diffs with an encoder and generate commit messages with a decoder. At the inference stage, they first use the trained encoder to retrieve the most similar code diff from the training set. Then they reuse the trained encoder-decoder to encode the input and the retrieved code diff, combine the probability distributions (obtained by two decoders) of each word, and generate the final commit message step by step. In summary, ATOM does not learn to refine the retrieved results or the generated results, and CoRec is not trained to utilize the information of retrieval results. Therefore, both of them fail to take full advantage of the retrieved similar results. In this paper, we treat the retrieved similar commit as an exemplar and train the model to leverage the exemplar to enhance commit message generation.
58
+
59
+ # 3 Proposed Approach
60
+
61
+ The overview of RACE is shown in Figure 1. It includes two modules: a retrieval module and a generation module. Specifically, RACE first retrieves the most semantically similar code diff paired with its commit message from the large parallel training corpus. The semantic similarity between two code diffs is measured by the cosine similarity of their vectors obtained by a code diff encoder. Next, RACE treats the retrieved commit message as an exemplar and uses it to guide the neural network to generate an understandable and concise commit message.
62
+
63
+ # 3.1 Retrieval module
64
+
65
+ In this module, we aim to retrieve the most semantically similar result. Specifically, we first train an encoder-decoder neural network on the large commit message generation dataset. The encoder is used to learn the semantics of code diffs and encode code diffs into a high-dimensional semantic space. Then we retrieve the most semantically similar code diff paired with its commit message from the large parallel training corpus. The semantic similarity between two code diffs is measured by the cosine similarity of their vectors obtained by the well-trained code diff encoder.
66
+
67
+ Recently, encoder-decoder neural network models (Loyola et al., 2017; Jiang et al., 2017; Jung, 2021), which leverage an encoder to learn the semantics of code diffs and employ a decoder to generate the commit message, have shown their superiority in the understanding of code diffs and the generation of commit messages. To enable the code diff encoder to understand the semantics of code diffs, we train it with a commit message generator on a large commit message generation dataset, which consists of about 0.9 million <code diff, commit message> pairs.
68
+
69
+ To capture long-range dependencies (e.g., a variable is initialized before the changed line) and more contextual information of code diffs, we
70
+
71
+ ![](images/fd4aef83d51ff54aa0f9e77f50d30f3d1820f9418368e54b218e777453a75071.jpg)
72
+
73
+ ![](images/1f23e86417c1a02989374e6f2ce6197fd5f838385a9bb232fd5ef16290277b89.jpg)
74
+ Figure 1: The architecture of RACE. It includes two modules: retrieval module and generation module. The retrieval module is used to retrieve the most similar code diff and commit message. The generation module leverages the retrieved result to enhance the performance of neural network models.
75
+
76
+ employ a Transformer-based encoder to learn the semantic representations of input code diffs. As shown in Figure 1, a Transformer-based encoder is stacked with multiple encoder layers. Each layer consists of four parts, namely, a multi-head self-attention module, a relative position embedding module, a feed-forward network (FFN) and an add & norm module. In the $b$-th attention head, the input $\mathbf{X}^{b} = (\mathbf{x}_1^{b},\mathbf{x}_2^{b},\dots,\mathbf{x}_l^{b})$ (where $\mathbf{X}^{b} = \mathbf{X}[(b - 1)*head_{dim}:b*head_{dim}]$, $\mathbf{X}$ is the sequence of code diff embeddings, $head_{dim}$ is the dimension of each head, and $l$ is the input sequence length) is transformed to $\mathbf{Head}^{b} = (\mathbf{head}_{1}^{b},\mathbf{head}_{2}^{b},\dots,\mathbf{head}_{l}^{b})$ by:
77
+
78
+ $$
79
+ \mathbf{head}_{i}^{b} = \sum_{j = 1}^{l} \alpha_{ij} \left(\mathbf{W}_{\mathbf{V}} \mathbf{x}_{j}^{b} + \mathbf{p}_{ij}^{\mathbf{V}}\right) \tag{1}
80
+ $$
81
+
82
+ $$
83
+ e_{ij} = \frac{\left(\mathbf{W}_{\mathbf{Q}} \mathbf{x}_{i}^{b}\right)^{T} \left(\mathbf{W}_{\mathbf{K}} \mathbf{x}_{j}^{b} + \mathbf{p}_{ij}^{\mathbf{K}}\right)}{\sqrt{d_k}}
84
+ $$
85
+
86
+ where $\alpha_{ij} = \frac{\exp e_{ij}}{\sum_{k=1}^{l}\exp e_{ik}}$; $\mathbf{W}_{\mathbf{Q}}$, $\mathbf{W}_{\mathbf{K}}$ and $\mathbf{W}_{\mathbf{V}}$ are learnable matrices for queries, keys and values; $d_k$ is the dimension of queries and keys; and $\mathbf{p}_{\mathbf{ij}}^{\mathbf{K}}$ and $\mathbf{p}_{\mathbf{ij}}^{\mathbf{V}}$ are relative positional representations for positions $i$ and $j$.
87
+
88
+ The outputs of all heads are concatenated and then fed to the FFN module, which is a multi-layer perceptron. The add & norm operation is employed after the multi-head attention and FFN modules. The calculations are as follows:
89
+
90
+ $$
91
+ \begin{array}{l} \mathbf{Head} = Concat\left(\mathbf{Head}^{1}, \dots, \mathbf{Head}^{B}\right) \\ \mathbf{Hid} = add\&norm(\mathbf{Head}, \mathbf{X}) \end{array} \tag{2}
92
+ $$
93
+
94
+ $$
95
+ \mathbf{Enc} = add\&norm(\mathbf{FFN}(\mathbf{Hid}), \mathbf{Hid})
96
+ $$
97
+
98
+ where $add\&norm(\mathbf{A_1},\mathbf{A_2}) = LN(\mathbf{A_1} + \mathbf{A_2})$, $B$ is the number of heads, and $LN$ is layer normalization. The final output of the encoder is sent to a Transformer-based decoder to generate the commit message step by step. We use cross-entropy as the loss function and adopt AdamW (Loshchilov and Hutter, 2019) to optimize the parameters of the code diff encoder and the decoder at the top of Figure 1.
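+
+ For illustration, the following PyTorch sketch implements one such encoder layer (Equations 1 and 2), omitting the relative position terms for brevity; the class and variable names are ours, not the authors' implementation.
+
+ ```python
+ import torch.nn as nn
+
+ class EncoderLayer(nn.Module):
+     """One Transformer encoder layer: multi-head self-attention,
+     FFN, and add & norm (Eq. 1-2); relative positions omitted."""
+     def __init__(self, d_model=768, n_heads=12, d_ff=3072):
+         super().__init__()
+         self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
+         self.ffn = nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(),
+                                  nn.Linear(d_ff, d_model))
+         self.norm1 = nn.LayerNorm(d_model)
+         self.norm2 = nn.LayerNorm(d_model)
+
+     def forward(self, x):                       # x: (batch, l, d_model)
+         head, _ = self.attn(x, x, x)            # Head = Concat(Head^1..Head^B)
+         hid = self.norm1(x + head)              # Hid = add&norm(Head, X)
+         return self.norm2(hid + self.ffn(hid))  # Enc = add&norm(FFN(Hid), Hid)
+ ```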
99
+
100
+ Next, the retrieval module is used to retrieve the most similar result from a large parallel training corpus. We firstly use the above code diff encoder to map code diffs into a high-dimensional latent space and retrieve the most similar example based on cosine similarity.
101
+
102
+ Specifically, after being trained on the commit message generation dataset, the code diff encoder can capture the semantics of code diffs well. We use the well-trained code diff encoder followed by a mean-pooling operation to map the code diff into a high-dimensional space. Mathematically, given the input code diff embedding $\mathbf{X} = (\mathbf{x}_1,\mathbf{x}_2,\dots,\mathbf{x}_l)$, the code diff encoder transforms it to $\mathbf{Enc} = (\mathbf{enc}_1,\mathbf{enc}_2,\dots,\mathbf{enc}_l)$. Then we obtain the semantic vector of the code diff by a pooling operation:
103
+
104
+ $$
105
+ \mathbf{vec} = \mathrm{pooling}(\mathbf{Enc}) = \mathrm{mean}\left(\mathbf{enc}_{1}, \mathbf{enc}_{2}, \dots, \mathbf{enc}_{l}\right) \tag{3}
106
+ $$
107
+
108
+ where mean is a dimension-wise average operation. We measure the similarity of two code diffs by the cosine similarity of their semantic vectors and retrieve the most similar code diff paired with its commit message from the parallel training corpus. For each code diff, we return the first-ranked similar result; for code diffs in the training dataset, however, we return the second-ranked similar result because the first-ranked result is the code diff itself.
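+
+ A minimal sketch of this retrieval step (Equation 3 plus cosine ranking); it assumes the corpus vectors have been pre-computed with the trained encoder, and all names are illustrative.
+
+ ```python
+ import torch
+ import torch.nn.functional as F
+
+ def diff_vector(enc):          # enc: (l, d) encoder outputs of one code diff
+     return enc.mean(dim=0)     # vec = mean(enc_1, ..., enc_l)  (Eq. 3)
+
+ def retrieve(query_vec, corpus_vecs, from_training_set=False):
+     """Return the index of the most similar code diff in the corpus.
+     corpus_vecs: (N, d) pre-computed vectors of the training corpus."""
+     sims = F.cosine_similarity(query_vec.unsqueeze(0), corpus_vecs, dim=1)
+     ranked = sims.argsort(descending=True)
+     # A training-set query retrieves itself first, so take the runner-up.
+     return ranked[1].item() if from_training_set else ranked[0].item()
+ ```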
111
+
112
+ # 3.2 Generation module
113
+
114
+ As shown at the bottom of Figure 1, in the generation module, we treat the retrieved commit message as an exemplar and leverage it to guide the neural network model to generate an accurate commit message. Our generation module consists of three components: three encoders, an exemplar guider, and a decoder.
115
+
116
+ First, following Equations 1 and 2, three Transformer-based encoders are adopted to obtain the representations of the input code diff $\mathbf{Enc}^{d} = (\mathbf{enc}_1^d,\mathbf{enc}_2^d,\dots,\mathbf{enc}_l^d)$, the similar code diff $\mathbf{Enc}^{s} = (\mathbf{enc}_1^s,\mathbf{enc}_2^s,\dots,\mathbf{enc}_m^s)$, and the similar commit message $\mathbf{Enc}^{m} = (\mathbf{enc}_1^m,\mathbf{enc}_2^m,\dots,\mathbf{enc}_n^m)$ (step ① in Figure 1), where $l,m,n$ are the lengths of the input code diff, the similar code diff, and the similar commit message, respectively.
117
+
118
+ Second, the retrieved similar commit messages may not always accurately describe the content/intent of the input code diffs and may even express totally wrong or irrelevant semantics. Therefore, we propose an exemplar guider, which first learns the semantic similarity between the retrieved and input code diff and then leverages the information of the similar commit messages based on the learned similarity to guide the commit message generation (step ②). Mathematically, the exemplar guider calculates the semantic similarity $\lambda$ between the input code diff and the similar code diff based on their representations $\mathbf{Enc}^d$ and $\mathbf{Enc}^s$ (steps ② and ③):
119
+
120
+ $$
121
+ \lambda = \sigma\left(\mathbf{W}_{\mathbf{s}}\left[\mathrm{mean}\left(\mathbf{Enc}^{d}\right), \mathrm{mean}\left(\mathbf{Enc}^{s}\right)\right]\right) \tag{4}
122
+ $$
123
+
124
+ where $\sigma$ is the sigmoid activation function, $\mathbf{W}_{\mathrm{s}}$ is a learnable matrix, and mean is a dimension-wise average operation.
125
+
126
+ Third, we weight the representations of the code diff and the similar commit message by $1 - \lambda$ and $\lambda$, respectively, and then concatenate them to obtain the final input encoding:
127
+
128
+ $$
129
+ \mathbf{Enc}^{dm} = \left[ (1 - \lambda) * \mathbf{Enc}^{d} : \lambda * \mathbf{Enc}^{m} \right] \tag{5}
130
+ $$
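+
+ A minimal sketch of the exemplar guider (Equations 4 and 5), assuming single (unbatched) sequences; the module name and shapes are illustrative.
+
+ ```python
+ import torch
+ import torch.nn as nn
+
+ class ExemplarGuider(nn.Module):
+     """Gate the exemplar message by the learned diff similarity."""
+     def __init__(self, d_model=768):
+         super().__init__()
+         self.w_s = nn.Linear(2 * d_model, 1)  # W_s over [mean(Enc^d), mean(Enc^s)]
+
+     def forward(self, enc_d, enc_s, enc_m):   # shapes: (l, d), (m, d), (n, d)
+         pooled = torch.cat([enc_d.mean(0), enc_s.mean(0)], dim=-1)
+         lam = torch.sigmoid(self.w_s(pooled))                   # Eq. 4
+         return torch.cat([(1 - lam) * enc_d, lam * enc_m], 0)   # Eq. 5, (l+n, d)
+ ```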
131
+
132
+ Finally, we use a Transformer-based decoder to generate the commit message. The decoder consists of multiple decoder layers, and each layer includes a masked multi-head self-attention module, a
133
+
134
+ <table><tr><td>Language</td><td>Training</td><td>Validation</td><td>Test</td></tr><tr><td>Java</td><td>160,018</td><td>19,825</td><td>20,159</td></tr><tr><td>C#</td><td>149,907</td><td>18,688</td><td>18,702</td></tr><tr><td>C++</td><td>160,948</td><td>20,000</td><td>20,141</td></tr><tr><td>Python</td><td>206,777</td><td>25,912</td><td>25,837</td></tr><tr><td>JavaScript</td><td>197,529</td><td>24,899</td><td>24,773</td></tr></table>
135
+
136
+ Table 1: Statistics of the evaluation dataset.
137
+
138
+ multi-head cross-attention module, an FFN module and an add & norm module. Different from the multi-head self-attention module in the encoder, for a given token, the masked multi-head self-attention in the decoder can only attend to the previous tokens rather than both the preceding and following context. In the $b$-th cross-attention layer, the input encoding $\mathbf{Enc}^{dm} = (\mathbf{enc}_1^{dm}, \mathbf{enc}_2^{dm}, \dots, \mathbf{enc}_{l+n}^{dm})$ is queried by the preceding commit message representations $\mathbf{Msg} = (\mathbf{msg}_1, \dots, \mathbf{msg}_t)$ obtained by the masked multi-head self-attention module.
139
+
140
+ $$
141
+ Dec_{\text{head}_{i}^{b}} = \sum_{j = 1}^{l + n} \alpha_{ij} \left(\mathbf{W}_{\mathbf{V}}^{\mathbf{Dec}} \mathbf{enc}_{j}^{b}\right) \tag{6}
142
+ $$
143
+
144
+ $$
145
+ Dec_{e_{ij}} = \frac{\left(\mathbf{W}_{\mathbf{Q}}^{\mathbf{Dec}} \mathbf{msg}_{i}^{b}\right)^{T} \left(\mathbf{W}_{\mathbf{K}}^{\mathbf{Dec}} \mathbf{enc}_{j}^{b}\right)}{\sqrt{d_k}}
146
+ $$
147
+
148
+ where $\alpha_{ij} = \frac{\exp Dec_{e_{ij}}}{\sum_{k=1}^{l+n}\exp Dec_{e_{ik}}}$; $\mathbf{W}_{\mathbf{Q}}^{\mathbf{Dec}}$, $\mathbf{W}_{\mathbf{K}}^{\mathbf{Dec}}$ and $\mathbf{W}_{\mathbf{V}}^{\mathbf{Dec}}$ are trainable projection matrices for queries, keys and values of the decoder layer; and $t$ is the length of the preceding commit message.
149
+
150
+ Next, we use Equation 2 to obtain the hidden states of each decoder layer. On top of the last decoder layer, we employ an MLP and a softmax operator to obtain the generation probability of each commit message token over the vocabulary. Then we use cross-entropy as the loss function and apply AdamW for optimization.
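+
+ A minimal sketch of this output head and loss, assuming a single linear projection as the MLP; shapes and names are illustrative.
+
+ ```python
+ import torch.nn as nn
+
+ vocab_size, d_model = 32109, 768
+ lm_head = nn.Linear(d_model, vocab_size)   # MLP projecting to the vocabulary
+ criterion = nn.CrossEntropyLoss()          # applies log-softmax internally
+
+ def generation_loss(dec_states, target_ids):  # (batch, t, d), (batch, t)
+     logits = lm_head(dec_states)
+     return criterion(logits.view(-1, vocab_size), target_ids.view(-1))
+ ```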
151
+
152
+ # 4 Experimental Setup
153
+
154
+ # 4.1 Dataset
155
+
156
+ In our experiment, we use a large-scale dataset MCMD (Tao et al., 2021) with five programming languages (PLs): Java, C#, C++, Python and JavaScript. For each PL, MCMD collects commits from the top-100 starred repositories on GitHub and then filters out the redundant messages (such as rollback commits) and noisy messages defined in Liu et al. (2018). Finally, to balance the size of the data, they randomly sample and retain 450,000 commits for each PL. Each commit contains the code diff, the commit message, the name of the repository,
157
+
158
+ and the timestamp of the commit, etc. To reduce the noisy data in the dataset, we further filter out commits that contain multiple files or files that cannot be parsed (such as .jar, .ddl, .mp3, and .apk).
159
+
160
+ # 4.2 Data pre-processing
161
+
162
+ The code diffs in MCMD are based on line-level code changes. To obtain more fine-grained code changes, following a previous study (Panthaplackel et al., 2020), we use a sequence of spans of token-level change actions to represent the code diff. Each action is structured as <action> span of tokens <action end>. There are four <action> types, namely, <keep>, <insert>, <delete>, and <replace>. <keep> means that the span of tokens is unchanged. <insert> means adding a span of tokens. <delete> means deleting a span of tokens. <replace> means that the span of tokens in the old version will be replaced with a different span of tokens in the new version. Thus, we extend <replace> to <replace old> and <replace new> to indicate the spans of old and new tokens, respectively. We use difflib<sup>1</sup> to extract the sequence of code change actions.
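+
+ A minimal sketch of this extraction with difflib, using the special-token names from Section 4.3; the function itself is illustrative, not the authors' exact pre-processing code.
+
+ ```python
+ import difflib
+
+ def diff_actions(old_tokens, new_tokens):
+     """Render a token-level code change as a change-action sequence."""
+     out = []
+     matcher = difflib.SequenceMatcher(a=old_tokens, b=new_tokens)
+     for op, i1, i2, j1, j2 in matcher.get_opcodes():
+         if op == "equal":
+             out += ["<keep>"] + old_tokens[i1:i2] + ["<keep_end>"]
+         elif op == "insert":
+             out += ["<insert>"] + new_tokens[j1:j2] + ["<insert_end>"]
+         elif op == "delete":
+             out += ["<delete>"] + old_tokens[i1:i2] + ["<delete_end>"]
+         else:  # "replace": old span followed by its replacement
+             out += (["<replace_old>"] + old_tokens[i1:i2]
+                     + ["<replace_new>"] + new_tokens[j1:j2]
+                     + ["<replace_end>"])
+     return out
+
+ # e.g. diff_actions("x = 1".split(), "x = 2".split()) yields
+ # ['<keep>', 'x', '=', '<keep_end>',
+ #  '<replace_old>', '1', '<replace_new>', '2', '<replace_end>']
+ ```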
163
+
164
+ # 4.3 Hyperparameters
165
+
166
+ We follow Tao et al. (2021) to set the maximum lengths of the code diff and the commit message to 200 and 50, respectively. We use the weights of the encoder of CodeT5-base (Wang et al., 2021b) to initialize the code diff encoders and use the decoder of CodeT5-base to initialize the decoder in Figure 1. The original vocabulary size of CodeT5 is 32,100. We add nine special tokens (<keep>, <keep_end>, <insert>, <insert_end>, <delete>, <delete_end>, <replace_old>, <replace_new>, and <replace_end>), so the vocabulary size becomes 32,109. For the optimizer, we use AdamW with the learning rate 2e-5. The batch size is 32. The max epoch is 20. In addition, we run the experiments 3 times with random seeds 0, 1, and 2 and report the mean values in the paper. The experiments are conducted on a server with 4 NVIDIA Tesla V100 GPUs, and each epoch takes about 1.2 hours.
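+
+ For reference, this setup can be reproduced roughly as follows with the public CodeT5 checkpoint on Hugging Face; a hedged sketch, not the authors' training script.
+
+ ```python
+ import torch
+ from transformers import RobertaTokenizer, T5ForConditionalGeneration
+
+ tokenizer = RobertaTokenizer.from_pretrained("Salesforce/codet5-base")
+ model = T5ForConditionalGeneration.from_pretrained("Salesforce/codet5-base")
+
+ # Extend the vocabulary with the nine change-action tokens (32,100 -> 32,109).
+ tokenizer.add_tokens(["<keep>", "<keep_end>", "<insert>", "<insert_end>",
+                       "<delete>", "<delete_end>", "<replace_old>",
+                       "<replace_new>", "<replace_end>"])
+ model.resize_token_embeddings(len(tokenizer))
+
+ optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
+ ```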
167
+
168
+ # 4.4 Evaluation metrics
169
+
170
+ We evaluate the quality of the generated messages using four metrics: BLEU (Papineni et al., 2002), Meteor (Banerjee and Lavie, 2005), Rouge-L (Lin, 2004), and Cider (Vedantam et al., 2015). These metrics are prevalent in machine translation, text summarization, and image captioning. Among the many variants of BLEU used to measure generated messages, we choose B-Norm (the BLEU results in this paper are B-Norm), which correlates with human perception the most (Tao et al., 2021). The detailed metric calculations can be found in the Appendix.
173
+
174
+ # 4.5 Baselines
175
+
176
+ We compare RACE with four end-to-end neural-based models, two IR-based methods, two hybrid approaches which combine IR-based techniques and end-to-end neural-based methods, and three pre-trained-based models. The four end-to-end neural-based models are CommitGen (Jiang et al., 2017), CoDiSum (Xu et al., 2019), NMTGen (Loyola et al., 2017), and PtrGNCMsg (Liu et al., 2019); they all train models from scratch. The two IR-based methods are NNGen (Liu et al., 2018) and Lucene (Apache, 2011); they retrieve the similar code diff based on different similarity measures and reuse the commit message of the similar code diff as the final result. CoRec (Wang et al., 2021a) and ATOM (Liu et al., 2020) are both hybrid models which combine neural-based models and IR-based techniques. The three pre-trained models are CommitBERT, CodeT5-small, and CodeT5-base. They are pre-trained on large parallel code and natural language corpora and fine-tuned on the commit message generation dataset. All baselines except Lucene, CodeT5-small and CodeT5-base are introduced in Section 2. Lucene is a traditional IR baseline, which uses TF-IDF to represent a code diff as a vector and searches for the similar code diff based on the cosine similarity between two vectors. CodeT5-small and CodeT5-base are source code pre-trained models and have achieved promising results in many code-related tasks (Wang et al., 2021b). We fine-tune them on MCMD as strong baselines. In addition, we only evaluate ATOM on the Java dataset as the current implementation of ATOM only supports Java.
177
+
178
+ # 5 Experimental Results
179
+
180
+ # 5.1 How does RACE perform compared with baseline approaches?
181
+
182
+ To evaluate the effectiveness of RACE, we conduct experiments comparing it with the 11
183
+
184
+ <table><tr><td rowspan="2" colspan="2">Model</td><td colspan="4">Java</td><td colspan="4">C#</td><td colspan="4">C++</td><td colspan="4">Python</td><td colspan="4">JavaScript</td></tr><tr><td>BLEU</td><td>Met.</td><td>Rou.</td><td>Cid.</td><td>BLEU</td><td>Met.</td><td>Rou.</td><td>Cid.</td><td>BLEU</td><td>Met.</td><td>Rou.</td><td>Cid.</td><td>BLEU</td><td>Met.</td><td>Rou.</td><td>Cid.</td><td>BLEU</td><td>Met.</td><td>Rou.</td><td>Cid.</td></tr><tr><td rowspan="2">IR-based</td><td>NNGen</td><td>19.41</td><td>12.40</td><td>25.15</td><td>1.23</td><td>22.15</td><td>14.77</td><td>26.46</td><td>1.55</td><td>13.61</td><td>9.39</td><td>18.21</td><td>0.73</td><td>16.06</td><td>10.91</td><td>21.69</td><td>0.92</td><td>18.65</td><td>12.50</td><td>24.45</td><td>1.21</td></tr><tr><td>Lucene</td><td>15.61</td><td>10.56</td><td>19.43</td><td>0.94</td><td>20.68</td><td>13.34</td><td>23.02</td><td>1.36</td><td>13.43</td><td>8.81</td><td>16.78</td><td>0.67</td><td>15.16</td><td>9.63</td><td>18.85</td><td>0.85</td><td>17.66</td><td>11.25</td><td>21.75</td><td>1.02</td></tr><tr><td rowspan="4">End-to-end</td><td>CommitGen</td><td>14.07</td><td>7.52</td><td>18.78</td><td>0.66</td><td>13.38</td><td>8.31</td><td>17.44</td><td>0.63</td><td>11.52</td><td>6.98</td><td>16.75</td><td>0.45</td><td>11.02</td><td>6.43</td><td>16.64</td><td>0.42</td><td>18.67</td><td>11.88</td><td>24.10</td><td>1.08</td></tr><tr><td>CoDiSum</td><td>13.97</td><td>6.02</td><td>16.12</td><td>0.39</td><td>12.71</td><td>5.56</td><td>14.40</td><td>0.36</td><td>12.44</td><td>6.00</td><td>14.39</td><td>0.42</td><td>14.61</td><td>8.59</td><td>17.02</td><td>0.42</td><td>11.22</td><td>5.32</td><td>13.26</td><td>0.28</td></tr><tr><td>NMTGen</td><td>15.52</td><td>8.91</td><td>21.13</td><td>0.86</td><td>12.71</td><td>8.11</td><td>17.16</td><td>0.62</td><td>11.57</td><td>7.06</td><td>17.46</td><td>0.51</td><td>11.41</td><td>7.18</td><td>18.43</td><td>0.48</td><td>18.22</td><td>12.07</td><td>24.43</td><td>1.12</td></tr><tr><td>PtrGNCMsg</td><td>17.71</td><td>11.33</td><td>24.32</td><td>0.99</td><td>15.98</td><td>10.18</td><td>21.16</td><td>0.83</td><td>14.06</td><td>9.63</td><td>20.17</td><td>0.63</td><td>15.89</td><td>11.36</td><td>23.49</td><td>0.76</td><td>20.78</td><td>14.52</td><td>27.87</td><td>1.29</td></tr><tr><td rowspan="2">Hybrid</td><td>ATOM</td><td>16.42</td><td>11.66</td><td>22.67</td><td>0.91</td><td>/</td><td>/</td><td>/</td><td>/</td><td>/</td><td>/</td><td>/</td><td>/</td><td>/</td><td>/</td><td>/</td><td>/</td><td>/</td><td>/</td><td>/</td><td>/</td></tr><tr><td>CoRec</td><td>18.51</td><td>11.26</td><td>24.78</td><td>1.13</td><td>18.41</td><td>11.70</td><td>23.73</td><td>1.12</td><td>14.02</td><td>8.63</td><td>20.10</td><td>0.72</td><td>15.09</td><td>9.60</td><td>22.35</td><td>0.80</td><td>21.30</td><td>13.84</td><td>27.53</td><td>1.40</td></tr><tr><td 
rowspan="3">Pre-trained</td><td>CommitBERT</td><td>22.32</td><td>12.63</td><td>28.03</td><td>1.42</td><td>20.67</td><td>12.31</td><td>25.76</td><td>1.25</td><td>16.16</td><td>10.05</td><td>19.90</td><td>0.94</td><td>17.29</td><td>11.31</td><td>22.36</td><td>1.01</td><td>23.40</td><td>15.64</td><td>30.51</td><td>1.54</td></tr><tr><td>CodeT5-small</td><td>22.28</td><td>14.16</td><td>29.71</td><td>1.37</td><td>18.92</td><td>11.71</td><td>24.95</td><td>1.05</td><td>16.08</td><td>11.19</td><td>21.60</td><td>0.79</td><td>17.49</td><td>12.46</td><td>24.65</td><td>0.90</td><td>21.97</td><td>14.48</td><td>28.65</td><td>1.42</td></tr><tr><td>CodeT5-base</td><td>22.76</td><td>14.57</td><td>30.23</td><td>1.43</td><td>22.21</td><td>14.51</td><td>29.08</td><td>1.33</td><td>16.73</td><td>11.69</td><td>22.86</td><td>0.85</td><td>17.99</td><td>12.74</td><td>25.27</td><td>0.96</td><td>22.87</td><td>15.12</td><td>29.81</td><td>1.50</td></tr><tr><td rowspan="2">Ours</td><td rowspan="2">RACE</td><td>25.66</td><td>15.46</td><td>32.02</td><td>1.76</td><td>26.33</td><td>16.37</td><td>31.31</td><td>1.84</td><td>19.13</td><td>12.55</td><td>24.52</td><td>1.14</td><td>21.79</td><td>14.68</td><td>28.35</td><td>1.40</td><td>25.55</td><td>16.31</td><td>31.79</td><td>1.84</td></tr><tr><td>↑13%</td><td>↑6%</td><td>↑6%</td><td>↑23%</td><td>↑19%</td><td>↑13%</td><td>↑8%</td><td>↑38%</td><td>↑14%</td><td>↑7%</td><td>↑7%</td><td>↑34%</td><td>↑21%</td><td>↑15%</td><td>↑12%</td><td>↑46%</td><td>↑12%</td><td>↑8%</td><td>↑7%</td><td>↑23%</td></tr><tr><td>Ablation</td><td>RACE -Guider</td><td>23.37</td><td>13.98</td><td>30.01</td><td>1.53</td><td>21.33</td><td>13.56</td><td>27.33</td><td>1.31</td><td>17.43</td><td>12.10</td><td>22.03</td><td>0.95</td><td>19.44</td><td>13.89</td><td>26.4</td><td>1.01</td><td>23.39</td><td>15.64</td><td>30.51</td><td>1.54</td></tr></table>
185
+
186
+ Table 2: Comparison of RACE with baselines under four metrics on five programming languages. Met., Rou., and Cid. are short for Meteor, Rouge-L, and Cider, respectively. All results are statistically significant (with $p < 0.01$ ).
187
+
188
+ baselines including two IR-based approaches, four end-to-end neural-based approaches, two hybrid approaches, and three pre-trained approaches, in terms of four evaluation metrics. The experimental results are shown in Table 2.
189
+
190
+ We can see that the IR-based models NNGen and Lucene generally outperform the end-to-end neural models on average in terms of the four metrics. This indicates that retrieved similar results can provide important information for commit message generation. CoRec, which combines the IR-based method and the neural method, performs better than NNGen on the C++ and JavaScript datasets but worse than NNGen on Java, C#, and Python. This is because CoRec only leverages the information of similar code diffs at the inference stage. ATOM, which prioritizes between the generated result of the neural-based model and the retrieved result of the IR-based method, also outperforms the IR-based approach Lucene and three neural-based models, CommitGen, CoDiSum, and NMTGen. The three pre-trained approaches outperform the other baselines in terms of the four metrics on average, with CodeT5-base performing best among them. Our approach performs the best among all approaches on all five programming languages in terms of the four metrics. This is because RACE treats the retrieved similar commit message as an exemplar and leverages it to guide the neural network model to generate an accurate commit message.
191
+
192
+ We also give an example of commit messages generated by our approach and the baselines in Figure 2. IR-based methods NNGen and Lucene can retrieve semantically similar but not completely
193
+
194
+ correct commit messages. Specifically, the retrieved commit messages contain not only the important semantics ("Filter out unavailable databases") of the current code diff but also extra information ("Revert"). Neural network models generally capture the action of "add" but fail to further understand the intent of the code diff. The hybrid model CoRec cannot generate the correct commit message either. Our model treats the retrieved result (Revert "Filter out unavailable databases") as an exemplar and guides the neural network model to generate the correct commit message.
195
+
196
+ # 5.2 What is the effectiveness of the exemplar guider?
197
+
198
+ We conduct an ablation study to verify the effectiveness of the exemplar guider module. Specifically, as shown at the bottom of Figure 1, we directly concatenate the representations of the retrieved results and feed them to the decoder to generate commit messages, without using the exemplar guider. As shown at the bottom of Table 2, the performance of the ablated model (RACE-Guider) degrades in all programming languages in terms of the four metrics. This demonstrates the effectiveness of our exemplar guider.
199
+
200
+ # 5.3 What is the performance when we retrieve $k$ relevant commits?
201
+
202
+ We also conduct experiments that recall the $k$ ( $k = 1, 3, 5, 7, 9$ ) most relevant commits to augment the generation model. Specifically, as shown in Figure 1, the relevance between code diffs is measured by the cosine similarity of their semantic vectors obtained by
203
+
204
+ ![](images/df3a27eb2a9292c61967484dfc545d7580e17fa9b238f8e959d09153ba050ce1.jpg)
205
+ Reference: Filter out unavailable databases
206
+
207
+ <table><tr><td colspan="2">Baselines</td></tr><tr><td>NNGen</td><td>Revert “ Filter out unavailable databases”</td></tr><tr><td>Lucene</td><td>Revert “ filter out unavailable databases ”</td></tr><tr><td>CommitGen</td><td>Merge pull request from mistecrunch / UNK</td></tr><tr><td>NMTGen</td><td>Add &lt;unk&gt; to &lt;unk&gt;</td></tr><tr><td>PtrGNCMsg</td><td>Add support for dashboards in database</td></tr><tr><td>CoRec</td><td>Remove &lt;unk&gt;</td></tr><tr><td>CommitBERT</td><td>Add DatabaseFilter ( )</td></tr><tr><td>CodeT5-small</td><td>[database] Add databasefilter to filter all users</td></tr><tr><td>CodeT5-base</td><td>[hotfix] Adding databasefilter to core.py</td></tr><tr><td>RACE</td><td>Stage I : Revert “ Filter out unavailable databases ”Stage II : Filter out unavailable databases</td></tr></table>
208
+
209
+ Equation 3. The retrieved $k$ relevant commits are then encoded and fed to the exemplar guider to obtain their semantic similarities by Equation 4. Finally, we weight the representations of the code diff and the similar commit messages according to the semantic similarities and feed them to the decoder to generate commit messages step by step. The experimental results are shown in Figure 3. We can see that the performance is generally stable for different $k$ . In our future work, we will continue to study alternatives for leveraging the information of the retrieved results, e.g., how many commits to retrieve and how to model the corresponding information.
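+
+ A minimal sketch of this two-stage use of cosine similarity, with stand-in random tensors (shapes and names are illustrative, not RACE's exact equations):
+
+ ```python
+ # Stage 1: retrieve the k most relevant commits by cosine similarity over
+ # code-diff vectors; Stage 2: softmax the similarities into weights and mix
+ # the exemplar representations before they reach the decoder.
+ import torch
+ import torch.nn.functional as F
+
+ query = torch.randn(1, 768)          # semantic vector of the input code diff
+ codebase = torch.randn(10000, 768)   # vectors of all training code diffs
+
+ k = 3
+ sims = F.cosine_similarity(query, codebase)   # (10000,)
+ topk_sims, topk_idx = sims.topk(k)            # indices of the k exemplars
+
+ exemplar_reprs = torch.randn(k, 50, 768)      # encoded retrieved messages
+ weights = torch.softmax(topk_sims, dim=0)     # (k,)
+ mixed = (weights.view(k, 1, 1) * exemplar_reprs).sum(dim=0)   # (50, 768)
+ ```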
210
+
211
+ # 5.4 Can our framework boost the performance of existing models?
212
+
213
+ We further study whether our framework can enhance the performance of existing Seq2Seq neural network models in commit message generation. Therefore, we adapt our framework to four Seq2Seq-based models, namely NMTGen (M1), CommitBERT (M2), CodeT5-small (M3), and CodeT5-base (M4). Specifically, we use the encoder of these models as our code diff encoder and obtain the high-dimensional semantic vectors in the retrieval module (Figure 1). In the generation module, we use the encoder of their models
214
+
215
+ ![](images/b4d214e23a6fc1e743eadac9e9166b2bfd94da310cb8e92fa792f7ee376b5cb3.jpg)
216
+ Figure 3: Performance of models augmented with $k$ retrieved relevant commits.
217
+
218
+ ![](images/1ddc39bfbd5f0f31491e972522e759c290281c890cdf3cad3b1005a0b64b862e.jpg)
219
+ Figure 2: An example of generated commit messages. Reference is the developer-written commit message. The results of our approach in stages I and II are returned by the retrieval module and the generation module, respectively.
220
+ Figure 4: Performance gains on four models. The original performance of the models is shown in yellow and the gains from our framework in green. The percentage value in each bar is the rate of improvement.
221
+
222
+ to encode input code diffs, similar code diffs, and similar commit messages. We also use the decoder of their models to generate commit messages.
223
+
224
+ The experimental results are shown in Figure 4, where we present the performance of the four original models (yellow) and the gains (green) from our framework on five programming languages in terms of the $\mathrm{BLEU}^2$ score. Overall, we can see that our framework improves the performance of all four neural models in all programming languages, from $7\%$ to $73\%$ . In particular, after applying our framework, the performance of NMTGen improves by more than $20\%$ on all programming languages. In addition, our framework boosts the performance of NMTGen on BLEU, Meteor, Rouge-L, and Cider by $43\%$ , $49\%$ , $33\%$ , and $61\%$ on average, boosts CommitBERT by $11\%$ , $9\%$ , $11\%$ , and $12\%$ , boosts CodeT5-small by $15\%$ , $14\%$ , $11\%$ , and $26\%$ , and boosts CodeT5-base by $16\%$ , $10\%$ ,
225
+
226
+ <table><tr><td>Model</td><td>Informativeness</td><td>Conciseness</td><td>Expressiveness</td></tr><tr><td>CommitBERT</td><td>1.22 (±1.02)</td><td>2.03 (±1.04)</td><td>2.46 (±0.99)</td></tr><tr><td>NNGen</td><td>1.03 (±1.00)</td><td>1.74 (±1.01)</td><td>2.36 (±0.95)</td></tr><tr><td>NMTGen</td><td>0.74 (±0.92)</td><td>1.56 (±0.93)</td><td>2.11 (±0.94)</td></tr><tr><td>CoRec</td><td>1.05 (±1.09)</td><td>1.80 (±1.05)</td><td>2.43 (±0.88)</td></tr><tr><td>RACE</td><td>2.49 (±1.10)</td><td>3.08 (±0.96)</td><td>2.85 (±0.84)</td></tr></table>
227
+
228
+ Table 3: Results of human evaluation (standard deviation in parentheses).
229
+
230
+ $8\%$ , and $32\%$ , respectively.
231
+
232
+ # 5.5 Human evaluation
233
+
234
+ We also conduct a human evaluation following previous works (Moreno et al., 2013; Panichella et al., 2016; Shi et al., 2021b) to evaluate the commit messages generated by RACE and four baselines: NNGen, NMTGen, CommitBERT, and CoRec. The four baselines are IR-based, end-to-end neural network-based, hybrid, and pre-trained approaches, respectively. We randomly choose 50 code diffs from the testing sets together with the commit messages generated for them, yielding 250 <code diff, commit message> pairs to score. Specifically, we invite 4 volunteers with excellent English ability and more than three years of software development experience. Each volunteer is asked to assign scores from 0 to 4 (the higher the better) to the generated commit messages on three aspects: Informativeness (the amount of important information about the code diff reflected in the commit message), Conciseness (the extent to which the commit message avoids extraneous information), and Expressiveness (grammaticality and fluency). Each pair is evaluated by the four volunteers, and the final score is the average of their scores.
235
+
236
+ To verify the agreement among the volunteers, we calculate Krippendorff's alpha (Hayes and Krippendorff, 2007) and the Kendall rank correlation coefficient (Kendall's Tau) (Kendall, 1945). The value of Krippendorff's alpha is 0.90 and the values of pairwise Kendall's Tau range from 0.73 to 0.95, which indicates a high degree of agreement among the 4 volunteers and that the scores are reliable. Table 3 shows the results of the human evaluation. RACE is better than the other approaches in Informativeness, Conciseness, and Expressiveness, which means that our approach tends to generate concise and readable commit messages with more
237
+
238
+ comprehensive semantics. In addition, we confirm the superiority of our approach using Wilcoxon signed-rank tests (Wilcoxon et al., 1970) on the human evaluation scores. The results show that the improvement of RACE over the other approaches is statistically significant, with all p-values smaller than 0.05.
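+
+ As a toy illustration (with fabricated scores, not the study's data) of the agreement and significance statistics used above:
+
+ ```python
+ # Kendall's tau between two raters and a Wilcoxon signed-rank test between
+ # two systems' human-evaluation scores, via scipy.
+ from scipy.stats import kendalltau, wilcoxon
+
+ rater_a = [3, 4, 2, 4, 1, 3, 2, 4]
+ rater_b = [3, 3, 2, 4, 1, 2, 2, 4]
+ tau, _ = kendalltau(rater_a, rater_b)
+ print(f"Kendall's tau between two raters: {tau:.2f}")
+
+ race_scores = [3, 4, 2, 4, 3, 3, 2, 4]
+ baseline_scores = [2, 3, 2, 3, 1, 2, 2, 3]
+ stat, p = wilcoxon(race_scores, baseline_scores)
+ print(f"Wilcoxon signed-rank p-value: {p:.4f}")
+ ```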
239
+
240
+ # 6 Conclusion
241
+
242
+ This paper proposes a new retrieval-augmented neural commit message generation method, which treats the retrieved similar commit message as an exemplar and uses it to guide the neural network model to generate an accurate and readable commit message. Extensive experimental results demonstrate that our approach outperforms recent baselines and our framework can significantly boost the performance of four neural network models. Our data, source code and Appendix are available at https://github.com/DeepSoftwareAnalytics/RACE.
243
+
244
+ # Limitations
245
+
246
+ We have identified the following main limitations:
247
+
248
+ Programming Languages. We only conduct experiments on five programming languages. Although our framework is, in principle, not specifically designed for certain languages, models perform differently across programming languages. Therefore, more experiments are needed to confirm the generality of our framework. In the future, we will extend our study to other programming languages.
249
+
250
+ Code base. Compared with purely neural network-based models, our method needs a code base from which to retrieve the most similar example. This limitation is inherited from IR-based techniques.
251
+
252
+ Training Time. In addition to modeling the information of the input code diffs, our model needs to retrieve similar diffs and encode them. Thus, our model takes a long time to train (about 35 hours).
253
+
254
+ Long Code Diffs. Longer code diffs may contain more complex semantics or behaviors. Long diffs (over 512 tokens) are truncated in our approach, so some information may be lost. In our future work, we will design mechanisms to better handle long diffs.
255
+
256
+ # Acknowledgement
257
+
258
+ We thank reviewers for their valuable comments on this work. This research was supported by National Key R&D Program of China (No. 2017YFA0700800). We would like to thank Jiaqi Guo and Wenchao Gu for their valuable suggestions and feedback during the work discussion process. We also thank the participants of our human evaluation for their time.
259
+
260
+ # References
261
+
262
+ Apache. 2011. Apache Lucene.
263
+ Satanjeev Banerjee and Alon Lavie. 2005. METEOR: an automatic metric for MT evaluation with improved correlation with human judgments. In IEEvaluation@ACL.
264
+ Mike Barnett, Christian Bird, João Brunet, and Shuvendu K. Lahiri. 2015. Helping developers help themselves: Automatic decomposition of code review changesets. In ICSE (1), pages 134-144. IEEE Computer Society.
265
+ Raymond P. L. Buse and Westley Weimer. 2010. Automatically documenting program changes. In ASE, pages 33-42. ACM.
266
+ Luis Fernando Cortes-Coy, Mario Linares Vásquez, Jairo Aponte, and Denys Poshyvanyk. 2014. On automatically generating commit messages via summarization of source code changes. In SCAM, pages 275-284. IEEE Computer Society.
267
+ Martin Dias, Alberto Bacchelli, Georgios Gousios, Damien Cassou, and Stephane Ducasse. 2015. Untangling fine-grained code changes. In SANER, pages 341-350. IEEE Computer Society.
268
+ Jinhao Dong, Yiling Lou, Qihao Zhu, Zeyu Sun, Zhilin Li, Wenjie Zhang, and Dan Hao. 2022. Fira: Fine-grained graph-based code change representation for automated commit message generation.
269
+ Lun Du, Xiaozhou Shi, Yanlin Wang, Ensheng Shi, Shi Han, and Dongmei Zhang. 2021. Is a single model enough? mucos: A multi-model ensemble learning approach for semantic code search. In Proceedings of the 30th ACM International Conference on Information & Knowledge Management, pages 2994-2998.
270
+ Robert Dyer, Hoan Anh Nguyen, Hridesh Rajan, and Tien N. Nguyen. 2013. Boa: a language and infrastructure for analyzing ultra-large-scale software repositories. In ICSE, pages 422-431. IEEE Computer Society.
271
+ Zhangyin Feng, Daya Guo, Duyu Tang, Nan Duan, Xiaocheng Feng, Ming Gong, Linjun Shou, Bing Qin, Ting Liu, Daxin Jiang, and Ming Zhou. 2020.
272
+
273
+ Codebert: A pre-trained model for programming and natural languages. In EMNLP (Findings), volume EMNLP 2020 of Findings of ACL, pages 1536-1547. Association for Computational Linguistics.
274
+ Xiaodong Gu, Hongyu Zhang, and Sunghun Kim. 2018. Deep code search. In ICSE, pages 933-944. ACM.
275
+ Andrew F Hayes and Klaus Krippendorff. 2007. Answering the call for a standard reliability measure for coding data. Communication methods and measures, 1(1):77-89.
276
+ Yuan Huang, Nan Jia, Hao-Jie Zhou, Xiangping Chen, Zibin Zheng, and Mingdong Tang. 2020. Learning human-written commit messages to document code changes. J. Comput. Sci. Technol., 35(6):1258-1277.
277
+ Yuan Huang, Qiaoyang Zheng, Xiangping Chen, Yingfei Xiong, Zhiyong Liu, and Xiaonan Luo. 2017. Mining version control system for automatically generating commit comment. In ESEM, pages 414-423. IEEE Computer Society.
278
+ Siyuan Jiang, Ameer Armaly, and Collin McMillan. 2017. Automatically generating commit messages from diffs using neural machine translation. In ASE.
279
+ Tae Hwan Jung. 2021. Commitbert: Commit message generation using pre-trained programming language model. In Proceedings of the 1st Workshop on Natural Language Processing for Programming (NLP4Prog 2021), pages 26-33.
280
+ Maurice G Kendall. 1945. The treatment of ties in ranking problems. Biometrika, 33(3):239-251.
281
+ Patrick S. H. Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, Sebastian Riedel, and Douwe Kiela. 2020. Retrieval-augmented generation for knowledge-intensive NLP tasks. In NeurIPS.
282
+ Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out.
283
+ Qin Liu, Zihe Liu, Hongming Zhu, Hongfei Fan, Bowen Du, and Yu Qian. 2019. Generating commit messages from diffs using pointer-generator network. In MSR, pages 299-309. IEEE / ACM.
284
+ Shangqing Liu, Cuiyun Gao, Sen Chen, Lun Yiu Nie, and Yang Liu. 2020. ATOM: commit message generation based on abstract syntax tree and hybrid ranking. TSE, PP:1-1.
285
+ Zhongxin Liu, Xin Xia, Ahmed E. Hassan, David Lo, Zhenchang Xing, and Xinyu Wang. 2018. Neural-machine-translation-based commit message generation: how far are we? In ASE, pages 373-384. ACM.
286
+ Ilya Loshchilov and Frank Hutter. 2019. Decoupled weight decay regularization. In ICLR.
287
+
288
+ Pablo Loyola, Edison Marrese-Taylor, and Yutaka Matsuo. 2017. A neural architecture for generating natural language descriptions from source code changes. In ACL (2), pages 287-292. Association for Computational Linguistics.
289
+ Laura Moreno, Jairo Aponte, Giriprasad Sridhara, Andrian Marcus, Lori L. Pollock, and K. Vijay-Shanker. 2013. Automatic generation of natural language summaries for java classes. In ICPC, pages 23-32. IEEE Computer Society.
290
+ Lun Yiu Nie, Cuiyun Gao, Zhicong Zhong, Wai Lam, Yang Liu, and Zenglin Xu. 2021. Coregen: Contextualized code representation learning for commit message generation. Neurocomputing, 459:97-107.
291
+ Sebastiano Panichella, Annibale Panichella, Moritz Beller, Andy Zaidman, and Harald C. Gall. 2016. The impact of test case summaries on bug fixing performance: an empirical investigation. In ICSE, pages 547-558. ACM.
292
+ Sheena Panthaplackel, Pengyu Nie, Milos Gligoric, Junyi Jessy Li, and Raymond J. Mooney. 2020. Learning to update natural language comments based on code changes. In ACL, pages 1853-1868. Association for Computational Linguistics.
293
+ Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In ACL.
294
+ Jinfeng Shen, Xiaobing Sun, Bin Li, Hui Yang, and Jiajun Hu. 2016. On automatic summarization of what and why information in source code changes. In COMPSAC, pages 103-112. IEEE Computer Society.
295
+ Ensheng Shi, Wenchao Gu, Yanlin Wang, Lun Du, Hongyu Zhang, Shi Han, Dongmei Zhang, and Hongbin Sun. 2022a. Enhancing semantic code search with multimodal contrastive learning and soft data augmentation. arXiv preprint arXiv:2204.03293.
296
+ Ensheng Shi, Yanlin Wang, Lun Du, Junjie Chen, Shi Han, Hongyu Zhang, Dongmei Zhang, and Hongbin Sun. 2022b. On the evaluation of neural code summarization. In ICSE.
297
+ Ensheng Shi, Yanlin Wang, Lun Du, Hongyu Zhang, Shi Han, Dongmei Zhang, and Hongbin Sun. 2021a. Cast: Enhancing code summarization with hierarchical splitting and reconstruction of abstract syntax trees. In EMNLP.
298
+ Ensheng Shi, Yanlin Wang, Lun Du, Hongyu Zhang, Shi Han, Dongmei Zhang, and Hongbin Sun. 2021b. CAST: enhancing code summarization with hierarchical splitting and reconstruction of abstract syntax trees. In EMNLP (1), pages 4053-4062. Association for Computational Linguistics.
299
+
300
+ Wei Tao, Yanlin Wang, Ensheng Shi, Lun Du, Shi Han, Hongyu Zhang, Dongmei Zhang, and Wenqiang Zhang. 2021. On the evaluation of commit message generation models: An experimental study. In ICSME.
301
+ Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In NIPS, pages 5998-6008.
302
+ Ramakrishna Vedantam, C. Lawrence Zitnick, and Devi Parikh. 2015. Cider: Consensus-based image description evaluation. In CVPR.
303
+ Haoye Wang, Xin Xia, David Lo, Qiang He, Xinyu Wang, and John Grundy. 2021a. Context-aware retrieval-based deep commit message generation. ACM Trans. Softw. Eng. Methodol., 30(4):56:1-56:30.
304
+ Yanlin Wang, Lun Du, Ensheng Shi, Yuxuan Hu, Shi Han, and Dongmei Zhang. 2020. Cocogum: Contextual code summarization with multi-relational gnn on UMLs. Technical report, Microsoft, MSR-TR-2020-16. [Online].
305
+ Yue Wang, Weishi Wang, Shafiq R. Joty, and Steven C. H. Hoi. 2021b. Codet5: Identifier-aware unified pre-trained encoder-decoder models for code understanding and generation. In EMNLP (1), pages 8696-8708. Association for Computational Linguistics.
306
+ Bolin Wei, Yongmin Li, Ge Li, Xin Xia, and Zhi Jin. 2020. Retrieve and refine: exemplar-based neural comment generation. In 2020 35th IEEE/ACM International Conference on Automated Software Engineering (ASE), pages 349-360. IEEE.
307
+ Frank Wilcoxon, SK Katti, and Roberta A Wilcox. 1970. Critical values and probability levels for the wilcoxon rank sum test and the wilcoxon signed rank test. Selected tables in mathematical statistics, 1:171-259.
308
+ Shengbin Xu, Yuan Yao, Feng Xu, Tianxiao Gu, Hanghang Tong, and Jian Lu. 2019. Commit message generation for source code changes. In IJCAI, pages 3975-3981. ijcai.org.
309
+ HongChien Yu, Chenyan Xiong, and Jamie Callan. 2021. Improving query representations for dense retrieval with pseudo relevance feedback. In CIKM, pages 3592-3596. ACM.
310
+ Jian Zhang, Xu Wang, Hongyu Zhang, Hailong Sun, and Xudong Liu. 2020. Retrieval-based neural source code summarization. In ICSE.
2203.02xxx/2203.02700/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:883454eba91f283c141045ba1d58e9e61574a8df1ae6e353ecb46c51a4716424
3
+ size 440088
2203.02xxx/2203.02700/layout.json ADDED
The diff for this file is too large to render. See raw diff
 
2203.02xxx/2203.02719/6a2a0cc6-43cc-4438-b5c9-38412d95a76d_content_list.json ADDED
The diff for this file is too large to render. See raw diff
 
2203.02xxx/2203.02719/6a2a0cc6-43cc-4438-b5c9-38412d95a76d_model.json ADDED
The diff for this file is too large to render. See raw diff
 
2203.02xxx/2203.02719/6a2a0cc6-43cc-4438-b5c9-38412d95a76d_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:76a78eaae70a49c0be5811ba78d2b8d945da7523ba15b047ef38a7797ef02e74
3
+ size 5181418
2203.02xxx/2203.02719/full.md ADDED
@@ -0,0 +1,445 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # DroidRL: Feature Selection for Android Malware Detection with Reinforcement Learning\*
2
+
3
+ Yinwei Wu $^{a}$ , Meijin Li $^{a}$ , Qi Zeng $^{c}$ , Tao Yang $^{c}$ , Junfeng Wang $^{c}$ , Zhiyang Fang $^{b,*}$ and Luyu Cheng $^{d}$
4
+
5
+ aCollege of Software Engineering, Sichuan University, Chengdu, China
6
+ $^{b}$ School of Cyber Science and Engineering, Sichuan University, Chengdu, China
7
+ $^{c}$ College of Computer Science, Sichuan University, Chengdu, China
8
+ $^{d}$ School of Business, Sichuan University, Chengdu, China
9
+
10
+ # ARTICLE INFO
11
+
12
+ Keywords:
13
+
14
+ Reinforcement Learning
15
+
16
+ Android Malware Detection
17
+
18
+ Feature Selection
19
+
20
+ RNN
21
+
22
+ Sequence Processing
23
+
24
+ # ABSTRACT
25
+
26
+ Due to the completely open-source nature of Android, the vulnerabilities exploitable by malware attacks are increasing. Machine learning, which has led to a great evolution in Android malware detection in recent years, is typically applied in the classification phase. Since the correlation between features is ignored in some traditional ranking-based feature selection algorithms, applying wrapper-based feature selection models is a topic worth investigating. Though they consider the correlation between features, wrapper-based approaches are time-consuming when exploring all possible valid feature subsets over a large number of Android features. To reduce the computational expense of wrapper-based feature selection, a framework named DroidRL is proposed. The framework deploys the DDQN algorithm to obtain a subset of features that can be used for effective malware classification. To select a valid subset of features over a larger range, the exploration-exploitation policy is applied in the model training phase. A recurrent neural network (RNN) is used as the decision network of DDQN to give the framework the ability to sequentially select features. Word embedding is applied for feature representation to enhance the framework's ability to find the semantic relevance of features. The framework's feature selection exhibits high performance without any human intervention and can be ported to other feature selection tasks with minor changes. The experimental results show a significant effect when using Random Forest as DroidRL's classifier, reaching $95.6\%$ accuracy with only 24 features selected.
27
+
28
+ # 1. INTRODUCTION
29
+
30
+ Android is the fastest-growing computing platform on the mobile terminal. In 2021, there were 1.39 billion smartphones manufactured worldwide, and Android dominated the global market with a $72.2\%$ share. However, as an open-source operating system, Android has been attacked by various malware. According to the report released by the Qianxin Threat Intelligence Center [33], a total of 2.3 million samples of malicious programs were intercepted on the Android platform in 2020, with an average of 6,301 new samples of malicious programs every day. The AdbMiner mining Trojan family is active in attacks, capturing tens of thousands of Internet of Things devices worldwide, with the number of captured devices in China close to 1000. Android malware has therefore become so serious that many researchers endeavor to seek effective detection methods.
31
+
32
+ The advent of machine learning has had a significant impact on the classification stage of Android malware detection. Currently, advanced Android malware detection approaches can be categorized into static analysis [19] [44] and dynamic analysis [19] [31]. Some researchers utilize state-of-the-art machine learning models like deep learning [6], online learning [28] or ensemble learning [23] to identify multi-class attacks effectively in the Android environment.
33
+
34
+ Eliminating redundant or irrelevant features is a significant procedure of machine learning. Babaagba et al. [5] demonstrated the influence of feature engineering in Android malware detection by contrasting the performance of the model with or without feature selection algorithms applied.
35
+
36
+ As the commonly applied feature selection approach, filter-based Android feature selection models [36] [29] [46] are unable to utilize feedback from the accuracy of the classifier in Android malware detection; consequently, the correlation information between different features obtainable from the classifier is ignored. However, the number of
37
+
38
+ Table 1 Feature subsets selection in Android
39
+
40
+ <table><tr><td>Method</td><td>Advantage</td><td>Disadvantage</td><td>Algorithm</td></tr><tr><td>Filter</td><td>Fast, lower computational cost</td><td>Without considering feature relevance</td><td>Correlation-based Feature Selection (CFS), The Consistency-based Filter [10], Information Gain [24], ReliefF [37]</td></tr><tr><td>Wrapper</td><td>Capture feature relevance, optimize the predictor</td><td>High computational cost</td><td>FFSR [18], Wrapper SubsetEval [42]</td></tr></table>
41
+
42
+ possible combinations of these features is so large that an exhaustive search is infeasible in a wrapper-based approach [17], which always incurs a high computational expense.
43
+
44
+ In this paper, a wrapper-based feature selection model using DDQN [26], DroidRL, is proposed to automatically select valid Android feature subsets. The main contributions of this paper are summarized as follows:
45
+
46
+ (1) Reinforcement learning (RL) is leveraged in the wrapper-based feature selection to address the problem of inexhaustible feature subsets of the raw Android features. Reinforcement learning and exploration-exploitation policy are utilized in DroidRL to explore an optimal feature subset for malware detection.
47
+
48
+ (2) DroidRL presents an extensible prototype of a feature reduction algorithm for machine learning in other scenarios. A highly efficient approach is proposed in this paper for researchers to reprocess raw features on their datasets when machine learning models are used. DroidRL takes advantage of the reinforcement learning nature to automatically perform feature selection for dimension reduction, sufficient to replace the burdensome manual feature engineering in the malware detection task.
49
+
50
+ (3) The feature dimension is notably reduced (1083 to 24) using the DroidRL framework while maintaining high accuracy $(95.6\%)$ . Extensive experiments demonstrate that the DroidRL framework performs better than the traditional feature selection methods, improving detection performance on a variety of classifiers.
51
+
52
+ The paper is structured as follows: Section 2 gives an introduction to related works on feature selection for Android malware and reinforcement learning in cyber security. Section 3 describes the fundamental principles of the DroidRL applied in feature selection in Android malware detection. Section 4 introduces the training process of the DroidRL. Section 5 presents the dataset to carry out the comparative experiment, the feature extraction method, and the data preprocessing process. Section 6 discusses the results of the experiment.
53
+
54
+ # 2. RELATED WORK
55
+
56
+ # 2.1. Feature Selection for Android Malware Detection
57
+
58
+ Feature selection is the process of selecting a subset from the original feature set to improve detection efficiency. Determined by whether they are independent of the accuracy of the classifier, feature selection algorithms can be categorized into filter-based and wrapper-based algorithms. Table 1 depicts the traits of each category and the differences between them. Yu and Liu [20] introduced a fast correlation-based filter (FCBF) for feature selection, which improved on the traditional filter-based approach by reducing the redundancy among relevant features. Priya et al. [32] detected Android malware using an improved filter-based technique, the k-nearest neighbor (KNN) based Relief algorithm. Huda et al. [17] applied the filter's ranking score in the wrapper selection process and combined the properties of wrappers and filters with API call statistics to detect malware based on the nature of infectious actions. Xu et al. [43] employed correlation-based feature selection (CFS) [14] to identify and remove redundant features, reducing the feature set from 121,621 to 5,000 features. Yuan et al. [45] utilized only the features that deep learning essentially exploited to characterize malware and reached $96.76\%$ detection accuracy. Allix et al. [1] created basic blocks as features: sequences of instructions in the control-flow graph with only one entry point and one exit point, thus representing the smallest piece of the program that is always executed altogether.
59
+
60
+ The typical filter-based feature selection algorithm is ranking-based. Each feature is assigned a score according to its importance and then the top N features are selected as input for the classification stage after ranking all the features. Huang et al. [16] proposed a parameterless feature ranking approach for feature selection and a modified greedy feature selection algorithm. Wang et al. [41] ranked individual permissions based on the risk of single permission
61
+
62
+ and the group of permissions. Mahindru and Sangal [21] applied six different feature ranking approaches to select significant features, including gain-ratio feature selection, Chi-square, information gain, and logistic regression analysis. In another experiment [22], they combined six distinct kinds of feature ranking and four distinct kinds of feature subset selection approaches to select valid feature subsets.
63
+
64
+ Traditional machine learning models can be optimized to select valid feature subsets. Youn [35] presented an algorithm based on support vector machines (SVM) for feature selection to decrease the computation time. Priya, Varna, and Visalakshi [32] proposed a KNN-based Relief algorithm for feature selection. The optimized SVM algorithm was applied for malware detection with results equivalent to the performance of a neural network.
65
+
66
+ State-of-the-art machine learning algorithms like genetic algorithms [12] and neural networks [40] are also used in feature subset selection. For neural networks, the score derived from the sum of the softmax weights of the input features can be adopted as an evaluation indicator to select valid feature subsets.
67
+
68
+ From the above discussion, the conclusion can be reached that little research utilizes feedback from the accuracy of the classifier in Android malware detection. Filter-based feature selection is less computationally expensive than wrapper-based feature selection, but the relevance between different features is ignored, which can consequently lead to choosing a large number of redundant features when processing high-dimension feature vectors. To improve the efficiency of the malware detection classifier, the problem of inexhaustible feature combinations when selecting valid subsets in previous wrapper-based methods should be addressed.
69
+
70
+ # 2.2. Reinforcement Learning in Cyber Security
71
+
72
+ The prevailing algorithms of reinforcement learning are Q-learning [25], Deep Q Network (DQN) [26] and Double Deep Q Network (DDQN) [15]. DQN was introduced by Mnih et al. to address the difficulty of using a Q-table for high-dimensional, continuous state and action spaces. DDQN made a noteworthy improvement on DQN in the training algorithm: the generation of the target Q value is modified in DDQN [15] to deal with the overestimation of action Q values in traditional DQN.
73
+
74
+ Applications of reinforcement learning in software security have achieved significant improvements in recent years. CyberBattleSim [38] implemented an automated defender agent that detected and mitigated ongoing attacks based on pre-defined probabilities of success, with the simulation environment parameterized by a fixed network topology and a set of predefined vulnerabilities.
75
+
76
+ Especially in virus detection, reinforcement learning has been commonly applied in malware classification [7] and adversarial sample generation [11] [34]. Fang et al. [11] trained an AI agent to automatically generate adversarial samples by rewarding it if the modified malware escaped the classifier's detection. Rathore et al. [34] generated malware using reinforcement learning, maximizing the fooling rate while making minimal modifications to the Android application. To address the slow learning rate of high-dimensional Q-learning in this game, Wan et al. [39] applied a deep Q-network with a deep convolutional neural network in mobile malware detection, which initializes the quality values based on malware detection experience.
77
+
78
+ In this work, by combining the advantages of reinforcement learning (using existing experience while automatically exploring other optimal subsets) with the utilization of feature relevance in wrapper-based feature selection, DroidRL tackles the problems of traditional feature selection algorithms to make the feature selection phase faster and the malware classification more efficient.
79
+
80
+ # 3. DROIDRL FRAMEWORK
81
+
82
+ For the DroidRL framework, the primary task is to train a learning agent to sequentially choose valid Android feature subsets by interacting with the environment and utilizing its learned knowledge. This section describes how the DroidRL framework achieves its goal.
83
+
84
+ # 3.1. Overview of DroidRL
85
+
86
+ Figure 1 shows the schematic diagram of DroidRL. The core part of the DroidRL framework is built up by the DDQN-based decision network. In each step, the autonomous agent independently carries out an action decided by the decision network, selecting one feature from the environment into its observed state using its prior knowledge. To evaluate the quality of the feature subset and the discriminative power of the selected individual features, the reward of the action is determined by the malware classification accuracy using the selected features as input. Furthermore, the state of the agent, the chosen action, and the reward of this epoch are saved in the replay
87
+
88
+ ![](images/d32eb178e2ab103fe40c2f5352e043e1451f50a2c0d2e5857e627d0a60fd44c9.jpg)
89
+ Figure 1: DroidRL for feature selection in Android malware detection
90
+
91
+ memory for training the decision network. The exploration-exploitation policy is enhanced to address the problem of computational expense due to inexhaustible feature subsets.
92
+
93
+ # 3.2. Key Components in Reinforcement Learning
94
+
95
+ (1) Environment: The environment is the place for the agent to explore and get feedback.
96
+
97
+ In our framework, the environment contains all candidate features and is responsible for putting the agent's current state into the malware classifier after each action is executed. The accuracy of the classification is returned to the agent as the reward. The total number of given features and the length of the valid feature subset to be selected are defined in the environment. When the agent has selected enough features according to the declared length of the valid feature subset, the environment instructs the agent to end this round, returns the final reward, and resets itself.
98
+
99
+ (2) Action: An action is the critical step the agent in reinforcement learning needs to take from the action space based on its experience and current state.
100
+
101
+ In the DroidRL framework, the action space contains the features in the raw feature set extracted from decompiled APK files. The state of the agent describes the currently selected features as the result of a series of actions. The primary task of the agent is to find the optimal feature subset that is highly distinguishable between malware and benign Android software.
102
+
103
+ For each action in the DroidRL framework, one unselected feature is added to the state. The $\varepsilon$ -greedy algorithm is employed to make the agent trade off between exploration and exploitation. Each action is explored with probability $\varepsilon$ , while the action with the largest Q value is exploited with probability $1 - \varepsilon$ . To enable the agent to explore more in the early training stage and exploit its existing experience more in the later stage, some improvements are made to the $\varepsilon$ -greedy algorithm as displayed in Equation 1, where episode is the current training round, $E$ is the total number of training rounds, and $p$ is a probability parameter between 0 and 1. More details are given in Section 4.
104
+
105
+ $$
106
+ \varepsilon = 1 - \frac{\text{episode}}{E} \times p \tag{1}
107
+ $$
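+
+ A minimal sketch of this decaying schedule and the resulting explore/exploit choice (names are illustrative; best_unselected is a hypothetical method of the decision network):
+
+ ```python
+ # Epsilon decays linearly with the episode count (Equation 1); with
+ # probability epsilon the agent explores a random unselected feature,
+ # otherwise it exploits the decision network.
+ import random
+
+ def epsilon(episode, total_episodes, p=0.9):
+     return 1.0 - (episode / total_episodes) * p
+
+ def choose_action(episode, total_episodes, state, candidates, decision_net):
+     eps = epsilon(episode, total_episodes)
+     if random.random() < eps:   # explore
+         return random.choice([f for f in candidates if f not in state])
+     return decision_net.best_unselected(state)   # exploit
+ ```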
108
+
109
+ (3) Reward: The reward is feedback as a result of interaction between the agent and the environment through taking action.
110
+
111
+ In this paper, the reward is determined by the accuracy of the Android malware classifier, with the selected features in the agent's current state as input. The agent enters a state $s$ after executing a feature selection action $a$ . Then the reinforcement learning environment returns the corresponding reward from the malware classifier to evaluate the action value function $Q$ .
112
+
113
+ With the goal of obtaining the highest Q value through its actions, the agent can consequently find the valid feature combination that yields the highest accuracy. In Equation 2, $s$ and $a$ respectively represent the current state and the action taken at the current step, $r$ is the obtained reward, $s'$ represents the next state reached by the agent, and $a'$ refers to the action that obtains the highest Q value in the next state. As $a'$ can also be calculated by the Q function, the original formula is equivalent to Equation 3, where $\theta_{1}$ and $\theta_{2}$ represent the parameters of the two networks in DDQN, respectively.
114
+
115
+ $$
116
+ Q(s, a) = r + \gamma Q\left(s', a'\right) \tag{2}
117
+ $$
118
+
119
+ $$
120
+ Q(s, a, \theta_1) = r + \gamma Q\left(s', \underset{a}{\arg\max}\, Q\left(s', a, \theta_2\right), \theta_1\right) \tag{3}
121
+ $$
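+
+ A minimal PyTorch sketch of the Double DQN target in Equation 3 (function and network names are illustrative): the network with parameters $\theta_2$ selects $a'$ and the network with parameters $\theta_1$ evaluates it.
+
+ ```python
+ # Compute the DDQN learning target for a batch of transitions.
+ import torch
+
+ def ddqn_target(q_theta1, q_theta2, reward, next_state, gamma=0.99):
+     with torch.no_grad():
+         # argmax over theta_2's Q-values selects a' (Equation 3) ...
+         a_prime = q_theta2(next_state).argmax(dim=1, keepdim=True)
+         # ... and theta_1 evaluates the selected action.
+         next_q = q_theta1(next_state).gather(1, a_prime).squeeze(1)
+     return reward + gamma * next_q
+ ```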
122
+
123
+ # 3.3. Decision Network
124
+
125
+ In DroidRL, the decision network is the brain of the agent. When exploitation is performed, the agent puts the current state, represented by a vector, into DroidRL's decision network, and the decision network returns guidance for the next action to the agent.
126
+
127
+ It is worth mentioning that the length of the agent's state keeps growing, which causes the input of the decision network to have a variable length. DroidRL's decision network needs to be specially designed since the input to a neural network is normally of constant shape. RNN is a kind of neural network widely used in natural language processing. Because natural language has variable length, RNN-like networks are designed to accept input of indeterminate length. For this reason, DroidRL's decision network adopts RNN and its variants.
128
+
129
+ DroidRL also applies some tricks that help the training effect and the feature selection ability.
130
+
131
+ (1) Word Embedding: The input of our decision network is a sequence of selected features. Instead of being presented as one-hot vectors, the input is processed with word embedding. If one-hot vectors were used to represent the sequence of features, the entire input matrix would be large and sparse, leading to a huge amount of computation and storage. Additionally, there is no semantic information when features are represented by one-hot vectors, which is not conducive to the decision network finding the correlation between features. Applying word embedding in DroidRL's decision network improves the framework in the following two aspects:
132
+
133
+ 1. Compressing a one-hot vector into a denser one. Word embedding greatly reduces the input dimension and improves the training speed of the model.
134
+ 2. Compressed feature vectors are more semantic. DroidRL's main task is to select an optimal subset of features. After word embedding is added to the decision network, DroidRL can cluster features in a high-dimensional space according to their semantics. It can then better find the features that combine well with the currently selected features.
135
+
136
+ (2) Feature Ordering: There is a special consideration when applying natural language processing methods to DroidRL feature selection. Natural language is sequential in nature, which means that swapping two words in a sentence can make the sentence confusing and meaningless. However, in feature selection, swapping any two selected features in the feature sequence should have no influence on the decision network's decision. The input state [1,2,3] and the input state [1,3,2] are identical in meaning since they contain the same features and should produce the same output from the decision network. Exchanging the positions of any two features in the input state should not influence the result. This characteristic of feature selection differs from that of natural language. If this particular property is not addressed, it may have a negative impact on the learning of the RNN-like decision network.
137
+
138
+ A trick is applied to deal with this problem. Before the features are fed into the decision network, they are sorted by index. In this way, the same feature set can be guaranteed to produce only one input regardless of the order of feature selection.
139
+
140
+ After applying the above tricks, the ultimate decision network structure is shown in Figure 2. The agent puts its state, a sequence that represents the selected features, into the decision network. The first layer of the decision network
141
+
142
+ ![](images/89515a8a63e35d11f44aac7facdac320f24d527e230ff3baac0b99fe3c76afbc.jpg)
143
+ Figure 2: DroidRL's Decision Network. The decision network takes in the agent's current state and predicts a new feature as action.
144
+
145
+ is the embedding layer. Features represented by one-hot vectors go through the embedding layer and become denser vectors. These vectors are then fed into the RNN-like network and finally enter a fully connected layer and a softmax layer.
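+
+ A minimal PyTorch sketch of this structure (layer sizes are illustrative, not the paper's hyperparameters; GRU stands in for the "RNN and its variants" above):
+
+ ```python
+ # Embedding -> RNN -> fully connected -> softmax over candidate features.
+ import torch
+ import torch.nn as nn
+
+ class DecisionNetwork(nn.Module):
+     def __init__(self, num_features=1083, embed_dim=64, hidden_dim=128):
+         super().__init__()
+         self.embedding = nn.Embedding(num_features, embed_dim)
+         self.rnn = nn.GRU(embed_dim, hidden_dim, batch_first=True)
+         self.fc = nn.Linear(hidden_dim, num_features)
+
+     def forward(self, state):           # state: (batch, num_selected) indices
+         x = self.embedding(state)       # (batch, num_selected, embed_dim)
+         _, h = self.rnn(x)              # h: (1, batch, hidden_dim)
+         logits = self.fc(h.squeeze(0))  # one confidence per candidate feature
+         return torch.softmax(logits, dim=-1)
+
+ net = DecisionNetwork()
+ state = torch.tensor([sorted([317, 5, 42])])  # features sorted by index first
+ confidences = net(state)                      # (1, 1083) probabilities
+ ```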
146
+
147
+ # 4. TRAINING PHASE
148
+
149
+ The DroidRL training process is elaborated in this section. The training algorithm and evaluating algorithm are illustrated in Algorithm 1.
150
+
151
+ To overcome the problems of correlated data and the non-stationary distribution of training data, replay memory is adopted in DroidRL. Before training begins, DroidRL obtains some initial samples by running warm-up episodes and feeds them into the replay memory. At the beginning of each training episode, the state of the agent is cleared and the agent begins to select features. Each episode ends after a sufficient number of features have been selected. During the training episodes, a strategy that trades off between exploration and exploitation is adopted. As depicted in Equation (1), the agent has a very high probability of exploration at the beginning but a greater possibility of exploitation as the episodes increase.
152
+
153
+ In the case of exploration, the agent randomly selects a feature (that is not in its state) in an action. Exploration allows the agent to try more possible feature combinations and a larger choice space.
154
+
155
+ When the exploitation policy is executed, the agent uses previous experience to select the optimal feature. Instead of randomly selecting a feature, the agent puts its current state into the decision network and gets a vector of the same length as the feature dictionary that denotes the confidence of each feature. The agent takes the feature with the highest confidence as its action. If the highest-confidence feature has already been selected, the agent takes the second highest one instead, and so on (see the sketch below). As mentioned before, each time a new feature is added to the state, the state is reordered to ensure consistency of the feature set representation. After each action is taken, the features in the state are used to classify malware and benign samples. The classification accuracy, as the reward, together with the previous state and the current state after taking the action, is put into the replay memory. The agent continues to exploit and explore until enough features are selected.
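+
+ A minimal sketch of this masked exploitation step (assuming a NumPy confidence vector; names are illustrative):
+
+ ```python
+ # Pick the highest-confidence feature that is not already in the state.
+ import numpy as np
+
+ def exploit(confidences, state):
+     for idx in np.argsort(confidences)[::-1]:   # highest confidence first
+         if int(idx) not in state:
+             return int(idx)
+     raise ValueError("all features already selected")
+
+ state = {5, 42}
+ confidences = np.random.rand(1083)
+ next_feature = exploit(confidences, state)
+ ```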
156
+
157
+ After all training episodes, the agent finally runs an evaluation episode. In this episode, at each step the agent utilizes what it has learned during the training phase. The output of this episode is the final optimal feature subset.
158
+
159
+ # 5. EXPERIMENT SETUP
160
+
161
+ This section provides information on the hardware environment for training DroidRL, our dataset, and the hyperparameter settings.
162
+
163
+ # 5.1. Training Environment
164
+
165
+ We performed all the experiments on a server with a single Tesla V100 GPU and a two-core CPU. The GPU was used to accelerate the training of the decision network in DroidRL, while the training and prediction of the classifiers in DroidRL used the CPU only. After the training process, DroidRL's classifier can be extracted separately for testing or deployed on any hardware capable of running machine learning algorithms.
166
+
167
+ Algorithm 1: training and evaluation process
168
+ Data: length of feature dictionary $N$ , number of features to be selected $F$ , total training episodes $E$ , replay memory warm-up episodes $M$ , initial replay memory rpm, initial network weights $\theta_{1},\theta_{2}$ , initial probability of exploration $\varepsilon$ . Result: Optimal feature subset for malware detection
169
+ 1 initialization;
170
+ 2 for $m\gets 1$ to $M$ do
171
+ 3 sample some random states and feed them into rpm;
172
+ 4 end
173
+ 5 for episode $\leftarrow 1$ to $E$ do
174
+ 6 state $\leftarrow$ an empty array
175
+ 7 for $f\gets 1$ to $F$ do
176
+ 8 previous state $\leftarrow$ state;
177
+ 9 with probability $\varepsilon$ randomly choose a feature and put it into the state;
178
+ 10 or
179
+ 11 input the state to Decision network and get an N dimension vector, find the index of the max value in the vector and put it into the state;
180
+ 12 calculate reward of the state;
181
+ 13 put reward, previous state, state into rpm;
182
+ 14 end
183
+ 15 if episode mod Network_Learn_Frequency == 0 then sample some samples from rpm and update $\theta_{1}$ ;
184
+ 17 end
185
+ 18 if episode mod Sync_Frequency == 0 then sample some samples from rpm and learn; synchronize $\theta_{1},\theta_{2}$ ;
186
+ 21 end
187
+ 22 update $\varepsilon$ by equation (1);
188
+ 23 end
189
+ 24 start evaluating;
190
+ 25 optimal state $\leftarrow$ an empty array;
191
+ 26 for $f\gets 1$ to $F$ do
192
+ 27 input the current optimal state to Decision network and get an N dimension vector, find the index of the max value in the vector and put it into the optimal state;
193
+ 28 end
194
+ 29 recalculate final Reward of the optimal state;
195
+ 30 return optimal state, final Reward;
196
+
197
+ # 5.2. Dataset
198
+
199
+ DroidRL's dataset contains 5000 benign samples from AndroZoo and 5560 malware from Drebin to train and test the model. Both data sources are universally utilized in recent years' research focusing on Android malware detection, which renders it easier to carry out comparison experiments with other feature selection methods. AndroZoo [2] updates the collection of 16,000k different APKs from several sources including Google Play, with each application analyzed by different AntiVirus products to label the Malware. Malware samples in this research are mainly selected from Drebin [4], a commonly used dataset that contains 5,560 applications from 179 different malware families.
200
+
201
+ Static analysis is applied in this work, extracting permissions, intent actions, and opcodes as original features from Android samples decompiled by APKtool and Androguard for further reinforcement-learning-based feature selection.
202
+
203
+ In total, 457 permissions and 126 intent actions, which are typically considered highly relevant to the malicious behavior of Android applications, are chosen in this paper to construct the original feature set. Permissions indicate what sensitive user data (e.g., contacts and SMS) need to be accessed by an application, which is essential in Android malware
204
+
205
+ ![](images/c81221743fbcdde24b76409cede7c9663c76b42ff72a5fe71d394878b5e9c35a.jpg)
206
+ Figure 3: Overview of DroidRL
207
+
208
+ Table 2 Dalvik Instruction Transformation Table
209
+
210
+ <table><tr><td>Letter</td><td>Dalvik Instruction</td></tr><tr><td>M</td><td>move, move/from16, move/16, move-wide, move-wide/from16, move-result, move-wide/16, move-object, move-object/from16, move-object/16 ...</td></tr><tr><td>R</td><td>return-void, return, return-wide, return-object</td></tr><tr><td>G</td><td>goto, goto/16, goto/32</td></tr><tr><td>I</td><td>if-eq, if-ne, if-lt, if-ge, if-gt, if-le, if-eqz, if-nez, if-ltz, if-gez, if-gtz, if-lez</td></tr><tr><td>T</td><td>aget, aget-wide, aget-object, aget-boolean, aget-byte, aget-char, aget-short, iget, iget-wide, iget-object, iget-boolean, iget-byte, iget-char ...</td></tr><tr><td>P</td><td>aput, aput-wide, aput-object, aput-boolean, aput-byte, aput-char, aput-short, iput, iput-wide, iput-object, iput-boolean, iput-byte, iput-char...</td></tr><tr><td>V</td><td>invoke-virtual, invoke-super, invoke-direct, invoke-static, invoke-interface, invoke-virtual/range, invoke-super/range, invoke-direct/range...</td></tr></table>
211
+
212
+ detection. Intent actions are abstract descriptions of the operation to be performed by an app component.
213
+
214
+ After disassembling classes.dex to generate the smali files, Dalvik bytecode (e.g., invoke-direct) is obtained by scanning the method fields of the smali files with regular expressions. Opcodes are obtained by mapping the Dalvik bytecode to a series of letters as described in Table 2.
215
+
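+ As a rough illustration of the mapping in Table 2, the sketch below scans a smali method body with a regular expression and compresses each Dalvik mnemonic into its letter class by prefix matching. The prefix rules and the `opcode_sequence` helper are simplifications assumed for illustration, not the paper's exact extraction code.
+
+ ```python
+ import re
+
+ # Simplified prefix rules following Table 2; unmatched mnemonics are skipped.
+ PREFIX_TO_LETTER = [
+     ("invoke", "V"), ("move", "M"), ("return", "R"), ("goto", "G"),
+     ("if-", "I"), ("aget", "T"), ("iget", "T"), ("aput", "P"), ("iput", "P"),
+ ]
+
+ def opcode_sequence(smali_text: str) -> str:
+     """Map each Dalvik mnemonic in a smali method body to its letter class."""
+     letters = []
+     for mnemonic in re.findall(r"^\s+([a-z][\w/-]*)", smali_text, re.M):
+         for prefix, letter in PREFIX_TO_LETTER:
+             if mnemonic.startswith(prefix):
+                 letters.append(letter)
+                 break
+     return "".join(letters)
+ ```
+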
216
+ Opcode features are segmented by N-gram to obtain the transformation sequence. A dimensionality reduction approach is employed to address the high dimensionality of the feature vector, which grows with the number of N-grams as N increases. Firstly, the N-gram set of the malicious samples is extracted by the N-gram extraction process proposed in [47], and the top k high-frequency N-grams are selected, denoted $Set = \{x_1, x_2, x_3, \dots, x_k\}$. Subsequently, a k-dimensional binary feature vector $f_{\text{feature}} = [m_1, m_2, m_3, \dots, m_k]$ is constructed for each sample based on this feature set, where $m_i = 1$ indicates that the N-gram set of the sample contains the element $x_i$ in the feature set.
217
+
218
+
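+ A minimal sketch of this vectorization step, assuming the top-k high-frequency N-gram list has already been extracted as in [47]:
+
+ ```python
+ def ngrams(sequence: str, n: int) -> set:
+     """All length-n substrings of an opcode letter sequence."""
+     return {sequence[i:i + n] for i in range(len(sequence) - n + 1)}
+
+ def binary_feature_vector(sample_seq: str, top_k_ngrams: list, n: int) -> list:
+     """k-dimensional vector where m_i = 1 iff the sample contains n-gram x_i."""
+     sample_set = ngrams(sample_seq, n)
+     return [1 if x in sample_set else 0 for x in top_k_ngrams]
+
+ # Example: binary_feature_vector("MMRGIV", ["MM", "GI", "VV"], n=2) -> [1, 1, 0]
+ ```
+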
219
+
220
+ Following the above discussion, the whole process of detecting Android malware using DroidRL for feature selection is depicted in Figure 3. Android application samples are collected from the Drebin and AndroZoo datasets and decompiled by Android reverse engineering tools to extract permissions, intent actions, and N-grams as the original features. DroidRL feature selection is then applied to select valid feature subsets from the original features. The valid feature subset is saved and employed to validate the performance of the malware classifier, using the number of features and the accuracy as evaluation metrics. 10-fold cross-validation is used in the experiment to evaluate the models and avoid overfitting. The features selected by DroidRL are then used to train a final classifier for malware detection.
221
+
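+ The classifier-evaluation step at the end of this pipeline can be sketched with scikit-learn's cross-validation utilities. Here `X` (the binary feature matrix) and `y` (the malware labels) are assumed to be prepared as described above, and the Decision Tree stands in for any of the classifiers evaluated later.
+
+ ```python
+ import numpy as np
+ from sklearn.model_selection import cross_val_score
+ from sklearn.tree import DecisionTreeClassifier
+
+ def evaluate_subset(X, y, selected_columns):
+     """10-fold cross-validated accuracy using only the selected feature columns."""
+     X_sub = X[:, selected_columns]
+     scores = cross_val_score(DecisionTreeClassifier(), X_sub, y,
+                              cv=10, scoring="accuracy")
+     return float(np.mean(scores))
+ ```
+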
222
+ # 5.3. Hyperparameters setting
223
+
224
+ The detailed description and settings of DroidRL's parameters are shown in Table 8. The classifiers in DroidRL are built with scikit-learn, and all classifier hyperparameters are left at their default values.
225
+
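+ For concreteness, the settings in Table 8 could be gathered into a single configuration object, as in the sketch below; the field names are illustrative, not taken from the authors' code.
+
+ ```python
+ # Values from Table 8; names are illustrative stand-ins.
+ DQN_CONFIG = {
+     "replay_start_size": 50_000,    # steps taken under a uniform random policy
+     "replay_buffer_size": 200_000,  # capacity of the replay memory
+     "batch_size": 32,               # cases per gradient descent update
+     "discount_factor": 0.99,        # weight of future rewards in Q-learning
+     "start_learning_rate": 3e-4,    # start of the linear decay scheduler
+     "training_interval": 5,         # decision network trained every 5 steps
+ }
+ ```
+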
226
+ # 6. EXPERIMENT RESULTS
227
+
228
+ The following research questions are posed to structure the experiments:
229
+
230
+ RQ1 Is the result produced by the DroidRL framework stable when selecting only about two dozen features from a high-dimensional exploration space (e.g., 1083, the dimensionality of the original feature vector)?
231
+
232
+ RQ2 Does the decision network learn, during training, the key information that guides the selection of the next optimal feature?
233
+
234
+ RQ3 What is the performance of different classifiers using the optimal feature subset selected by the DroidRL framework as input?
235
+
236
+ RQ4 How does using the feature subset selected by DroidRL, rather than the original features, affect the training time of malware classifiers?
237
+
238
+ RQ5 How does the performance of the DroidRL framework compare with other advanced methods in related work?
239
+
240
+ # 6.1. Stability Evaluation of Feature Selection Results
241
+
242
+ The key to adopting reinforcement learning to explore the best subset of features for Android malware detection is finding the best combination of features. However, with the exploration-exploitation strategy, DroidRL can take different actions even in the same state, so there are inevitably differences between the optimal feature subsets selected by DroidRL across runs. Moreover, the space that reinforcement learning can explore is gigantic: only about 24 of the 1083 features in the experiment are selected as input for malware detection. These differing results could make the experimental results unstable.
243
+
244
+ To verify whether the randomness introduced by the exploration-exploitation strategy affects the final detection accuracy, and to further demonstrate the stability of DroidRL's feature selection results, this experiment uses a decision tree (DT) as the classifier and tests the accuracy using five different feature subsets obtained from DroidRL as input. The results are illustrated in Figure 4.
245
+
246
+ Although the features selected by reinforcement learning in the five experiments are not strictly identical, it can be seen from Figure 4 that the detection accuracy always lies in the $92\% - 95\%$ range, with only a few points deviating due to the randomness introduced by the exploration-exploitation strategy. Although the actions taken by the agent cannot be exactly the same each time when exploring the inexhaustible combinations of feature subsets, the detection accuracy obtained by using the selected features for malware classification remains generally stable.
247
+
248
+ # 6.2. Evaluation of the Learning Procedure of DroidRL
249
+
250
+ To illustrate the learning procedure of the decision network in the DroidRL framework, the reward, the training classification accuracy (Train Acc in Figure 5), and the testing classification accuracy (Test Acc in Figure 5) were tracked in each training episode. The accuracy in one episode (i.e., a training evaluation episode or a testing evaluation episode) is obtained from the classifier using the selected features as input. In a training episode, the agent selects features using the exploration-exploitation strategy and the decision network is in training mode; Train Acc is the malware classification accuracy from running one training evaluation episode on the training dataset. In a testing evaluation episode, the feature selection process of the agent is guided only by the decision network, without random exploration; Test Acc is the average accuracy over five testing evaluation episodes.
251
+
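+ A sketch of how these two curves could be tracked is given below; the `run_episode(train=...)` helper is hypothetical, with training mode using ε-greedy selection and testing mode following the decision network greedily.
+
+ ```python
+ import numpy as np
+
+ def track_accuracy(framework, n_episodes, n_test_runs=5):
+     """Per-episode Train Acc / Test Acc curves, as plotted in Figure 5 (a)."""
+     history = []
+     for episode in range(n_episodes):
+         train_acc = framework.run_episode(train=True)      # epsilon-greedy
+         test_acc = np.mean([framework.run_episode(train=False)
+                             for _ in range(n_test_runs)])  # greedy only
+         history.append((episode, train_acc, test_acc))
+     return history
+ ```
+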
252
+ After each training episode, the DroidRL framework was tested by running five testing evaluation episodes in Figure 5 (a). In addition, taking 50 training episodes as a period, the DroidRL framework was tested after every period
253
+
254
+ ![](images/9fafbcfb4f7ea57151cc0fc91378784e94ec3a41caea3b43547c74d0b988a7c0.jpg)
255
+ Figure 4: Stability verification of different feature selection results by DroidRL
256
+
257
+ in Figure 5 (b), using the same approach to calculate the testing classification accuracy. Moreover, the training accuracy of one period was recorded as the average malware classification accuracy over its 50 training episodes.
258
+
259
+ In this experiment, a Long Short-Term Memory (LSTM) network [13] serves as DroidRL's decision network, since it has better contextual memory than a vanilla RNN, and a Decision Tree (DT) is applied as the classifier. Similar tendencies are observed in Figure 5 (a) and Figure 5 (b): both display increasing reward and malware classification accuracy. However, there are clear differences between the two figures: the training classification accuracy fluctuates greatly in Figure 5 (a), whereas in Figure 5 (b) it remains comparatively low before 20 periods (i.e., 1000 episodes) and then rises to a higher stable state. Also, the testing classification accuracy is always slightly higher than the training classification accuracy. These experimental results can be explained as follows:
260
+
261
+ (1) Both figures show increasing reward and malware classification accuracy, demonstrating the learning ability of the DroidRL framework.
262
+ (2) In the training episodes of DroidRL, the $\epsilon$ -Greedy algorithm was used to balance exploitation and exploration, so it was inevitable that some redundant or irrelevant features were selected in the exploration process. As shown in Figure 5 (a), the fluctuating training accuracy is therefore always slightly lower than the testing accuracy. After about 20 periods, the testing accuracy becomes stable through utilizing existing experience, while the training accuracy still has occasional sudden drops caused by the $\epsilon$ -Greedy algorithm.
263
+ (3) After averaging the accuracy over 50 episodes, the training classification accuracy in Figure 5 (b) is more stable in the first 20 periods than in Figure 5 (a), which clearly shows its changing trend. Due to the gradually decreasing exploration probability of the $\epsilon$ -Greedy algorithm in the later stage, the agent tended to use the existing experience from the decision network for feature selection. This led to higher and more stable training classification accuracy, as well as an accuracy curve more similar to that of the testing phase in the later periods, as shown in Figure 5 (b).
264
+
265
+ # 6.3. Comparison with Different Decision Networks
266
+
267
+ It is common practice to clarify the decision-network selection criteria for a reinforcement-learning-based algorithm. Therefore, comparison experiments with different decision networks are conducted, and the results are described in detail in this section.
268
+
269
+ Being effective for processing sequential data, RNNs and their variants can exploit the temporal and semantic information in the input and are widely applied in natural language processing to predict the following content according to its context. Therefore, the DroidRL framework adopts an RNN-like network as the decision network to predict the feature to be selected in the next step from the Android features selected in the previous steps.
270
+
271
+ ![](images/b90fe2f19f9edfdc623b1a0d98b0e1b4c8355dfbe5856ec3ed5fc2a9af38d447.jpg)
272
+ (a) Test the DroidRL model after each training episode
273
+
274
+ ![](images/3d266feb077d252e4b4ace04e4b6fd5cb2f15c48f7764055ad012cd5dfc47597.jpg)
275
+ (b) Test the DroidRL model after every 50 training episode
276
+
277
+ ![](images/aa97da4e3728a9b0bc53f87ce90c15c9b018114765e8bd8f51285a0548fb4770.jpg)
278
+ Figure 5: The evaluation of the performance of the DroidRL framework during training
279
+ Figure 6: The evaluation of different decision networks
280
+
281
+ To explore which recurrent neural network is most suitable as the decision network of DroidRL, this experiment applied an RNN, a Long Short-Term Memory (LSTM) network [13], or a Gated Recurrent Unit (GRU) [9] as the decision network for training, respectively.
282
+
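+ Swapping the recurrent cell is a small change in most deep-learning frameworks. The PyTorch-flavored sketch below is an assumed architecture, not the paper's implementation: the features chosen so far are embedded, encoded by the recurrent cell, and scored against all N candidate features.
+
+ ```python
+ import torch.nn as nn
+
+ class DecisionNetwork(nn.Module):
+     """Embed the selected-feature sequence, encode it with a recurrent
+     cell (RNN, LSTM, or GRU), and score all N candidate features."""
+     def __init__(self, n_features=1083, embed_dim=64, hidden_dim=128, cell="lstm"):
+         super().__init__()
+         rnn_cls = {"rnn": nn.RNN, "lstm": nn.LSTM, "gru": nn.GRU}[cell]
+         self.embed = nn.Embedding(n_features, embed_dim)
+         self.rnn = rnn_cls(embed_dim, hidden_dim, batch_first=True)
+         self.head = nn.Linear(hidden_dim, n_features)
+
+     def forward(self, feature_ids):        # (batch, seq_len) feature indices
+         x = self.embed(feature_ids)
+         out, _ = self.rnn(x)               # same call works for all three cells
+         return self.head(out[:, -1, :])    # score for each next-feature choice
+ ```
+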
283
+ In the experiment, 10-fold cross-validation is employed on the shuffled dataset. The average accuracy on the testing set was taken as the classification accuracy. The results are presented in Figure 6.
284
+
285
+ As the result figures show, the accuracy of RNN, GRU, and LSTM steadily increases as more Android features are included in the final valid feature subset. The accuracy was most stable when using GRU as the decision network. The model obtained the best performance (e.g., $95.6\%$ accuracy) with the 24 features selected by LSTM as input for malware classification, and the computational overhead was reduced by $97.78\%$.
286
+
287
+ Based on the above experimental results, the following conclusion is drawn from the principles of the DroidRL framework: reinforcement learning is highly effective at selecting the optimal Android feature subset. Adopting the traditional LSTM as the decision network, DroidRL reduced the computational overhead by $97.78\%$ while retaining an accuracy of $95.6\%$.
288
+
289
+ # 6.4. Comparison with Different Classifiers
290
+
291
+ To demonstrate the performance of our model, we conducted a series of comparative experiments over combinations of different numbers of features and different classifiers. As shown in Figure 7, the vertical axis represents the maximum number of features selected by the reinforcement-learning-based algorithm for malware detection. More precisely, to keep the learning procedure efficient and meaningful, the number of features selected by the model is limited to between 6 and 24.
292
+
293
+ As can be seen in Figure 7, when the number of selected features is relatively small, the accuracy is about $90\%$. As the number of selected features increases, the accuracy gradually improves, rising to about $95\%$.
294
+
295
+ Figure 7 shows that reinforcement learning can be applied to Android malware detection to select valid feature sets for malware classification. The accuracy is stable for Random Forest (RF), Decision Tree (DT), and Support Vector Machine (SVM). It can be observed in Table 6 and Table 7 that DT, RF, and SVM achieve high accuracy with few features and perform stably. The K-Nearest Neighbors (KNN) model does not perform as well, as seen from the accuracy fluctuations in the figures. A verification experiment found that different values of the hyperparameter k suit different numbers of features, which is why the accuracy fluctuates greatly with the number of selected features.
296
+
297
+ # 6.5. Comparison of the Classifiers' Training Time
298
+
299
+ The computational complexity of machine learning algorithms grows with the number of samples and features. While a larger sample size can make the classifier more robust, a growing number of features can introduce redundancy; a large number of features also increases the computational complexity and the resources required for training. To evaluate DroidRL's ability to retain training efficiency, this experiment uses feature subsets of different lengths selected by DroidRL to train various malware classifiers. To measure the improvement in training efficiency, we calculate the ratio of the time consumed to train the classifier using a subset of features to the time spent using the full feature set.
300
+
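+ The ratio reported in Table 3 can be measured with a simple timing harness; the sketch below assumes a prepared feature matrix `X`, labels `y`, and a list of selected column indices, with a Decision Tree as the example classifier.
+
+ ```python
+ import time
+ from sklearn.tree import DecisionTreeClassifier
+
+ def training_time_ratio(X, y, selected_columns):
+     """Percentage of full-feature training time needed by the feature subset."""
+     def fit_seconds(data):
+         start = time.perf_counter()
+         DecisionTreeClassifier().fit(data, y)
+         return time.perf_counter() - start
+     return 100.0 * fit_seconds(X[:, selected_columns]) / fit_seconds(X)
+ ```
+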
301
+ As shown in Table 3, using the subsets to train the models significantly improves the training efficiency. The relatively large ratio for Random Forest is mainly because the model contains many subtrees, and the time to train these subtrees has a lower bound as the number of features decreases.
302
+
303
+ Another noteworthy phenomenon is that the training time ratio does not strictly increase with the number of features and sometimes even drops. This indicates that DroidRL selects features that are genuinely useful for classification: the added features allow the classifiers to find decision boundaries more easily, which in turn speeds up training.
304
+
305
+ # 6.6. Comparison with Related Work
306
+
307
+ To make a comprehensive comparison between the proposed DroidRL framework and related Android malware detection methods, we conduct a comparative experiment on the dataset used in our work. The implementations of the detection methods in this section follow the code of the work in [27].
308
+
309
+ Firstly, to illustrate the effect of feature selection in machine learning based Android malware detection, we compare DroidRL with other Android malware detection approaches without applying the feature selection method. The results are displayed in Table 4. The proposed DroidRL framework outperforms DroidDet [48], HMMDetector [8], and
310
+
311
+ ![](images/f72582272158d3d5fe1cd4b01da2e3cfde8c3486a0ed87bb88ce052edc8fd575.jpg)
312
+ DroidRL
313
+ (a) DT as classifier
314
+
315
+ ![](images/1d7b4d946c2e310332021513b995cd77be65443588c74a75c7d8f25f6a65c338.jpg)
316
+ (b) RF as classifier
317
+
318
+ ![](images/e19adea091c363813107fd467f041b8d8f0f3f02f508d924eb4b8ff46f9b890d.jpg)
319
+ (c) SVM as classifier
320
+
321
+ ![](images/8cb9c82fd10ebc689b4a384b3a2748f93a82d23a66295a316d2dbb1e41b13abd.jpg)
322
+ (d) KNN as classifier
323
+ Figure 7: The performance on different classifiers
324
+
325
+ Table 3 Training time ratio
326
+
327
+ <table><tr><td>Ratio(%)1 Classifier\Features Used</td><td>16</td><td>17</td><td>18</td><td>19</td><td>20</td><td>21</td><td>22</td><td>23</td><td>24</td></tr><tr><td>Decision Tree</td><td>1.23</td><td>1.64</td><td>1.36</td><td>1.78</td><td>1.63</td><td>1.62</td><td>1.91</td><td>2.06</td><td>2.78</td></tr><tr><td>Random Forest</td><td>15.07</td><td>16.64</td><td>15.66</td><td>17.35</td><td>16.64</td><td>16.36</td><td>16.34</td><td>16.85</td><td>17.81</td></tr><tr><td>Support Vector Machine</td><td>3.53</td><td>4.22</td><td>4.13</td><td>4.50</td><td>4.51</td><td>4.49</td><td>4.79</td><td>4.74</td><td>4.88</td></tr></table>
328
+
329
+
330
+ 1 The ratio represents the percentage of time consumed for training with a subset of features relative to training with all (1083) features
331
+
332
+ Table 4 Comparison with detection approaches without feature selection
333
+
334
+ <table><tr><td>Method</td><td>Number of Features</td><td>Accuracy</td><td>ML Based Detection Model</td></tr><tr><td>MamaDroid[30]</td><td>190,096</td><td>0.989</td><td>Random Forest</td></tr><tr><td>DroidDet[48]</td><td>3,122</td><td>0.921</td><td>Rotation Forest</td></tr><tr><td>HMMDetector[8]</td><td>-</td><td>0.871</td><td>Hidden Markov Model, Random Forest</td></tr><tr><td>Drebin[3]</td><td>545,356</td><td>0.881</td><td>Support Vector Machine</td></tr><tr><td>DroidRL (ours)</td><td>24</td><td>0.956</td><td>Random Forest</td></tr></table>
335
+
336
+ Table 5 Comparison with different feature selection approaches
337
+
338
+ <table><tr><td>Method</td><td>Number of Features</td><td>Accuracy</td><td>Feature Selection Method</td><td>ML Based Detection Model</td></tr><tr><td>ICCDetector [43]</td><td>40</td><td>0.948</td><td>Correlation-based Feature Selection</td><td>Support Vector Machine</td></tr><tr><td>BasicBlocks [1]</td><td>45</td><td>0.947</td><td>Information Gain</td><td>Random Forest</td></tr><tr><td>DroidRL (ours)</td><td>24</td><td>0.956</td><td>Reinforcement Learning</td><td>Random Forest</td></tr></table>
339
+
340
+ Drebin [3], achieving higher accuracy with fewer features used in detection, which demonstrates that the features selected by reinforcement learning are highly relevant to malware attributes. Despite extracting 190,072 more features than DroidRL, MamaDroid [30] obtains only a 0.033 higher accuracy, buying that accuracy at a huge running cost: it is highly time-consuming to extract 190,096 features and to detect malware with so many features. Android malware detection is much more computationally efficient with the valid feature subset of only 24 features selected by reinforcement learning in our model.
341
+
342
+ To further demonstrate the optimality of the features selected by DroidRL, its performance is compared with other feature selection methods used for machine learning based Android malware detection. The results of these experiments are shown in Table 5, with the number of features and detection accuracy as indicators. We implemented these methods with the number of features specified in the table (not necessarily the same number as in the original work). Compared with the traditional feature selection methods listed, DroidRL obtains higher accuracy with a smaller number of features used in detection, showing that reinforcement learning retains its power to filter optimal features. This is because, as a wrapper-based feature selection method, DroidRL can utilize feedback from the classifier to evaluate feature subsets and thus select an optimal subset of features.
343
+
344
+ # 7. CONCLUSION
345
+
346
+ The proposed DroidRL applies the DDQN algorithm to the feature selection phase to select the optimal subset of features for Android malware detection. In particular, an RNN-like network is applied as the decision network in DDQN for its capability of processing variable-length sequences. To capture the correlation between features, DroidRL uses word embedding to represent the features semantically. During the training phase, off-policy training is used to enlarge the feature search space of DroidRL. Experiments on Drebin and AndroZoo demonstrate that the DroidRL framework shows better performance than traditional static feature extraction models, markedly
347
+
348
+ Table 6 Detailed performance of each classifier(1)
349
+
350
+ <table><tr><td>Classifier\Features Used</td><td>6</td><td>7</td><td>8</td><td>9</td><td>10</td><td>11</td><td>12</td><td>13</td><td>14</td></tr><tr><td>Decision Tree</td><td>0.929</td><td>0.927</td><td>0.932</td><td>0.943</td><td>0.938</td><td>0.935</td><td>0.932</td><td>0.938</td><td>0.942</td></tr><tr><td>Random Forest</td><td>0.929</td><td>0.927</td><td>0.933</td><td>0.942</td><td>0.940</td><td>0.936</td><td>0.932</td><td>0.940</td><td>0.942</td></tr><tr><td>KNN</td><td>0.911</td><td>0.732</td><td>0.811</td><td>0.891</td><td>0.810</td><td>0.660</td><td>0.920</td><td>0.860</td><td>0.820</td></tr><tr><td>SVM</td><td>0.928</td><td>0.927</td><td>0.930</td><td>0.941</td><td>0.935</td><td>0.936</td><td>0.932</td><td>0.940</td><td>0.938</td></tr></table>
351
+
352
+ Table 7 Detailed performance of each classifier(2)
353
+
354
+ <table><tr><td>Classifier\Features Used</td><td>15</td><td>16</td><td>17</td><td>18</td><td>19</td><td>20</td><td>21</td><td>22</td><td>23</td><td>24</td></tr><tr><td>Decision Tree</td><td>0.936</td><td>0.941</td><td>0.946</td><td>0.944</td><td>0.945</td><td>0.943</td><td>0.940</td><td>0.950</td><td>0.948</td><td>0.955</td></tr><tr><td>Random Forest</td><td>0.936</td><td>0.942</td><td>0.948</td><td>0.944</td><td>0.946</td><td>0.943</td><td>0.939</td><td>0.952</td><td>0.948</td><td>0.956</td></tr><tr><td>KNN</td><td>0.793</td><td>0.940</td><td>0.934</td><td>0.940</td><td>0.936</td><td>0.936</td><td>0.937</td><td>0.938</td><td>0.945</td><td>0.945</td></tr><tr><td>SVM</td><td>0.936</td><td>0.939</td><td>0.941</td><td>0.939</td><td>0.942</td><td>0.941</td><td>0.938</td><td>0.943</td><td>0.945</td><td>0.951</td></tr></table>
355
+
356
+ Table 8 Hyperparameter setting
357
+
358
+ <table><tr><td>Parameter</td><td>Value</td><td>Description</td></tr><tr><td>replay start size</td><td>50,000</td><td>The number of steps carried out by the agent using a uniform random policy</td></tr><tr><td>replay buffer size</td><td>200,000</td><td>The capacity of the replay buffer memory</td></tr><tr><td>batch size</td><td>32</td><td>The number of training cases over which each gradient descent update is computed</td></tr><tr><td>discount factor</td><td>0.99</td><td>The factor that determines the importance of future rewards in Q-learning</td></tr><tr><td>start learning rate</td><td>0.0003</td><td>The start factor of the linear decay scheduler</td></tr><tr><td>training interval</td><td>5</td><td>The frequency at which the decision network is trained</td></tr></table>
359
+
360
+ improving detection performance across a variety of classifiers. DroidRL is shown to be effective on feature selection tasks and will hopefully serve as a building block for robust malware detectors in the future.
361
+
362
+ # References
363
+
364
+ [1] Allix, K., Bissyandé, T.F., Jérôme, Q., Klein, J., State, R., Le Traon, Y., 2016a. Empirical assessment of machine learning-based malware detectors for android. Empirical Softw. Engg. 21, 183-211. URL: https://doi.org/10.1007/s10664-014-9352-6, doi:10.1007/s10664-014-9352-6.
365
+ [2] Allix, K., Bissyandé, T.F., Klein, J., Le Traon, Y., 2016b. Androzoo: Collecting millions of android apps for the research community, in: Proceedings of the 13th International Conference on Mining Software Repositories, ACM, New York, NY, USA. pp. 468-471. URL: http://doi.acm.org/10.1145/2901739.2903508, doi:10.1145/2901739.2903508.
366
+ [3] Arp, D., Spreitzenbarth, M., Hübner, M., Gascon, H., Rieck, K., 2014a. Drebin: Effective and explainable detection of android malware in your pocket doi:10.14722/ndss.2014.23247.
367
+ [4] Arp, D., Spreitzenbarth, M., Hubner, M., Gascon, H., Rieck, K., Siemens, C., 2014b. Drebin: Effective and explainable detection of android malware in your pocket., in: Ndss, pp. 23-26.
368
+ [5] Babaagba, K.O., Adesanya, S.O., 2019. A study on the effect of feature selection on malware analysis using machine learning. ACM International Conference Proceeding Series Part F148151, 51-55. doi:10.1145/3318396.3318448.
369
+ [6] Bibi, I., Akhunzada, A., Malik, J., Iqbal, J., Mussaddiq, A., Kim, S., 2020. A dynamic dl-driven architecture to combat sophisticated android malware. IEEE Access 8, 129600-129612.
370
+ [7] Binxiang, L., Gang, Z., Ruoying, S., 2019. A deep reinforcement learning malware detection method based on PE feature distribution, pp. 23-27. doi:10.1109/ICISCE48695.2019.00014.
371
+ [8] Canfora, G., Mercaldo, F., Visaggio, C.A., 2016. An hmm and structural entropy based detector for android malware: An empirical study. Computers & Security 61, 1-18. URL: https://www.sciencedirect.com/science/article/pii/S0167404816300499, doi:https://doi.org/10.1016/j.cose.2016.04.009.
372
+
373
+ [9] Cho, K., Van Merrienboer, B., Gulcehre, C., Bahdanau, D., Bougares, F., Schwenk, H., Bengio, Y., 2014. Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078.
374
+ [10] Dash, M., Liu, H., 2003. Consistency-based search in feature selection. Artificial Intelligence 151, 155-176.
375
+ [11] Fang, Z., Wang, J., Geng, J., Kan, X., 2019. Feature Selection for Malware Detection Based on Reinforcement Learning. IEEE Access 7, 176177-176187. doi:10.1109/ACCESS.2019.2957429.
376
+ [12] Fatima, A., Maurya, R., Dutta, M.K., Burget, R., Masek, J., 2019. Android malware detection using genetic algorithm based optimized feature selection and machine learning. 2019 42nd International Conference on Telecommunications and Signal Processing, TSP 2019, pp. 220-223. doi:10.1109/TSP.2019.8769039.
377
+ [13] Graves, A., 2012. Long short-term memory. Springer Berlin Heidelberg.
378
+ [14] Hall, M.A., 2000. Correlation-based feature selection for machine learning. Morgan Kaufmann Publishers Inc..
379
+ [15] van Hasselt, H., Guez, A., Silver, D., 2015. Deep reinforcement learning with double Q-learning. arXiv:1509.06461.
380
+ [16] Huang, J.J., Cai, Y.Z., Xu, X.M., 2008. A parameterless feature ranking algorithm based on MI. Neurocomputing 71, 1656-1668. doi:10.1016/j.neucom.2007.04.012.
381
+ [17] Huda, S., Abawajy, J., Alazab, M., Abdollahihian, M., Islam, R., Yearwood, J., 2016. Hybrids of support vector machine wrapper and filter based framework for malware detection. Future Generation Computer Systems 55, 376-390. URL: https://www.sciencedirect.com/science/article/pii/S0167739X14001228, doi:https://doi.org/10.1016/j.future.2014.06.001.
382
+ [18] Ji-Xiang, Y.E., Gong, X.L., 2010. A novel fast wrapper for feature subset selection. Journal of Changsha University of Science & Technology(Natural Science).
383
+ [19] Kouliaridis, V., Barmpatsalou, K., Kambourakis, G., Chen, S., 2020. A survey on mobile malware detection techniques. IEICE Transactions on Information and Systems 103, 204-211.
384
+ [20] Lei, Y., Huan, L., 2003. Feature selection for high-dimensional data: A fast correlation-based filter solution, in: Proceedings of the Twentieth International Conference on International Conference on Machine Learning, AAAI Press. p. 856-863.
385
+ [21] Mahindru, A., Sangal, A.L., 2020. SOMDROID: android malware detection by artificial neural network trained using unsupervised learning. Springer Berlin Heidelberg. URL: https://doi.org/10.1007/s12065-020-00518-1, doi:10.1007/s12065-020-00518-1.
386
+ [22] Mahindru, A., Sangal, A.L., 2021. FSDroid: - A feature selection technique to detect malware from Android using Machine Learning Techniques: FSDroid. Multimedia Tools and Applications doi:10.1007/s11042-020-10367-w.
387
+ [23] Mantoo, B.A., 2020. A hybrid approach with intrinsic feature-based android malware detection using lda and machine learning, in: The International Conference on Recent Innovations in Computing, Springer. pp. 295-306.
388
+ [24] Mcwilliams, G., Sezer, S., Yerima, S.Y., 2014. Analysis of bayesian classification-based approaches for android malware detection. IET Information Security 8, 25-36.
389
+ [25] Melo, F.S., 2001. Convergence of Q-learning: A simple proof. Institute of Systems and Robotics, Tech. Rep., 1-4.
390
+ [26] Mnih, V., Kavukcuoglu, K., Silver, D., Rusu, A.A., Veness, J., Bellemare, M.G., Graves, A., Riedmiller, M., Fidjeland, A.K., Ostrovski, G., Petersen, S., Beattie, C., Sadik, A., Antonoglou, I., King, H., Kumaran, D., Wierstra, D., Legg, S., Hassabis, D., 2015. Human-level control through deep reinforcement learning. Nature 518, 529-533. URL: http://dx.doi.org/10.1038/nature14236, doi:10.1038/nature14236.
391
+ [27] Molina-Coronado, B., Mori, U., Mendiburu, A., Miguel-Alonso, J., 2023. Towards a fair comparison and realistic evaluation framework of android malware detectors based on static analysis and machine learning. Computers & Security 124, 102996. URL: https://www.sciencedirect.com/science/article/pii/S0167404822003881, doi:https://doi.org/10.1016/j.cose.2022.102996.
392
+ [28] Narayanan, A., Chandramohan, M., Chen, L., Liu, Y., 2017. Context-aware, adaptive, and scalable android malware detection through online learning. IEEE Transactions on Emerging Topics in Computational Intelligence 1, 157-175.
393
+ [29] Yildiz, O., Dogru, I.A., 2019. Permission-based android malware detection system using feature selection with genetic algorithm. International Journal of Software Engineering and Knowledge Engineering 29, 245-262. doi:10.1142/S0218194019500116.
394
+ [30] Onwuzurike, L., Mariconti, E., Andriotis, P., Cristofaro, E.D., Ross, G., Stringhini, G., 2019. Mamadroid: Detecting android malware by building markov chains of behavioral models (extended version). ACM Trans. Priv. Secur. 22. URL: https://doi.org/10.1145/3313391, doi:10.1145/3313391.
395
+ [31] Papamartzivanos, D., Damopoulos, D., Kambourakis, G., 2014. A cloud-based architecture to crowdsourced mobile app privacy leaks, in: Proceedings of the 18th panhellenic conference on informatics, pp. 1-6.
396
+ [32] Priya, V.D., P., V., 2020. Detecting android malware using an improved filter based technique in embedded software. Microprocessors and Microsystems 76.
397
+ [33] security team of Qianxin Threat Intelligence Center, M., 2020. Security situation analysis report of android platform in 2020. URL: https://www.qianxin.com/threat/reportdetail?report_id=125.
398
+ [34] Rathore, H., Sahay, S.K., Nikam, P., Sewak, M., 2020. Robust Android Malware Detection System Against Adversarial Attacks Using Q-Learning. Information Systems Frontiers doi:10.1007/s10796-020-10083-8.
399
+ [35] S., Y.E., 2002. Feature selection in support vector machines. University of Florida 7, 1-28.
400
+ [36] Salah, A., Shalabi, E., Khedr, W., 2020. A lightweight android malware classifier using novel feature selection methods. Symmetry 12, 858.
401
+ [37] Spolar, N., Cherman, E.A., Monard, M.C., Lee, H.D., 2013. Relief for multi-label feature selection, in: Proceedings of the 2013 Brazilian Conference on Intelligent Systems.
402
+ [38] Team, M.D.R., 2021. Cyberbattlesim. URL: https://github.com/microsoft/cyberbattlesim. created by Christian Seifert, Michael Betser, William Blum, James Bono, Kate Farris, Emily Goren, Justin Grana, Kristian Holsheimer, Brandon Marken, Joshua Neil, Nicole Nichols, Jugal Parikh, Haoran Wei.
403
+ [39] Wan, X., Sheng, G., Li, Y., Xiao, L., Du, X., 2017. Reinforcement Learning Based Mobile Offloading for Cloud-based Malware Detection.
404
+
405
+ [40] Wang, S., Chen, Z., Yan, Q., Ji, K., Peng, L., Yang, B., 2020. Deep and broad URL feature mining for android malware detection 513, 600-613. doi:10.1016/j.ins.2019.11.008.
406
+ [41] Wang, W., Wang, X., Feng, D., Liu, J., Han, Z., Zhang, X., 2014. Exploring permission-induced risk in android applications for malicious application detection. IEEE Transactions on Information Forensics and Security 9, 1869-1882. doi:10.1109/TIFS.2014.2353996.
407
+ [42] Witten, I.H., Frank, E., 2011. Data mining: practical machine learning tools and techniques. ACM SIGMOD Record 31, 76-77.
408
+ [43] Xu, K., Li, Y., Deng, R.H., 2016. Iccdetector: Icc-based malware detection on android. IEEE Transactions on Information Forensics and Security 11, 1252-1264. doi:10.1109/TIFS.2016.2523912.
409
+ [44] Yan, P., Yan, Z., 2018. A survey on dynamic mobile malware detection. Software Quality Journal 26, 891-919.
410
+ [45] Yuan, Z., Lu, Y., Xue, Y., 2016. Droid detector: Android malware characterization and detection using deep learning. Tsinghua Sci. Technol.
411
+ [46] Zhang, N., Tan, Y.a., Yang, C., Li, Y., 2021. Deep learning feature exploration for android malware detection. Applied Soft Computing 102, 107069.
412
+ [47] Zhang, Z.M., Gui, S.L., R.F., 2019. Android malware detection based on N-gram. Computer Science 46, 154-160.
413
+ [48] Zhu, H.J., You, Z.H., Zhu, Z.X., Shi, W.L., Chen, X., Cheng, L., 2018. Droiddet: Effective and robust detection of android malware using static analysis along with rotation forest model. Neurocomputing 272, 638-646. URL: https://www.sciencedirect.com/science/article/pii/S0925231217312870, doi:https://doi.org/10.1016/j.neucom.2017.07.030.
414
+
415
+
416
+
417
+ ![](images/895fa5e438989f4a97e48132a9d80ff7dfa7a845075a7e61f73b308fda9d729b.jpg)
418
+
419
+ Yinwei Wu studies at the Software College of Sichuan University, Chengdu, China. He is currently involved in research work on information security. His research interests include software security, deep learning, and reinforcement learning.
420
+
421
+ ![](images/c5563a977dfd7086c4c044339fe2acf0140eb62699052f12af552d7a4ecf969a.jpg)
422
+
423
+ Meijin Li is from Sichuan University, Chengdu, China. She is currently engaged in research in the field of network security and is interested in machine learning and mobile security.
424
+
425
+ ![](images/55e6d54f464e24359d957ca51fc03970879b4840c69c9015433d8f9eba15aab4.jpg)
426
+
427
+ Zeng Qi is expected to receive a bachelor's degree in Computer Science and Technology from Sichuan University, Chengdu, in 2023. His research interests include machine learning and software security.
428
+
429
+
430
+
431
+ ![](images/3bcdda24009ed06e87786717c8202e967f5e26a2e894692ff7a310bc62e032cb.jpg)
432
+
433
+ ![](images/2d3548fb2f25a92a03afcdb09a0318c1e94698c0ddd659701da10fdd11bbb989.jpg)
434
+
435
+ Tao Yang is currently pursuing a bachelor's degree in Computer Science and Technology at Sichuan University, Chengdu, expected in 2022. He is currently involved in research work on information security. His research interests include Android malware detection and machine learning.
436
+
437
+ Junfeng Wang received the M.S. degree in Computer Application Technology from Chongqing University of Posts and Telecommunications, Chongqing, in 2001, and the Ph.D. degree in Computer Science from the University of Electronic Science and Technology of China, Chengdu, in 2004. From July 2004 to August 2006, he held a postdoctoral position at the Institute of Software, Chinese Academy of Sciences. Since August 2006, Dr. Wang has been with the College of Computer Science and the School of Aeronautics & Astronautics, Sichuan University, as a professor. He is currently serving as an associate editor for IEEE Access, IEEE Internet of Things Journal, and Security and Communication Networks, among others. His recent research interests include network and information security, spatial information networks, and data mining.
438
+
439
+ ![](images/f3bbdd111cdc29cae8eedef4424eed3814aec0d28cf3cf2857de0d82789cadfb.jpg)
440
+
441
+ Zhiyang Fang received his Ph.D. degree in Computer Science and Technology from Sichuan University, Chengdu in 2020. He is currently involved in research work on information security. His research interests include software security, deep learning and software engineering.
442
+
443
+ ![](images/329038a50b78b68fd8a9a12c3f37c39f2f58c552181d61b661344e1c89270e93.jpg)
444
+
445
+ Luyu Cheng studies at the Business School of Sichuan University, Chengdu, China. Her research interests include supply chain management and data analysis.
2203.02xxx/2203.02719/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:e748ef41562599bb2c27fe4dcf7a34ec61949572efdd281c6cacffb25addf83a
3
+ size 732497
2203.02xxx/2203.02719/layout.json ADDED
The diff for this file is too large to render. See raw diff