JimmyUnleashed committed
Commit 95d95dd · verified · 1 Parent(s): d34942d

Upload 8 files

Files changed (8)
  1. README.html +314 -0
  2. README.md +261 -3
  3. evaluation.m +118 -0
  4. example_evaluation.m +43 -0
  5. vl_argparse.m +72 -0
  6. vl_pr.m +234 -0
  7. vl_roc.m +234 -0
  8. vl_tpfp.m +62 -0
README.html ADDED
@@ -0,0 +1,314 @@
+ <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
+ "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
+ <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
+ <head>
+ <meta http-equiv="content-type" content="text/html; charset=utf-8" />
+ <title>FGVC-Aircraft</title>
+ <style type="text/css">
+ html {
+ font-family: Helvetica, Arial, sans-serif ;
+ }
+ body {
+ max-width: 60em ;
+ margin: 0 auto ;
+ padding: 1em ;
+ }
+ pre {
+ background-color: #fafafa ;
+ border: 1px solid #ccc ;
+ padding: 1em ;
+ }
+ p {
+ line-height: 1.4em ;
+ }
+ </style>
+ </head>
+ <body>
+
+ <h1>FGVC-Aircraft Benchmark</h1>
+
+ <p><strong>Fine-Grained Visual Classification of Aircraft (FGVC-Aircraft)</strong> is
+ a benchmark dataset for the fine-grained visual categorization of
+ aircraft.</p>
+
+ <ul>
+ <li><a href="archives/fgvc-aircraft-2013b.tar.gz">Data, annotations, and evaluation code</a> [2.75 GB | <a href="archives/fgvc-aircraft-2013b.html">MD5 Sum</a>].</li>
+ <li><a href="archives/fgvc-aircraft-2013b-annotations.tar.gz">Annotations and evaluation code only</a> [375 KB | <a href="archives/fgvc-aircraft-2013b-annotations.html">MD5 Sum</a>].</li>
+ <li>Project <a href="http://www.robots.ox.ac.uk/~vgg/data/fgvc-aircraft/">home page</a>.</li>
+ <li>This data was used as part of the fine-grained recognition challenge
+ <a href="https://sites.google.com/site/fgcomp2013/">FGComp 2013</a>, which ran
+ jointly with the ImageNet Challenge 2013
+ (<a href="https://sites.google.com/site/fgcomp2013/results">results</a>). Please
+ note that <em>the evaluation code provided here may differ</em> from the
+ one used in the challenge.</li>
+ </ul>
+
+ <p>Please use the following citation when referring to this dataset:</p>
+
+ <p><em>Fine-Grained Visual Classification of Aircraft</em>, S. Maji, J. Kannala,
+ E. Rahtu, M. Blaschko, A. Vedaldi, <a href="http://arxiv.org/abs/1306.5151">arXiv.org</a>, 2013</p>
+
+ <pre><code>@techreport{maji13fine-grained,
+ title = {Fine-Grained Visual Classification of Aircraft},
+ author = {S. Maji and J. Kannala and E. Rahtu
+ and M. Blaschko and A. Vedaldi},
+ year = {2013},
+ archivePrefix = {arXiv},
+ eprint = {1306.5151},
+ primaryClass = "cs-cv",
+ }
+ </code></pre>
+
+ <p>For further information see:</p>
+
+ <ul>
+ <li><a href="#quick">Quick start</a>
+ <ul>
+ <li><a href="#aircraft">About aircraft</a></li>
+ </ul></li>
+ <li><a href="#format">Data and annotation format</a></li>
+ <li><a href="#evaluation">Evaluation</a>
+ <ul>
+ <li><a href="#metric">Evaluation metric</a></li>
+ <li><a href="#code">Evaluation code</a></li>
+ </ul></li>
+ <li><a href="#ack">Acknowledgments</a></li>
+ <li><a href="#release">Release notes</a></li>
+ </ul>
+
+ <p><strong>Note.</strong> This data has been used as part of the <em>ImageNet FGVC
+ challenge in conjunction with the International Conference on Computer
+ Vision (ICCV) 2013</em>. Test labels were not made available until the
+ challenge due to the ImageNet challenge policy. They have now been
+ released as part of the download above. If you already downloaded the
+ image archive and want to have access to the test labels, simply
+ download the annotations archive again.</p>
+
+ <p><strong>Note.</strong> Images in the benchmark are generously made available <strong>for
+ non-commercial research purposes only</strong> by a number of <em>airplane
+ spotters</em>. Please note that the original authors retain the copyright
+ of the respective photographs and should be contacted for any other
+ use. For further details see the <a href="#ack">copyright note</a> below.</p>
+
+ <h1><a id=quick></a> Quick start</h1>
+
+ <p>The dataset contains 10,200 images of aircraft, with 100 images for
+ each of 102 different aircraft model variants, most of which are
+ airplanes. The (main) aircraft in each image is annotated with a tight
+ bounding box and a hierarchical airplane model label.</p>
+
+ <p>Aircraft models are organized in a four-level hierarchy. The four
+ levels, from finer to coarser, are:</p>
+
+ <ul>
+ <li><strong>Model</strong>, e.g. <em>Boeing 737-76J</em>. Since certain models are nearly visually
+ indistinguishable, this level is not used in the evaluation.</li>
+ <li><strong>Variant</strong>, e.g. <em>Boeing 737-700</em>. A variant collapses all the
+ models that are visually indistinguishable into one class. The
+ dataset comprises 102 different variants.</li>
+ <li><strong>Family</strong>, e.g. <em>Boeing 737</em>. The dataset comprises 70 different
+ families.</li>
+ <li><strong>Manufacturer</strong>, e.g. <em>Boeing</em>. The dataset comprises 41
+ different manufacturers.</li>
+ </ul>
+
+ <p>The data is divided into three equally-sized <em>training</em>, <em>validation</em>
+ and <em>test</em> subsets. The first two can be used for development,
+ and the last should be used for final evaluation only. The format of
+ the data is described <a href="#format">next</a>.</p>
+
+ <p>The performance of a fine-grained classification algorithm is
+ evaluated in terms of average class-prediction accuracy. This is
+ defined as the average of the diagonal of the row-normalized confusion
+ matrix, as used for example in Caltech-101. Three classification
+ challenges are considered: variant, family, and manufacturer. An
+ <a href="#code">evaluation script</a> in MATLAB is provided.</p>
+
+ <h2><a id=aircraft></a> About aircraft</h2>
+
+ <p>Aircraft, and in particular airplanes, are an alternative to the objects
+ typically considered for fine-grained categorization, such as birds and
+ pets. Several aspects make aircraft model recognition
+ particularly interesting. Firstly, aircraft designs span a hundred
+ years, including many thousands of different models and hundreds of
+ different makes and airlines. Secondly, aircraft designs vary
+ significantly depending on size (from home-built to large
+ carriers), destination (private, civil, military), purpose
+ (transporter, carrier, training, sport, fighter, etc.), propulsion
+ (glider, propeller, jet), and many other factors, including
+ technology. One particular axis of variation, which is not shared
+ with categories such as animals, is the fact that the <em>structure</em> of
+ an aircraft changes with its design (number of wings,
+ undercarriages, wheels per undercarriage, engines, etc.). Thirdly, any
+ given aircraft model can be re-purposed or used by different
+ companies, which causes further variations in appearance
+ (livery). These, depending on the identification task, may be considered
+ as noise or as useful information to be extracted. Finally, aircraft
+ are largely rigid objects, which simplifies certain aspects of their
+ modeling (compared to highly-deformable animals such as cats),
+ allowing one to focus on the core aspects of the fine-grained
+ recognition problem.</p>
+
+ <h1><a id=format></a> Data format</h1>
+
+ <p>The directory <code>data</code> contains the images as well as a number of text
+ files with the data annotations.</p>
+
+ <p>Images are contained in the <code>data/images</code> sub-directory. They are in
+ JPEG format and have a name composed of seven digits and the <code>.jpg</code>
+ suffix (e.g. <code>data/images/1187707.jpg</code>). The image resolution is about
+ 1-2 MP. Each image has at the bottom a banner, 20 pixels high, containing
+ <a href="#ack">copyright</a> information. Please make sure to remove this banner
+ when using the images to train and evaluate algorithms.</p>
+
+ <p>The annotations come in a number of text files. Each line of these
+ files contains an image name, optionally followed by an image
+ annotation: either a textual label or a sequence of numbers.</p>
+
+ <p><code>data/images_train.txt</code> contains the list of training images:</p>
+
+ <pre>
+ 0787226
+ 1481091
+ 1548899
+ 0674300
+ ...
+ </pre>
+
+ <p>Similar files <code>data/images_val.txt</code> and <code>data/images_test.txt</code> contain the lists
+ of validation and test images.</p>
+
+ <p><code>data/images_variant_train.txt</code>, <code>data/images_family_train.txt</code>, and
+ <code>data/images_manufacturer_train.txt</code> contain the lists of training
+ images annotated with the model variant, family, and manufacturer
+ names, respectively:</p>
+
+ <pre>
+ 0787226 Abingdon Spherical Free Balloon
+ 1481091 AEG Wagner Eule
+ 1548899 Aeris Naviter AN-2 Enara
+ 0674300 Aeritalia F-104S Starfighter
+ ...
+ </pre>
+
+ <p>Similar files are provided for the validation and test subsets.</p>
+
+ <p>Finally, <code>data/images_box.txt</code> contains the aircraft bounding
+ boxes, one per image. The bounding box is specified by four numbers:
+ <em>xmin</em>, <em>ymin</em>, <em>xmax</em> and <em>ymax</em>. The top-left pixel of an image has
+ coordinate (1,1).</p>
+
+ <h1><a id=evaluation></a> Evaluation</h1>
+
+ <p>The performance of a classifier is measured in terms of its average
+ classification accuracy, as detailed next.</p>
+
+ <h2><a id=metric></a> Evaluation metric</h2>
+
+ <p>The output of a classification algorithm must be a list of triplets of
+ the type (<em>image</em>, <em>label</em>, <em>score</em>), where</p>
+
+ <ul>
+ <li><em>image</em> is an image name, i.e. a seven-digit number,</li>
+ <li><em>label</em> is an image label, i.e. an aircraft model variant, family, or manufacturer, and</li>
+ <li><em>score</em> is a real number expressing the belief in the judgment.</li>
+ </ul>
+
+ <p>When computing the classification accuracy, an image is assigned the
+ label contained in its highest-scoring triplet. An image that has no
+ triplets is considered unclassified and always counts as a
+ classification error (therefore it is better to guess at least one
+ label for each image rather than leaving it unclassified).</p>
+
+ <p>The quality of the predictions is measured in terms of <em>average
+ accuracy</em>, obtained as follows:</p>
+
+ <ul>
+ <li>The confusion matrix is square, with one row per class.</li>
+ <li>Each element of the confusion matrix is the number of times aircraft
+ of a given class (specified by the row) are classified as a second
+ class (column). Ideally, the confusion matrix should be diagonal.</li>
+ <li>The confusion matrix is row-normalized by the number of images of
+ the corresponding aircraft class (each row therefore sums to one if
+ there are no unclassified images).</li>
+ <li>The average accuracy is computed as the average of the diagonal of
+ the confusion matrix.</li>
+ </ul>
+
+ <p>There are three challenges: classifying the aircraft variant, family, and manufacturer.</p>
+
+ <h2><a id=code></a> Evaluation code</h2>
+
+ <p>The evaluation protocol has been implemented in the MATLAB m-file
+ <code>evaluation.m</code>. This function takes the path to the <code>data</code> folder, a
+ composite name indicating the evaluation subset and challenge
+ (e.g. <code>'manufacturer_test'</code> or <code>'family_val'</code>), and the list of
+ triplets, and returns the confusion matrix. For example</p>
+
+ <pre>
+ images = {'2074164'} ;
+ labels = {'McDonnell Douglas MD-90-30'} ;
+ scores = 1 ;
+ confusion = evaluation('/path/fgvc-aircraft/data', 'variant_test', images, labels, scores) ;
+ accuracy = mean(diag(confusion)) ;
+ </pre>
+
+ <p>evaluates a classifier output containing exactly one triplet (image,
+ label, score), where the image is <code>'2074164'</code>, its predicted class is
+ <code>'McDonnell Douglas MD-90-30'</code>, and the score of the prediction is
+ <code>1</code>. In practice, a complete set of predictions (one for each
+ image-class pair) is usually evaluated.</p>
+
+ <p>See the built-in help of the <code>evaluation</code> MATLAB function for further
+ practical details. See also <code>example_evaluation.m</code> for examples of how
+ to use this function.</p>
+
+ <h1><a id=ack></a> Acknowledgments</h1>
+
+ <p>The creation of this dataset started during the <em>Johns Hopkins CLSP
+ Summer Workshop 2012</em>
+ <a href="http://www.clsp.jhu.edu/workshops/archive/ws-12/groups/tduosn/">Towards a Detailed Understanding of Objects and Scenes in Natural Images</a>
+ with, in alphabetical order, Matthew B. Blaschko, Ross B. Girshick,
+ Juho Kannala, Iasonas Kokkinos, Siddharth Mahendran, Subhransu Maji,
+ Sammy Mohamed, Esa Rahtu, Naomi Saphra, Karen Simonyan, Ben Taskar,
+ Andrea Vedaldi, and David Weiss.</p>
+
+ <p>The CLSP workshop was supported by the National Science Foundation via
+ Grant No. 1005411, the Office of the Director of National Intelligence
+ via the JHU Human Language Technology Center of Excellence, and Google
+ Inc.</p>
+
+ <p>A special thanks goes to Pekka Rantalankila for helping with the
+ creation of the airplane hierarchy.</p>
+
+ <p>Many thanks to the photographers who kindly made their images
+ available for research purposes. Each photographer is listed below, along
+ with a link to his/her <a href="http://airliners.net">airliners.net</a> page:</p>
+
+ <ul>
+ <li><a href="http://www.airliners.net/profile/dendrobatid">Mick Bajcar</a></li>
+ <li><a href="http://www.airliners.net/profile/aldobid">Aldo Bidini</a></li>
+ <li><a href="http://www.airliners.net/profile/minoeke">Wim Callaert</a></li>
+ <li><a href="http://www.airliners.net/profile/tommypilot">Tommy Desmet</a></li>
+ <li><a href="http://www.airliners.net/profile/snorre">Thomas Posch</a></li>
+ <li><a href="http://www.airliners.net/profile/lemonkitty">James Richard Covington</a></li>
+ <li><a href="http://www.airliners.net/profile/stegi">Gerry Stegmeier</a></li>
+ <li><a href="http://www.airliners.net/profile/aal151heavy">Ben Wang</a></li>
+ <li><a href="http://www.airliners.net/profile/dazbo5">Darren Wilson</a></li>
+ <li><a href="http://www.airliners.net/profile/fly-k">Konstantin von Wedelstaedt</a></li>
+ </ul>
+
+ <p>Please note that the images are made available <strong>exclusively for
+ non-commercial research purposes</strong>. The original authors retain the
+ copyright on the respective pictures and should be contacted for any
+ other usage of them.</p>
+
+ <h1><a id=release></a> Release notes</h1>
+
+ <ul>
+ <li><em>FGVC-Aircraft 2013b</em> - The same as 2013a, but with test annotations included.</li>
+ <li><em>FGVC-Aircraft 2013a</em> - First public release of the data.</li>
+ </ul>
+
+ </body>
+ </html>
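The bounding-box convention above (1-indexed, inclusive *xmin*, *ymin*, *xmax*, *ymax*) is easy to get wrong when cropping in 0-indexed languages. A minimal Python sketch of the conversion; `box_to_slices` is a hypothetical helper, not part of the dataset tools:

```python
def box_to_slices(box):
    """Convert a 1-indexed, inclusive FGVC-Aircraft bounding box
    (xmin, ymin, xmax, ymax) into 0-indexed, exclusive slice bounds
    (row_start, row_stop, col_start, col_stop) for cropping."""
    xmin, ymin, xmax, ymax = box
    # Subtract 1 to move to 0-indexed coordinates; the inclusive max
    # becomes the exclusive stop without further adjustment.
    return (ymin - 1, ymax, xmin - 1, xmax)

# A box covering the top-left 10x10 pixels of an image:
rows_cols = box_to_slices((1, 1, 10, 10))  # (0, 10, 0, 10)
```

The same offsets apply when trimming the 20-pixel copyright banner at the bottom of each image.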
README.md CHANGED
@@ -1,3 +1,261 @@
- ---
- license: mit
- ---
+ # FGVC-Aircraft Benchmark
+
+ **Fine-Grained Visual Classification of Aircraft (FGVC-Aircraft)** is
+ a benchmark dataset for the fine-grained visual categorization of
+ aircraft.
+
+ * [Data, annotations, and evaluation code](archives/fgvc-aircraft-2013b.tar.gz) [2.75 GB | [MD5 Sum](archives/fgvc-aircraft-2013b.html)].
+ * [Annotations and evaluation code only](archives/fgvc-aircraft-2013b-annotations.tar.gz) [375 KB | [MD5 Sum](archives/fgvc-aircraft-2013b-annotations.html)].
+ * Project [home page](http://www.robots.ox.ac.uk/~vgg/data/fgvc-aircraft/).
+ * This data was used as part of the fine-grained recognition challenge
+ [FGComp 2013](https://sites.google.com/site/fgcomp2013/), which ran
+ jointly with the ImageNet Challenge 2013
+ ([results](https://sites.google.com/site/fgcomp2013/results)). Please
+ note that *the evaluation code provided here may differ* from the
+ one used in the challenge.
+
+ Please use the following citation when referring to this dataset:
+
+ *Fine-Grained Visual Classification of Aircraft*, S. Maji, J. Kannala,
+ E. Rahtu, M. Blaschko, A. Vedaldi, [arXiv.org](http://arxiv.org/abs/1306.5151), 2013
+
+ @techreport{maji13fine-grained,
+ title = {Fine-Grained Visual Classification of Aircraft},
+ author = {S. Maji and J. Kannala and E. Rahtu
+ and M. Blaschko and A. Vedaldi},
+ year = {2013},
+ archivePrefix = {arXiv},
+ eprint = {1306.5151},
+ primaryClass = "cs-cv",
+ }
+
+ For further information see:
+
+ * [Quick start](#quick)
+   * [About aircraft](#aircraft)
+ * [Data and annotation format](#format)
+ * [Evaluation](#evaluation)
+   * [Evaluation metric](#metric)
+   * [Evaluation code](#code)
+ * [Acknowledgments](#ack)
+ * [Release notes](#release)
+
+ **Note.** This data has been used as part of the *ImageNet FGVC
+ challenge in conjunction with the International Conference on Computer
+ Vision (ICCV) 2013*. Test labels were not made available until the
+ challenge due to the ImageNet challenge policy. They have now been
+ released as part of the download above. If you already downloaded the
+ image archive and want to have access to the test labels, simply
+ download the annotations archive again.
+
+ **Note.** Images in the benchmark are generously made available **for
+ non-commercial research purposes only** by a number of *airplane
+ spotters*. Please note that the original authors retain the copyright
+ of the respective photographs and should be contacted for any other
+ use. For further details see the [copyright note](#ack) below.
+
+ # <a id=quick></a> Quick start
+
+ The dataset contains 10,200 images of aircraft, with 100 images for
+ each of 102 different aircraft model variants, most of which are
+ airplanes. The (main) aircraft in each image is annotated with a tight
+ bounding box and a hierarchical airplane model label.
+
+ Aircraft models are organized in a four-level hierarchy. The four
+ levels, from finer to coarser, are:
+
+ * **Model**, e.g. *Boeing 737-76J*. Since certain models are nearly visually
+ indistinguishable, this level is not used in the evaluation.
+ * **Variant**, e.g. *Boeing 737-700*. A variant collapses all the
+ models that are visually indistinguishable into one class. The
+ dataset comprises 102 different variants.
+ * **Family**, e.g. *Boeing 737*. The dataset comprises 70 different
+ families.
+ * **Manufacturer**, e.g. *Boeing*. The dataset comprises 41
+ different manufacturers.
+
+ The data is divided into three equally-sized *training*, *validation*
+ and *test* subsets. The first two can be used for development,
+ and the last should be used for final evaluation only. The format of
+ the data is described [next](#format).
+
+ The performance of a fine-grained classification algorithm is
+ evaluated in terms of average class-prediction accuracy. This is
+ defined as the average of the diagonal of the row-normalized confusion
+ matrix, as used for example in Caltech-101. Three classification
+ challenges are considered: variant, family, and manufacturer. An
+ [evaluation script](#code) in MATLAB is provided.
+
+ ## <a id=aircraft></a> About aircraft
+
+ Aircraft, and in particular airplanes, are an alternative to the objects
+ typically considered for fine-grained categorization, such as birds and
+ pets. Several aspects make aircraft model recognition
+ particularly interesting. Firstly, aircraft designs span a hundred
+ years, including many thousands of different models and hundreds of
+ different makes and airlines. Secondly, aircraft designs vary
+ significantly depending on size (from home-built to large
+ carriers), destination (private, civil, military), purpose
+ (transporter, carrier, training, sport, fighter, etc.), propulsion
+ (glider, propeller, jet), and many other factors, including
+ technology. One particular axis of variation, which is not shared
+ with categories such as animals, is the fact that the *structure* of
+ an aircraft changes with its design (number of wings,
+ undercarriages, wheels per undercarriage, engines, etc.). Thirdly, any
+ given aircraft model can be re-purposed or used by different
+ companies, which causes further variations in appearance
+ (livery). These, depending on the identification task, may be considered
+ as noise or as useful information to be extracted. Finally, aircraft
+ are largely rigid objects, which simplifies certain aspects of their
+ modeling (compared to highly-deformable animals such as cats),
+ allowing one to focus on the core aspects of the fine-grained
+ recognition problem.
+
+ # <a id=format></a> Data format
+
+ The directory `data` contains the images as well as a number of text
+ files with the data annotations.
+
+ Images are contained in the `data/images` sub-directory. They are in
+ JPEG format and have a name composed of seven digits and the `.jpg`
+ suffix (e.g. `data/images/1187707.jpg`). The image resolution is about
+ 1-2 MP. Each image has at the bottom a banner, 20 pixels high, containing
+ [copyright](#ack) information. Please make sure to remove this banner
+ when using the images to train and evaluate algorithms.
+
+ The annotations come in a number of text files. Each line of these
+ files contains an image name, optionally followed by an image
+ annotation: either a textual label or a sequence of numbers.
+
+ `data/images_train.txt` contains the list of training images:
+ <pre>
+ 0787226
+ 1481091
+ 1548899
+ 0674300
+ ...
+ </pre>
+ Similar files `data/images_val.txt` and `data/images_test.txt` contain the lists
+ of validation and test images.
+
+ `data/images_variant_train.txt`, `data/images_family_train.txt`, and
+ `data/images_manufacturer_train.txt` contain the lists of training
+ images annotated with the model variant, family, and manufacturer
+ names, respectively:
+ <pre>
+ 0787226 Abingdon Spherical Free Balloon
+ 1481091 AEG Wagner Eule
+ 1548899 Aeris Naviter AN-2 Enara
+ 0674300 Aeritalia F-104S Starfighter
+ ...
+ </pre>
+ Similar files are provided for the validation and test subsets.
+
+ Finally, `data/images_box.txt` contains the aircraft bounding
+ boxes, one per image. The bounding box is specified by four numbers:
+ *xmin*, *ymin*, *xmax* and *ymax*. The top-left pixel of an image has
+ coordinate (1,1).
+
+ # <a id=evaluation></a> Evaluation
+
+ The performance of a classifier is measured in terms of its average
+ classification accuracy, as detailed next.
+
+ ## <a id=metric></a> Evaluation metric
+
+ The output of a classification algorithm must be a list of triplets of
+ the type (*image*, *label*, *score*), where
+
+ * *image* is an image name, i.e. a seven-digit number,
+ * *label* is an image label, i.e. an aircraft model variant, family, or manufacturer, and
+ * *score* is a real number expressing the belief in the judgment.
+
+ When computing the classification accuracy, an image is assigned the
+ label contained in its highest-scoring triplet. An image that has no
+ triplets is considered unclassified and always counts as a
+ classification error (therefore it is better to guess at least one
+ label for each image rather than leaving it unclassified).
+
+ The quality of the predictions is measured in terms of *average
+ accuracy*, obtained as follows:
+
+ * The confusion matrix is square, with one row per class.
+ * Each element of the confusion matrix is the number of times aircraft
+ of a given class (specified by the row) are classified as a second
+ class (column). Ideally, the confusion matrix should be diagonal.
+ * The confusion matrix is row-normalized by the number of images of
+ the corresponding aircraft class (each row therefore sums to one if
+ there are no unclassified images).
+ * The average accuracy is computed as the average of the diagonal of
+ the confusion matrix.
+
+ There are three challenges: classifying the aircraft variant, family, and manufacturer.
+
+ ## <a id=code></a> Evaluation code
+
+ The evaluation protocol has been implemented in the MATLAB m-file
+ `evaluation.m`. This function takes the path to the `data` folder, a
+ composite name indicating the evaluation subset and challenge
+ (e.g. `'manufacturer_test'` or `'family_val'`), and the list of
+ triplets, and returns the confusion matrix. For example
+
+ <pre>
+ images = {'2074164'} ;
+ labels = {'McDonnell Douglas MD-90-30'} ;
+ scores = 1 ;
+ confusion = evaluation('/path/fgvc-aircraft/data', 'variant_test', images, labels, scores) ;
+ accuracy = mean(diag(confusion)) ;
+ </pre>
+
+ evaluates a classifier output containing exactly one triplet (image,
+ label, score), where the image is `'2074164'`, its predicted class is
+ `'McDonnell Douglas MD-90-30'`, and the score of the prediction is
+ `1`. In practice, a complete set of predictions (one for each
+ image-class pair) is usually evaluated.
+
+ See the built-in help of the `evaluation` MATLAB function for further
+ practical details. See also `example_evaluation.m` for examples of how
+ to use this function.
+
+ # <a id=ack></a> Acknowledgments
+
+ The creation of this dataset started during the *Johns Hopkins CLSP
+ Summer Workshop 2012*
+ [Towards a Detailed Understanding of Objects and Scenes in Natural Images](http://www.clsp.jhu.edu/workshops/archive/ws-12/groups/tduosn/)
+ with, in alphabetical order, Matthew B. Blaschko, Ross B. Girshick,
+ Juho Kannala, Iasonas Kokkinos, Siddharth Mahendran, Subhransu Maji,
+ Sammy Mohamed, Esa Rahtu, Naomi Saphra, Karen Simonyan, Ben Taskar,
+ Andrea Vedaldi, and David Weiss.
+
+ The CLSP workshop was supported by the National Science Foundation via
+ Grant No. 1005411, the Office of the Director of National Intelligence
+ via the JHU Human Language Technology Center of Excellence, and Google
+ Inc.
+
+ A special thanks goes to Pekka Rantalankila for helping with the
+ creation of the airplane hierarchy.
+
+ Many thanks to the photographers who kindly made their images
+ available for research purposes. Each photographer is listed below, along
+ with a link to his/her [airliners.net](http://airliners.net) page:
+
+ * [Mick Bajcar](http://www.airliners.net/profile/dendrobatid)
+ * [Aldo Bidini](http://www.airliners.net/profile/aldobid)
+ * [Wim Callaert](http://www.airliners.net/profile/minoeke)
+ * [Tommy Desmet](http://www.airliners.net/profile/tommypilot)
+ * [Thomas Posch](http://www.airliners.net/profile/snorre)
+ * [James Richard Covington](http://www.airliners.net/profile/lemonkitty)
+ * [Gerry Stegmeier](http://www.airliners.net/profile/stegi)
+ * [Ben Wang](http://www.airliners.net/profile/aal151heavy)
+ * [Darren Wilson](http://www.airliners.net/profile/dazbo5)
+ * [Konstantin von Wedelstaedt](http://www.airliners.net/profile/fly-k)
+
+ Please note that the images are made available **exclusively for
+ non-commercial research purposes**. The original authors retain the
+ copyright on the respective pictures and should be contacted for any
+ other usage of them.
+
+ # <a id=release></a> Release notes
+
+ * *FGVC-Aircraft 2013b* - The same as 2013a, but with test annotations included.
+ * *FGVC-Aircraft 2013a* - First public release of the data.
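The annotation files described above share one line format: a seven-digit image name, optionally followed by a label. A small Python sketch of a parser (the function name is hypothetical, not part of the dataset tools):

```python
def parse_annotations(path):
    """Parse an FGVC-Aircraft annotation file. Each non-empty line holds
    a seven-digit image name, optionally followed by a textual label."""
    annotations = {}
    with open(path) as f:
        for line in f:
            line = line.rstrip('\n')
            if not line:
                continue
            # Split on the first space only: labels themselves contain spaces.
            image, _, label = line.partition(' ')
            annotations[image] = label if label else None
    return annotations
```

For a file like `data/images_variant_train.txt` this yields a dict mapping image names to variant names; for `data/images_train.txt` the values are all `None`.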
evaluation.m ADDED
@@ -0,0 +1,118 @@
1
+ function [confusion, results] = evaluation(datasetPath, split, images, labels, scores)
2
+ % EVALUATION Evaluate classification results
3
+ % CONFUSION = EVALUATION(DATASETPATH, SPLIT, IMAGES, LABELS, SCORES)
4
+ % evaluate the classification results IMAGES,LABELS,SCORES on the
5
+ % data split SPLIT.
6
+ %
7
+ % IMAGES, LABELS and SCORES are one-dimensional arrays of the same length,
8
+ % specifying a number of (image,lable,score) triplets. IMAGES and
9
+ % LABELS can be either cell arrays of strings, with the name of
10
+ % images and airplane models respectively, or indexes in the list of
11
+ % images (images.txt) and evaluated airplane models
12
+ % (models_evaluated.txt) -- the latter option is generally more
13
+ % efficient. SCORES is a numeric array containing the score of the
14
+ % corresponding predictions.
15
+ %
16
+ % CONFUSION is confusion matrix, with one row
17
+ % per ground truth class and one column per estimated class. The
18
+ % average accuracy is simply the average of the diagonal of the confusion.
19
+ %
20
+ % [~,RESULTS] = EVALUATION() returns an additional struct array with
21
+ % one entry for each evaluated class. It has the following
22
+ % fields:
23
+ %
24
+ % RESULTS.RC - Recall
25
+ % RESULTS.PR - Precision
26
+ % RESULTS.TP - True positive rate
27
+ % RESULTS.TN - True negative rate
28
+ % RESULTS.AP - Average precision
29
+ % RESULTS.ROCEER - ROC Equal Error Rate
30
+
31
+ % Author: Andrea Vedaldi
32
+
33
+ % Copyright (C) 2013 Andrea Vedaldi
34
+ % This code is released in the public domain.
35
+
36
+ % Get the ground truth image list and labels for the set.
37
+
38
+ [images0, labels0] = textread(fullfile(datasetPath, ['images_' split '.txt']), '%7s%*1s%s', 'delimiter', '\n', 'whitespace', '') ;
39
+ [classes0, ~, y0] = unique(labels0) ;
40
+
41
+ % Convert character labels to indexes. Images and ground truth classes
42
+ % are assigned a number in the same order as the training data.
43
+
44
+ ok = true(size(labels)) ;
45
+ if isnumeric(labels)
46
+ y = labels ;
47
+ else
48
+ [~,y] = ismember(labels, classes0) ;
49
+ if any(y == 0)
50
+ for i = find(y == 0)
51
+ warning('Class %s not found in set of ground truth classes\n', labels{i}) ;
52
+ ok(i) = false ;
53
+ end
54
+ end
55
+ end
56
+
57
+ if isnumeric(images)
58
+ x = images ;
59
+ else
60
+ [~, x] = ismember(images, images0) ;
61
+ if any(x == 0)
62
+ for i = find(x == 0)
63
+ warning('Image %s was not found in set of ground truth images\n', images{i}) ;
64
+ ok(i) = false ;
65
+ end
66
+ end
67
+ end
68
+ y0 = y0' ;
69
+ y = y(ok)' ;
70
+ x = x(ok)' ;
71
+
72
+ numImages = numel(images0) ;
73
+ numClasses = numel(classes0) ;
74
+
75
+ fprintf('%s: %s split, %d classes, %d images\n', mfilename, split, numClasses, numImages) ;
76
+
77
+ % Iterate over predicted classes. For each, initialize all prediction
78
+ % scores for all images to -infinity. Then, replace the score for
79
+ % those image-label pairs that appear in the input.
80
+
81
+ scorem = -inf(numClasses, numImages) ;
82
+ for y1 = 1:numClasses
83
+ scorem(y1, x(y == y1)) = scores(y == y1) ;
84
+
85
+ [rc,pr,info] = vl_pr(2 * (y0 == y1) - 1, scorem(y1, :), 'IncludeInf', false) ;
86
+ results(y1).rc = rc ;
87
+ results(y1).pr = pr ;
88
+ results(y1).ap = info.ap ;
89
+
90
+ [tp,tn,info] = vl_roc(2 * (y0 == y1) - 1, scorem(y1, :), 'IncludeInf', false) ;
91
+ results(y1).tp = tp ;
92
+ results(y1).tn = tn ;
93
+
94
+ results(y1).roceer = info.eer ;
95
+ results(y1).name = classes0{y1} ;
96
+ results(y1).numGtSamples = sum(y0 == y1) ;
97
+ results(y1).numCandidates = sum(y == y1) ;
98
+
99
+ fprintf('%s: %25s [%5d gt,%5d cands] AP %5.2f%%, ROC-EER %5.2f%%\n', ...
100
+ mfilename, ...
101
+ results(y1).name, ...
102
+ results(y1).numGtSamples, ...
103
+ results(y1).numCandidates, ...
104
+ results(y1).ap * 100, ...
105
+ results(y1).roceer * 100) ;
106
+ end
107
+
108
+ confusion = zeros(numClasses) ;
109
+ [~, preds] = max([-inf(1, numImages) ; scorem]) ;
110
+ preds = preds - 1 ;
111
+
112
+ for y1 = 1:numClasses
113
+ z = accumarray(preds(preds > 0 & y0 == y1)', 1, [numClasses 1])' ;
114
+ z = z/results(y1).numGtSamples ;
115
+ confusion(y1,:) = z ;
116
+ end
117
+
118
+ fprintf('%s: mean accuracy: %.2f %%\n', mfilename, mean(diag(confusion))*100) ;
example_evaluation.m ADDED
@@ -0,0 +1,43 @@
1
+ % Demonstrates the use of the EVALUATION() function.
2
+
3
+ % choose a task-set combination
4
+ split = 'variant_test' ;
5
+ %split = 'variant_trainval' ;
6
+ %split = 'family_test' ;
7
+ %split = 'manufacturer_test' ;
8
+
9
+ switch 1
10
+ case 1
11
+ % Example 1: the evaluation set contains exactly one image-label pair
12
+ images = {'0900914'} ;
13
+ labels = {'747-400'} ;
14
+ scores = 1 ;
15
+ case 2
16
+ % Example 2: the evaluation set contains exactly all the ground truth image-label pairs (perfect
17
+ % performance).
18
+ [images, labels] = textread(fullfile('data', ['images_' split '.txt']), '%7s%*1s%s', 'delimiter', '\n', 'whitespace', '') ;
19
+ scores = ones(size(labels)) ;
20
+ case 3
21
+ % Example 3: the evaluation set contains all the possible
22
+ % image-label pairs and random scores. Numeric inputs are used
23
+ % for efficiency.
24
+ [images0, labels0] = textread(fullfile('data', ['images_' split '.txt']), '%7s%*1s%s', 'delimiter', '\n', 'whitespace', '') ;
25
+ n = numel(images0) ;
26
+ clear images labels scores ;
27
+ for ci = 1:100
28
+ images{ci} = 1:n ;
29
+ labels{ci} = repmat(ci,1,n) ;
30
+ scores{ci} = randn(1,n) ;
31
+ end
32
+ images = [images{:}] ;
33
+ labels = [labels{:}] ;
34
+ scores = [scores{:}] ;
35
+ end
36
+
37
+ [confusion, results] = evaluation('data', split, images, labels, scores) ;
38
+
39
+ figure(1) ; clf ;
40
+ imagesc(confusion) ; axis tight equal ;
41
+ xlabel('predicted') ;
42
+ ylabel('ground truth') ;
43
+ title(sprintf('mean accuracy: %.2f %%\n', mean(diag(confusion))*100)) ;
vl_argparse.m ADDED
@@ -0,0 +1,72 @@
1
+ function [conf, args] = vl_argparse(conf, args, varargin)
2
+ % VL_ARGPARSE Parse list of parameter-value pairs
3
+ % CONF = VL_ARGPARSE(CONF, ARGS) updates the structure CONF based on
4
+ % the specified parameter-value pairs ARGS={PAR1, VAL1, ... PARN,
5
+ % VALN}. The function produces an error if an unknown parameter name
6
+ % is passed in.
7
+ %
8
+ % [CONF, ARGS] = VL_ARGPARSE(CONF, ARGS) copies any parameter in
9
+ % ARGS that does not match CONF back to ARGS instead of producing an
10
+ % error.
11
+ %
12
+ % Example::
13
+ % The function can be used to parse a list of arguments
14
+ % passed to a MATLAB functions:
15
+ %
16
+ % function myFunction(x,y,z,varargin)
17
+ % conf.parameterName = defaultValue ;
18
+ % conf = vl_argparse(conf, varargin)
19
+ %
20
+ % If only a subset of the options should be parsed, for example
21
+ % because the other options are interpreted by a subroutine, then
22
+ % use the form
23
+ %
24
+ % [conf, varargin] = vl_argparse(conf, varargin)
25
+ %
26
+ % that copies back to VARARGIN any unknown parameter.
27
+ %
28
+ % See also: VL_OVERRIDE(), VL_HELP().
29
+
30
+ % Authors: Andrea Vedaldi
31
+
32
+ % Copyright (C) 2007-12 Andrea Vedaldi and Brian Fulkerson.
33
+ % All rights reserved.
34
+ %
35
+ % This file is part of the VLFeat library and is made available under
36
+ % the terms of the BSD license (see the COPYING file).
37
+
38
+ if ~isstruct(conf), error('CONF must be a structure') ; end
39
+
40
+ if length(varargin) > 0, args = {args, varargin{:}} ; end
41
+
42
+ remainingArgs = {} ;
43
+ names = fieldnames(conf) ;
44
+
45
+ if mod(length(args),2) == 1
46
+ error('Parameter-value pair expected (missing value?).') ;
47
+ end
48
+
49
+ for ai = 1:2:length(args)
50
+ paramName = args{ai} ;
51
+ if ~ischar(paramName)
52
+ error('The name of the parameter number %d is not a string.', (ai-1)/2+1) ;
53
+ end
54
+ value = args{ai+1} ;
55
+ if isfield(conf,paramName)
56
+ conf.(paramName) = value ;
57
+ else
58
+ % try case-insensitive
59
+ i = find(strcmpi(paramName, names)) ;
60
+ if isempty(i)
61
+ if nargout < 2
62
+ error('Unknown parameter ''%s''.', paramName) ;
63
+ else
64
+ remainingArgs(end+1:end+2) = args(ai:ai+1) ;
65
+ end
66
+ else
67
+ conf.(names{i}) = value ;
68
+ end
69
+ end
70
+ end
71
+
72
+ args = remainingArgs ;
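
A minimal usage sketch of the parser above (the option names `threshold` and `verbose` are made up for illustration):

```matlab
% Defaults; the field names of CONF define the accepted options.
opts.threshold = 0.5 ;
opts.verbose = false ;

% Override a subset of the defaults from a parameter-value list,
% as if the cell array came from VARARGIN.
opts = vl_argparse(opts, {'threshold', 0.9}) ;
% opts.threshold is now 0.9; opts.verbose keeps its default.
```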
vl_pr.m ADDED
@@ -0,0 +1,234 @@
1
+ function [recall, precision, info] = vl_pr(labels, scores, varargin)
2
+ %VL_PR Precision-recall curve.
3
+ % [RECALL, PRECISION] = VL_PR(LABELS, SCORES) computes the
4
+ % precision-recall (PR) curve. LABELS are the ground truth labels,
5
+ % greater than zero for a positive sample and smaller than zero for
6
+ % a negative one. SCORES are the scores of the samples obtained from
7
+ % a classifier, where larger scores should correspond to positive
8
+ % samples.
9
+ %
10
+ % Samples are ranked by decreasing scores, starting from rank 1.
11
+ % PRECISION(K) and RECALL(K) are the precision and recall when
12
+ % samples of rank smaller or equal to K-1 are predicted to be
13
+ % positive and the remaining to be negative. So for example
14
+ % PRECISION(3) is the percentage of positive samples among the two
15
+ % samples with largest score. PRECISION(1) is the precision when no
16
+ % samples are predicted to be positive and is conventionally set to
17
+ % the value 1.
18
+ %
19
+ % Set to zero the labels of samples that should be ignored in the
20
+ % evaluation. Set to -INF the scores of samples which are not
21
+ % retrieved. If there are samples with -INF score, then the PR curve
22
+ % may have maximum recall smaller than 1, unless the INCLUDEINF
23
+ % option is used (see below). The options NUMNEGATIVES and
24
+ % NUMPOSITIVES can be used to add additional surrogate samples with
25
+ % -INF score (see below).
26
+ %
27
+ % [RECALL, PRECISION, INFO] = VL_PR(...) returns an additional
28
+ % structure INFO with the following fields:
29
+ %
30
+ % info.auc::
31
+ % The area under the precision-recall curve. If the INTERPOLATE
32
+ % option is set to FALSE, then trapezoidal interpolation is used
33
+ % to integrate the PR curve. If the INTERPOLATE option is set to
34
+ % TRUE, then the curve is piecewise constant and no other
35
+ % approximation is introduced in the calculation of the area. In
36
+ % the latter case, INFO.AUC is the same as INFO.AP.
37
+ %
38
+ % info.ap::
39
+ % Average precision as defined by TREC. This is the average of the
40
+ % precision observed each time a new positive sample is
41
+ % recalled. In this calculation, any sample with -INF score
42
+ % (unless INCLUDEINF is used) and any additional positive induced
43
+ % by NUMPOSITIVES has precision equal to zero. If the INTERPOLATE
44
+ % option is set to true, the AP is computed from the interpolated
45
+ % precision and the result is the same as INFO.AUC. Note that AP
46
+ % as defined by TREC normally does not use interpolation [1].
47
+ %
48
+ % info.ap_interp_11::
49
+ % 11-points interpolated average precision as defined by TREC.
50
+ % This is the average of the maximum precision for recall levels
51
+ % greater than 0.0, 0.1, 0.2, ..., 1.0. This measure was used in
52
+ % the PASCAL VOC challenge up to the 2008 edition.
53
+ %
54
+ % info.auc_pa08::
55
+ % Deprecated. It is the same as INFO.AP_INTERP_11.
56
+ %
57
+ % VL_PR(...) with no output arguments plots the PR curve in the
58
+ % current axis.
59
+ %
60
+ % VL_PR() accepts the following options:
61
+ %
62
+ % Interpolate:: false
63
+ % If set to true, use interpolated precision. The interpolated
64
+ % precision is defined as the maximum precision for a given recall
65
+ % level and onwards. Here it is implemented as the cumulative
66
+ % maximum from low to high scores of the precision.
67
+ %
68
+ % NumPositives:: []
69
+ % NumNegatives:: []
70
+ % If set to a number, pretend that LABELS contains this many
71
+ % positive/negative labels. NUMPOSITIVES/NUMNEGATIVES cannot be
72
+ % smaller than the actual number of positive/negative entries in
73
+ % LABELS. The additional positive/negative labels are appended to
74
+ % the end of the sequence, as if they had -INF scores (not
75
+ % retrieved). This is useful to evaluate large retrieval systems
76
+ % for which one stores only a handful of top results for efficiency
77
+ % reasons.
78
+ %
79
+ % IncludeInf:: false
80
+ % If set to true, data with -INF score SCORES is included in the
81
+ % evaluation and the maximum recall is 1 even if -INF scores are
82
+ % present. This option does not include any additional positive or
83
+ % negative data introduced by specifying NUMPOSITIVES and
84
+ % NUMNEGATIVES.
85
+ %
86
+ % Stable:: false
87
+ % If set to true, RECALL and PRECISION are returned the same order
88
+ % of LABELS and SCORES rather than being sorted by decreasing
89
+ % score (increasing recall). Samples with -INF scores are assigned
90
+ % RECALL and PRECISION equal to NaN.
91
+ %
92
+ % NormalizePrior:: []
93
+ % If set to a scalar, reweights positive and negative labels so
94
+ % that the fraction of positive ones is equal to the specified
95
+ % value. This computes the normalised PR curves of [2].
96
+ %
97
+ % About the PR curve::
98
+ % This section uses the same symbols used in the documentation of
99
+ % the VL_ROC() function. In addition to those quantities, define:
100
+ %
101
+ % PRECISION(S) = TP(S) / (TP(S) + FP(S))
102
+ % RECALL(S) = TPR(S) = TP(S) / P
103
+ %
104
+ % The precision is the fraction of positive predictions which are
105
+ % correct, and the recall is the fraction of positive labels that
106
+ % have been correctly classified (recalled). Notice that the recall
107
+ % is also equal to the true positive rate for the ROC curve (see
108
+ % VL_ROC()).
109
+ %
110
+ % REFERENCES:
111
+ % [1] C. D. Manning, P. Raghavan, and H. Schutze. An Introduction to
112
+ % Information Retrieval. Cambridge University Press, 2008.
113
+ % [2] D. Hoiem, Y. Chodpathumwan, and Q. Dai. Diagnosing error in
114
+ % object detectors. In Proc. ECCV, 2012.
115
+ %
116
+ % See also VL_ROC(), VL_HELP().
117
+
118
+ % Author: Andrea Vedaldi
119
+
120
+ % Copyright (C) 2007-12 Andrea Vedaldi and Brian Fulkerson.
121
+ % All rights reserved.
122
+ %
123
+ % This file is part of the VLFeat library and is made available under
124
+ % the terms of the BSD license (see the COPYING file).
125
+
126
+ % TP and FP are the vectors of true positive and false positive label
127
+ % counts for decreasing scores, P and N are the total number of
128
+ % positive and negative labels. Note that if certain options are used
129
+ % some labels may actually not be stored explicitly by LABELS, so P+N
130
+ % can be larger than the number of element of LABELS.
131
+
132
+ [tp, fp, p, n, perm, varargin] = vl_tpfp(labels, scores, varargin{:}) ;
133
+ opts.stable = false ;
134
+ opts.interpolate = false ;
135
+ opts.normalizePrior = [] ;
136
+ opts = vl_argparse(opts,varargin) ;
137
+
138
+ % compute precision and recall
139
+ small = 1e-10 ;
140
+ recall = tp / max(p, small) ;
141
+ if isempty(opts.normalizePrior)
142
+ precision = max(tp, small) ./ max(tp + fp, small) ;
143
+ else
144
+ a = opts.normalizePrior ;
145
+ precision = max(tp * a/max(p,small), small) ./ ...
146
+ max(tp * a/max(p,small) + fp * (1-a)/max(n,small), small) ;
147
+ end
148
+
149
+ % interpolate precision if needed
150
+ if opts.interpolate
151
+ precision = fliplr(vl_cummax(fliplr(precision))) ;
152
+ end
153
+
154
+ % --------------------------------------------------------------------
155
+ % Additional info
156
+ % --------------------------------------------------------------------
157
+
158
+ if nargout > 2 || nargout == 0
159
+
160
+ % area under the curve using trapezoid interpolation
161
+ if ~opts.interpolate
162
+ if numel(precision) > 1
163
+ info.auc = 0.5 * sum((precision(1:end-1) + precision(2:end)) .* diff(recall)) ;
164
+ else
165
+ info.auc = 0 ;
166
+ end
167
+ end
168
+
169
+ % average precision (for each recalled positive sample)
170
+ sel = find(diff(recall)) + 1 ;
171
+ info.ap = sum(precision(sel)) / p ;
172
+ if opts.interpolate
173
+ info.auc = info.ap ;
174
+ end
175
+
176
+ % TREC 11 points average interpolated precision
177
+ info.ap_interp_11 = 0.0 ;
178
+ for rc = linspace(0,1,11)
179
+ pr = max([0, precision(recall >= rc)]) ;
180
+ info.ap_interp_11 = info.ap_interp_11 + pr / 11 ;
181
+ end
182
+
183
+ % legacy definition
184
+ info.auc_pa08 = info.ap_interp_11 ;
185
+ end
186
+
187
+ % --------------------------------------------------------------------
188
+ % Plot
189
+ % --------------------------------------------------------------------
190
+
191
+ if nargout == 0
192
+ cla ; hold on ;
193
+ plot(recall,precision,'linewidth',2) ;
194
+ if isempty(opts.normalizePrior)
195
+ randomPrecision = p / (p + n) ;
196
+ else
197
+ randomPrecision = opts.normalizePrior ;
198
+ end
199
+ spline([0 1], [1 1] * randomPrecision, 'r--', 'linewidth', 2) ;
200
+ axis square ; grid on ;
201
+ xlim([0 1]) ; xlabel('recall') ;
202
+ ylim([0 1]) ; ylabel('precision') ;
203
+ title(sprintf('PR (AUC: %.2f%%, AP: %.2f%%, AP11: %.2f%%)', ...
204
+ info.auc * 100, ...
205
+ info.ap * 100, ...
206
+ info.ap_interp_11 * 100)) ;
207
+ if opts.interpolate
208
+ legend('PR interp.', 'PR rand.', 'Location', 'SouthEast') ;
209
+ else
210
+ legend('PR', 'PR rand.', 'Location', 'SouthEast') ;
211
+ end
212
+ clear recall precision info ;
213
+ end
214
+
215
+ % --------------------------------------------------------------------
216
+ % Stable output
217
+ % --------------------------------------------------------------------
218
+
219
+ if opts.stable
220
+ precision(1) = [] ;
221
+ recall(1) = [] ;
222
+ precision_ = precision ;
223
+ recall_ = recall ;
224
+ precision = NaN(size(precision)) ;
225
+ recall = NaN(size(recall)) ;
226
+ precision(perm) = precision_ ;
227
+ recall(perm) = recall_ ;
228
+ end
229
+
230
+ % --------------------------------------------------------------------
231
+ function h = spline(x,y,spec,varargin)
232
+ % --------------------------------------------------------------------
233
+ prop = vl_linespec2prop(spec) ;
234
+ h = line(x,y,prop{:},varargin{:}) ;
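
A toy call to VL_PR(), with made-up labels and scores, illustrating the ranking convention described in the help text:

```matlab
% Three positive and two negative samples, already ranked by score.
labels = [+1 +1 -1 +1 -1] ;
scores = [0.9 0.8 0.7 0.6 0.5] ;
[recall, precision, info] = vl_pr(labels, scores) ;
% Positives are recalled at ranks 1, 2 and 4, so the TREC-style
% average precision should be (1/1 + 2/2 + 3/4) / 3, i.e. about 0.92.
```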
vl_roc.m ADDED
@@ -0,0 +1,234 @@
1
+ function [tpr,tnr,info] = vl_roc(labels, scores, varargin)
2
+ %VL_ROC ROC curve.
3
+ % [TPR,TNR] = VL_ROC(LABELS, SCORES) computes the Receiver Operating
4
+ % Characteristic (ROC) curve. LABELS are the ground truth labels,
5
+ % greater than zero for a positive sample and smaller than zero for
6
+ % a negative one. SCORES are the scores of the samples obtained from
7
+ % a classifier, where larger scores should correspond to positive
8
+ % labels.
9
+ %
10
+ % Samples are ranked by decreasing scores, starting from rank 1.
11
+ % TPR(K) and TNR(K) are the true positive and true negative rates
12
+ % when samples of rank smaller or equal to K-1 are predicted to be
13
+ % positive. So for example TPR(3) is the true positive rate when the
14
+ % two samples with largest score are predicted to be
15
+ % positive. Similarly, TPR(1) is the true positive rate when no
16
+ % samples are predicted to be positive, i.e. the constant 0.
17
+ %
18
+ % Set to zero the labels of samples that should be ignored in the
19
+ % evaluation. Set to -INF the scores of samples which are not
20
+ % retrieved. If there are samples with -INF score, then the ROC curve
21
+ % may have maximum TPR and TNR smaller than 1.
22
+ %
23
+ % [TPR,TNR,INFO] = VL_ROC(...) returns an additional structure INFO
24
+ % with the following fields:
25
+ %
26
+ % info.auc:: Area under the ROC curve (AUC).
27
+ % The ROC curve has a `staircase shape' because for each sample
28
+ % only TP or TN changes, but not both at the same time. Therefore
29
+ % there is no approximation involved in the computation of the
30
+ % area.
31
+ %
32
+ % info.eer:: Equal error rate (EER).
33
+ % The equal error rate is the value of FPR (or FNR) when the ROC
34
+ % curves intersects the line connecting (0,0) to (1,1).
35
+ %
36
+ % VL_ROC(...) with no output arguments plots the ROC curve in the
37
+ % current axis.
38
+ %
39
+ % VL_ROC() accepts the following options:
40
+ %
41
+ % Plot:: []
42
+ % Setting this option turns on plotting unconditionally. The
43
+ % following plot variants are supported:
44
+ %
45
+ % tntp:: Plot TPR against TNR (standard ROC plot).
46
+ % tptn:: Plot TNR against TPR (recall on the horizontal axis).
47
+ % fptp:: Plot TPR against FPR.
48
+ % fpfn:: Plot FNR against FPR (similar to DET curve).
49
+ %
50
+ % NumPositives:: []
51
+ % NumNegatives:: []
52
+ % If set to a number, pretend that LABELS contains this many
53
+ % positive/negative labels. NUMPOSITIVES/NUMNEGATIVES cannot be
54
+ % smaller than the actual number of positive/negative entries in
55
+ % LABELS. The additional positive/negative labels are appended to
56
+ % the end of the sequence, as if they had -INF scores (not
57
+ % retrieved). This is useful to evaluate large retrieval systems in
58
+ % which one stores only a handful of top results for efficiency
59
+ % reasons.
60
+ %
61
+ % About the ROC curve::
62
+ % Consider a classifier that predicts as positive all samples whose
63
+ % score is not smaller than a threshold S. The ROC curve represents
64
+ % the performance of such classifier as the threshold S is
65
+ % changed. Formally, define
66
+ %
67
+ % P = overall num. of positive samples,
68
+ % N = overall num. of negative samples,
69
+ %
70
+ % and for each threshold S
71
+ %
72
+ % TP(S) = num. of samples that are correctly classified as positive,
73
+ % TN(S) = num. of samples that are correctly classified as negative,
74
+ % FP(S) = num. of samples that are incorrectly classified as positive,
75
+ % FN(S) = num. of samples that are incorrectly classified as negative.
76
+ %
77
+ % Consider also the rates:
78
+ %
79
+ % TPR = TP(S) / P, FNR = FN(S) / P,
80
+ % TNR = TN(S) / N, FPR = FP(S) / N,
81
+ %
82
+ % and notice that by definition
83
+ %
84
+ % P = TP(S) + FN(S) , N = TN(S) + FP(S),
85
+ % 1 = TPR(S) + FNR(S), 1 = TNR(S) + FPR(S).
86
+ %
87
+ % The ROC curve is the parametric curve (TPR(S), TNR(S)) obtained
88
+ % as the classifier threshold S is varied in the reals. The TPR is
89
+ % also known as recall (see VL_PR()).
90
+ %
91
+ % The ROC curve is contained in the square with vertices (0,0)
92
+ % and (1,1). The (average) ROC curve of a random classifier is a line which
93
+ % connects (1,0) and (0,1).
94
+ %
95
+ % The ROC curve is independent of the prior probability of the
96
+ % labels (i.e. of P/(P+N) and N/(P+N)).
97
+ %
98
+ % REFERENCES:
99
+ % [1] http://en.wikipedia.org/wiki/Receiver_operating_characteristic
100
+ %
101
+ % See also: VL_PR(), VL_DET(), VL_HELP().
102
+
103
+ % Copyright (C) 2007-12 Andrea Vedaldi and Brian Fulkerson.
104
+ % All rights reserved.
105
+ %
106
+ % This file is part of the VLFeat library and is made available under
107
+ % the terms of the BSD license (see the COPYING file).
108
+
109
+ [tp, fp, p, n, perm, varargin] = vl_tpfp(labels, scores, varargin{:}) ;
110
+ opts.plot = [] ;
111
+ opts.stable = false ;
112
+ opts = vl_argparse(opts,varargin) ;
113
+
114
+ % compute the rates
115
+ small = 1e-10 ;
116
+ tpr = tp / max(p, small) ;
117
+ fpr = fp / max(n, small) ;
118
+ fnr = 1 - tpr ;
119
+ tnr = 1 - fpr ;
120
+
121
+ % --------------------------------------------------------------------
122
+ % Additional info
123
+ % --------------------------------------------------------------------
124
+
125
+ if nargout > 2 || nargout == 0
126
+ % Area under the curve. Since the curve is a staircase (in the
127
+ % sense that for each sample either tn is decremented by one
128
+ % or tp is incremented by one but the other remains fixed),
129
+ % the integral is particularly simple and exact.
130
+
131
+ info.auc = sum(tnr .* diff([0 tpr])) ;
132
+
133
+ % Equal error rate. One must find the index S for which there is a
134
+ % crossing between TNR(S) and TPR(s). If such a crossing exists,
135
+ % there are two cases:
136
+ %
137
+ % o tnr o
138
+ % / \
139
+ % 1-eer = tnr o-x-o 1-eer = tpr o-x-o
140
+ % / \
141
+ % tpr o o
142
+ %
143
+ % Moreover, if the maximum TPR is smaller than 1, then it is
144
+ % possible that neither of the two cases realizes (then EER=NaN).
145
+
146
+ s = max(find(tnr > tpr)) ;
147
+ if s == length(tpr)
148
+ info.eer = NaN ;
149
+ else
150
+ if tpr(s) == tpr(s+1)
151
+ info.eer = 1 - tpr(s) ;
152
+ else
153
+ info.eer = 1 - tnr(s) ;
154
+ end
155
+ end
156
+ end
157
+
158
+ % --------------------------------------------------------------------
159
+ % Plot
160
+ % --------------------------------------------------------------------
161
+
162
+ if ~isempty(opts.plot) || nargout == 0
163
+ if isempty(opts.plot), opts.plot = 'fptp' ; end
164
+ cla ; hold on ;
165
+ switch lower(opts.plot)
166
+ case {'truenegatives', 'tn', 'tntp'}
167
+ hroc = plot(tnr, tpr, 'b', 'linewidth', 2) ;
168
+ hrand = spline([0 1], [1 0], 'r--', 'linewidth', 2) ;
169
+ spline([0 1], [0 1], 'k--', 'linewidth', 1) ;
170
+ plot(1-info.eer, 1-info.eer, 'k*', 'linewidth', 1) ;
171
+ xlabel('true negative rate') ;
172
+ ylabel('true positive rate (recall)') ;
173
+ loc = 'sw' ;
174
+
175
+ case {'falsepositives', 'fp', 'fptp'}
176
+ hroc = plot(fpr, tpr, 'b', 'linewidth', 2) ;
177
+ hrand = spline([0 1], [0 1], 'r--', 'linewidth', 2) ;
178
+ spline([1 0], [0 1], 'k--', 'linewidth', 1) ;
179
+ plot(info.eer, 1-info.eer, 'k*', 'linewidth', 1) ;
180
+ xlabel('false positive rate') ;
181
+ ylabel('true positive rate (recall)') ;
182
+ loc = 'se' ;
183
+
184
+ case {'tptn'}
185
+ hroc = plot(tpr, tnr, 'b', 'linewidth', 2) ;
186
+ hrand = spline([0 1], [1 0], 'r--', 'linewidth', 2) ;
187
+ spline([0 1], [0 1], 'k--', 'linewidth', 1) ;
188
+ plot(1-info.eer, 1-info.eer, 'k*', 'linewidth', 1) ;
189
+ xlabel('true positive rate (recall)') ;
190
+ ylabel('false positive rate') ;
191
+ loc = 'sw' ;
192
+
193
+ case {'fpfn'}
194
+ hroc = plot(fpr, fnr, 'b', 'linewidth', 2) ;
195
+ hrand = spline([0 1], [1 0], 'r--', 'linewidth', 2) ;
196
+ spline([0 1], [0 1], 'k--', 'linewidth', 1) ;
197
+ plot(info.eer, info.eer, 'k*', 'linewidth', 1) ;
198
+ xlabel('false positive (false alarm) rate') ;
199
+ ylabel('false negative (miss) rate') ;
200
+ loc = 'ne' ;
201
+
202
+ otherwise
203
+ error('''%s'' is not a valid PLOT type.', opts.plot);
204
+ end
205
+
206
+ grid on ;
207
+ xlim([0 1]) ;
208
+ ylim([0 1]) ;
209
+ axis square ;
210
+ title(sprintf('ROC (AUC: %.2f%%, EER: %.2f%%)', info.auc * 100, info.eer * 100), ...
211
+ 'interpreter', 'none') ;
212
+ legend([hroc hrand], 'ROC', 'ROC rand.', 'location', loc) ;
213
+ end
214
+
215
+ % --------------------------------------------------------------------
216
+ % Stable output
217
+ % --------------------------------------------------------------------
218
+
219
+ if opts.stable
220
+ tpr(1) = [] ;
221
+ tnr(1) = [] ;
222
+ tpr_ = tpr ;
223
+ tnr_ = tnr ;
224
+ tpr = NaN(size(tpr)) ;
225
+ tnr = NaN(size(tnr)) ;
226
+ tpr(perm) = tpr_ ;
227
+ tnr(perm) = tnr_ ;
228
+ end
229
+
230
+ % --------------------------------------------------------------------
231
+ function h = spline(x,y,spec,varargin)
232
+ % --------------------------------------------------------------------
233
+ prop = vl_linespec2prop(spec) ;
234
+ h = line(x,y,prop{:},varargin{:}) ;
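
A companion sketch for VL_ROC(), using the same made-up data as the VL_PR() example above:

```matlab
labels = [+1 +1 -1 +1 -1] ;
scores = [0.9 0.8 0.7 0.6 0.5] ;
[tpr, tnr, info] = vl_roc(labels, scores) ;
% TPR rises from 0 to 1 as positives are retrieved while TNR drops
% from 1 to 0; info.auc is the exact area under the staircase curve
% and info.eer the equal error rate.
```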
vl_tpfp.m ADDED
@@ -0,0 +1,62 @@
1
+ function [tp, fp, p, n, perm, varargin] = vl_tpfp(labels, scores, varargin)
2
+ % VL_TPFP Compute true positives and false positives
3
+ % This is a helper function used by VL_PR(), VL_ROC(), VL_DET().
4
+ %
5
+ % See also: VL_PR(), VL_ROC(), VL_DET(), VL_HELP().
6
+
7
+ % Author: Andrea Vedaldi
8
+
9
+ % Copyright (C) 2007-12 Andrea Vedaldi and Brian Fulkerson.
10
+ % All rights reserved.
11
+ %
12
+ % This file is part of the VLFeat library and is made available under
13
+ % the terms of the BSD license (see the COPYING file).
14
+
15
+ % extraNeg and extraPos depend on numNegatives and numPositives:
16
+ %
17
+ % [ labels | -1 +1 ]
18
+ % [ +inf | finite scores | -inf | extraNeg extraPos]
19
+
20
+ opts.includeInf = false ;
21
+ opts.numNegatives = [] ;
22
+ opts.numPositives = [] ;
23
+ [opts, varargin] = vl_argparse(opts, varargin) ;
24
+
25
+ % make row vectors
26
+ labels = labels(:)' ;
27
+ scores = scores(:)' ;
28
+
29
+ % count labels
30
+ p = sum(labels > 0) ;
31
+ n = sum(labels < 0) ;
32
+
33
+ if ~isempty(opts.numPositives)
34
+ if opts.numPositives < p
35
+ warning('NUMPOSITIVES is smaller than the number of positives in LABELS.') ;
36
+ end
37
+ p = opts.numPositives ;
38
+ end
39
+
40
+ if ~isempty(opts.numNegatives)
41
+ if opts.numNegatives < n
42
+ warning('NUMNEGATIVES is smaller than the number of negatives in LABELS.') ;
43
+ end
44
+ n = opts.numNegatives ;
45
+ end
46
+
47
+ % sort by descending scores
48
+ [scores, perm] = sort(scores, 'descend') ;
49
+
50
+ % assume that data with -INF score is never retrieved
51
+ if opts.includeInf
52
+ stop = length(scores) ;
53
+ else
54
+ stop = max(find(scores > -inf)) ;
55
+ end
56
+ perm = perm(1:stop) ;
57
+ labels = labels(perm) ;
58
+
59
+ % accumulate true positives and false positives by scores
60
+ % in descending order
61
+ tp = [0 cumsum(labels > 0)] ;
62
+ fp = [0 cumsum(labels < 0)] ;
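
A small sketch of what VL_TPFP() returns, with made-up inputs; TP and FP are the cumulative label counts computed above, prefixed with a zero for the empty prediction set:

```matlab
labels = [+1 -1 +1] ;
scores = [0.3 0.2 0.1] ;
[tp, fp, p, n] = vl_tpfp(labels, scores) ;
% With the samples already in descending score order, the cumulative
% sums give tp = [0 1 1 2], fp = [0 0 1 1], with p = 2 and n = 1.
```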