<!DOCTYPE group PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<group>
<p>The <em>Scale-Invariant Feature Transform (SIFT)</em> bundles a
feature detector and a feature descriptor. The detector extracts from
an image a number of frames (attributed regions) in a way that is
consistent with (some) variations of the illumination, viewpoint and
other viewing conditions. The descriptor associates to the regions a
signature which identifies their appearance compactly and
robustly. For a more in-depth description of the algorithm, see our
<a href="%pathto:root;api/sift_8h.html">API reference for SIFT</a>.</p>
<ul>
<li><a href="%pathto:tut.sift.extract;">Extracting frames and descriptors</a></li>
<li><a href="%pathto:tut.sift.match;">Basic matching</a></li>
<li><a href="%pathto:tut.sift.param;">Detector parameters</a></li>
<li><a href="%pathto:tut.sift.custom;">Custom frames</a></li>
<li><a href="%pathto:tut.sift.conventions;">Conventions</a></li>
<li><a href="%pathto:tut.sift.ubc;">Comparison with D. Lowe's SIFT</a></li>
</ul>
<!-- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -->
<h1 id="tut.sift.extract">Extracting frames and descriptors</h1>
<!-- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -->
<p>Both the detector and the descriptor are accessible through
the <code>vl_sift</code> MATLAB command (a similar command
line utility is also available). Open MATLAB and load a test image:</p>
<pre>
pfx = fullfile(vl_root,'data','a.jpg') ;
I = imread(pfx) ;
image(I) ;
</pre>
<div class="figure">
<img src="%pathto:root;demo/sift_basic_0.jpg"/>
<div class="caption">
<span class="content">
Input image.
</span>
</div>
</div>
<p>The <code>vl_sift</code> command requires a single-precision
grayscale image. It also expects the range to be normalized to the
[0,255] interval (while this is not strictly required, the default
values of some internal thresholds are tuned for this case). The image
<code>I</code> is converted to the appropriate format by</p>
<pre>
I = single(rgb2gray(I)) ;
</pre>
<p>We compute the SIFT frames (keypoints) and descriptors by</p>
<pre>
[f,d] = vl_sift(I) ;
</pre>
<p>The matrix <code>f</code> has a column for each frame. A frame is a
disk with center <code>f(1:2)</code>, scale <code>f(3)</code> and
orientation <code>f(4)</code>. We visualize a random selection of 50
features by:</p>
<pre>
perm = randperm(size(f,2)) ;
sel = perm(1:50) ;
h1 = vl_plotframe(f(:,sel)) ;
h2 = vl_plotframe(f(:,sel)) ;
set(h1,'color','k','linewidth',3) ;
set(h2,'color','y','linewidth',2) ;
</pre>
<div class="figure">
<img src="%pathto:root;demo/sift_basic_2.jpg"/>
<div class="caption">
Some of the detected SIFT frames.
</div>
</div>
<p>We can also overlay the descriptors by</p>
<pre>
h3 = vl_plotsiftdescriptor(d(:,sel),f(:,sel)) ;
set(h3,'color','g') ;
</pre>
<div class="figure">
<img src="%pathto:root;demo/sift_basic_3.jpg"/>
<div class="caption">
Some of the computed SIFT descriptors.
</div>
</div>
<!-- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -->
<h1 id="tut.sift.match">Basic matching</h1>
<!-- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -->
<p>SIFT descriptors are often used to find similar regions in two
images. <code>vl_ubcmatch</code> implements a basic matching
algorithm. Let <code>Ia</code> and <code>Ib</code> be images of the
same object or scene. We extract and match the descriptors by:
</p>
<pre>
[fa, da] = vl_sift(Ia) ;
[fb, db] = vl_sift(Ib) ;
[matches, scores] = vl_ubcmatch(da, db) ;
</pre>
<div class="figure">
<img src="%pathto:root;demo/sift_match_1.jpg"/>
<div class="caption">
Matching of SIFT descriptors with <code>vl_ubcmatch</code>. Notice
that SIFT is able to cope with a large change in scale and image
rotation.
</div>
</div>
<p>
For each descriptor in <code>da</code>, <code>vl_ubcmatch</code> finds
the closest descriptor in <code>db</code> (as measured by the L2 norm
of the difference between them). Each column of <code>matches</code>
stores the indices of a matching pair of descriptors, and the
corresponding entry of <code>scores</code> stores the distance between
them.
</p>
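<p>For instance, the matched pairs can be visualized by plotting lines
between the centers of corresponding frames. This sketch assumes that
the two images have been placed side by side and reuses the variables
<code>fa</code>, <code>fb</code> and <code>matches</code> computed
above:</p>
<pre>
xa = fa(1,matches(1,:)) ;
xb = fb(1,matches(2,:)) + size(Ia,2) ;
ya = fa(2,matches(1,:)) ;
yb = fb(2,matches(2,:)) ;
hold on ;
h = line([xa ; xb], [ya ; yb]) ;
set(h,'linewidth', 1, 'color', 'b') ;
</pre>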
<p>
Matches can also be filtered for uniqueness by passing a third
parameter to <code>vl_ubcmatch</code> which specifies a threshold.
Here, the uniqueness of a pair is measured as the ratio between the
distance to the best matching keypoint and the distance to the second
best one (see <code>vl_ubcmatch</code> for further details).
</p>
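<p>For instance, the following call keeps only pairs whose best match
is at least twice as close as the second best one (the threshold value
<code>2.0</code> here is purely illustrative):</p>
<pre>
[matches, scores] = vl_ubcmatch(da, db, 2.0) ;
</pre>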
<!-- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -->
<h1 id="tut.sift.param">Detector parameters</h1>
<!-- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -->
<p>The SIFT detector is controlled mainly by two parameters: the peak
threshold and the (non) edge threshold.</p>
<p>The <em>peak threshold</em> filters peaks of the DoG scale space
that are too small (in absolute value). For instance, consider a
test image of 2D Gaussian blobs:</p>
<pre><![CDATA[
I = double(rand(100,500) <= .005) ;
I = (ones(100,1) * linspace(0,1,500)) .* I ;
I(:,1) = 0 ; I(:,end) = 0 ;
I(1,:) = 0 ; I(end,:) = 0 ;
I = 2*pi*4^2 * vl_imsmooth(I,4) ;
I = single(255 * I) ;]]>
</pre>
<div class="figure">
<img src="%pathto:root;demo/sift_peak_0.jpg"/>
<div class="caption">
A test image for the peak threshold parameter.
</div>
</div>
<p>We run the detector with peak threshold <code>peak_thresh</code> by</p>
<pre>
f = vl_sift(I, 'PeakThresh', peak_thresh) ;
</pre>
<p>obtaining fewer features as <code>peak_thresh</code> is increased.</p>
<div class="figure">
<img src="%pathto:root;demo/sift_peak_1.jpg"/>
<img src="%pathto:root;demo/sift_peak_2.jpg"/>
<img src="%pathto:root;demo/sift_peak_3.jpg"/>
<img src="%pathto:root;demo/sift_peak_4.jpg"/>
<div class="caption">
<span class="content">
Detected frames for increasing peak threshold.<br/>
From top: <code>peak_thresh = {0, 10, 20, 30}</code>.
</span>
</div>
</div>
<p>The <em>edge threshold</em> eliminates peaks of
the DoG scale space whose curvature is too small (such peaks yield
badly localized frames). For instance, consider the test image</p>
<pre>
I = zeros(100,500) ;
for i=[10 20 30 40 50 60 70 80 90]
I(50-round(i/3):50+round(i/3),i*5) = 1 ;
end
I = 2*pi*8^2 * vl_imsmooth(I,8) ;
I = single(255 * I) ;
</pre>
<div class="figure">
<img src="%pathto:root;demo/sift_edge_0.jpg"/>
<div class="caption">
<span class="content">
A test image for the edge threshold parameter.
</span>
</div>
</div>
<p>We run the detector with edge threshold <code>edge_thresh</code> by</p>
<pre>
f = vl_sift(I, 'EdgeThresh', edge_thresh) ;
</pre>
<p>obtaining more features as <code>edge_thresh</code> is increased:</p>
<div class="figure">
<img src="%pathto:root;demo/sift_edge_1.jpg"/>
<img src="%pathto:root;demo/sift_edge_2.jpg"/>
<img src="%pathto:root;demo/sift_edge_3.jpg"/>
<img src="%pathto:root;demo/sift_edge_4.jpg"/>
<div class="caption">
<span class="content">
Detected frames for increasing edge threshold.<br/>
From top: <code>edge_thresh = {3.5, 5, 7.5, 10}</code>.
</span>
</div>
</div>
<!-- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -->
<h1 id="tut.sift.custom">Custom frames</h1>
<!-- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -->
<p>The MATLAB command <code>vl_sift</code> (and the command line utility)
can bypass the detector and compute the descriptor on custom frames using
the <code>Frames</code> option.</p>
<p>For instance, we can compute the descriptor of a SIFT frame
centered at position <code>(100,100)</code>, of scale <code>10</code>
and orientation <code>-pi/8</code> by</p>
<pre>
fc = [100;100;10;-pi/8] ;
[f,d] = vl_sift(I,'frames',fc) ;
</pre>
<div class="figure">
<img src="%pathto:root;demo/sift_basic_4.jpg"/>
<div class="caption">
<span class="content">
Custom frame with fixed orientation.
</span>
</div>
</div>
<p>Multiple frames <code>fc</code> may be specified as well. In this
case they are re-ordered by increasing
scale. The <code>Orientations</code> option instructs the program to
use the custom position and scale but to compute the keypoint
orientations, as in</p>
<pre>
fc = [100;100;10;0] ;
[f,d] = vl_sift(I,'frames',fc,'orientations') ;
</pre>
<div class="figure">
<img src="%pathto:root;demo/sift_basic_5.jpg"/>
<div class="caption">
<span class="content">
Custom frame with computed orientations.
</span>
</div>
</div>
<p>Notice that, depending on the local appearance, a keypoint may
have <em>multiple</em> orientations. Moreover, a keypoint computed on
a constant image region (such as a one pixel region) has no
orientations!</p>
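<p>As a consequence, when the <code>Orientations</code> option is
used, the number of returned frames may differ from the number of
input frames. A quick way to check this (reusing the custom frame
<code>fc</code> above) is:</p>
<pre>
[f,d] = vl_sift(I,'frames',fc,'orientations') ;
size(fc,2)  % number of input frames
size(f,2)   % may be larger (multiple orientations) or smaller (none)
</pre>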
<!-- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -->
<h1 id="tut.sift.conventions">Conventions</h1>
<!-- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -->
<p>In our implementation, SIFT frames are expressed in the standard
image reference system. The only difference between the command line
and MATLAB drivers is that the latter assumes that the image origin
(top-left corner) has coordinates (1,1), as opposed to (0,0). Lowe's
original implementation uses a different reference system, illustrated
next:</p>
<div class="figure">
<img src="%pathto:root;figures/sift-conv-vlfeat.png"/> <br/>
<img src="%pathto:root;figures/sift-conv.png"/>
<div class="caption">
<span class="content">
Our conventions (top) compared to Lowe's (bottom).
</span>
</div>
</div>
<p>Our implementation uses the standard image reference system, with
the <code>y</code> axis pointing downward. The frame
orientation <code>θ</code> and the descriptor use the same reference
system (i.e. a small positive rotation of the <code>x</code> axis
moves it towards the <code>y</code> axis). Recall that each descriptor
element is a bin indexed by <code>(θ,x,y)</code>; the histogram is
vectorized in such a way that <code>θ</code> is the fastest
varying index and <code>y</code> the slowest.</p>
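<p>As an illustrative sketch (assuming the default descriptor geometry
of 4x4 spatial bins and 8 orientation bins, for 128 elements in
total), the descriptor element for the bin with zero-based indices
<code>t</code> (orientation), <code>i</code> (<code>x</code>) and
<code>j</code> (<code>y</code>) is:</p>
<pre>
% t in 0..7, i in 0..3, j in 0..3 ; theta fastest, y slowest
elem = d(1 + t + 8*i + 8*4*j) ;
</pre>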
<p>By comparison, D. Lowe's implementation (see the bottom half of the
figure) uses a slightly different convention: frame centers are
expressed relative to the standard image reference system, but the
frame orientation and the descriptor assume that the <em>y</em>
axis points upward. Consequently, to map from our convention to
D. Lowe's, frame orientations need to be negated and the descriptor
elements must be re-arranged.</p>
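<p>A minimal sketch of the frame part of this mapping (the
re-arrangement of the descriptor elements is not shown; see
<code>vl_ubcread</code> for a complete conversion):</p>
<pre>
f_ubc      = f ;
f_ubc(4,:) = - f(4,:) ;  % y axis pointing up flips the angle sign
</pre>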
<!-- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -->
<h1 id="tut.sift.ubc">Comparison with D. Lowe's SIFT</h1>
<!-- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -->
<p>The VLFeat SIFT implementation is largely compatible with the
<a href="http://www.cs.ubc.ca/~lowe/keypoints/">UBC (D. Lowe's)
implementation</a> (note, however, that the keypoints are stored in a
slightly different format; see <code>vl_ubcread</code>). The following
figure compares SIFT keypoints computed by the VLFeat (blue) and UBC
(red) implementations.</p>
<div class="figure">
<img src="%pathto:root;demo/sift_vs_ubc_1.jpg"/>
<div class="caption">
<span class="content">
VLFeat keypoints (blue) superimposed on D. Lowe's keypoints
(red). Most keypoints match nearly exactly.
</span>
</div>
</div>
<p>The large majority of keypoints correspond nearly exactly. The
following figure shows the percentage of keypoints computed by the two
implementations whose center matches with a precision of at least 0.01
pixels and 0.05 pixels respectively.</p>
<div class="figure">
<img src="%pathto:root;demo/sift_vs_ubc_2.jpg"/>
<div class="caption">
<span class="content">
Percentage of keypoints obtained from the VLFeat and UBC implementations
that match up to 0.01 pixels and 0.05 pixels.
</span>
</div>
</div>
<p>Descriptors are also very similar. The following figure shows the
percentage of descriptors computed by the two implementations whose
distance is less than 5%, 10% and 20% of the average descriptor
distance.</p>
<div class="figure">
<img src="%pathto:root;demo/sift_vs_ubc_3.jpg"/>
<div class="caption">
<span class="content">
Percentage of descriptors obtained from the VLFeat and UBC
implementations whose distance is within 10% and 20% of the average
descriptor distance.
</span>
</div>
</div>
</group>