
Unfortunately, the computation of the optical flow field leads to a number of well-known difficulties. The input is the projected (gray-level) image of the surroundings as a function of time, i.e. a three-dimensional structure. It is in general not possible to uniquely identify which path through the spatio-temporal image is the projection of a given object point. Thus, further assumptions are needed; the most common one is the brightness constancy assumption, which states that the projection of each object point has a constant gray level. The brightness constancy assumption breaks down if the illumination changes, if the object has non-Lambertian reflectance, or if it has specular reflections. Even so, the problem generically remains underdetermined. Except at local extrema in the gray-level image, points with a given gray level lie along curves, and these curves sweep out surfaces in the spatio-temporal image. A point along such a curve can therefore correspond to any point on the surface at later instants of time. This is referred to as the aperture problem and is usually treated by invoking additional constraints, e.g. regularization assumptions such as smoothly varying brightness patterns, or parameterized surface and trajectory models, leading to least-squares methods applied in small image regions. Besides the questionable validity of these assumptions, they lead to inferior results near motion boundaries, i.e. the regions that carry most information about object boundaries. The behavior when new image structure appears or old structure disappears is also undefined.
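The least-squares strategy mentioned above can be sketched as follows. This is an illustrative sketch (in the spirit of Lucas–Kanade-style local estimation, not a method from the text); the window gradients `Ix`, `Iy`, `It` and the synthetic data are fabricated for the example.

```python
import numpy as np

def local_flow(Ix, Iy, It):
    """Least-squares flow (u, v) over a small window, from the brightness
    constancy constraint Ix*u + Iy*v + It = 0 at every pixel."""
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)   # one constraint per pixel
    b = -It.ravel()
    # If all gradients in the window share one direction, A is rank
    # deficient: the aperture problem; only the normal flow is recoverable.
    (u, v), *_ = np.linalg.lstsq(A, b, rcond=None)
    return u, v

# Synthetic 5x5 window whose gradients satisfy brightness constancy
# exactly for a true motion of (1.0, 0.5) pixels per frame
rng = np.random.default_rng(0)
Ix = rng.standard_normal((5, 5))
Iy = rng.standard_normal((5, 5))
It = -(Ix * 1.0 + Iy * 0.5)
u, v = local_flow(Ix, Iy, It)
```

Near a motion boundary a single window mixes constraints from two different motions, which is one reason such local least-squares estimates degrade exactly where the text notes the most informative structure lies.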

An alternative approach to visual motion analysis is to analyze the geometrical structure of the spatio-temporal input image directly, thereby avoiding the detour through the optic flow estimation step [18, 19, 11]. By using the differential geometry of the spatio-temporal image, we obtain a low-level syntactic description of the moving image without having to rely on the higher-level semantic concept of object particle motion.
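As an illustration of what analyzing the geometrical structure of the spatio-temporal image can mean in practice, the following sketch builds a 3x3 spatio-temporal structure tensor from plain finite differences. The structure tensor is a common tool for such gradient-based analysis, though it is an assumption here that it matches the machinery of the cited works; a proper implementation would use Gaussian derivatives and a local averaging window.

```python
import numpy as np

def structure_tensor(f):
    """Average 3x3 spatio-temporal structure tensor of a volume f(t, y, x).
    Central differences stand in for Gaussian derivative operators."""
    ft, fy, fx = np.gradient(f)
    c = (slice(1, -1),) * 3                     # drop one-sided edge stencils
    g = np.stack([fx[c].ravel(), fy[c].ravel(), ft[c].ravel()])
    return (g @ g.T) / g.shape[1]

# A sinusoidal pattern translating one pixel per frame along x: f = g(x - t)
t = np.arange(8)[:, None, None]
x = np.arange(32)[None, None, :]
f = np.sin(0.4 * (x - t)) * np.ones((1, 16, 1))
J = structure_tensor(f)
# Brightness constancy makes the motion direction (u, v, 1) = (1, 0, 1)
# a null vector of J: gray-level curves sweep out surfaces whose tangent
# plane contains the motion direction.
v = np.array([1.0, 0.0, 1.0])
residual = np.linalg.norm(J @ v)
```

The eigenstructure of `J` thus encodes local motion without ever tracking individual object points, which is the sense in which the geometry is read off directly.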

A systematic study of local image structure, in the context of scale-space theory, has been pursued by Florack [6]. The basic idea is to find all descriptors of differential image structure that are invariant to rotation and translation (the Euclidean group). The choice of Euclidean invariance reflects that image structures should remain recognizable in spite of (small) camera translations and rotations around the optical axis. This theory embeds many of the operators previously used in computer vision, such as Canny's edge detector, Laplacian zero-crossings, blobs and isophote curvature, while also enabling the discovery of new ones.
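A minimal numpy sketch of such Euclidean invariants, using finite differences in place of the Gaussian derivatives of scale-space theory; the test image and the particular invariants shown are illustrative choices, not taken from [6].

```python
import numpy as np

def invariants(L):
    """Low-order Euclidean differential invariants of a smooth image L.
    Finite differences stand in for Gaussian derivatives here."""
    Ly, Lx = np.gradient(L)            # axis 0 is y (rows), axis 1 is x
    Lxy, Lxx = np.gradient(Lx)
    Lyy, _ = np.gradient(Ly)
    grad_mag = np.hypot(Lx, Ly)        # edge strength (as in Canny)
    laplacian = Lxx + Lyy              # zero-crossings mark blob boundaries
    isophote_curv = -(Ly**2 * Lxx - 2 * Lx * Ly * Lxy + Lx**2 * Lyy) \
                    / np.maximum(grad_mag, 1e-9) ** 3
    return grad_mag, laplacian, isophote_curv

# A smooth test image; rotating it by 90 degrees (an element of the
# Euclidean group) just rotates each invariant map, leaving values intact.
y, x = np.mgrid[0:48, 0:48]
img = np.sin(0.2 * x) * np.cos(0.15 * y)
g, lap, kappa = invariants(img)
g_r, lap_r, kappa_r = invariants(np.rot90(img))
```

Each map is a scalar function of the image that transforms by mere rotation of its domain, which is exactly the invariance property the theory demands of its descriptors.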

2 Spatio-Temporal Image Geometry

Extending from a theory about spatial images to one about spatio-temporal images, it is natural to use the concept of absolute time (see e.g. [8] for a more elaborate discussion). Each point in space-time can be assigned a numeric label describing the time at which it occurred. The sets of space-time points that occurred at the same time are called planes of simultaneity, and the temporal distance between two planes of simultaneity can be measured (in the small spatio-temporal regions that seeing creatures operate in, we see no need for handling relativistic