| text | source | __index_level_0__ |
|---|---|---|
Super-resolution is an important but difficult problem in image/video processing. If a video sequence or some training set other than the given low-resolution image is available, this extra information can greatly aid the reconstruction of the high-resolution image. The problem is substantially more difficult with only a single low-resolution image on hand. Image reconstruction methods designed primarily for denoising are insufficient for the super-resolution problem in the sense that they tend to oversmooth images with essentially no noise. We propose a new adaptive linear interpolation method based on a variational formulation and inspired by locally linear embedding (LLE). Experimental results show that our method avoids the problem of oversmoothing and preserves image structures well. | Variational local structure estimation for image super-resolution | 3,000 |
Kernel estimation techniques, such as mean shift, suffer from one major drawback: kernel bandwidth selection. The bandwidth can be fixed for the whole data set or can vary at each point. Automatic bandwidth selection becomes a real challenge in the case of multidimensional heterogeneous features. This paper presents a solution to this problem. It is an extension of \cite{Comaniciu03a}, which was based on the fundamental property of normal distributions regarding the bias of the normalized density gradient. The selection is done iteratively for each type of feature, by looking for the stability of local bandwidth estimates across a predefined range of bandwidths. A pseudo-balloon mean shift filtering and partitioning are introduced. The validity of the method is demonstrated in the context of color image segmentation based on a 5-dimensional space. | Bandwidth selection for kernel estimation in mixed multi-dimensional spaces | 3,001 |
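As a reading aid (not from the paper), a minimal fixed-bandwidth mean-shift sketch in Python; the bandwidth `h` below is exactly the quantity the method above would select adaptively, and all names and toy data are illustrative:

```python
# Minimal mean-shift iteration with a Gaussian kernel; the fixed bandwidth
# `h` is the quantity the paper's method would select automatically.
import numpy as np

def mean_shift(points, h=1.0, n_iter=30, tol=1e-5):
    """Shift each point toward a local density mode."""
    modes = points.copy()
    for _ in range(n_iter):
        shifted = np.empty_like(modes)
        for i, x in enumerate(modes):
            d2 = np.sum((points - x) ** 2, axis=1)
            w = np.exp(-d2 / (2.0 * h * h))          # Gaussian kernel weights
            shifted[i] = (w[:, None] * points).sum(0) / w.sum()
        converged = np.max(np.abs(shifted - modes)) < tol
        modes = shifted
        if converged:
            break
    return modes

# toy usage: two well-separated blobs collapse to two modes
pts = np.vstack([np.random.randn(50, 2), np.random.randn(50, 2) + 5])
print(np.unique(np.round(mean_shift(pts, h=1.0), 1), axis=0))
```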
High resolution satellite image sequences are multidimensional signals composed of spatio-temporal patterns associated with numerous and various phenomena. Bayesian methods have previously been proposed in (Heas and Datcu, 2005) to code the information contained in satellite image sequences in a graph representation. Based on such a representation, this paper presents a supervised learning methodology for the semantics associated with spatio-temporal patterns occurring in satellite image sequences. It enables the recognition and the probabilistic retrieval of similar events. Indeed, graphs are attached to statistical models for spatio-temporal processes, which in turn describe physical changes in the observed scene. Therefore, we adjust a parametric model evaluating similarity types between graph patterns in order to represent user-specific semantics attached to spatio-temporal phenomena. The learning step is performed by the incremental definition of similarity types via user-provided spatio-temporal pattern examples attached to positive and/or negative semantics. From these examples, probabilities are inferred using a Bayesian network and a Dirichlet model. This links user interest to a specific similarity model between graph patterns. According to the current state of learning, semantic posterior probabilities are updated for all possible graph patterns so that similar spatio-temporal phenomena can be recognized and retrieved from the image sequence. A few experiments performed on a multi-spectral SPOT image sequence illustrate the proposed spatio-temporal recognition method. | Supervised learning on graphs of spatio-temporal similarity in satellite image sequences | 3,002 |
A recent paper \cite{CaeCaeSchBar06} proposed a provably optimal, polynomial time method for performing near-isometric point pattern matching by means of exact probabilistic inference in a chordal graphical model. Their fundamental result is that the chordal graph in question is shown to be globally rigid, implying that exact inference provides the same matching solution as exact inference in a complete graphical model. This implies that the algorithm is optimal when there is no noise in the point patterns. In this paper, we present a new graph which is also globally rigid but has an advantage over the graph proposed in \cite{CaeCaeSchBar06}: its maximal clique size is smaller, rendering inference significantly more efficient. However, our graph is not chordal and thus standard Junction Tree algorithms cannot be directly applied. Nevertheless, we show that loopy belief propagation in such a graph converges to the optimal solution. This allows us to retain the optimality guarantee in the noiseless case, while substantially reducing both memory requirements and processing time. Our experimental results show that the accuracy of the proposed solution is indistinguishable from that of \cite{CaeCaeSchBar06} when there is noise in the point patterns. | Graph rigidity, Cyclic Belief Propagation and Point Pattern Matching | 3,003 |
In this paper, we use belief-propagation techniques to develop fast algorithms for image inpainting. Unlike traditional gradient-based approaches, which may require many iterations to converge, our techniques achieve competitive results after only a few iterations. On the other hand, while belief-propagation techniques are often unable to deal with high-order models due to the explosion in the size of messages, we avoid this problem by approximating our high-order prior model using a Gaussian mixture. By using such an approximation, we are able to inpaint images quickly while at the same time retaining good visual results. | High-Order Nonparametric Belief-Propagation for Fast Image Inpainting | 3,004 |
In this paper, we first modify a parameter in affinity propagation (AP) to improve its convergence, and then apply it to the vector quantization (VQ) codebook design problem. In order to improve the quality of the resulting codebook, we combine the improved AP (IAP) with the conventional LBG algorithm to generate an effective algorithm called IAP-LBG. According to the experimental results, the proposed method not only enhances convergence but is also capable of providing higher-quality codebooks than the conventional LBG method. | An Affinity Propagation Based method for Vector Quantization Codebook Design | 3,005 |
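A hedged sketch of the AP-then-LBG pipeline, assuming scikit-learn's AffinityPropagation as a stand-in for IAP (the paper's improvement tweaks a convergence parameter, exposed here only as `damping`):

```python
# Affinity propagation picks exemplars as an initial codebook, then
# Lloyd/LBG-style iterations refine it. Parameter values are illustrative.
import numpy as np
from sklearn.cluster import AffinityPropagation

def ap_lbg_codebook(vectors, damping=0.9, n_lbg_iter=20):
    ap = AffinityPropagation(damping=damping, random_state=0).fit(vectors)
    codebook = ap.cluster_centers_.copy()
    for _ in range(n_lbg_iter):                      # LBG refinement
        d = ((vectors[:, None, :] - codebook[None]) ** 2).sum(-1)
        labels = d.argmin(1)                         # nearest codeword
        for k in range(len(codebook)):
            members = vectors[labels == k]
            if len(members):                         # recenter non-empty cells
                codebook[k] = members.mean(0)
    return codebook
```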
Although the recognition of isolated handwritten digits has been a research topic for many years, it continues to be of interest for the research community and for commercial applications. We show that despite the maturity of the field, different approaches still deliver results that vary enough to allow improvements through their combination. We do so by choosing four well-motivated state-of-the-art recognition systems for which results on the standard MNIST benchmark are available. When comparing the errors, we observe that they differ between all four systems, suggesting the use of classifier combination. We then determine the error rate of a hypothetical system that combines the outputs of the four systems. The result obtained in this manner is an error rate of 0.35% on the MNIST data, the best result published so far. We furthermore discuss the statistical significance of the combined result and of the results of the individual classifiers. | Comparison and Combination of State-of-the-art Techniques for Handwritten Character Recognition: Topping the MNIST Benchmark | 3,006 |
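A hedged sketch of one simple combination scheme; the abstract does not specify its rule, so majority voting with a confidence tie-break is an assumption:

```python
# Hypothetical majority-vote combination of four classifiers' outputs;
# ties fall back to the single most confident classifier.
import numpy as np

def combine_votes(predictions, confidences):
    """predictions: (4, n) int labels; confidences: (4, n) scores."""
    predictions = np.asarray(predictions)
    confidences = np.asarray(confidences)
    combined = np.empty(predictions.shape[1], dtype=int)
    for j in range(predictions.shape[1]):
        labels, counts = np.unique(predictions[:, j], return_counts=True)
        winners = labels[counts == counts.max()]
        if len(winners) == 1:
            combined[j] = winners[0]
        else:  # tie: trust the most confident system for this sample
            combined[j] = predictions[np.argmax(confidences[:, j]), j]
    return combined
```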
I describe an approach to similarity motivated by Bayesian methods. This yields a similarity function that is learnable using standard Bayesian methods. The relationship of the approach to variable kernel and variable metric methods is discussed. Experimental results on character recognition and 3D object recognition are presented. | Learning Similarity for Character Recognition and 3D Object Recognition | 3,007 |
Learning object models from views in 3D visual object recognition is usually formulated either as a function approximation problem for a function describing the view-manifold of an object, or as learning a class-conditional density. This paper describes an alternative framework for learning in visual object recognition, that of learning the view-generalization function. Using the view-generalization function, an observer can perform Bayes-optimal 3D object recognition given one or more 2D training views directly, without the need for a separate model acquisition step. The paper shows that view-generalization functions can be computationally practical by restating two widely-used methods, the eigenspace and linear combination of views approaches, in a view-generalization framework. The paper relates the approach to recent methods for object recognition based on non-uniform blurring. The paper presents results both on simulated 3D ``paperclip'' objects and real-world images from the COIL-100 database showing that useful view-generalization functions can realistically be learned from a comparatively small number of training examples. | Learning View Generalization Functions | 3,008 |
This paper proves that visual object recognition systems using only 2D Euclidean similarity measurements to compare object views against previously seen views can achieve the same recognition performance as observers having access to all coordinate information and able to use arbitrary 3D models internally. Furthermore, it demonstrates that such systems do not require more training views than Bayes-optimal 3D model-based systems. For building computer vision systems, these results imply that using view-based or appearance-based techniques with carefully constructed combinations of evidence mechanisms may not be at a disadvantage relative to 3D model-based systems. For computational approaches to human vision, they show that it is impossible to distinguish view-based and 3D model-based techniques for 3D object recognition solely by comparing the performance achievable by human and 3D model-based systems. | View Based Methods can achieve Bayes-Optimal 3D Recognition | 3,009 |
Segmentation algorithms based on an energy minimisation framework often depend on a scale parameter which balances a fit to data and a regularising term. Irregular pyramids are defined as a stack of successively reduced graphs. Within this framework, the scale is often defined implicitly as the height in the pyramid. However, each level of an irregular pyramid cannot usually be readily associated with the global optimum of an energy or a global criterion on the base level graph. This drawback is addressed by the scale-set framework designed by Guigues. The methods designed by this author make it possible to build a hierarchy and to design cuts within this hierarchy which globally minimise an energy. This paper studies the influence of the construction scheme of the initial hierarchy on the resulting optimal cuts. We propose one sequential and one parallel method, with two variations of each. Our sequential methods provide partitions near the global optima, while the parallel methods require less execution time than the sequential method of Guigues even on sequential machines. | Hierarchy construction schemes within the Scale set framework | 3,010 |
The LULU operators for sequences are extended to multi-dimensional arrays via the morphological concept of connection in a way which preserves their essential properties, e.g. they are separators and form a four element fully ordered semi-group. The power of the operators is demonstrated by deriving a total variation preserving discrete pulse decomposition of images. | A Class of LULU Operators on Multi-Dimensional Arrays | 3,011 |
This paper proposes a novel method for segmentation of images by hierarchical multilevel thresholding. The method is global, agglomerative in nature and disregards pixel locations. It involves the optimization of the ratio of the unbiased estimators of within-class to between-class variances. We obtain a recursive relation at each step for the variances, which expedites the process. The efficacy of the method is shown in a comparison with some well-known methods. | A Fast Hierarchical Multilevel Image Segmentation Method using Unbiased Estimators | 3,012 |
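A hedged sketch of one split step: choose the gray level minimizing the ratio of the unbiased within-class to between-class variance estimators; the hierarchical method would recurse on the two sides. The brute-force scan and naming are illustrative:

```python
# One binary split on a set of gray values; var(ddof=1) gives the unbiased
# estimators, pooled over classes, matching the ratio described above.
import numpy as np

def best_split(values):
    values = np.sort(np.asarray(values, dtype=float))
    n = len(values)
    best_t, best_ratio = None, np.inf
    mu = values.mean()
    for i in range(2, n - 1):                  # candidate split positions
        a, b = values[:i], values[i:]
        within = (a.var(ddof=1) * (len(a) - 1) +
                  b.var(ddof=1) * (len(b) - 1)) / (n - 2)
        between = (len(a) * (a.mean() - mu) ** 2 +
                   len(b) * (b.mean() - mu) ** 2)   # k - 1 = 1 for two classes
        ratio = within / between if between > 0 else np.inf
        if ratio < best_ratio:
            best_ratio, best_t = ratio, values[i]
    return best_t
```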
We show the potential for classifying images of mixtures of aggregate, themselves based on varying, albeit well-defined, sizes and shapes, in order to provide a far more effective approach compared to the classification of individual sizes and shapes. While a dominant (additive, stationary) Gaussian noise component in image data will ensure that wavelet coefficients are of Gaussian distribution, long-tailed distributions (symptomatic, for example, of extreme values) may well hold in practice for wavelet coefficients. Energy (the 2nd order moment) has often been used to characterize images for content-based retrieval, and higher order moments may be important also, not least for capturing long-tailed distributional behavior. In this work, we assess 2nd, 3rd and 4th order moments of multiresolution transform coefficients -- wavelet and curvelet transforms -- as features. As analysis methodology, taking account of image types, multiresolution transforms, and moments of coefficients in the scales or bands, we use correspondence analysis as well as k-nearest neighbors supervised classification. | Wavelet and Curvelet Moments for Image Classification: Application to Aggregate Mixture Grading | 3,013 |
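A sketch of the feature extraction step, assuming PyWavelets is available; the curvelet variant would substitute a different transform with the same per-band moment computation:

```python
# Per-subband 2nd/3rd/4th order moments of wavelet coefficients as features.
import numpy as np
import pywt

def wavelet_moment_features(image, wavelet="db2", level=3):
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    # coeffs[0] is the approximation band; the rest are (cH, cV, cD) triples
    subbands = [coeffs[0]] + [b for triple in coeffs[1:] for b in triple]
    feats = []
    for band in subbands:
        c = np.asarray(band).ravel()
        feats += [np.mean(c ** 2),     # energy (2nd moment)
                  np.mean(c ** 3),     # 3rd moment (asymmetry)
                  np.mean(c ** 4)]     # 4th moment (tail weight)
    return np.array(feats)
```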
We present the SAMMI lightweight object detection method which has a high level of accuracy and robustness, and which is able to operate in an environment with a large number of cameras. Background modeling is based on DCT coefficients provided by cameras. Foreground detection uses similarity in temporal characteristics of adjacent blocks of pixels, which is a computationally inexpensive way to make use of object coherence. Scene model updating uses the approximated median method for improved performance. Evaluation at pixel level and application level shows that SAMMI object detection performs better and faster than the conventional Mixture of Gaussians method. | Spatio-activity based object detection | 3,014 |
A method for linear high dynamic range imaging using solid-state photosensors with a Bayer colour filter array is provided in this paper. Using information from neighbouring pixels, it is possible to reconstruct linear images with wide dynamic range from oversaturated images. The Bayer colour filter array is considered as an array of neutral filters under quasimonochromatic light. If the camera's response function to the desired light source is known, then one can calculate correction coefficients to reconstruct oversaturated images. Reconstructed images are linearized in order to provide linear high dynamic range images for optical-digital imaging systems. The calibration procedure for obtaining the camera's response function to the desired light source is described. Experimental results of the reconstruction of images from oversaturated images are presented for red, green, and blue quasimonochromatic light sources. A quantitative analysis of the accuracy of the reconstructed images is provided. | Using Spatially Varying Pixels Exposures and Bayer-covered Photosensors for High Dynamic Range Imaging | 3,015 |
In this paper, we design linear time algorithms to recognize and determine topological invariants such as the genus and homology groups in 3D. These properties can be used to identify patterns in 3D image recognition, with a tremendous number of applications in 3D medical image analysis. Our method is based on cubical images with direct adjacency, also called (6,26)-connectivity images in discrete geometry. Based on the fact that there are only six types of local surface points in 3D and a discrete version of the well-known Gauss-Bonnet Theorem in differential geometry, we first determine the genus of a closed 2D-connected component (a closed digital surface). Then, we use Alexander duality to obtain the homology groups of a 3D object in 3D space. | Linear Time Recognition Algorithms for Topological Invariants in 3D | 3,016 |
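As a hedged aside (the abstract itself does not quote the count), discrete Gauss-Bonnet arguments of this kind yield a closed-form genus in terms of the numbers $|M_i|$ of local surface points of type $i$; the formula commonly associated with this line of work has the form below, with the exact coefficients to be taken from the paper:

```latex
% Hedged sketch: genus of a closed digital surface from counts of the
% six local surface point types; |M_i| = number of points of type i.
g \;=\; 1 + \frac{|M_5| + 2\,|M_6| - |M_3|}{8}
```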
This paper proposes a novel algorithm for the problem of structural image segmentation through an interactive model-based approach. Interaction is expressed in the model creation, which is done according to user traces drawn over a given input image. Both model and input are then represented by means of attributed relational graphs derived on the fly. Appearance features are taken into account as object attributes and structural properties are expressed as relational attributes. To cope with possible topological differences between both graphs, a new structure called the deformation graph is introduced. The segmentation process corresponds to finding a labelling of the input graph that minimizes the deformations introduced in the model when it is updated with input information. This approach has been shown to be faster than other segmentation methods, with competitive output quality. Therefore, the method solves the problem of multiple label segmentation in an efficient way. Encouraging results on both natural and target-specific color images, as well as examples showing the reusability of the model, are presented and discussed. | A New Algorithm for Interactive Structural Image Segmentation | 3,017 |
Considering the features of airport runway images, an improved bilateral filtering method is proposed which can remove noise while preserving edges. First, a steerable filter decomposition is used to calculate the sub-band parameters of 4 orientations, and the texture feature matrix is then obtained from the sub-band local median energy. Texture-similarity, spatial-closeness and color-similarity functions are used to filter the image. The effect of the weighting function parameters is also qualitatively analyzed. Comparisons with the standard bilateral filter and simulation results on real airport runway images show that the multilateral filtering is more effective than standard bilateral filtering. | A multilateral filtering method applied to airplane runway image | 3,018 |
In this paper, we focus on statistical region-based active contour models where image features (e.g. intensity) are random variables whose distribution belongs to some parametric family (e.g. exponential), rather than confining ourselves to the special Gaussian case. Using shape derivation tools, our effort focuses on constructing a general expression for the derivative of the energy (with respect to a domain) and on deriving the corresponding evolution speed. A general result is stated within the framework of the multi-parameter exponential family. In particular, when using Maximum Likelihood estimators, the evolution speed has a closed-form expression that depends simply on the probability density function, while complicating additive terms appear when using other estimators, e.g. the method of moments. Experimental results on both synthesized and real images demonstrate the applicability of our approach. | Statistical region-based active contours with exponential family observations | 3,019 |
In this paper, we propose to formally combine noise and shape priors in region-based active contours. On the one hand, we use the general framework of the exponential family as a prior model for noise. On the other hand, translation and scale invariant Legendre moments are considered to incorporate the shape prior (e.g. fidelity to a reference shape). The combination of the two prior terms in the active contour functional yields the final evolution equation whose evolution speed is rigorously derived using shape derivative tools. Experimental results on both synthetic images and real life cardiac echography data clearly demonstrate the robustness to initialization and noise, flexibility and large potential applicability of our segmentation algorithm. | Region-based active contour with noise and shape priors | 3,020 |
Feature selection is a pattern recognition approach for choosing important variables according to some criteria in order to distinguish or explain certain phenomena. There are many genomic and proteomic applications which rely on feature selection to answer questions such as selecting signature genes which are informative about some biological state, e.g. normal tissues and several types of cancer, or defining a network of prediction or inference among elements such as genes, proteins, external stimuli and other elements of interest. In these applications, a recurrent problem is the lack of samples needed to perform an adequate estimate of the joint probabilities between element states. A myriad of feature selection algorithms and criterion functions have been proposed, although it is difficult to single out the best solution in general. The intent of this work is to provide an open-source multiplatform graphical environment to apply, test and compare many feature selection approaches suitable for use in bioinformatics problems. | DimReduction - Interactive Graphic Environment for Dimensionality Reduction | 3,021 |
In block-matching motion estimation (BMME), the search patterns have a significant impact on the algorithm's performance, both in search speed and in search quality. The search pattern should be designed to fit the motion vector probability (MVP) distribution characteristics of real-world sequences. In this paper, we build a directional model of the MVP distribution to describe the directional-center-biased characteristic of the MVP distribution and the directional characteristics of the conditional MVP distribution more exactly, based on detailed statistics of the motion vectors of eighteen popular sequences. Three directional search patterns are first designed by utilizing the directional characteristics; they are the smallest search patterns among the popular ones. A new algorithm is proposed using the horizontal cross search pattern as the initial step and the horizontal/vertical diamond search pattern as the subsequent step for fast BMME, called the directional cross diamond search (DCDS) algorithm. The DCDS algorithm can obtain the motion vector with fewer search points than CDS, DS or HEXBS while maintaining similar or even better search quality. The gain in speedup of DCDS over CDS or DS can be up to 54.9%. The simulation results show that DCDS is efficient, effective and robust, and that it consistently gives faster search speeds on different sequences than other fast block-matching algorithms in common use. | Directional Cross Diamond Search Algorithm for Fast Block Motion Estimation | 3,022 |
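For orientation (not the paper's DCDS itself), a plain diamond-search block matcher with SAD cost is sketched below; DCDS would replace the symmetric large/small diamonds with the smaller directional cross and horizontal/vertical diamond patterns described above. Block size and patterns are illustrative assumptions:

```python
# Classic diamond search: large diamond until the best point is the center,
# then one small-diamond refinement. Cost is sum of absolute differences.
import numpy as np

LDP = [(0, 0), (0, 2), (0, -2), (2, 0), (-2, 0),
       (1, 1), (1, -1), (-1, 1), (-1, -1)]          # large diamond pattern
SDP = [(0, 0), (0, 1), (0, -1), (1, 0), (-1, 0)]    # small diamond pattern

def sad(ref, cur, y, x, dy, dx, B=16):
    yy, xx = y + dy, x + dx
    if yy < 0 or xx < 0 or yy + B > ref.shape[0] or xx + B > ref.shape[1]:
        return np.inf                                # out of frame
    return np.abs(cur[y:y+B, x:x+B].astype(int)
                  - ref[yy:yy+B, xx:xx+B].astype(int)).sum()

def diamond_search(ref, cur, y, x, B=16):
    cy = cx = 0
    while True:                                      # coarse stage
        costs = [sad(ref, cur, y, x, cy+dy, cx+dx, B) for dy, dx in LDP]
        best = int(np.argmin(costs))
        if best == 0:                                # center is best: stop
            break
        cy, cx = cy + LDP[best][0], cx + LDP[best][1]
    costs = [sad(ref, cur, y, x, cy+dy, cx+dx, B) for dy, dx in SDP]
    dy, dx = SDP[int(np.argmin(costs))]              # refinement stage
    return cy + dy, cx + dx                          # motion vector (dy, dx)
```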
We investigate a biologically motivated approach to fast visual classification, directly inspired by the recent work of Serre et al. Specifically, trading-off biological accuracy for computational efficiency, we explore using wavelet and grouplet-like transforms to parallel the tuning of visual cortex V1 and V2 cells, alternated with max operations to achieve scale and translation invariance. A feature selection procedure is applied during learning to accelerate recognition. We introduce a simple attention-like feedback mechanism, significantly improving recognition and robustness in multiple-object scenes. In experiments, the proposed algorithm achieves or exceeds state-of-the-art success rate on object recognition, texture and satellite image classification, language identification and sound classification. | Fast Wavelet-Based Visual Classification | 3,023 |
We propose a robust classification algorithm for curves in 2D and 3D, under the special and full groups of affine transformations. To each plane or spatial curve we assign a plane signature curve. Curves equivalent under an affine transformation have the same signature. The signatures introduced in this paper are based on integral invariants, which behave much better on noisy images than classically known differential invariants. A comparison with other types of invariants is given in the introduction. Though the integral invariants for planar curves were known before, the affine integral invariants for spatial curves are proposed here for the first time. Using the inductive variation of the moving frame method we compute affine invariants in terms of Euclidean invariants. We present two types of signatures, the global signature and the local signature. Both signatures are independent of parameterization (curve sampling). The global signature depends on the choice of the initial point, does not allow us to compare fragments of curves, and is therefore sensitive to occlusions. The local signature, although slightly more sensitive to noise, is independent of the choice of the initial point and is not sensitive to occlusions in an image. It helps establish local equivalence of curves. The robustness of these invariants and signatures in their application to the problem of classification of noisy spatial curves extracted from a 3D object is analyzed. | Classification of curves in 2D and 3D via affine integral signatures | 3,024 |
In 1994, Adams and Bishop proposed a novel region growing algorithm called seeded region growing by pixels aggregation (SRGPA). This paper introduces a framework to implement algorithms using SRGPA. This framework is built around two concepts: localization and organization of applied actions. This conceptualization gives a quick implementation of algorithms, a direct translation between the mathematical idea and the numerical implementation, and an improvement in algorithm efficiency. | Conceptualization of seeded region growing by pixels aggregation. Part 1: the framework | 3,025 |
In the previous paper, we conceptualized the localization and the organization of seeded region growing by pixels aggregation (SRGPA), but we did not address the issue of collisions between two distinct regions during the growing process. In this paper, we propose two implementations to manage two classical growing processes: one without a boundary region to divide the other regions and one with. Unfortunately, as noticed by Mehnert and Jakway (1997), this partition depends on the seeded region initialisation order (SRIO). We propose a growing process invariant to SRIO, such that the boundary region is the set of ambiguous pixels. | Conceptualization of seeded region growing by pixels aggregation. Part 2: how to localize a final partition invariant about the seeded region initialisation order | 3,026 |
In the two previous papers of this series, we created a library, called Population, dedicated to seeded region growing by pixels aggregation, and we proposed different growing processes to obtain a partition with or without a boundary region dividing the other regions, or to obtain a partition invariant to the seeded region initialisation order. Building on this work, we implement a wide range of algorithms belonging to the field of SRGPA using this library and these growing processes. | Conceptualization of seeded region growing by pixels aggregation. Part 3: a wide range of algorithms | 3,027 |
This paper proposes a simple, generic and robust method to extract the grains from experimental three-dimensional images of granular materials obtained by X-ray tomography. This extraction has two steps: segmentation and splitting. For the segmentation step, if there is sufficient contrast between the different components, a classical threshold procedure followed by a succession of morphological filters can be applied. If not, and if the boundary needs to be localized precisely, a watershed transformation controlled by labels is applied. The basis of this transformation is to localize a label included in the component and another label in its complement. A "soft" threshold followed by an opening is applied to the initial image to localize a label in a component. For any segmentation procedure, visualisation reveals a problem: some groups of two grains, close to each other, become connected. So if a classical cluster procedure is applied to the segmented binary image, these numerically connected grains are considered as a single grain. To overcome this problem, we apply a procedure introduced by L. Vincent in 1993. This grain extraction is tested on various complex porous media and granular materials, to predict various properties (diffusion, electrical conductivity, deformation field) in good agreement with experimental data. | Conceptualization of seeded region growing by pixels aggregation. Part 4: Simple, generic and robust extraction of grains in granular materials obtained by X-ray tomography | 3,028 |
The goal of this paper is to estimate directly the rotation and translation between two stereoscopic images with the help of five homologous points. The methodology presented does not mix the rotation and translation parameters, which is an important advantage over methods using the well-known essential matrix. This results in correct behavior and accuracy in situations otherwise known to be quite unfavorable, such as planar scenes or panoramic sets of images (with a null base length), while providing quite comparable results for more "standard" cases. The resolution of the algebraic polynomials resulting from the modeling of the coplanarity constraint is carried out with the help of powerful algebraic solver tools (Gröbner bases and the Rational Univariate Representation). | The Five Points Pose Problem : A New and Accurate Solution Adapted to any Geometric Configuration | 3,029 |
Colour and coarseness of skin are visually different. When image processing is involved in skin analysis, it is important to quantitatively evaluate such differences using texture features. In this paper, we discuss texture analysis and measurements based on a statistical approach to pattern recognition. Grain size and anisotropy are evaluated with appropriate diagrams. The possibility of determining the presence of pattern defects is also discussed. | An image processing analysis of skin textures | 3,030 |
The compound models of clutter statistics are found suitable to describe the nonstationary nature of radar backscattering from high-resolution observations. In this letter, we show that the properties of the Mellin transform can be utilized to generate higher order moments of simple and compound models of clutter statistics in a compact manner. | Higher Order Moments Generation by Mellin Transform for Compound Models of Clutter | 3,031 |
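The compactness claim rests on a standard identity, sketched here in LaTeX (notation assumed, not taken from the letter): for a density $p$ on $(0,\infty)$, the Mellin transform at $s=\nu+1$ is the $\nu$-th moment, and Mellin transforms multiply under products of independent positive variables, which is what makes compound (product) clutter models tractable:

```latex
\mathcal{M}\{p\}(s) = \int_0^{\infty} x^{\,s-1}\, p(x)\, dx,
\qquad
m_{\nu} = \mathbb{E}[X^{\nu}] = \mathcal{M}\{p\}(\nu + 1),
\qquad
\mathcal{M}\{p_{XY}\}(s) = \mathcal{M}\{p_X\}(s)\,\mathcal{M}\{p_Y\}(s).
```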
Most search engines index the textual content of documents in digital libraries. However, scholarly articles frequently report important findings in figures for visual impact and the contents of these figures are not indexed. These contents are often invaluable to the researcher in various fields, for the purposes of direct comparison with their own work. Therefore, searching for figures and extracting figure data are important problems. To the best of our knowledge, there exists no tool to automatically extract data from figures in digital documents. If we can extract data from these images automatically and store them in a database, an end-user can query and combine data from multiple digital documents simultaneously and efficiently. We propose a framework based on image analysis and machine learning to extract information from 2-D plot images and store them in a database. The proposed algorithm identifies a 2-D plot and extracts the axis labels, legend and the data points from the 2-D plot. We also segregate overlapping shapes that correspond to different data points. We demonstrate performance of individual algorithms, using a combination of generated and real-life images. | Automatic Identification and Data Extraction from 2-Dimensional Plots in
Digital Documents | 3,032 |
It is now well established that sparse signal models are well suited to restoration tasks and can effectively be learned from audio, image, and video data. Recent research has been aimed at learning discriminative sparse models instead of purely reconstructive ones. This paper proposes a new step in that direction, with a novel sparse representation for signals belonging to different classes in terms of a shared dictionary and multiple class-decision functions. The linear variant of the proposed model admits a simple probabilistic interpretation, while its most general variant admits an interpretation in terms of kernels. An optimization framework for learning all the components of the proposed model is presented, along with experimental results on standard handwritten digit and texture classification tasks. | Supervised Dictionary Learning | 3,033 |
Black box models of technical systems are purely descriptive. They do not explain why a system works the way it does. Thus, black box models are insufficient for some problems. But there are numerous applications, for example, in control engineering, for which a black box model is absolutely sufficient. In this article, we describe a general stochastic framework with which such models can be built easily and fully automated by observation. Furthermore, we give a practical example and show how this framework can be used to model and control a motorcar powertrain. | Modeling and Control with Local Linearizing Nadaraya Watson Regression | 3,034 |
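A hedged sketch of the regression machinery named above: Nadaraya-Watson weighting with a local linear fit at each query point (assumed to be the spirit of "local linearizing"; bandwidth and data are illustrative):

```python
# Locally weighted linear fit: Gaussian Nadaraya-Watson weights, then a
# weighted least-squares line at each query point; the intercept is the
# prediction. All names and the toy response are illustrative.
import numpy as np

def local_linear_nw(x_train, y_train, x_query, h=0.5):
    preds = []
    for xq in np.atleast_1d(x_query):
        w = np.exp(-((x_train - xq) ** 2) / (2 * h * h))   # kernel weights
        X = np.column_stack([np.ones_like(x_train), x_train - xq])
        sw = np.sqrt(w)                                    # WLS via sqrt-weights
        beta = np.linalg.lstsq(X * sw[:, None], y_train * sw, rcond=None)[0]
        preds.append(beta[0])                              # value at xq
    return np.array(preds)

# usage with a synthetic smooth response
x = np.linspace(0, 6, 80)
y = np.sin(x) + 0.1 * np.random.randn(80)
print(local_linear_nw(x, y, [1.0, 3.0, 5.0]))
```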
Graph kernel methods are based on an implicit embedding of graphs within a vector space of large dimension. This implicit embedding makes it possible to apply to graphs methods which were until recently reserved solely for numerical data. Within the shape classification framework, graphs are often produced by a skeletonization step which is sensitive to noise. We propose in this paper to integrate robustness to structural noise by using a kernel based on a bag of paths, where each path is associated with a hierarchy encoding successive simplifications of the path. Several experiments demonstrate the robustness and the flexibility of our approach compared to alternative shape classification methods. | Hierarchical Bag of Paths for Kernel Based Shape Classification | 3,035 |
In this paper we present a simple and robust method for self-correction of camera distortion using single images of scenes which contain straight lines. Since the most common distortion can be modelled as radial distortion, we illustrate the method using the Harris radial distortion model, but the method is applicable to any distortion model. The method is based on transforming the edgels of the distorted image to a 1-D angular Hough space, and optimizing the distortion correction parameters which minimize the entropy of the corresponding normalized histogram. Properly corrected imagery will have fewer curved lines, and therefore less spread in Hough space. Since the method does not rely on any image structure beyond the existence of edgels sharing some common orientations and does not use edge fitting, it is applicable to a wide variety of image types. For instance, it can be applied equally well to images of texture with weak but dominant orientations, or images with strong vanishing points. Finally, the method is performed on both synthetic and real data revealing that it is particularly robust to noise. | Camera distortion self-calibration using the plumb-line constraint and minimal Hough entropy | 3,036 |
We consider the problem of classification of an object given multiple observations that possibly include different transformations. The possible transformations of the object generally span a low-dimensional manifold in the original signal space. We propose to take advantage of this manifold structure for the effective classification of the object represented by the observation set. In particular, we design a low complexity solution that is able to exploit the properties of the data manifolds with a graph-based algorithm. Hence, we formulate the computation of the unknown label matrix as a smoothing process on the manifold under the constraint that all observations represent an object of one single class. This results in a discrete optimization problem, which can be solved by an efficient and low complexity algorithm. We demonstrate the performance of the proposed graph-based algorithm in the classification of sets of multiple images. Moreover, we show its high potential in video-based face recognition, where it outperforms state-of-the-art solutions that fall short of exploiting the manifold structure of the face image data sets. | Graph-based classification of multiple observation sets | 3,037 |
This paper addresses the problem of 3D face recognition using simultaneous sparse approximations on the sphere. The 3D face point clouds are first aligned with a novel and fully automated registration process. They are then represented as signals on the 2D sphere in order to preserve depth and geometry information. Next, we implement a dimensionality reduction process with simultaneous sparse approximations and subspace projection. This permits representing each 3D face by only a few spherical functions that are able to capture the salient facial characteristics, and hence preserves the discriminant facial information. We eventually perform recognition by effective matching in the reduced space, where Linear Discriminant Analysis can further be applied for improved recognition performance. The 3D face recognition algorithm is evaluated on the FRGC v.1.0 data set, where it is shown to outperform classical state-of-the-art solutions that work with depth images. | 3D Face Recognition with Sparse Spherical Representations | 3,038 |
Statistical pattern recognition methods based on the Coherence Length Diagram (CLD) have been proposed for medical image analyses, such as the quantitative characterisation of human skin textures, and for polarized light microscopy of liquid crystal textures. Further investigations are made on image maps originating from such diagrams, and some examples related to the irregularity of microstructures are shown. | Mapping Images with the Coherence Length Diagrams | 3,039 |
In this paper, region-based stereo matching algorithms are developed for extracting depth information from a color stereo image pair. A filter eliminating unreliable disparity estimates is used to increase the reliability of the disparity map. The results obtained by the algorithms are presented and compared. | Obtaining Depth Maps From Color Images By Region Based Stereo Matching Algorithms | 3,040 |
In this paper, we propose a new method for impulse noise removal from images. It uses the sparsity of images in the Discrete Cosine Transform (DCT) domain. The zeros in this domain give us the exact mathematical equations needed to reconstruct the pixels that are corrupted by random-valued impulse noise. The proposed method can also detect and correct the corrupted pixels. Moreover, in the simpler case where salt-and-pepper noise corrupts only the brightest and darkest pixel values in the image, we propose a simpler version of our method. In addition, we suggest a combination of the traditional median filter method with our method to yield better results when the percentage of corrupted samples is high. | Sparse Component Analysis (SCA) in Random-valued and Salt and Pepper Noise Removal | 3,041 |
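A sketch of the simpler salt-and-pepper variant described above (detection by extreme values, correction by a median of clean neighbours); thresholds and window size are assumptions:

```python
# Flag only extreme-valued pixels, then replace each flagged pixel by the
# median of its unflagged neighbours.
import numpy as np

def remove_salt_pepper(img, lo=0, hi=255, radius=1):
    img = img.astype(float)
    noisy = (img <= lo) | (img >= hi)                 # corruption mask
    out = img.copy()
    H, W = img.shape
    for y, x in zip(*np.nonzero(noisy)):
        y0, y1 = max(0, y - radius), min(H, y + radius + 1)
        x0, x1 = max(0, x - radius), min(W, x + radius + 1)
        patch = img[y0:y1, x0:x1]
        good = patch[~noisy[y0:y1, x0:x1]]            # clean neighbours only
        if good.size:
            out[y, x] = np.median(good)
    return out
```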
In this paper, we propose a new approach for keypoint-based object detection. Traditional keypoint-based methods consist in classifying individual points and using pose estimation to discard misclassifications. Since a single point carries no relational features, such methods inherently restrict the usage of structural information to the pose estimation phase. Therefore, the classifier considers purely appearance-based feature vectors, thus requiring computationally expensive feature extraction or complex probabilistic modelling to achieve satisfactory robustness. In contrast, our approach consists in classifying graphs of keypoints, which incorporates structural information during the classification phase and allows the extraction of simpler feature vectors that are naturally robust. In the present work, 3-vertices graphs have been considered, though the methodology is general and larger order graphs may be adopted. Successful experimental results obtained for real-time object detection in video sequences are reported. | A Keygraph Classification Framework for Real-Time Object Detection | 3,042 |
This paper proposes an algorithm for image processing, obtained by adapting to image maps the definitions of two well-known physical quantities: the dipole and quadrupole moments of a charge distribution. We will see how it is possible to define dipole and quadrupole moments for gray-tone maps and apply them in the development of algorithms for edge detection. | Dipole and Quadrupole Moments in Image Processing | 3,043 |
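A hedged Python sketch of the idea: treating gray levels as charges, the first moment of each window measures intensity imbalance, and its magnitude can serve as an edge map. Window size and the absence of normalization are illustrative choices:

```python
# Dipole moment of each local window: intensity-weighted offset from the
# window centre; large magnitude flags an intensity imbalance (edge).
import numpy as np

def dipole_map(img, radius=1):
    H, W = img.shape
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]  # offsets
    dy = np.zeros((H, W))
    dx = np.zeros((H, W))
    for y in range(radius, H - radius):
        for x in range(radius, W - radius):
            patch = img[y - radius:y + radius + 1, x - radius:x + radius + 1]
            dy[y, x] = (ys * patch).sum()      # first moment, vertical
            dx[y, x] = (xs * patch).sum()      # first moment, horizontal
    return np.hypot(dy, dx)                    # dipole magnitude ~ edge map
```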
Instead of evaluating the gradient field of the brightness map of an image, we propose the use of dipole vectors. This approach is obtained by adapting to the image gray-tone distribution the definition of the dipole moment of charge distributions. We will show how to evaluate the dipoles and obtain a vector field, which can be a good alternative to the gradient field in pattern recognition. | Dipole Vectors in Images Processing | 3,044 |
This paper advocates an improved solution for the real-time detection of texture errors that occur in the production process in the textile industry. The research is focused on mono-color products with a 3D texture model (Jacquard fabrics). This is a more difficult task than, for example, 2D multicolor textures. | Real-time Texture Error Detection | 3,045 |
Image processing can be used for digital restoration of ancient papyri, that is, for a restoration performed on their digital images. The digital manipulation allows reducing the background signals and enhancing the readability of texts. In the case of very old and damaged documents, this is fundamental for identification of the patterns of letters. Some examples of restoration, obtained with an image processing which uses edges detection and Fourier filtering, are shown. One of them concerns 7Q5 fragment of the Dead Sea Scrolls. | Digital Restoration of Ancient Papyri | 3,046 |
Dipole and higher moments are physical quantities used to describe a charge distribution. In analogy with electromagnetism, it is possible to define the dipole moments for a gray-scale image, according to the single aspect of a gray-tone map. In this paper we define the color dipole moments for color images. For color maps in fact, we have three aspects, the three primary colors, to consider. Associating three color charges to each pixel, color dipole moments can be easily defined and used for edge detection. | Color Dipole Moments for Edge Detection | 3,047 |
We show the closed-form solution to the maximization of trace(A'R), where A is given and R is an unknown rotation matrix. This problem occurs in many computer vision tasks involving optimal rotation matrix estimation. The solution has been continuously reinvented in different fields as part of specific problems. We summarize the historical evolution of the problem and present a general proof of the solution. We contribute to the proof by considering the degenerate cases of A and discuss the uniqueness of R. | On the closed-form solution of the rotation matrix arising in computer vision problems | 3,048 |
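The closed-form solution itself is classical (orthogonal Procrustes via SVD) and can be stated as a short, runnable sketch; the determinant factor handles the reflection/degenerate cases mentioned above:

```python
# Maximizer of trace(A'R) over rotations: with A = U S V', the optimum is
# R = U diag(1, ..., 1, det(UV')) V'.
import numpy as np

def optimal_rotation(A):
    U, _, Vt = np.linalg.svd(A)
    d = np.sign(np.linalg.det(U @ Vt))          # +1 rotation, -1 reflection
    D = np.diag([1.0] * (A.shape[0] - 1) + [d])
    return U @ D @ Vt

A = np.random.randn(3, 3)
R = optimal_rotation(A)
assert np.allclose(R @ R.T, np.eye(3)) and np.isclose(np.linalg.det(R), 1.0)
```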
Point set registration is a key component in many computer vision tasks. The goal of point set registration is to assign correspondences between two sets of points and to recover the transformation that maps one point set to the other. Multiple factors, including an unknown non-rigid spatial transformation, large dimensionality of point set, noise and outliers, make the point set registration a challenging problem. We introduce a probabilistic method, called the Coherent Point Drift (CPD) algorithm, for both rigid and non-rigid point set registration. We consider the alignment of two point sets as a probability density estimation problem. We fit the GMM centroids (representing the first point set) to the data (the second point set) by maximizing the likelihood. We force the GMM centroids to move coherently as a group to preserve the topological structure of the point sets. In the rigid case, we impose the coherence constraint by re-parametrization of GMM centroid locations with rigid parameters and derive a closed form solution of the maximization step of the EM algorithm in arbitrary dimensions. In the non-rigid case, we impose the coherence constraint by regularizing the displacement field and using the variational calculus to derive the optimal transformation. We also introduce a fast algorithm that reduces the method computation complexity to linear. We test the CPD algorithm for both rigid and non-rigid transformations in the presence of noise, outliers and missing points, where CPD shows accurate results and outperforms current state-of-the-art methods. | Point-Set Registration: Coherent Point Drift | 3,049 |
Natural images in the colour space YUV have been observed to have a non-Gaussian, heavy-tailed distribution (called 'sparse') when the filter $G(U)(r) = U(r) - \sum_{s \in N(r)} w^{(Y)}_{rs} U(s)$ is applied to the chromaticity channel U (and equivalently to V), where $w^{(Y)}$ is a weighting function constructed from the intensity component Y [1]. In this paper we develop a Bayesian analysis of the colorization problem using the filter response as a regularization term, arriving at a non-convex optimization problem. This problem is convexified using L1 optimization, which often gives the same results for sparse signals [2]. It is observed that L1 optimization, in many cases, outperforms the famous colorization algorithm by Levin et al [3]. | Colorization of Natural Images via L1 Optimization | 3,050 |
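A sketch of the filter response on a 4-connected grid; the Gaussian intensity affinity below is an assumed concrete choice for $w^{(Y)}$, which the text only requires to be built from Y:

```python
# G(U)(r) = U(r) - sum_s w_rs U(s), with weights from the intensity channel
# Y, normalized to sum to one over each pixel's 4-neighbourhood.
import numpy as np

def filter_response(Y, U, sigma=0.1):
    H, W = Y.shape
    G = np.zeros_like(U)
    for y in range(H):
        for x in range(W):
            nbrs = [(y + dy, x + dx)
                    for dy, dx in ((0, 1), (0, -1), (1, 0), (-1, 0))
                    if 0 <= y + dy < H and 0 <= x + dx < W]
            w = np.array([np.exp(-(Y[y, x] - Y[s]) ** 2 / (2 * sigma ** 2))
                          for s in nbrs])
            w /= w.sum()                              # normalized affinities
            G[y, x] = U[y, x] - sum(wi * U[s] for wi, s in zip(w, nbrs))
    return G
```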
A statistical learning/inference framework for color demosaicing is presented. We start with simplistic assumptions about color constancy, and recast color demosaicing as a blind linear inverse problem: color parameterizes the unknown kernel, while brightness takes on the role of a latent variable. An expectation-maximization algorithm naturally suggests itself for estimating both. Then, as we gradually broaden the family of hypotheses where color is learned, we let our demosaicing behave adaptively, in a manner that reflects our prior knowledge about the statistics of color images. We show that we can incorporate realistic, learned priors without essentially changing the complexity of the simple expectation-maximization algorithm we started with. | A statistical learning approach to color demosaicing | 3,051 |
This paper presents a new method to recover the relative pose between two images, using three points and the vertical direction information. The vertical direction can be determined in two ways: (1) using direct physical measurements such as an IMU (inertial measurement unit); (2) using the vertical vanishing point. This knowledge of the vertical direction fixes 2 of the 3 parameters of the relative rotation, so that only 3 homologous points are required to orient a pair of images. Rewriting the coplanarity equations leads to a simpler solution. The remaining unknowns are resolved by an algebraic method using Gröbner bases. The elements necessary to build a specific algebraic solver are given in this paper, allowing for a real-time implementation. The results on real and synthetic data show the efficiency of this method. | A New Solution to the Relative Orientation Problem using only 3 Points and the Vertical Direction | 3,052 |
In this paper, we use semi-definite programming and generalized principal component analysis (GPCA) to distinguish between two or more different facial expressions. In the first step, semi-definite programming is used to reduce the dimension of the image data and "unfold" the manifold which the data points (corresponding to facial expressions) reside on. Next, GPCA is used to fit a series of subspaces to the data points and associate each data point with a subspace. Data points that belong to the same subspace are claimed to belong to the same facial expression category. An example is provided. | Segmentation of Facial Expressions Using Semi-Definite Programming and Generalized Principal Component Analysis | 3,053 |
This paper defines the basis of a new hierarchical framework for segmentation algorithms based on energy minimization schemes. This new framework is based on two formal tools. First, a combinatorial pyramid efficiently encodes a hierarchy of partitions. Second, discrete geometric estimators measure precisely some important geometric parameters of the regions. These measures, combined with photometrical and topological features of the partition, allow the design of energy terms based on discrete measures. Our segmentation framework exploits these energies to build a pyramid of image partitions with a minimization scheme. Some experiments illustrating our framework are shown and discussed. | Combinatorial pyramids and discrete geometry for energy-minimizing segmentation | 3,054 |
We present a parametric deformable model which recovers image components with a complexity independent from the resolution of input images. The proposed model also automatically changes its topology and remains fully compatible with the general framework of deformable models. More precisely, the image space is equipped with a metric that expands salient image details according to their strength and their curvature. During the whole evolution of the model, the sampling of the contour is kept regular with respect to this metric. In this way, the vertex density is reduced along most parts of the curve while a high quality of shape representation is preserved. The complexity of the deformable model is thus improved and is no longer influenced by feature-preserving changes in the resolution of input images. Building the metric requires a prior estimation of contour curvature. It is obtained using a robust estimator which investigates the local variations in the orientation of the image gradient. Experimental results on both computer generated and biomedical images are presented to illustrate the advantages of our approach. | Deformable Model with a Complexity Independent from Image Resolution | 3,055 |
We introduce an adaptive regularization approach. In contrast to conventional Tikhonov regularization, which specifies a fixed regularization operator, we estimate it simultaneously with the parameters. From a Bayesian perspective, we estimate the prior distribution on parameters, assuming that it is close to some given model distribution. We constrain the prior distribution to be a Gauss-Markov random field (GMRF), which allows us to solve for the prior distribution analytically and provides a fast optimization algorithm. We apply our approach to non-rigid image registration to estimate the spatial transformation between two images. Our evaluation shows that the adaptive regularization approach significantly outperforms standard variational methods. | Adaptive Regularization of Ill-Posed Problems: Application to Non-rigid Image Registration | 3,056 |
Quality control is an important issue in the ceramic tile industry. Maintaining the rate of production over time is also a major issue in ceramic tile manufacturing. Moreover, the price of ceramic tiles depends on purity of texture, accuracy of color, shape, etc. Considering these criteria, an automated defect detection and classification technique is proposed in this report that ensures better tile quality in the manufacturing process as well as a better production rate. Our proposed method plays an important role in ceramic tile industries for detecting defects and controlling the quality of ceramic tiles. This automated classification method helps us acquire knowledge about the pattern of a defect within a very short period of time and decide on the recovery process, so that defective tiles are not mixed with fresh tiles. | Automatic Defect Detection and Classification Technique from Image: A Special Case Using Ceramic Tiles | 3,057 |
Image segmentation techniques are predominately based on parameter-laden optimization. The objective function typically involves weights for balancing competing image fidelity and segmentation regularization cost terms. Setting these weights suitably has been a painstaking, empirical process. Even if such ideal weights are found for a novel image, most current approaches fix the weight across the whole image domain, ignoring the spatially-varying properties of object shape and image appearance. We propose a novel technique that autonomously balances these terms in a spatially-adaptive manner through the incorporation of image reliability in a graph-based segmentation framework. We validate on synthetic data achieving a reduction in mean error of 47% (p-value << 0.05) when compared to the best fixed parameter segmentation. We also present results on medical images (including segmentations of the corpus callosum and brain tissue in MRI data) and on natural images. | Automatic Spatially-Adaptive Balancing of Energy Terms for Image Segmentation | 3,058 |
The selection of the optimal feature subset and the classification have become important issues in the field of iris recognition. In this paper we propose several methods for iris feature subset selection and vector creation. The deterministic feature sequence is extracted from the iris image using the contourlet transform technique. The contourlet transform captures the intrinsic geometrical structures of the iris image. It decomposes the iris image into a set of directional sub-bands with texture details captured in different orientations at various scales. To reduce the feature vector dimensions, we use a method that extracts only the significant bits of information from the normalized iris images, ignoring fragile bits. Finally, we use an SVM (Support Vector Machine) classifier to assess the identification performance of our proposed system. Experimental results show that the proposed method reduces processing time and increases classification accuracy, and that the iris feature vector length is much smaller than in other methods. | Efficient IRIS Recognition through Improvement of Feature Extraction and subset Selection | 3,059 |
We present in this paper a new approach for hand gesture analysis that allows digit recognition. The analysis is based on extracting a set of features from a hand image and then combining them by using an induction graph. The most important features we extract from each image are the finger locations, their heights and the distance between each pair of fingers. Our approach consists of three steps: (i) hand detection and localization, (ii) finger extraction and (iii) feature identification and combination for digit recognition. Each input image is assumed to contain only one person, thus we apply a fuzzy classifier to identify the skin pixels. In the finger extraction step, we attempt to remove all the hand components except the fingers; this process is based on hand anatomy properties. The final step consists of representing the histogram of the detected fingers in order to extract features that will be used for digit recognition. The approach is invariant to scale, rotation and translation of the hand. Some experiments have been undertaken to show the effectiveness of the proposed approach. | A new approach for digit recognition based on hand gesture analysis | 3,060 |
There are many applications of graph cuts in computer vision, e.g. segmentation. We present a novel method to reformulate the NP-hard, k-way graph partitioning problem as an approximate minimal s-t graph cut problem, for which a globally optimal solution is found in polynomial time. Each non-terminal vertex in the original graph is replaced by a set of ceil(log_2(k)) new vertices. The original graph edges are replaced by new edges connecting the new vertices to each other and to only two, source s and sink t, terminal nodes. The weights of the new edges are obtained using a novel least squares solution approximating the constraints of the initial k-way setup. The minimal s-t cut labels each new vertex with a binary (s vs t) "Gray" encoding, which is then decoded into a decimal label number that assigns each of the original vertices to one of k classes. We analyze the properties of the approximation and present quantitative as well as qualitative segmentation results. | Multi-Label MRF Optimization via Least Squares s-t Cuts | 3,061 |
We describe an algorithm to enhance and binarize a fingerprint image. The algorithm is based on accurate determination of orientation flow of the ridges of the fingerprint image by computing variance of the neighborhood pixels around a pixel in different directions. We show that an iterative algorithm which captures the mutual interdependence of orientation flow computation, enhancement and binarization gives very good results on poor quality images. | An Iterative Fingerprint Enhancement Algorithm Based on Accurate Determination of Orientation Flow | 3,062
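A minimal sketch of the core idea stated above — estimating local ridge orientation as the direction of minimum gray-level variance around a pixel, since ridges vary least along their own direction; the neighborhood radius and the set of 8 quantized directions are assumptions, not the paper's exact parameters:

```python
import numpy as np

def ridge_orientation(img: np.ndarray, y: int, x: int, r: int = 8) -> float:
    """Return the angle (radians) minimizing gray-value variance along a
    line through (y, x)."""
    angles = np.linspace(0, np.pi, 8, endpoint=False)
    t = np.arange(-r, r + 1)
    best_angle, best_var = 0.0, np.inf
    for a in angles:
        ys = np.clip((y + t * np.sin(a)).astype(int), 0, img.shape[0] - 1)
        xs = np.clip((x + t * np.cos(a)).astype(int), 0, img.shape[1] - 1)
        v = np.var(img[ys, xs].astype(float))
        if v < best_var:
            best_var, best_angle = v, a
    return best_angle
```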
We consider models for which it is important, early in processing, to estimate some variables with high precision, but perhaps at relatively low rates of recall. If some variables can be identified with near certainty, then they can be conditioned upon, allowing further inference to be done efficiently. Specifically, we consider optical character recognition (OCR) systems that can be bootstrapped by identifying a subset of correctly translated document words with very high precision. This "clean set" is subsequently used as document-specific training data. While many current OCR systems produce measures of confidence for the identity of each letter or word, thresholding these confidence values, even at very high values, still produces some errors. We introduce a novel technique for identifying a set of correct words with very high precision. Rather than estimating posterior probabilities, we bound the probability that any given word is incorrect under very general assumptions, using an approximate worst case analysis. As a result, the parameters of the model are nearly irrelevant, and we are able to identify a subset of words, even in noisy documents, of which we are highly confident. On our set of 10 documents, we are able to identify about 6% of the words on average without making a single error. This ability to produce word lists with very high precision allows us to use a family of models which depends upon such clean word lists. | Bounding the Probability of Error for High Precision Recognition | 3,063 |
The ray-based 4D light field representation cannot be directly used to analyze diffractive or phase-sensitive optical elements. In this paper, we exploit tools from wave optics and extend the light field representation via a novel "light field transform". We introduce a key modification to the ray-based model to support the transform. We insert a "virtual light source", with potentially negative valued radiance for certain emitted rays. We create a look-up table of light field transformers of canonical optical elements. The two key conclusions are that (i) in free space, the 4D light field completely represents wavefront propagation via rays with real (positive as well as negative) valued radiance and (ii) at occluders, a light field composed of light field transformers plus insertion of (ray-based) virtual light sources represents resultant phase and amplitude of wavefronts. For free-space propagation, we analyze different wavefronts and coherence possibilities. For occluders, we show that the light field transform is simply based on a convolution followed by a multiplication operation. This formulation brings powerful concepts from wave optics to computer vision and graphics. We show applications in cubic-phase plate imaging and holographic displays. | Augmenting Light Field to model Wave Optics effects | 3,064
Medical image registration is a difficult problem. Not only does a registration algorithm need to capture both large and small scale image deformations, it also has to deal with global and local image intensity variations. In this paper we describe a new multiresolution elastic image registration method that addresses these difficulties. To capture large and small scale image deformations, we use both global and local affine transformation algorithms. To address global and local image intensity variations, we apply an image intensity standardization algorithm to correct intensity variations. This transforms image intensities into a standard intensity scale, which allows highly accurate registration of medical images. | Multiresolution Elastic Medical Image Registration in Standard Intensity Scale | 3,065
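A minimal sketch of one common form of intensity standardization consistent with the description above — a piecewise-linear mapping of an image's intensity percentiles onto fixed landmarks of a standard scale; the percentile and landmark choices here are assumptions, as the paper's exact algorithm is not given in the abstract:

```python
import numpy as np

def standardize_intensity(img: np.ndarray,
                          pcts=(1, 10, 50, 90, 99),
                          landmarks=(0, 100, 500, 900, 1000)) -> np.ndarray:
    """Map the image's intensity percentiles onto fixed standard-scale
    landmarks (assumed increasing) with piecewise-linear interpolation, so
    that similar intensities carry similar tissue meaning across images."""
    src = np.percentile(img, pcts)
    return np.interp(img.ravel(), src, landmarks).reshape(img.shape)
```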
In this paper, we propose three novel and important methods for the registration of histological images for 3D reconstruction. First, possible intensity variations and nonstandardness in images are corrected by an intensity standardization process which maps the image scale into a standard scale where similar intensities carry similar tissue meaning. Second, 2D histological images are mapped into a feature space where continuous variables are used as high confidence image features for accurate registration. Third, we propose an automatic best reference slice selection algorithm that improves reconstruction quality based on both image entropy and the mean square error of the registration process. We demonstrate that the choice of reference slice has a significant impact on registration error, standardization, feature space and entropy information. After 2D histological slices are registered through an affine transformation with respect to an automatically chosen reference, the 3D volume is reconstructed by co-registering 2D slices elastically. | Registration of Standardized Histological Images in Feature Space | 3,066
In this paper, we propose a computational framework for 3D volume reconstruction from 2D histological slices using registration algorithms in feature space. To improve the quality of the reconstructed 3D volume, first, intensity variations in images are corrected by an intensity standardization process which maps the image intensity scale to a standard scale where similar intensities correspond to similar tissues. Second, a subvolume approach is proposed for 3D reconstruction by dividing standardized slices into groups. Third, in order to improve the quality of the reconstruction process, an automatic best reference slice selection algorithm is developed based on an iterative assessment of image entropy and the mean square error of the registration process. Finally, we demonstrate that the choice of the reference slice has a significant impact on registration quality and the subsequent 3D reconstruction. | Fully Automatic 3D Reconstruction of Histological Images | 3,067
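A minimal sketch of the entropy half of the reference-slice criterion described above — scoring each slice by the Shannon entropy of its gray-level histogram and taking the maximizer as a candidate reference; the full algorithm also iterates on the registration MSE, which is omitted here:

```python
import numpy as np

def slice_entropy(img: np.ndarray, bins: int = 256) -> float:
    """Shannon entropy of the gray-level histogram, in bits."""
    hist, _ = np.histogram(img, bins=bins)
    p = hist[hist > 0].astype(float)
    p /= p.sum()
    return float(-(p * np.log2(p)).sum())

def pick_reference(slices) -> int:
    """Index of the most information-rich slice in a stack of 2D arrays."""
    return int(np.argmax([slice_entropy(s) for s in slices]))
```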
In this paper, the problem of automatic Gabor wavelet selection for face recognition is tackled by introducing an automatic algorithm based on the Parallel AdaBoost method. Incorporating mutual information into the algorithm makes the selection procedure depend not only on classification accuracy but also on efficiency. Effective image features are selected by using properly chosen Gabor wavelets optimised with the Parallel AdaBoost method and mutual information, yielding high recognition rates at low computational cost. Experiments are conducted using the well-known FERET face database. In the proposed framework, memory and computation costs are reduced significantly and high classification accuracy is obtained. | Parallel AdaBoost Algorithm for Gabor Wavelet Selection in Face Recognition | 3,068
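For reference, a minimal sketch of the Gabor wavelets being selected — a standard 2D Gabor kernel parameterized by wavelength, orientation, and bandwidth; the specific parameter grid the paper's AdaBoost searches over is not given in the abstract:

```python
import numpy as np

def gabor_kernel(ksize=31, theta=0.0, lam=8.0, sigma=4.0, gamma=0.5, psi=0.0):
    """Real part of a 2D Gabor filter: a Gaussian envelope times a cosine
    carrier at orientation theta and wavelength lam."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + (gamma * yr)**2) / (2 * sigma**2)) \
           * np.cos(2 * np.pi * xr / lam + psi)
```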
We present BEAMER: a new spatially exploitative approach to learning object detectors which shows excellent results when applied to the task of detecting objects in greyscale aerial imagery in the presence of ambiguous and noisy data. There are four main contributions used to produce these results. First, we introduce a grammar-guided feature extraction system, enabling the exploration of a richer feature space while constraining the features to a useful subset. This is specified with a rule-based generative grammar crafted by a human expert. Second, we learn a classifier on this data using a newly proposed variant of AdaBoost which takes into account the spatially correlated nature of the data. Third, we perform another round of training to optimize the method of converting the pixel classifications generated by boosting into a high quality set of (x, y) locations. Lastly, we carefully define three common problems in object detection and define two evaluation criteria that are tightly matched to these problems. Major strengths of this approach are: (1) a way of randomly searching a broad feature space, (2) its performance when evaluated on well-matched evaluation criteria, and (3) its use of the location prediction domain to learn object detectors as well as to generate detections that perform well on several tasks: object counting, tracking, and target detection. We demonstrate the efficacy of BEAMER with a comprehensive experimental evaluation on a challenging data set. | Learning Object Location Predictors with Boosting and Grammar-Guided Feature Extraction | 3,069
We present in this paper a biometric system of face detection and recognition in color images. The face detection technique is based on skin color information and fuzzy classification. A new algorithm is proposed in order to detect automatically face features (eyes, mouth and nose) and extract their corresponding geometrical points. These fiducial points are described by sets of wavelet components which are used for recognition. To achieve face recognition, we use neural networks and study their performance for different inputs. We compare the two types of features used for recognition: geometric distances and Gabor coefficients, which can be used either independently or jointly. This comparison shows that Gabor coefficients are more powerful than geometric distances. Experimental results show that the high recognition rate makes our system an effective tool for automatic face detection and recognition. | Automatic local Gabor Features extraction for face recognition | 3,070
A robust classification method is developed on the basis of sparse subspace decomposition. This method tries to decompose a mixture of subspaces of unlabeled data (queries) into as few class subspaces as possible. Each query is classified into the class whose subspace contributes most significantly to the decomposed subspace. Multiple queries from different classes can be simultaneously classified into their respective classes. A practical greedy algorithm for the sparse subspace decomposition is designed for the classification. The present method achieves a high recognition rate and robust performance by exploiting joint sparsity. | Multiple pattern classification by sparse subspace decomposition | 3,071
We examine various geometric active contour methods for radar image segmentation. Due to special properties of radar images, we propose a new model based on a modified Chan-Vese functional. Our method is efficient in separating non-meteorological noise from meteorological images. | Segmentation for radar images based on active contour | 3,072
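For reference, the classical Chan-Vese functional that the model above modifies (the modification itself is not specified in the abstract); $u_0$ is the radar image, $C$ the evolving contour, and $c_1, c_2$ the mean intensities inside and outside $C$:

\[
E(c_1, c_2, C) = \mu\,\mathrm{Length}(C)
+ \lambda_1 \int_{\mathrm{inside}(C)} |u_0(x) - c_1|^2\,dx
+ \lambda_2 \int_{\mathrm{outside}(C)} |u_0(x) - c_2|^2\,dx.
\]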
A hierarchical interval subdivision is shown to lead to a $p$-adic encoding of image data. In the case of the relative pose problem in computer vision and photogrammetry, this allows one to derive equations having 2-adic numbers as coefficients, and to use Hensel's lifting method for their solution. This method is applied to the linear and non-linear equations coming from eight, seven or five point correspondences. An inherent property of the method is its robustness. | A dyadic solution of relative pose problems | 3,073
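A minimal sketch of Hensel lifting for a univariate polynomial, the mechanism invoked above: a simple root $r$ of $f$ modulo $p$ (with $f'(r)$ invertible mod $p$) is lifted one power of $p$ at a time to a root modulo $p^k$. The polynomial here is an illustrative stand-in, not one of the pose equations, and the prime is 7 rather than 2 because lifting square roots at $p=2$ needs the stronger form of Hensel's lemma:

```python
def hensel_lift(f, df, r, p, k):
    """Lift a root r of f mod p (f'(r) invertible mod p, i.e. a simple
    root) to a root mod p**k, one power of p at a time."""
    inv = pow(df(r) % p, -1, p)           # (f'(r))^{-1} mod p, fixed throughout
    m = p
    for _ in range(k - 1):
        t = (-(f(r) // m) * inv) % p      # next p-adic correction digit
        r = r + t * m
        m *= p
    return r % m

f  = lambda x: x * x - 2
df = lambda x: 2 * x
r4 = hensel_lift(f, df, 3, 7, 4)          # a square root of 2 in Z/7^4
assert (r4 * r4 - 2) % 7**4 == 0
```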
Neural networks have been used for character recognition for many years, but most of the work has been confined to English character recognition. To date, very little work has been reported on handwritten Farsi character recognition. In this paper, we attempt to recognize handwritten Farsi characters by using a multilayer perceptron (MLP) with one hidden layer. The error backpropagation algorithm has been used to train the MLP network. In addition, an analysis has been carried out to determine the number of hidden nodes needed to achieve high performance of the backpropagation network in the recognition of handwritten Farsi characters. The system has been trained using several different forms of handwriting provided by both male and female participants of different age groups. This rigorous training results in an automatic HCR system using an MLP network. In this work, the experiments were carried out on two hundred fifty samples from five writers. The results showed that MLP networks trained by the error backpropagation algorithm are superior in recognition accuracy and memory usage, and that the backpropagation network provides a good recognition accuracy of more than 80% on handwritten Farsi characters. | Handwritten Farsi Character Recognition using Artificial Neural Network | 3,074
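A minimal sketch of the classifier architecture described above — one hidden layer trained with error backpropagation; scikit-learn's MLPClassifier is used as a stand-in, and the 100-node hidden layer, pixel-vector inputs, and class count are assumptions:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# X: flattened character images (hypothetical 32x32 glyphs),
# y: Farsi character class labels.
rng = np.random.default_rng(0)
X = rng.random((250, 32 * 32))
y = rng.integers(0, 32, size=250)

mlp = MLPClassifier(hidden_layer_sizes=(100,),   # one hidden layer
                    activation="logistic",
                    solver="sgd",                # backpropagation-style SGD
                    learning_rate_init=0.1,
                    max_iter=500)
mlp.fit(X, y)
print("training accuracy:", mlp.score(X, y))
```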
By a "covering" we mean a Gaussian mixture model fit to observed data. Approximations of the Bayes factor can be availed of to judge model fit to the data within a given Gaussian mixture model. Between families of Gaussian mixture models, we propose the R\'enyi quadratic entropy as an excellent and tractable model comparison framework. We exemplify this using the segmentation of an MRI image volume, based (1) on a direct Gaussian mixture model applied to the marginal distribution function, and (2) Gaussian model fit through k-means applied to the 4D multivalued image volume furnished by the wavelet transform. Visual preference for one model over another is not immediate. The R\'enyi quadratic entropy allows us to show clearly that one of these modelings is superior to the other. | Scale-Based Gaussian Coverings: Combining Intra and Inter Mixture Models
in Image Segmentation | 3,075 |
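What makes the R\'enyi quadratic entropy tractable here is its closed form for a Gaussian mixture $p(x) = \sum_i w_i\,\mathcal{N}(x;\mu_i,\Sigma_i)$, since the integral of a product of two Gaussians is itself a Gaussian evaluation:

\[
H_2(p) = -\log \int p(x)^2\,dx
= -\log \sum_{i}\sum_{j} w_i w_j\, \mathcal{N}\big(\mu_i;\,\mu_j,\,\Sigma_i+\Sigma_j\big).
\]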
Multi-manifold modeling is increasingly used in segmentation and data representation tasks in computer vision and related fields. While the general problem, modeling data by mixtures of manifolds, is very challenging, several approaches exist for modeling data by mixtures of affine subspaces (which is often referred to as hybrid linear modeling). We translate some important instances of multi-manifold modeling to hybrid linear modeling in embedded spaces, without explicitly performing the embedding but applying the kernel trick. The resulting algorithm, Kernel Spectral Curvature Clustering, uses kernels at two levels - both as an implicit embedding method to linearize nonflat manifolds and as a principled method to convert a multiway affinity problem into a spectral clustering one. We demonstrate the effectiveness of the method by comparing it with other state-of-the-art methods on both synthetic data and a real-world problem of segmenting multiple motions from two perspective camera views. | Kernel Spectral Curvature Clustering (KSCC) | 3,076 |
We apply the Spectral Curvature Clustering (SCC) algorithm to a benchmark database of 155 motion sequences, and show that it outperforms all other state-of-the-art methods. The average misclassification rate by SCC is 1.41% for sequences having two motions and 4.85% for three motions. | Motion Segmentation by SCC on the Hopkins 155 Database | 3,077 |
A method to extract and recognize isolated characters in license plates is proposed. In the extraction stage, the proposed method detects isolated characters by using the Difference-of-Gaussian (DOG) function. The DOG function, similar to the Laplacian of Gaussian function, was proven to produce the most stable image features compared to a range of other possible image functions. The candidate characters are extracted by performing connected component analysis on DOG images at different scales. In the recognition stage, a novel feature vector named accumulated gradient projection vector (AGPV) is used to compare the candidate character with the standard ones. The AGPV is calculated by first projecting pixels of similar gradient orientations onto specific axes, and then accumulating the projected gradient magnitudes along each axis. In the experiments, the AGPVs are shown to be invariant to image scaling and rotation, and robust to noise and illumination change. | A Method for Extraction and Recognition of Isolated License Plate Characters | 3,078
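A minimal sketch of the Difference-of-Gaussian response used in the extraction stage, assuming grayscale input; the sigma pair is an assumption, as the paper works over several scales:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog(img: np.ndarray, sigma: float = 1.6, k: float = 1.6) -> np.ndarray:
    """Difference of two Gaussian blurs: a band-pass response that peaks on
    blob-like structures such as isolated characters."""
    img = img.astype(float)
    return gaussian_filter(img, sigma) - gaussian_filter(img, k * sigma)

# Candidate regions would then come from thresholding this response and
# running connected-component analysis (e.g. scipy.ndimage.label).
```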
The size and geometry of the prostate are known to be pivotal quantities used by clinicians to assess the condition of the gland during prostate cancer screening. As an alternative to palpation, an increasing number of methods for estimation of the above-mentioned quantities are based on using imagery data of prostate. The necessity to process large volumes of such data creates a need for automatic segmentation tools which would allow the estimation to be carried out with maximum accuracy and efficiency. In particular, the use of transrectal ultrasound (TRUS) imaging in prostate cancer screening seems to be becoming a standard clinical practice due to the high benefit-to-cost ratio of this imaging modality. Unfortunately, the segmentation of TRUS images is still hampered by relatively low contrast and reduced SNR of the images, thereby requiring the segmentation algorithms to incorporate prior knowledge about the geometry of the gland. In this paper, a novel approach to the problem of segmenting the TRUS images is described. The proposed approach is based on the concept of distribution tracking, which provides a unified framework for modeling and fusing image-related and morphological features of the prostate. Moreover, the same framework allows the segmentation to be regularized via using a new type of "weak" shape priors, which minimally bias the estimation procedure, while rendering the latter stable and robust. | Information tracking approach to segmentation of ultrasound imagery of prostate | 3,079
The problem of reconstruction of digital images from their degraded measurements is regarded as a problem of central importance in various fields of engineering and imaging sciences. In such cases, the degradation is typically caused by the resolution limitations of an imaging device in use and/or by the destructive influence of measurement noise. Specifically, when the noise obeys a Poisson probability law, standard approaches to the problem of image reconstruction are based on using fixed-point algorithms which follow the methodology first proposed by Richardson and Lucy. The practice of using these methods, however, shows that their convergence properties tend to deteriorate at relatively high noise levels. Accordingly, in the present paper, a novel method for de-noising and/or de-blurring of digital images corrupted by Poisson noise is introduced. The proposed method is derived under the assumption that the image of interest can be sparsely represented in the domain of a linear transform. Consequently, a shrinkage-based iterative procedure is proposed, which guarantees the solution to converge to the global maximizer of an associated maximum-a-posteriori criterion. It is shown in a series of both computer-simulated and real-life experiments that the proposed method outperforms a number of existing alternatives in terms of stability, precision, and computational efficiency. | Iterative Shrinkage Approach to Restoration of Optical Imagery | 3,080 |
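For reference, the Richardson-Lucy fixed-point iteration cited above as the standard approach, with blur operator $H$, observation $y$, and elementwise multiplication $\odot$ and division; the proposed shrinkage method replaces this scheme:

\[
x^{(k+1)} = x^{(k)} \odot H^{\top}\!\left(\frac{y}{H x^{(k)}}\right).
\]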
We present a new modular traffic sign recognition system, successfully applied to both American and European speed limit signs. Our sign detection step is based only on shape detection (rectangles or circles). This enables it to work on grayscale images, unlike most European competitors, which eases robustness to illumination conditions (notably night operation). Speed sign candidates are classified (or rejected) by segmenting potential digits inside them (which is rather original and has several advantages), and then applying neural digit recognition. The global detection rate is ~90% for both (standard) U.S. and E.U. speed signs, with a misclassification rate <1%, and no validated false alarm in >150 minutes of video. The system processes ~20 frames/s in real time on a standard high-end laptop. | Modular Traffic Sign Recognition applied to on-vehicle real-time visual detection of American and European speed limit signs | 3,081
Radiofrequency (RF) catheter ablation has transformed treatment for tachyarrhythmias and has become first-line therapy for some tachycardias. The precise localization of the arrhythmogenic site and the positioning of the RF catheter over that site are problematic: they can impair the efficiency of the procedure and are time consuming (several hours). Electroanatomic mapping technologies are available that enable the display of the cardiac chambers and the relative position of ablation lesions. However, these are expensive and use custom-made catheters. The proposed methodology makes use of standard catheters and inexpensive technology in order to create a 3D volume of the heart chamber affected by the arrhythmia. Further, we propose a novel method that uses a priori 3D information of the mapping catheter in order to estimate the 3D locations of multiple electrodes across single view C-arm images. The monoplane algorithm is tested for feasibility on computer simulations and initial canine data. | 3D/2D Registration of Mapping Catheter Images for Arrhythmia Interventional Assistance | 3,082
With the advancement of image capture devices, image data is being generated in high volume. If images are analyzed properly, they can reveal useful information to human users. Content-based image retrieval addresses the problem of retrieving images relevant to the user's needs from image databases on the basis of low-level visual features that can be derived from the images. Grouping images into meaningful categories to reveal useful information is a challenging and important problem. Clustering is a data mining technique to group a set of unsupervised data based on the conceptual clustering principle: maximizing the intraclass similarity and minimizing the interclass similarity. The proposed framework focuses on color as the feature. Color Moment and Block Truncation Coding (BTC) are used to extract features from the image dataset. An experimental study using the K-Means clustering algorithm is conducted to group the image dataset into various clusters. | Color Image Clustering using Block Truncation Algorithm | 3,083
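A minimal sketch of the color-moment half of the pipeline above — the first three moments (mean, standard deviation, signed skewness) per RGB channel, clustered with scikit-learn's K-Means; the cluster count and the synthetic images are assumptions, and the BTC features are omitted:

```python
import numpy as np
from sklearn.cluster import KMeans

def color_moments(img: np.ndarray) -> np.ndarray:
    """9-D feature: mean, std and (signed cube-root) skewness per channel."""
    ch = img.reshape(-1, 3).astype(float)
    mean = ch.mean(axis=0)
    std = ch.std(axis=0)
    skew = np.cbrt(((ch - mean) ** 3).mean(axis=0))
    return np.concatenate([mean, std, skew])

# images: list of HxWx3 arrays (hypothetical dataset)
rng = np.random.default_rng(0)
images = [rng.integers(0, 256, (64, 64, 3)) for _ in range(100)]
X = np.stack([color_moments(im) for im in images])
labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(X)
```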
There are many resources useful for processing images, most of them freely available and quite friendly to use. In spite of this abundance of tools, a study of the processing methods is still worthy of effort. Here, we discuss the possibilities arising from the use of fractional differential calculus. This calculus evolved in the research field of pure mathematics until 1920, when applied science started to use it. Only recently has fractional calculus been involved in image processing methods. As we shall see, fractional calculation is able to enhance the quality of images, with interesting possibilities in edge detection and image restoration. We also suggest fractional differentiation as a tool to reveal faint objects in astronomical images. | Fractional differentiation based image processing | 3,084
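A minimal sketch of a Grünwald-Letnikov fractional-derivative mask, a representative discrete construction for fractional-order edge enhancement (the paper's exact operator is not specified in the abstract): the weights are $c_k = (-1)^k \binom{\nu}{k}$, computed by the recursion $c_0 = 1$, $c_k = c_{k-1}(k-1-\nu)/k$.

```python
import numpy as np

def gl_mask(nu: float, length: int = 8) -> np.ndarray:
    """Grunwald-Letnikov weights c_k = (-1)^k * binom(nu, k)."""
    c = np.empty(length)
    c[0] = 1.0
    for k in range(1, length):
        c[k] = c[k - 1] * (k - 1 - nu) / k
    return c

def frac_derivative_rows(img: np.ndarray, nu: float = 0.5) -> np.ndarray:
    """Apply the 1-D GL mask along image rows; 0 < nu < 1 sharpens edges
    while retaining more low-frequency texture than an integer derivative."""
    mask = gl_mask(nu)
    return np.apply_along_axis(lambda r: np.convolve(r, mask, mode="same"),
                               1, img.astype(float))
```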
Background subtraction has been a driving engine for many computer vision and video analytics tasks. Although its many variants exist, they all share the underlying assumption that photometric scene properties are either static or exhibit temporal stationarity. While this works in some applications, the model fails when one is interested in discovering {\it changes in scene dynamics} rather than those in a static background; detection of unusual pedestrian and motor traffic patterns is but one example. We propose a new model and computational framework that address this failure by considering stationary scene dynamics as a ``background'' with which observed scene dynamics are compared. Central to our approach is the concept of an {\it event}, which we define as short-term scene dynamics captured over a time window at a specific spatial location in the camera field of view. We compute events by time-aggregating motion labels, obtained by background subtraction, as well as object descriptors (e.g., object size). Subsequently, we characterize events probabilistically, but use low-memory, low-complexity surrogates in practical implementation. Using these surrogates amounts to {\it behavior subtraction}, a new algorithm with some surprising properties. As demonstrated here, behavior subtraction is an effective tool in anomaly detection and localization. It is resilient to spurious background motion, such as that due to camera jitter, and is content-blind, i.e., it works equally well on humans, cars, animals, and other objects in both uncluttered and highly-cluttered scenes. Clearly, treating video as a collection of events rather than colored pixels opens new possibilities for video analytics. | Behavior Subtraction | 3,085
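A minimal sketch of the event computation described above — time-aggregating binary motion labels over a sliding window and flagging locations whose observed activity exceeds the stationary "background" activity; the window length and threshold are assumptions, and the object-descriptor channel is omitted:

```python
import numpy as np

def behavior_subtraction(fg_masks: np.ndarray, bg_activity: np.ndarray,
                         window: int = 30, thresh: float = 10.0) -> np.ndarray:
    """fg_masks: (T, H, W) binary motion labels from background subtraction.
    bg_activity: (H, W) per-pixel activity aggregated during a training
    period of stationary scene dynamics. Returns a binary anomaly map."""
    observed = fg_masks[-window:].sum(axis=0).astype(float)
    return observed > bg_activity + thresh
```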
A $p$-adic variation of the Ran(dom) Sa(mple) C(onsensus) method for solving the relative pose problem in stereo vision is developed. From two 2-adically encoded images a random sample of five pairs of corresponding points is taken, and the equations for the essential matrix are solved by lifting solutions modulo 2 to the 2-adic integers. A recently devised $p$-adic hierarchical classification algorithm imitating the known LBG quantisation method classifies the solutions for all the samples, after having determined the number of clusters using a known intra-inter validity measure for clusterings. In the successful case, a cluster ranking determines the cluster containing a 2-adic approximation to the "true" solution of the problem. | A $p$-adic RanSaC algorithm for stereo vision using Hensel lifting | 3,086
The problem of restoration of digital images from their degraded measurements plays a central role in a multitude of practically important applications. A particularly challenging instance of this problem occurs in the case when the degradation phenomenon is modeled by an ill-conditioned operator. In such a case, the presence of noise makes it impossible to recover a valuable approximation of the image of interest without using some a priori information about its properties. Such a priori information is essential for image restoration, rendering it stable and robust to noise. Particularly, if the original image is known to be a piecewise smooth function, one of the standard priors used in this case is defined by the Rudin-Osher-Fatemi model, which results in total variation (TV) based image restoration. The current arsenal of algorithms for TV-based image restoration is vast. In the present paper, a different approach to the solution of the problem is proposed based on the method of iterative shrinkage (aka iterated thresholding). In the proposed method, the TV-based image restoration is performed through a recursive application of two simple procedures, viz. linear filtering and soft thresholding. Therefore, the method can be identified as belonging to the group of first-order algorithms, which are efficient in dealing with images of relatively large sizes. Another valuable feature of the proposed method consists in its working directly with the TV functional, rather than with its smoothed versions. Moreover, the method provides a single solution for both isotropic and anisotropic definitions of the TV functional, thereby establishing a useful connection between the two formulae. | An Iterative Shrinkage Approach to Total-Variation Image Restoration | 3,087
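A minimal sketch of the two-step recursion named above — linear filtering followed by soft thresholding — in generic iterative-shrinkage (ISTA-style) form for an observation $y = Hx + n$; the operator, step size, and threshold are placeholders, and the paper's TV-specific filtering is not reproduced:

```python
import numpy as np

def soft(x: np.ndarray, tau: float) -> np.ndarray:
    """Soft-thresholding (shrinkage) operator."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def iterative_shrinkage(y, H, Ht, tau=0.1, step=1.0, n_iter=100):
    """Alternate a gradient (filtering) step on ||y - Hx||^2 with shrinkage.
    H and Ht are callables implementing the operator and its adjoint."""
    x = Ht(y)
    for _ in range(n_iter):
        x = soft(x + step * Ht(y - H(x)), tau)
    return x
```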
A new method for ship wake detection in synthetic aperture radar (SAR) images is explored here. Most detection procedures apply the Radon transform, as its properties suit the detection purpose better than any other transformation. Problems remain, however, when the transform is applied to an image with a high level of noise. This paper therefore combines the Radon transform with shrinkage methods, which improves the wake detection process. Pairing the shrinkage method with the Radon transform maximizes the signal-to-noise ratio and hence leads to near-optimal detection of lines in SAR images. The originality lies mainly in the denoising segment of the proposed algorithm. Experiments are carried out on both simulated and real SAR images. The detection process is more reliable with the proposed method and improves on conventional methods. | An Optimal Method For Wake Detection In SAR Images Using Radon Transformation Combined With Wavelet Filters | 3,088
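A minimal sketch of the pipeline described above — wavelet soft-threshold denoising followed by a Radon transform whose strongest response indicates the dominant line (wake); the wavelet family, threshold rule, and angle grid are assumptions:

```python
import numpy as np
import pywt
from skimage.transform import radon

def detect_wake(img: np.ndarray, tau: float = 0.1):
    """Denoise via soft-thresholded wavelet detail coefficients, then locate
    the strongest line as the peak of the Radon transform (angle, offset)."""
    coeffs = pywt.wavedec2(img.astype(float), "db2", level=2)
    denoised = [coeffs[0]] + [
        tuple(pywt.threshold(d, tau * np.abs(d).max(), mode="soft") for d in lvl)
        for lvl in coeffs[1:]
    ]
    clean = pywt.waverec2(denoised, "db2")
    theta = np.linspace(0.0, 180.0, 180, endpoint=False)
    sino = radon(clean, theta=theta)
    offset, angle = np.unravel_index(np.argmax(np.abs(sino)), sino.shape)
    return theta[angle], offset
```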
This paper presents an algorithm which aims to assist the radiologist in identifying breast cancer at its early stages. It combines several image processing techniques, such as image negative, thresholding and segmentation, for the detection of tumors in mammograms. The algorithm is verified using mammograms from the Mammographic Image Analysis Society. The results obtained by applying these techniques are described. | Breast Cancer Detection Using Multilevel Thresholding | 3,089
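A minimal sketch of the image-negative and multilevel-thresholding steps named above, assuming an 8-bit grayscale mammogram; the threshold levels are placeholders rather than the paper's values:

```python
import numpy as np

def negative(img: np.ndarray) -> np.ndarray:
    """Photographic negative of an 8-bit image: bright masses become dark."""
    return 255 - img

def multilevel_threshold(img: np.ndarray, levels=(80, 160, 220)) -> np.ndarray:
    """Quantize intensities into len(levels)+1 bands; the top band is a
    candidate tumor region in a mammogram."""
    return np.digitize(img, bins=np.asarray(levels))
```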
The paper describes an image processing method for non-photorealistic rendering. The algorithm is based on a random choice of a set of pixels from those of the original image and the substitution of them with colour spots. An iterative procedure is applied to cover the canvas at a desired level. The resulting effect mimics impressionist painting and Pointillism. | Non-photorealistic image processing: an Impressionist rendering | 3,090
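A minimal sketch of the rendering loop described above — sample random pixel positions, read the source colour there, and paint a spot at that location on a blank canvas; the spot radius and iteration count are assumptions:

```python
import random
from PIL import Image, ImageDraw

def impressionist(src: Image.Image, n_spots: int = 20000, r: int = 3) -> Image.Image:
    """Cover a blank canvas with colour spots sampled from the source image."""
    w, h = src.size
    canvas = Image.new("RGB", (w, h), "white")
    draw = ImageDraw.Draw(canvas)
    px = src.convert("RGB").load()
    for _ in range(n_spots):
        x, y = random.randrange(w), random.randrange(h)
        draw.ellipse([x - r, y - r, x + r, y + r], fill=px[x, y])
    return canvas
```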
Recognition of the iris based on Visible Light (VL) imaging is a difficult problem because of light reflection from the cornea. Nonetheless, the pigment melanin provides a rich feature source in VL that is unavailable in Near-Infrared (NIR) imaging. This is due to the biological spectroscopy of eumelanin, a chemical not stimulated in NIR. In this case, a plausible solution for observing such patterns may be provided by an adaptive procedure using a variational technique on the image histogram. To describe the patterns, a shape analysis method is used to derive a feature-code for each subject. An important question is how far the melanin patterns extracted from VL are independent of the iris texture in NIR. With this question in mind, the present investigation proposes the fusion of features extracted from NIR and VL to boost recognition performance. We have collected our own database (UTIRIS) consisting of both NIR and VL images of 158 eyes of 79 individuals. This investigation demonstrates that the proposed algorithm is highly sensitive to the patterns of chromophores and improves the iris recognition rate. | Pigment Melanin: Pattern for Iris Recognition | 3,091
Multiview 3D face modeling has attracted increasing attention recently and has become one of the potential avenues in future video systems. We aim to make automatic feature extraction and natural 3D feature construction more reliable and robust, based on 2D features detected on a pair of frontal and profile view face images. We propose several heuristic algorithms to minimize possible errors introduced by the prevalent non-perfect orthogonal conditions and non-coherent luminance. In our approach, we first extract the 2D features that are visible to both cameras in both views. Then, we estimate the coordinates of the features in the hidden profile view based on the visible features extracted in the two orthogonal views. Finally, based on the coordinates of the extracted features, we deform a 3D generic model to perform the desired 3D clone modeling. The present study demonstrates the usefulness of the resulting facial models for practical applications such as face recognition and facial animation. | Sequential Clustering based Facial Feature Extraction Method for Automatic Creation of Facial Models from Orthogonal Views | 3,092
In this paper, we present a system for modelling vehicle motion in an urban scene from low frame-rate aerial video. In particular, the scene is modelled as a probability distribution over velocities at every pixel in the image. We describe the complete system for acquiring this model. The video is captured from a helicopter and stabilized by warping the images to match an orthorectified image of the area. A pixel classifier is applied to the stabilized images, and the response is segmented to determine car locations and orientations. The results are fed into a tracking scheme which tracks cars for three frames, creating tracklets. This allows the tracker to use a combination of velocity, direction, appearance, and acceleration cues to keep only tracks likely to be correct. Each tracklet provides a measurement of the car velocity at every point along the tracklet's length, and these are then aggregated to create a histogram of vehicle velocities at every pixel in the image. The results demonstrate that the velocity probability distribution prior can be used to infer a variety of information about road lane directions, speed limits, vehicle speeds and common trajectories, and traffic bottlenecks, as well as providing a means of describing environmental knowledge about traffic rules that can be used in tracking. | Automatic creation of urban velocity fields from aerial video | 3,093
Finding the three-dimensional representation of all or a part of a scene from a single two dimensional image is a challenging task. In this paper we propose a method for identifying the pose and location of objects with circular protrusions in three dimensions from a single image and a 3d representation or model of the object of interest. To do this, we present a method for identifying ellipses and their properties quickly and reliably with a novel technique that exploits intensity differences between objects and a geometric technique for matching an ellipse in 2d to a circle in 3d. We apply these techniques to the specific problem of determining the pose and location of vehicles, particularly cars, from a single image. We have achieved excellent pose recovery performance on artificially generated car images and show promising results on real vehicle images. We also make use of the ellipse detection method to identify car wheels from images, with a very high successful match rate. | Matching 2-D Ellipses to 3-D Circles with Application to Vehicle Pose Estimation | 3,094
Various types of noise are a major problem in the recognition of electromyography (EMG) signals. Hence, methods to remove noise are most significant in EMG signal analysis. White Gaussian noise (WGN) is used to represent interference in this paper. In general, WGN is difficult to remove by typical filtering, and solutions for removing it are limited. In addition, noise removal is an important step before performing feature extraction, which is used in EMG-based recognition. This research aims to present novel features that tolerate WGN, so that a noise removal algorithm is not needed. Two novel mean and median frequency features (MMNF and MMDF) are presented for robust feature extraction. Sixteen existing features and the two novel ones are evaluated in a noisy environment. WGN with various signal-to-noise ratios (SNRs), i.e. 20-0 dB, was added to the original EMG signal. The results showed that MMNF performed very well, especially on weak EMG signals, compared with the others. The error of MMNF on a weak EMG signal with very high noise, 0 dB SNR, is about 5-10 percent, followed closely by MMDF and Histogram, whereas the error of the other features is more than 20 percent. On strong EMG signals, the error of MMNF is also better than that of the other features. Moreover, the combination of MMNF, Histogram of EMG and Willison amplitude is used as the feature vector in a classification task. The experimental results show better recognition in a noisy environment than other successful feature candidates. These results demonstrate that MMNF can be used for new robust feature extraction. | A Novel Feature Extraction for Robust EMG Pattern Recognition | 3,095
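For reference, a minimal sketch of the classical mean frequency (MNF) and median frequency (MDF) features that MMNF and MMDF build on; the `use_amplitude` switch reflects one commonly reported way of forming the modified variants (replacing the power spectrum with the amplitude spectrum), which is an assumption here rather than a detail taken from the abstract:

```python
import numpy as np

def mean_median_freq(x: np.ndarray, fs: float, use_amplitude: bool = False):
    """MNF = sum(f*P)/sum(P); MDF = frequency splitting spectral mass in
    half. With use_amplitude=True the amplitude spectrum replaces the
    power spectrum (one reported route to the modified MMNF/MMDF)."""
    spec = np.abs(np.fft.rfft(x))
    P = spec if use_amplitude else spec**2
    f = np.fft.rfftfreq(len(x), d=1.0 / fs)
    mnf = (f * P).sum() / P.sum()
    mdf = f[np.searchsorted(np.cumsum(P), P.sum() / 2.0)]
    return mnf, mdf
```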
We propose to use novel and classical audio and text signal-processing and other techniques for "inexpensive" fast writer identification of scanned hand-written documents "visually". "Inexpensive" refers to the efficiency of the identification process in terms of CPU cycles while preserving decent accuracy for preliminary identification. This is a comparative study of multiple algorithm combinations in a pattern recognition pipeline implemented in Java around an open-source Modular Audio Recognition Framework (MARF) that can do a lot more beyond audio. We present our preliminary experimental findings for this identification task. We simulate "visual" identification by "looking" at the hand-written document as a whole rather than trying to extract fine-grained features out of it prior to classification. | Writer Identification Using Inexpensive Signal Processing Techniques | 3,096
Vector quantization (VQ) is a lossy data compression technique from signal processing, for which simple competitive learning is one standard method for quantizing patterns from the input space. Extending competitive learning VQ to the domain of graphs results in competitive learning for quantizing input graphs. In this contribution, we propose an accelerated version of competitive learning graph quantization (GQ) that does not trade computational time against solution quality. For this, we lift graphs locally to vectors in order to avoid unnecessary calculations of intractable graph distances. In doing so, the accelerated version of competitive learning GQ gradually turns locally into competitive learning VQ as the number of iterations increases. Empirical results show a significant speedup while maintaining comparable solution quality. | Accelerating Competitive Learning Graph Quantization | 3,097
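For reference, a minimal sketch of the simple competitive-learning VQ rule that the graph variant extends — the winning codebook vector moves a step toward each presented pattern; the learning rate, codebook size, and epoch count are assumptions:

```python
import numpy as np

def competitive_learning_vq(X: np.ndarray, k: int = 8, eta: float = 0.05,
                            epochs: int = 20, seed: int = 0) -> np.ndarray:
    """Winner-take-all codebook update: w_win += eta * (x - w_win)."""
    rng = np.random.default_rng(seed)
    W = X[rng.choice(len(X), size=k, replace=False)].astype(float).copy()
    for _ in range(epochs):
        for x in X[rng.permutation(len(X))]:
            win = np.argmin(((W - x) ** 2).sum(axis=1))
            W[win] += eta * (x - W[win])
    return W
```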
The k-nearest neighbors (k-NN) classification rule has proven extremely successful in countless computer vision applications. For example, image categorization often relies on uniform voting among the nearest prototypes in the space of descriptors. In spite of its good properties, the classic k-NN rule suffers from high variance when dealing with sparse prototype datasets in high dimensions. A few techniques have been proposed to improve k-NN classification, which rely on either deforming the nearest neighborhood relationship or modifying the input space. In this paper, we propose a novel boosting algorithm, called UNN (Universal Nearest Neighbors), which induces leveraged k-NN, thus generalizing the classic k-NN rule. We redefine the voting rule as a strong classifier that linearly combines predictions from the k closest prototypes. Weak classifiers are learned by UNN so as to minimize a surrogate risk. A major feature of UNN is the ability to learn which prototypes are the most relevant for a given class, thus allowing for effective data reduction. Experimental results on the synthetic two-class dataset of Ripley show that such a filtering strategy is able to reject "noisy" prototypes. We carried out image categorization experiments on a database containing eight classes of natural scenes. We show that our method significantly outperforms classic k-NN classification, while enabling a significant reduction of the computational cost by means of data filtering. | Boosting k-NN for categorization of natural scenes | 3,098
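A minimal sketch of the leveraged voting rule described above — a linear combination of the k nearest prototypes' labels with per-prototype leverage coefficients; here the coefficients `alpha` are supplied as input, whereas UNN learns them by boosting against a surrogate risk (setting them all to one recovers classic k-NN):

```python
import numpy as np

def leveraged_knn_predict(x, prototypes, labels, alpha, k=5):
    """Class scores h_c(x) = sum over the k nearest prototypes j of
    alpha[j] * [labels[j] == c]; returns the arg-max class."""
    d = ((prototypes - x) ** 2).sum(axis=1)
    nn = np.argsort(d)[:k]
    scores = np.zeros(labels.max() + 1)
    for j in nn:
        scores[labels[j]] += alpha[j]
    return int(scores.argmax())
```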
The need for sign language recognition is increasing rapidly, especially in the hearing-impaired community. Only a few research groups have tried to automatically recognize sign language from video, colored gloves, etc. Their approaches require a valid segmentation of the data that is used for training and of the data that is to be recognized. Recognition of a sign language image sequence is challenging because of the variety of hand shapes and hand motions. This paper proposes to apply a combination of image segmentation and restoration using topological derivatives to achieve high recognition accuracy. Image quality measures are employed here to differentiate the methods both subjectively and objectively. Experiments show that the additional use of restoration before segmenting the postures significantly improves the correct rate of hand detection, and that the discrete derivatives yield a high rate of discrimination between different static hand postures as well as between hand postures and the scene background. Ultimately, this research contributes to the implementation of an automated sign language recognition system established mainly for welfare purposes. | A Topological derivative based image segmentation for sign language recognition system using isotropic filter | 3,099