Abstract: A field known as Compressive Sensing (CS) has recently emerged to help address the growing challenges of capturing and processing high-dimensional signals and data sets. CS exploits the surprising fact that the information contained in a sparse signal can be preserved in a small number of compressive (or random) linear measurements of that signal. Strong theoretical guarantees have been established on the accuracy to which sparse or near-sparse signals can be recovered from noisy compressive measurements. In this paper, we address similar questions in the context of a different modeling framework. Instead of sparse models, we focus on the broad class of manifold models, which can arise in both parametric and non-parametric signal families. Building upon recent results concerning the stable embeddings of manifolds within the measurement space, we establish both deterministic and probabilistic instance-optimal bounds in $\ell_2$ for manifold-based signal recovery and parameter estimation from noisy compressive measurements. In line with analogous results for sparsity-based CS, we conclude that much stronger bounds are possible in the probabilistic setting. Our work supports the growing empirical evidence that manifold-based models can be used with high accuracy in compressive signal processing.
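For reference, here is a minimal sketch of the sparsity-based recovery setting this abstract builds on (not the paper's manifold-based method): recover a k-sparse signal from m noisy random linear measurements using orthogonal matching pursuit. The dimensions, sparsity level, noise scale, and the use of scikit-learn's OMP solver are illustrative assumptions.

```python
# Minimal sparsity-based CS sketch: recover a k-sparse signal from a few
# noisy random linear measurements via orthogonal matching pursuit.
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
n, m, k = 256, 64, 5                       # ambient dim, measurements, sparsity

x = np.zeros(n)                            # k-sparse ground-truth signal
support = rng.choice(n, size=k, replace=False)
x[support] = rng.standard_normal(k)

Phi = rng.standard_normal((m, n)) / np.sqrt(m)  # random measurement matrix
y = Phi @ x + 0.01 * rng.standard_normal(m)     # noisy compressive measurements

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k, fit_intercept=False).fit(Phi, y)
x_hat = omp.coef_
print("relative l2 error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))
```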
|
Title: The Influence of Intensity Standardization on Medical Image Registration
|
Abstract: Acquisition-to-acquisition signal intensity variations (non-standardness) are inherent in MR images. Standardization is a post-processing method for correcting inter-subject intensity variations by transforming all images from the given image gray scale into a standard gray scale in which similar intensities acquire similar tissue meanings. The lack of a standard image intensity scale in MRI leads to many difficulties in tissue characterizability, image display, and analysis, including image segmentation. This phenomenon has been well documented; however, the effects of standardization on medical image registration have not yet been studied. In this paper, we investigate the influence of intensity standardization in registration tasks with systematic and analytic evaluations involving clinical MR images. We conducted nearly 20,000 clinical MR image registration experiments and evaluated the quality of registrations both quantitatively and qualitatively. The evaluations show that intensity variations between images degrade the accuracy of registration performance. The results imply that the accuracy of image registration depends not only on spatial and geometric similarity but also on the similarity of the intensity values for the same tissues in different images.
|
Title: Ball-Scale Based Hierarchical Multi-Object Recognition in 3D Medical Images
|
Abstract: This paper investigates, using prior shape models and the concept of ball scale (b-scale), ways of automatically recognizing objects in 3D images without performing elaborate searches or optimization. That is, the goal is to place the model in a single shot close to the right pose (position, orientation, and scale) in a given image so that the model boundaries fall in the close vicinity of object boundaries in the image. This is achieved via the following set of key ideas: (a) a semi-automatic way of constructing a multi-object shape model assembly; (b) a novel strategy of encoding, via b-scale, the pose relationship between objects in the training images and their intensity patterns captured in b-scale images; (c) a hierarchical mechanism of positioning the model, in a one-shot way, in a given image from a knowledge of the learnt pose relationship and the b-scale image of the given image to be segmented. The evaluation results on a set of 20 routine clinical abdominal female and male CT data sets indicate the following: (1) Incorporating a large number of objects improves the recognition accuracy dramatically. (2) The recognition algorithm can be thought of as a hierarchical framework in which quick placement of the model assembly constitutes coarse recognition and delineation itself constitutes the finest recognition. (3) Scale yields useful information about the relationship between the model assembly and any given image, such that recognition results in a placement of the model close to the actual pose without any elaborate searches or optimization. (4) Effective object recognition can make delineation more accurate.
|
Title: A Minimum Relative Entropy Controller for Undiscounted Markov Decision Processes
|
Abstract: Adaptive control problems are notoriously difficult to solve even in the presence of plant-specific controllers. One way to bypass the intractable computation of the optimal policy is to restate the adaptive control problem as the minimization of the relative entropy of a controller that ignores the true plant dynamics from an informed controller. The solution is given by the Bayesian control rule, a set of equations characterizing a stochastic adaptive controller for the class of possible plant dynamics. Here, the Bayesian control rule is applied to derive BCR-MDP, a controller for undiscounted Markov decision processes with finite state and action spaces and unknown dynamics. In particular, we derive a non-parametric conjugate prior distribution over the policy space that encapsulates the agent's whole relevant history, and we present a Gibbs sampler to draw random policies from this distribution. Preliminary results show that BCR-MDP successfully avoids sub-optimal limit cycles, due to its built-in mechanism for balancing exploration against exploitation.
|
Title: An Improved DC Recovery Method from AC Coefficients of DCT-Transformed Images
|
Abstract: Motivated by the work of Uehara et al. [1], this paper investigates an improved method to recover DC coefficients from AC coefficients of DCT-transformed images, which finds applications in the cryptanalysis of selective multimedia encryption. The proposed under/over-flow rate minimization (FRM) method employs an optimization process to obtain a statistically more accurate estimate of the unknown DC coefficients, thus achieving better recovery performance. Experimental results on 200 test images show that the proposed DC recovery method significantly improves the quality of most recovered images in terms of PSNR values and several state-of-the-art objective image quality assessment (IQA) metrics such as SSIM and MS-SSIM.
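The under/over-flow idea can be made concrete with a single-block toy: pick the DC candidate whose inverse DCT leaves the fewest pixels outside the valid [0, 255] range. This only brackets the DC of one block; the paper's FRM method adds a statistical optimization across blocks, so everything below (the random block, search grid, and tie-breaking) is an illustrative assumption.

```python
# Toy single-block version of the under/over-flow criterion: choose the
# DC value whose reconstruction leaves the fewest pixels outside [0, 255].
import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(1)
block = rng.integers(0, 256, size=(8, 8)).astype(float)  # hypothetical pixel block
coeffs = dctn(block, norm="ortho")
true_dc = coeffs[0, 0]
ac = coeffs.copy()
ac[0, 0] = 0.0                                           # DC assumed unknown

def flow_count(dc):
    """Number of pixels driven outside [0, 255] by this DC candidate."""
    c = ac.copy()
    c[0, 0] = dc
    rec = idctn(c, norm="ortho")
    return int(np.sum((rec < 0) | (rec > 255)))

candidates = np.arange(0.0, 2041.0, 4.0)                 # coarse DC search grid
best_dc = min(candidates, key=flow_count)                # ties break toward low DC
print("true DC %.1f, recovered DC %.1f" % (true_dc, best_dc))
```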
|
Title: Cuspidal and Noncuspidal Robot Manipulators
|
Abstract: This article synthesizes the most important results on the kinematics of cuspidal manipulators, i.e., nonredundant manipulators that can change posture without meeting a singularity. The characteristic surfaces, the uniqueness domains, and the regions of feasible paths in the workspace are defined. Then, several sufficient geometric conditions for a manipulator to be noncuspidal are enumerated, and a general necessary and sufficient condition for a manipulator to be cuspidal is provided. An explicit DH-parameter-based condition for an orthogonal manipulator to be cuspidal is derived. The full classification of 3R orthogonal manipulators is provided, and all types of cuspidal and noncuspidal orthogonal manipulators are enumerated. Finally, some facts about cuspidal and noncuspidal 6R manipulators are reported.
|
Title: Position Analysis of the RRP-3(SS) Multi-Loop Spatial Structure
|
Abstract: The paper presents the position analysis of a spatial structure composed of two platforms mutually connected by one RRP and three SS serial kinematic chains, where R, P, and S stand for revolute, prismatic, and spherical kinematic pairs, respectively. A set of three compatibility equations is laid down which, following algebraic elimination, results in a 28th-order univariate algebraic equation that, in turn, provides the addressed problem with 28 solutions in the complex domain. Among the applications of the results presented in this paper is the solution of the forward kinematics of the Tricept, a well-known in-parallel-actuated spatial manipulator. Numerical examples show the application of the proposed method to two case studies.
|
Title: Online Distributed Sensor Selection
|
Abstract: A key problem in sensor networks is to decide which sensors to query when, in order to obtain the most useful information (e.g., for performing accurate prediction), subject to constraints (e.g., on power and bandwidth). In many applications the utility function is not known a priori, must be learned from data, and can even change over time. Furthermore, for large sensor networks, solving a centralized optimization problem to select sensors is not feasible, and thus we seek a fully distributed solution. In this paper, we present Distributed Online Greedy (DOG), an efficient, distributed algorithm for repeatedly selecting sensors online, receiving feedback only about the utility of the selected sensors. We prove very strong theoretical no-regret guarantees that apply whenever the (unknown) utility function satisfies a natural diminishing-returns property called submodularity. Our algorithm has extremely low communication requirements and scales well to large sensor deployments. We extend DOG to allow observation-dependent sensor selection. We empirically demonstrate the effectiveness of our algorithm on several real-world sensing tasks.
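DOG itself is distributed and online; as a sketch of the submodular machinery behind its guarantees, the toy below runs the classic centralized greedy rule (repeatedly add the sensor with the largest marginal gain) on a hypothetical coverage utility, which is monotone submodular. For such utilities the greedy rule famously achieves a (1 - 1/e) fraction of the optimal value (Nemhauser et al.).

```python
# Centralized greedy baseline for submodular sensor selection.
# The coverage-style utility is an assumed stand-in for a learned one.
import numpy as np

rng = np.random.default_rng(2)
n_sensors, n_targets, budget = 30, 200, 5
# covers[i, j] = True if sensor i observes target j (hypothetical geometry)
covers = rng.random((n_sensors, n_targets)) < 0.08

def utility(selected):
    """Number of targets covered: monotone and submodular."""
    if not selected:
        return 0
    return int(covers[list(selected)].any(axis=0).sum())

chosen = set()
for _ in range(budget):
    # pick the sensor with the largest marginal gain
    best = max(set(range(n_sensors)) - chosen,
               key=lambda i: utility(chosen | {i}) - utility(chosen))
    chosen.add(best)
print("selected sensors:", sorted(chosen), "utility:", utility(chosen))
```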
|
Title: Thai Rhetorical Structure Analysis
|
Abstract: Rhetorical structure analysis (RSA) explores discourse relations among elementary discourse units (EDUs) in a text. It is very useful in many text processing tasks that employ relationships among EDUs, such as text understanding, summarization, and question answering. The Thai language, with its distinctive linguistic characteristics, requires a distinct technique. This article proposes an approach for Thai rhetorical structure analysis. First, EDUs are segmented by two hidden Markov models derived from syntactic rules. A rhetorical structure tree is then constructed using a clustering technique whose similarity measure is derived from Thai semantic rules. Finally, a decision tree whose features are derived from the semantic rules is used to determine discourse relations.
|
Title: Measures of Analysis of Time Series (MATS): A MATLAB Toolkit for Computation of Multiple Measures on Time Series Data Bases
|
Abstract: In many applications, such as physiology and finance, large time series databases must be analyzed, requiring the computation of linear, nonlinear, and other measures. Such measures have been developed and implemented in commercial and freeware software rather selectively and independently. The Measures of Analysis of Time Series (MATS) MATLAB toolkit is designed to handle an arbitrarily large set of scalar time series and compute a large variety of measures on them, allowing also for the specification of varying measure parameters. The variety of options, with added facilities for visualization of the results, supports different settings of time series analysis, such as the detection of dynamics changes in long data records, resampling (surrogate or bootstrap) tests for independence and linearity with various test statistics, and the assessment of the discrimination power of different measures and of different combinations of their parameters. The basic features of MATS are presented and the implemented measures are briefly described. The usefulness of MATS is illustrated on some empirical examples along with screenshots.
|
Title: Probabilistic Recovery of Multiple Subspaces in Point Clouds by Geometric lp Minimization
|
Abstract: We assume data independently sampled from a mixture distribution on the unit ball of the D-dimensional Euclidean space with K+1 components: the first component is a uniform distribution on that ball representing outliers, and the other K components are uniform distributions along K d-dimensional linear subspaces restricted to that ball. We study both the simultaneous recovery of all K underlying subspaces and the recovery of the best $\ell_0$ subspace (i.e., the one containing the largest number of points) by minimizing the $\ell_p$-averaged distances of data points from d-dimensional subspaces of the D-dimensional space. Unlike other $\ell_p$ minimization problems, this minimization is non-convex for all $p>0$ and thus requires different methods for its analysis. We show that if $0 < p \le 1$, then both all underlying subspaces and the best $\ell_0$ subspace can be precisely recovered by $\ell_p$ minimization with overwhelming probability. In the case of additive homoscedastic uniform noise around the subspaces (i.e., a uniform distribution in a strip around them), this result extends to near recovery with an error proportional to the noise level. On the other hand, if $K>1$ and $p>1$, we show that both all underlying subspaces and the best $\ell_0$ subspace cannot be recovered, or even nearly recovered. Further relaxations are also discussed. We use the results of this paper to partially justify recent effective algorithms for modeling data by mixtures of multiple subspaces and to discuss the effect of using variants of $\ell_p$ minimization in RANSAC-type strategies for single-subspace recovery.
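The objective being minimized can be written down directly: the $\ell_p$-averaged distance of data points to a d-dimensional subspace spanned by an orthonormal basis U. A minimal sketch, with illustrative dimensions and noise level:

```python
# l_p-averaged distance of points to a linear subspace span(U), U orthonormal.
import numpy as np

def lp_averaged_distance(X, U, p):
    """Mean of dist(x, span(U))**p over the rows of X."""
    proj = X @ U @ U.T                    # orthogonal projection onto span(U)
    dists = np.linalg.norm(X - proj, axis=1)
    return np.mean(dists ** p)

rng = np.random.default_rng(3)
D, d, n = 5, 2, 500
U, _ = np.linalg.qr(rng.standard_normal((D, d)))   # random orthonormal basis
X = rng.standard_normal((n, d)) @ U.T              # points on the subspace...
X += 0.05 * rng.standard_normal((n, D))            # ...plus small noise
for p in (0.5, 1.0, 2.0):
    print(p, lp_averaged_distance(X, U, p))
```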
|
Title: Computationally Efficient Estimation of Factor Multivariate Stochastic Volatility Models
|
Abstract: An MCMC simulation method based on a two-stage delayed rejection Metropolis-Hastings algorithm is proposed to estimate a factor multivariate stochastic volatility model. The first stage uses a k-step iteration towards the mode, with k small, and the second stage uses an adaptive random walk proposal density. The marginal likelihood approach of Chib (1995) is used to choose the number of factors, with the posterior density ordinates approximated by a Gaussian copula. Simulation and real data applications suggest that the proposed simulation method is computationally much more efficient than the approach of Chib, Nardari, and Shephard (2006). This increase in computational efficiency is particularly important in calculating marginal likelihoods, because it is necessary to carry out the simulation a number of times to estimate the posterior ordinates for a given marginal likelihood. In addition to the MCMC method, the paper also proposes a fast approximate EM method to estimate the factor multivariate stochastic volatility model. The estimates from the approximate EM method are of interest in their own right, but are especially useful as initial inputs to MCMC methods, making them more efficient computationally. The methodology is illustrated using simulated and real examples.
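The two-stage delayed rejection mechanism can be illustrated with a toy one-dimensional sampler. This sketch uses symmetric Gaussian proposals at both stages (whereas the paper's first stage iterates toward the mode and its second stage is adaptive); the standard-normal target and step sizes are illustrative assumptions.

```python
# Toy 1D two-stage delayed rejection Metropolis-Hastings (Tierney-Mira form).
import numpy as np
from scipy.stats import norm

def log_target(x):                 # unnormalized log density: N(0, 1)
    return -0.5 * x * x

def dr_mh(n_iter=5000, s1=2.5, s2=0.5, seed=4):
    rng = np.random.default_rng(seed)
    x, chain = 0.0, []
    for _ in range(n_iter):
        y1 = x + s1 * rng.standard_normal()
        a1 = min(1.0, np.exp(log_target(y1) - log_target(x)))
        if rng.random() < a1:
            x = y1
        else:
            # second-stage proposal with a smaller step; the symmetric
            # second-stage densities cancel in the acceptance ratio
            y2 = x + s2 * rng.standard_normal()
            a1_rev = min(1.0, np.exp(log_target(y1) - log_target(y2)))
            num = np.exp(log_target(y2)) * norm.pdf(y1, y2, s1) * (1 - a1_rev)
            den = np.exp(log_target(x)) * norm.pdf(y1, x, s1) * (1 - a1)
            if den > 0 and rng.random() < min(1.0, num / den):
                x = y2
        chain.append(x)
    return np.array(chain)

chain = dr_mh()
print("sample mean %.3f, sample std %.3f" % (chain.mean(), chain.std()))
```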
|
Title: Dire n'est pas concevoir (Saying Is Not Conceiving)
|
Abstract: A conceptual model built from text is rarely an ontology. Indeed, such a conceptualization is corpus-dependent and does not offer the main properties we expect from an ontology. Furthermore, an ontology extracted from text generally does not match an ontology defined by an expert using a formal language. This is not surprising, since an ontology is an extra-linguistic conceptualization, whereas knowledge extracted from text is the concern of textual linguistics. The incompleteness of text and the use of rhetorical figures, such as ellipsis, modify the perception we may have of the conceptualization. Ontological knowledge, which is necessary for text understanding, is in general not embedded in documents.
|
Title: On the Stability of Empirical Risk Minimization in the Presence of Multiple Risk Minimizers
|
Abstract: Recently Kutin and Niyogi investigated several notions of algorithmic stability--a property of a learning map conceptually similar to continuity--showing that training-stability is sufficient for consistency of Empirical Risk Minimization while distribution-free CV-stability is necessary and sufficient for having finite VC-dimension. This paper concerns a phase transition in the training stability of ERM, conjectured by the same authors. Kutin and Niyogi proved that ERM on finite hypothesis spaces containing a unique risk minimizer has training stability that scales exponentially with sample size, and conjectured that the existence of multiple risk minimizers prevents even super-quadratic convergence. We prove this result for the strictly weaker notion of CV-stability, positively resolving the conjecture.
|
Title: Intrinsic dimension estimation of data by principal component analysis
|
Abstract: Estimating the intrinsic dimensionality of data is a classic problem in pattern recognition and statistics. Principal Component Analysis (PCA) is a powerful tool for discovering the dimensionality of data sets with a linear structure; it becomes ineffective, however, when data have a nonlinear structure. In this paper, we propose a new PCA-based method to estimate the intrinsic dimension of data with nonlinear structures. Our method works by first finding a minimal cover of the data set, then performing PCA locally on each subset in the cover, and finally giving the estimate by checking the data variance over all small neighborhood regions. The proposed method utilizes the whole data set to estimate its intrinsic dimension and is convenient for incremental learning. In addition, our new PCA procedure can filter out noise in the data and converges to a stable estimate as the neighborhood region size increases. Experiments on synthetic and real-world data sets show the effectiveness of the proposed method.
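A minimal local-PCA sketch in the spirit of the abstract: run PCA on small neighborhoods and count how many components are needed to explain most of the local variance. The Swiss-roll-like data, neighborhood size k, variance threshold, and median aggregation are illustrative assumptions rather than the paper's exact procedure.

```python
# Local-PCA intrinsic dimension estimate on a 2-manifold embedded in R^3.
import numpy as np

rng = np.random.default_rng(5)
t = 3 * np.pi * (1 + 2 * rng.random(2000))
X = np.column_stack([t * np.cos(t), 20 * rng.random(2000), t * np.sin(t)])
X += 0.1 * rng.standard_normal(X.shape)      # noisy Swiss-roll-like surface

def local_dim(X, k=40, var_threshold=0.95, n_probe=100, seed=0):
    rng = np.random.default_rng(seed)
    dims = []
    for i in rng.choice(len(X), n_probe, replace=False):
        d2 = np.sum((X - X[i]) ** 2, axis=1)
        nbrs = X[np.argsort(d2)[:k]]          # k nearest neighbors of probe i
        cov = np.cov(nbrs - nbrs.mean(axis=0), rowvar=False)
        evals = np.sort(np.linalg.eigvalsh(cov))[::-1]
        cum = np.cumsum(evals) / evals.sum()
        dims.append(int(np.searchsorted(cum, var_threshold) + 1))
    return int(np.median(dims))

print("estimated intrinsic dimension:", local_dim(X))   # expected: 2
```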
|
Title: Bayesian Inference
|
Abstract: This chapter provides an overview of Bayesian inference, mostly emphasising that it is a universal method for summarising uncertainty and making estimates and predictions using probability statements conditional on observed data and an assumed model (Gelman 2008). The Bayesian perspective is thus applicable to all aspects of statistical inference, while being open to the incorporation of information items resulting from earlier experiments and from expert opinions. We provide here the basic elements of Bayesian analysis when considered for standard models, referring to Marin and Robert (2007) and to Robert (2007) for book-length treatments. In the following, we refrain from embarking upon philosophical discussions about the nature of knowledge (see, e.g., Robert 2007, Chapter 10), opting instead for a mathematically sound presentation of an eminently practical statistical methodology. We indeed believe that the most convincing arguments for adopting a Bayesian approach to data analysis lie in the versatility of this tool and in the large range of existing applications, rather than in polemical arguments.
|
Title: Estimating Bayesian networks for high-dimensional data with complex mean structure and random effects
|
Abstract: The estimation of Bayesian networks given high-dimensional data, in particular gene expression data, has been the focus of much recent research. While there are several methods available for the estimation of such networks, these typically assume that the data consist of independent and identically distributed samples. It is often the case, however, that the available data have a more complex mean structure plus additional components of variance, which must then be accounted for in the estimation of a Bayesian network. In this paper, score metrics that take account of such complexities are proposed for use in conjunction with score-based methods for the estimation of Bayesian networks. We propose, firstly, a fully Bayesian score metric and, secondly, a metric inspired by the notion of restricted maximum likelihood. We demonstrate the performance of these new metrics for the estimation of Bayesian networks using simulated data with known complex mean structures. We then present an analysis of the expression levels of grape berry genes, adjusting for exogenous variables believed to affect the expression levels of the genes. Demonstrable biological effects can be inferred from the estimated conditional independence relationships and correlations amongst the grape berry genes.
|
Title: Reverse Engineering Financial Markets with Majority and Minority Games using Genetic Algorithms
|
Abstract: Using virtual stock markets with artificial interacting software investors, aka agent-based models (ABMs), we present a method to reverse engineer real-world financial time series. We model financial markets as made of a large number of interacting boundedly rational agents. By optimizing the similarity between the actual data and that generated by the reconstructed virtual stock market, we obtain parameters and strategies, which reveal some of the inner workings of the target stock market. We validate our approach by out-of-sample predictions of directional moves of the Nasdaq Composite Index.
|
Title: Detection of Microcalcification in Mammograms Using Wavelet Transform and Fuzzy Shell Clustering
|
Abstract: Microcalcifications in mammograms are mainly targeted as a reliable early sign of breast cancer, and their early detection is vital to improving its prognosis. Since their size is very small and they may easily be overlooked by the examining radiologist, computer-based detection output can assist the radiologist in improving diagnostic accuracy. In this paper, we propose an algorithm for detecting microcalcifications in mammograms. The proposed detection algorithm involves mammogram quality enhancement using multiresolution analysis based on the dyadic wavelet transform, followed by microcalcification detection by fuzzy shell clustering. Introducing shape information in this way makes it possible to detect nodular components such as microcalcifications accurately. The effectiveness of the proposed algorithm for microcalcification detection is confirmed by experimental results.
|
Title: The Fast Haar Wavelet Transform for Signal & Image Processing
|
Abstract: A method for the design of the Fast Haar wavelet for signal and image processing is proposed. In the proposed work, the analysis and synthesis banks of the Haar wavelet are modified by using a polyphase structure. The resulting Fast Haar wavelet satisfies the alias-free and perfect reconstruction conditions. Computational time and computational complexity are reduced in the Fast Haar wavelet transform.
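A one-level Haar analysis/synthesis pair in polyphase form is small enough to sketch directly: the signal is split into even and odd phases, each output sample then costs one addition and one scaling, and perfect reconstruction can be verified numerically. This is a generic textbook construction, not the paper's specific filter bank.

```python
# One-level Haar analysis/synthesis in polyphase form with a
# perfect-reconstruction check.
import numpy as np

def haar_analysis(x):
    even, odd = x[0::2], x[1::2]          # polyphase split
    s = np.sqrt(2.0)
    return (even + odd) / s, (even - odd) / s   # approximation, detail

def haar_synthesis(approx, detail):
    s = np.sqrt(2.0)
    x = np.empty(2 * len(approx))
    x[0::2] = (approx + detail) / s       # even phase
    x[1::2] = (approx - detail) / s       # odd phase
    return x

x = np.arange(8.0)
a, d = haar_analysis(x)
print(np.allclose(haar_synthesis(a, d), x))   # True: perfect reconstruction
```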
|
Title: Vision Based Game Development Using Human Computer Interaction
|
Abstract: A Human Computer Interface (HCI) system for playing games is designed here for more natural communication with machines. The system presented is a vision-based system for the detection of long voluntary eye blinks and the interpretation of blink patterns for communication between man and machine. This system replaces the mouse with the human face as a new way to interact with the computer. Facial features (nose tip and eyes) are detected and tracked in real time so that their actions can be used as mouse events. The coordinates and movement of the nose tip in the live video feed are translated into the coordinates and movement of the mouse pointer in the application. A left or right eye blink fires a left or right mouse click event. The system works with inexpensive USB cameras and runs at a frame rate of 30 frames per second.
|
Title: Modeling of Human Criminal Behavior using Probabilistic Networks
|
Abstract: Currently, a criminal profile (CP) is obtained from investigators' or forensic psychologists' interpretations, linking crime scene characteristics and an offender's behavior to his or her characteristics and psychological profile. This paper seeks an efficient and systematic discovery of non-obvious and valuable patterns between variables from a large database of solved cases via a probabilistic network (PN) modeling approach. The PN structure can be used to extract behavioral patterns and to gain insight into what factors influence these behaviors. Thus, when a new case is being investigated and the profile variables are unknown because the offender has yet to be identified, the observed crime scene variables are used to infer the unknown variables based on their connections in the structure and the corresponding numerical (probabilistic) weights. The objective is to produce a more systematic and empirical approach to profiling, and to use the resulting PN model as a decision tool.
|
Title: A Generalization of the Chow-Liu Algorithm and its Application to Statistical Learning
|
Abstract: We extend the Chow-Liu algorithm to general random variables, whereas previous versions considered only the finite case. In particular, this paper applies the generalization to Suzuki's learning algorithm, which generates forests rather than trees from data based on the minimum description length principle, balancing the fitness of the data to the forest against the simplicity of the forest. As a result, we obtain an algorithm that works when both Gaussian and finite random variables are present.
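The classical finite-case Chow-Liu step that the paper generalizes can be sketched in a few lines: estimate pairwise mutual information from data, then take a maximum-weight spanning tree over the variables. The binary chain data below are an illustrative assumption.

```python
# Classical discrete Chow-Liu: pairwise MI estimates + maximum spanning tree
# (Kruskal with a tiny union-find).
import itertools
import numpy as np

def mutual_information(a, b):
    """Plug-in MI estimate for two discrete (here binary) samples."""
    mi = 0.0
    for u in np.unique(a):
        for v in np.unique(b):
            p_uv = np.mean((a == u) & (b == v))
            p_u, p_v = np.mean(a == u), np.mean(b == v)
            if p_uv > 0:
                mi += p_uv * np.log(p_uv / (p_u * p_v))
    return mi

rng = np.random.default_rng(6)
x0 = rng.integers(0, 2, 1000)                     # Markov chain x0 -> x1 -> x2
x1 = x0 ^ (rng.random(1000) < 0.1)
x2 = x1 ^ (rng.random(1000) < 0.1)
data = np.column_stack([x0, x1, x2]).astype(int)

d = data.shape[1]
edges = sorted(((mutual_information(data[:, i], data[:, j]), i, j)
                for i, j in itertools.combinations(range(d), 2)), reverse=True)
parent = list(range(d))
def find(i):
    while parent[i] != i:
        parent[i] = parent[parent[i]]
        i = parent[i]
    return i
tree = []
for w, i, j in edges:                             # Kruskal on MI weights
    ri, rj = find(i), find(j)
    if ri != rj:
        parent[ri] = rj
        tree.append((i, j))
print("Chow-Liu tree edges:", tree)               # expect (0,1) and (1,2)
```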
|
Title: Effect of Wind Intermittency on the Electric Grid: Mitigating the Risk of Energy Deficits
|
Abstract: Successful implementation of California's Renewable Portfolio Standard (RPS), mandating 33 percent renewable energy generation by 2020, requires a robust strategy to mitigate the increased risk of energy deficits (blackouts) due to short time-scale (sub one hour) intermittencies in renewable energy sources. Of these RPS sources, wind energy has the fastest growth rate, over 25% year-over-year. If these growth trends continue, wind energy could make up 15 percent of California's energy portfolio by 2016 (wRPS15). However, the hour-to-hour variations in wind energy (speed) will create large hourly energy deficits that require the installation of other, more predictable, compensation generation capacity and infrastructure. Compensating for the energy deficits of wRPS15 could potentially cost tens of billions of dollars in additional expenditure on fossil and/or nuclear generation capacity. There is a real possibility that carbon dioxide and other greenhouse gas (GHG) emission reductions will miss the California Assembly Bill 32 (CA AB 32) target by a wide margin once the wRPS15 compensation system is in place. This work presents a set of analytics tools that show the impact of short-term intermittencies to help policy makers understand and plan for wRPS15 integration. What are the right policy choices for an RPS that includes wind energy?
|
Title: Operator norm convergence of spectral clustering on level sets
|
Abstract: Following Hartigan, a cluster is defined as a connected component of the t-level set of the underlying density, i.e., the set of points for which the density is greater than t. A clustering algorithm which combines a density estimate with spectral clustering techniques is proposed. Our algorithm is composed of two steps. First, a nonparametric density estimate is used to extract the data points for which the estimated density takes a value greater than t. Next, the extracted points are clustered based on the eigenvectors of a graph Laplacian matrix. Under mild assumptions, we prove the almost sure convergence in operator norm of the empirical graph Laplacian operator associated with the algorithm. Furthermore, we give the typical behavior of the representation of the dataset in the feature space, which establishes the strong consistency of our proposed algorithm.
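A compact sketch of the two-step algorithm described above, assuming a Gaussian kernel density estimate, a level t chosen as a quantile of the estimated density, an unnormalized graph Laplacian, and well-separated toy data; all of these choices are illustrative, not the paper's exact construction.

```python
# Level-set spectral clustering: KDE threshold, then Laplacian eigenvectors.
import numpy as np
from scipy.stats import gaussian_kde
from sklearn.cluster import KMeans

rng = np.random.default_rng(7)
X = np.vstack([rng.normal(0, 0.5, (200, 2)),
               rng.normal(4, 0.5, (200, 2))])     # two well-separated clusters

# Step 1: nonparametric density estimate, keep the t-level set
dens = gaussian_kde(X.T)(X.T)
t = np.quantile(dens, 0.3)
core = X[dens > t]

# Step 2: spectral clustering of the retained points
sq = np.sum((core[:, None, :] - core[None, :, :]) ** 2, axis=-1)
W = np.exp(-sq / (2 * 0.5 ** 2))                  # Gaussian affinity
L = np.diag(W.sum(axis=1)) - W                    # unnormalized graph Laplacian
_, vecs = np.linalg.eigh(L)
emb = vecs[:, :2]                                 # two smallest eigenvectors
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(emb)
print("cluster sizes:", np.bincount(labels))
```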
|
Title: Automatic diagnosis of retinal diseases from color retinal images
|
Abstract: Teleophthalmology holds great potential to improve the quality, access, and affordability of health care. For patients, it can reduce the need for travel and provide access to a superspecialist. Ophthalmology lends itself easily to telemedicine as it is a largely image-based diagnostic discipline. The main goal of the proposed system is to diagnose the type of disease in the retina and to automatically detect and segment retinal diseases without human supervision or interaction. The proposed system diagnoses the disease present in the retina using a neural-network-based classifier, and the extent of the disease spread in the retina can be identified by extracting the textural features of the retina. The system diagnoses the following types of diseases: diabetic retinopathy and drusen.
|
Title: Medical Image Compression using Wavelet Decomposition for Prediction Method
|
Abstract: This paper offers a simple, lossless compression method for medical images. The method is based on a wavelet decomposition of the medical images, followed by a correlation analysis of the coefficients. The correlation analysis forms the basis of a prediction equation for each sub-band. Predictor variable selection is performed through a coefficient graphic method to avoid the multicollinearity problem and to achieve high prediction accuracy and compression rate. The method is applied to MRI and CT images. Results show that the proposed approach gives a high compression rate for MRI and CT images compared with state-of-the-art methods.
|
Title: Application of k Means Clustering algorithm for prediction of Students Academic Performance
|
Abstract: The ability to monitor the progress of students' academic performance is a critical issue for the academic community of higher learning. We describe a system for analyzing students' results that is based on cluster analysis and uses standard statistical algorithms to arrange their score data according to the level of their performance. In this paper, we also implement the k-means clustering algorithm for analyzing students' result data. The model was combined with a deterministic model to analyze the results of students of a private institution in Nigeria, providing a good benchmark for monitoring the progression of students' academic performance in higher institutions and a basis for effective decision-making by academic planners.
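A minimal sketch of the clustering step described above: group students by score pattern with k-means and rank the clusters as performance levels. The synthetic score matrix and the choice of three levels are illustrative assumptions.

```python
# k-means on hypothetical student scores, ranked into performance levels.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(8)
# hypothetical scores of 60 students in 5 courses, on a 0-100 scale
scores = np.clip(rng.normal(loc=rng.choice([45, 60, 80], 60)[:, None],
                            scale=8, size=(60, 5)), 0, 100)

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(scores)
order = np.argsort(km.cluster_centers_.mean(axis=1))   # low -> high performers
for level, c in enumerate(order):
    print("level %d: %d students, mean score %.1f"
          % (level, np.sum(km.labels_ == c), scores[km.labels_ == c].mean()))
```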
|
Title: Feature Level Fusion of Face and Fingerprint Biometrics
|
Abstract: The aim of this paper is to study fusion at the feature extraction level for face and fingerprint biometrics. The proposed approach is based on the fusion of the two traits by extracting independent feature pointsets from the two modalities and making the two pointsets compatible for concatenation. Moreover, to handle the curse of dimensionality, the feature pointsets are suitably reduced in dimension. Different feature reduction techniques are implemented, before and after the fusion of the feature pointsets, and the results are duly recorded. The fused feature pointsets for the database and the query face and fingerprint images are matched using techniques based on either point pattern matching or the Delaunay triangulation. Comparative experiments are conducted on chimeric and real databases to assess the actual advantage of fusion performed at the feature extraction level, in comparison to the matching score level.
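The generic recipe in the abstract (make the two feature sets compatible, concatenate, then reduce dimension) can be sketched as follows; the random stand-in features and the PCA reduction are illustrative assumptions, since the paper works with minutiae-style pointsets and several reduction techniques.

```python
# Feature-level fusion sketch: normalize, concatenate, reduce dimension.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(9)
face = rng.standard_normal((100, 48))          # hypothetical face features
finger = rng.standard_normal((100, 64))        # hypothetical fingerprint features

# make the two feature sets compatible, then concatenate (feature-level fusion)
fused = np.hstack([StandardScaler().fit_transform(face),
                   StandardScaler().fit_transform(finger)])
reduced = PCA(n_components=20).fit_transform(fused)   # tame dimensionality
print(fused.shape, "->", reduced.shape)
```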
|
Title: Assessment Of The Wind Farm Impact On The Radar
|
Abstract: This study shows how to evaluate the impact of wind farms on radar, and proposes a set of tools that can be used to realise this objective. A large part of the report covers the study of the complex pattern propagation factor, the critical issue of the Advanced Propagation Model (APM). The reader will also find an implementation of this algorithm for a real scenario at Inverness airport (United Kingdom), where the ATC radar STAR 2000, developed by Thales Air Systems, operates in the presence of several wind farms. The project was carried out within the "Strategy Technology & Innovation" department. The report also describes how the radar industry can address the problems engendered by wind farms. The current strategies in this area are presented, such as wind turbine production, improvements to air traffic handling procedures, and collaboration between developers of radars and wind turbines. A possible strategy for Thales, as a main pioneer, is given as well.
|
Title: On computational tools for Bayesian data analysis
|
Abstract: While Robert and Rousseau (2010) addressed the foundational aspects of Bayesian analysis, the current chapter details its practical aspects through a review of the computational methods available for approximating Bayesian procedures. Recent innovations like Markov chain Monte Carlo, sequential Monte Carlo methods, and, more recently, Approximate Bayesian Computation techniques have considerably increased the potential for Bayesian applications, and they have also opened new avenues for Bayesian inference, first and foremost Bayesian model choice.
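Among the techniques mentioned, Approximate Bayesian Computation is easy to illustrate in its simplest rejection form: draw parameters from the prior, simulate data, and keep the draws whose summary statistic falls within a tolerance of the observed one. The normal-mean model, prior, and tolerance below are illustrative assumptions.

```python
# Rejection ABC for the mean of a normal model with known unit variance.
import numpy as np

rng = np.random.default_rng(10)
obs = rng.normal(2.0, 1.0, 50)                   # observed data, true mean 2

n_draws, eps = 100_000, 0.05
theta = rng.normal(0.0, 5.0, n_draws)            # prior on the mean
# summary statistic: the mean of 50 draws is N(theta, 1/sqrt(50)),
# so we can simulate it directly instead of simulating full datasets
sim_means = rng.normal(theta, 1.0 / np.sqrt(50))
keep = np.abs(sim_means - obs.mean()) < eps      # rejection step
print("accepted %d draws, posterior mean ~ %.2f"
      % (keep.sum(), theta[keep].mean()))
```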
|
Title: Bayesian computational methods
|
Abstract: In this chapter, we first present the most standard computational challenges met in Bayesian Statistics, focussing primarily on mixture estimation and on model choice issues, and then relate these problems to computational solutions. Of course, this chapter is only a terse introduction to the problems and solutions related to Bayesian computations. For more complete references, see Robert and Casella (2004, 2009) or Marin and Robert (2007), among others. We also refrain from providing an introduction to Bayesian Statistics per se; for comprehensive coverage, we direct the reader to Robert (2007), (again) among others.
|
Title: Evolutionary Stochastic Search for Bayesian model exploration
|
Abstract: Implementing Bayesian variable selection for linear Gaussian regression models for analysing high-dimensional data sets is of current interest in many fields. To make such analysis operational, we propose a new sampling algorithm based upon Evolutionary Monte Carlo, designed to work under the "large p, small n" paradigm and thus make fully Bayesian multivariate analysis feasible, for example, in genetics/genomics experiments. Two real data examples in genomics are presented, demonstrating the performance of the algorithm in a space of up to 10,000 covariates. Finally, the methodology is compared with recently proposed search algorithms in an extensive simulation study.
|
Title: Multibiometrics Belief Fusion
|
Abstract: This paper proposes a multimodal biometric system based on a Gaussian Mixture Model (GMM) for face and ear biometrics, with belief fusion of the estimated scores characterized by Gabor responses; the proposed fusion is accomplished by Dempster-Shafer (DS) decision theory. Face and ear images are convolved with Gabor wavelet filters to extract spatially enhanced Gabor facial features and Gabor ear features. Further, a GMM is applied separately to the high-dimensional Gabor face and Gabor ear responses for quantitative measurement. The Expectation Maximization (EM) algorithm is used to estimate the density parameters in the GMM. This produces two sets of feature vectors, which are then fused using Dempster-Shafer theory. Experiments are conducted on a multimodal database containing face and ear images of 400 individuals. It is found that the use of Gabor wavelet filters along with a GMM and DS theory provides a robust and efficient multimodal fusion strategy.
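The Dempster-Shafer combination step can be sketched directly: each modality supplies a mass function over the frame {genuine, impostor}, and Dempster's rule multiplies the masses of intersecting hypotheses and renormalizes away the conflict. The mass values below are illustrative assumptions, not numbers from the paper.

```python
# Dempster's rule of combination for two mass functions over a small frame.
from itertools import product

FRAME = frozenset({"genuine", "impostor"})

def combine(m1, m2):
    """Conjunctive combination with conflict renormalization."""
    out, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            out[inter] = out.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb             # mass assigned to the empty set
    return {k: v / (1.0 - conflict) for k, v in out.items()}

# face matcher is fairly confident, ear matcher is weaker and vaguer
m_face = {frozenset({"genuine"}): 0.7, frozenset({"impostor"}): 0.1, FRAME: 0.2}
m_ear = {frozenset({"genuine"}): 0.5, frozenset({"impostor"}): 0.2, FRAME: 0.3}
fused = combine(m_face, m_ear)
for k, v in fused.items():
    print(set(k), round(v, 3))
```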
|