Title: SVM-based Multiview Face Recognition by Generalization of Discriminant Analysis
Abstract: Identity verification of authentic persons by their multiview faces is a challenging real-world problem in machine vision. Multiview faces are difficult to handle because of their non-linear representation in the feature space. This paper illustrates the applicability of a generalization of LDA, in the form of canonical covariates, to multiview face recognition. In the proposed work, a Gabor filter bank is used to extract facial features characterized by spatial frequency, spatial locality and orientation. The Gabor face representation captures a substantial amount of the variation among face instances that often arises from changes in illumination, pose and facial expression. Convolving the Gabor filter bank with face images of rotated profile views produces Gabor faces with high-dimensional feature vectors. Canonical covariates are then applied to the Gabor faces to reduce the high-dimensional feature spaces to low-dimensional subspaces. Finally, support vector machines are trained on the canonical subspaces, which contain a reduced set of features, and perform the recognition task. The proposed system is evaluated on the UMIST face database. The experimental results demonstrate the efficiency and robustness of the proposed system, with high recognition rates.
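As a concrete illustration of this kind of pipeline, here is a minimal sketch assuming scikit-image for the Gabor filter bank and scikit-learn's LDA as a stand-in for the paper's canonical covariates; the filter-bank parameters and kernel choice are my own illustrative assumptions, not the authors' settings:

```python
# Hedged sketch: Gabor filter bank -> LDA-style subspace -> SVM.
# Filter parameters and the LDA stand-in for canonical covariates are
# assumptions for illustration, not the paper's exact configuration.
import numpy as np
from scipy.signal import convolve2d
from skimage.filters import gabor_kernel
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC

def gabor_features(img, n_orient=8, freqs=(0.1, 0.2, 0.3)):
    """Convolve one face image with the filter bank; return a long feature vector."""
    feats = []
    for f in freqs:
        for k in range(n_orient):
            kern = np.real(gabor_kernel(f, theta=np.pi * k / n_orient))
            resp = convolve2d(img, kern, mode='same', boundary='symm')
            feats.append(resp.ravel())
    return np.concatenate(feats)

def train_recognizer(face_images, labels):
    X = np.array([gabor_features(im) for im in face_images])
    reducer = LinearDiscriminantAnalysis()        # low-dimensional subspace
    X_low = reducer.fit_transform(X, labels)
    clf = SVC(kernel='rbf').fit(X_low, labels)    # recognition stage
    return reducer, clf
```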
Title: Detection and Demarcation of Tumor using Vector Quantization in MRI images
Abstract: Segmenting an MRI image into homogeneous texture regions representing disparate tissue types is often a useful preprocessing step in the computer-assisted detection of breast cancer. We therefore propose a new algorithm to detect cancer in mammographic breast cancer images. In this paper we propose segmentation using a vector quantization technique, employing the Linde-Buzo-Gray (LBG) algorithm to segment the MRI images. Initially, a codebook of size 128 is generated for the MRI images. These code vectors are then further clustered into 8 clusters using the same LBG algorithm, and the resulting 8 images are displayed. This approach leads to neither over-segmentation nor under-segmentation. For comparison, we also display the results of watershed segmentation and of entropy-based segmentation using the Gray Level Co-occurrence Matrix.
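For concreteness, a minimal sketch of LBG codebook generation on training vectors (e.g. small pixel blocks) might look as follows; the splitting perturbation and iteration counts are illustrative choices, not the paper's:

```python
# Hedged sketch of LBG: grow the codebook by splitting, refine with Lloyd
# iterations. Block extraction and parameter values are my assumptions.
import numpy as np

def lbg(vectors, codebook_size, eps=0.01, n_iter=20):
    """Build a codebook of `codebook_size` vectors from an (N, d) array."""
    codebook = vectors.mean(axis=0, keepdims=True)
    while len(codebook) < codebook_size:
        # split every code vector into a perturbed pair
        codebook = np.vstack([codebook * (1 + eps), codebook * (1 - eps)])
        for _ in range(n_iter):                       # Lloyd refinement
            d = ((vectors[:, None, :] - codebook[None]) ** 2).sum(-1)
            nearest = d.argmin(axis=1)
            for k in range(len(codebook)):
                members = vectors[nearest == k]
                if len(members):
                    codebook[k] = members.mean(axis=0)
    return codebook
```

To mirror the setup described above, one would first grow a 128-vector codebook from the image blocks, then run the same routine on those 128 code vectors with `codebook_size=8`.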
Title: Sparse covariance estimation in heterogeneous samples
Abstract: Standard Gaussian graphical models (GGMs) implicitly assume that the conditional independence among variables is common to all observations in the sample. However, in practice, observations are usually collected from heterogeneous populations where this assumption is not satisfied, leading in turn to nonlinear relationships among variables. To tackle these problems we explore mixtures of GGMs; in particular, we consider both infinite mixture models of GGMs and infinite hidden Markov models with GGM emission distributions. Such models allow us to divide a heterogeneous population into homogeneous groups, with each cluster having its own conditional independence structure. The main advantage of considering infinite mixtures is that they allow us to easily estimate the number of subpopulations in the sample. As an illustration, we study the trends in exchange rate fluctuations in the pre-Euro era. This example demonstrates that the models are very flexible while providing extremely interesting insights into real-life applications.
Title: A Decidable Class of Nested Iterated Schemata (extended version)
Abstract: Many problems can be specified by patterns of propositional formulae depending on a parameter, e.g. the specification of a circuit usually depends on the number of bits of its input. We define a logic whose formulae, called "iterated schemata", allow such patterns to be expressed. Schemata extend propositional logic with indexed propositions, e.g. P_i, P_i+1, P_1, and with generalized connectives, e.g. /\i=1..n or \/i=1..n (called "iterations") where n is an (unbound) integer variable called a "parameter". The expressive power of iterated schemata is strictly greater than propositional logic: it is even out of the scope of first-order logic. We define a proof procedure, called DPLL*, that can prove that a schema is satisfiable for at least one value of its parameter, in the spirit of the DPLL procedure. However the converse problem, i.e. proving that a schema is unsatisfiable for every value of the parameter, is undecidable, so DPLL* does not terminate in general. Still, we prove that it terminates for schemata of a syntactic subclass called "regularly nested". This is the first non-trivial class for which DPLL* is proved to terminate. Furthermore, the class of regularly nested schemata is the first decidable class to allow nesting of iterations, i.e. to allow schemata of the form /\i=1..n (/\j=1..n ...).
Title: Sentence Simplification Aids Protein-Protein Interaction Extraction
Abstract: Accurate systems for automatically extracting Protein-Protein Interactions (PPIs) from biomedical articles can help accelerate biomedical research. Biomedical Informatics researchers are collaborating to provide metaservices and advance the state-of-the-art in PPI extraction. One problem often neglected by current Natural Language Processing systems is the characteristic complexity of the sentences in the biomedical literature. In this paper, we report on the impact that automatic simplification of sentences has on the performance of a state-of-the-art PPI extraction system, showing a substantial improvement in recall (8%) when the sentence simplification method is applied, without significant impact on precision.
Title: Towards Effective Sentence Simplification for Automatic Processing of Biomedical Text
Abstract: The complexity of sentences characteristic of biomedical articles poses a challenge to natural language parsers, which are typically trained on large-scale corpora of non-technical text. We propose a text simplification process, bioSimplify, that seeks to reduce the complexity of sentences in biomedical abstracts in order to improve the performance of syntactic parsers on the processed sentences. Syntactic parsing is typically one of the first steps in a text mining pipeline. Thus, any improvement in performance would have a ripple effect over all processing steps. We evaluated our method using a corpus of biomedical sentences annotated with syntactic links. Our empirical results show an improvement of 2.90% for the Charniak-McClosky parser and of 4.23% for the Link Grammar parser when processing simplified sentences rather than the original sentences in the corpus.
Title: Multi-camera Realtime 3D Tracking of Multiple Flying Animals
Abstract: Automated tracking of animal movement allows analyses that would not otherwise be possible, by providing great quantities of data. The additional capability of tracking in realtime - with minimal latency - opens up the experimental possibility of manipulating sensory feedback, thus allowing detailed explorations of the neural basis for control of behavior. Here we describe a new system capable of tracking the position and body orientation of animals such as flies and birds. The system operates with less than 40 msec latency and can track multiple animals simultaneously. To achieve these results, a multi-target tracking algorithm was developed based on the Extended Kalman Filter and the Nearest Neighbor Standard Filter data association algorithm. In one implementation, an eleven-camera system is capable of tracking three flies simultaneously at 60 frames per second using a gigabit network of nine standard Intel Pentium 4 and Core 2 Duo computers. This manuscript presents the rationale and details of the algorithms employed and shows three implementations of the system. An experiment was performed using the tracking system to measure the effect of visual contrast on the flight speed of Drosophila melanogaster. At low contrasts, speed is more variable and faster on average than at high contrasts. Thus, the system is already a useful tool to study the neurobiology and behavior of freely flying animals. If combined with other techniques, such as `virtual reality'-type computer graphics or genetic manipulation, the tracking system would offer a powerful new way to investigate the biology of flying animals.
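To illustrate the data-association step, here is a simplified, greedy nearest-neighbour assignment of detections to predicted track positions; the gating threshold is a placeholder, and the EKF predict/update equations are omitted:

```python
# Hedged sketch of nearest-neighbour data association (a simplified NNSF):
# each track claims the closest unclaimed detection within a gate.
import numpy as np

def associate(track_predictions, detections, gate=50.0):
    """Greedy assignment; returns {track_index: detection_index}."""
    assignments = {}
    free = set(range(len(detections)))
    for t, pred in enumerate(track_predictions):
        if not free:
            break
        dists = {j: np.linalg.norm(detections[j] - pred) for j in free}
        j_best = min(dists, key=dists.get)
        if dists[j_best] < gate:        # reject implausible matches
            assignments[t] = j_best
            free.remove(j_best)
    return assignments
```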
Title: Probabilistic Approach to Neural Networks Computation Based on Quantum Probability Model: Probabilistic Principal Subspace Analysis Example
Abstract: In this paper, we introduce elements of a probabilistic model that is suitable for modeling learning algorithms in the framework of biologically plausible artificial neural networks. The model is based on two of the main concepts in quantum physics: the density matrix and the Born rule. As an example, we show that the proposed probabilistic interpretation is suitable for modeling on-line learning algorithms for principal subspace analysis (PSA), which are preferably realized in parallel hardware based on very simple computational units. The proposed model can be used in the context of improving algorithm convergence speed, choosing the learning factor, or achieving robustness to input signal scale. We also show how the Born rule and the Hebbian learning rule are connected.
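The two quantum-probability ingredients named above can be sketched in a few lines; building the density matrix as a normalized data correlation matrix is my reading of the setup, not necessarily the paper's exact construction:

```python
# Hedged sketch of the two named ingredients: a density matrix (unit-trace,
# positive semidefinite) and the Born rule for the probability assigned to
# a direction, e.g. a candidate principal subspace axis.
import numpy as np

def density_matrix(samples):
    """Normalized correlation matrix of the data (rows = samples)."""
    C = samples.T @ samples
    return C / np.trace(C)

def born_rule(rho, w):
    """Probability assigned to direction w: Tr(rho |w><w|) = w' rho w."""
    w = w / np.linalg.norm(w)
    return float(w @ rho @ w)
```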
Title: Analytical continuation of imaginary axis data using maximum entropy
Abstract: We study the maximum entropy (MaxEnt) approach for analytical continuation of spectral data from imaginary times to real frequencies. The total error is divided into a statistical error, due to the noise in the input data, and a systematic error, due to deviations of the default function, used in the MaxEnt approach, from the exact spectrum. We find that the MaxEnt approach in its classical formulation can lead to a nonoptimal balance between the two types of errors, leading to an unnecessarily large statistical error. The statistical error can be reduced by splitting the data into several batches, performing a MaxEnt calculation for each batch and averaging. This can outweigh an increase in the systematic error resulting from this approach. The output of a MaxEnt calculation can be used as the default function for a new MaxEnt calculation. Such iterations often lead to worse results due to an increase in the statistical error. By splitting the data into batches, the statistical error is reduced, and the increase resulting from iterations can be outweighed by a decrease in the systematic error. Finally we consider a linearized version to obtain a better understanding of the method.
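The batch-averaging recipe can be stated compactly; `maxent_continue` below is a hypothetical stand-in for whatever MaxEnt solver is in use:

```python
# Minimal sketch of the batch-averaging recipe; `maxent_continue` is a
# hypothetical stand-in for a MaxEnt solver that takes averaged data and a
# default model and returns a spectrum on a fixed frequency grid.
import numpy as np

def batch_averaged_spectrum(G_samples, default_model, n_batches, maxent_continue):
    batches = np.array_split(G_samples, n_batches)   # split noisy measurements
    spectra = [maxent_continue(b.mean(axis=0), default_model) for b in batches]
    return np.mean(spectra, axis=0)                  # average the MaxEnt outputs
```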
Title: Implicit media frames: Automated analysis of public debate on artificial sweeteners
Abstract: The framing of issues in the mass media plays a crucial role in the public understanding of science and technology. This article contributes to research concerned with diachronic analysis of media frames by making an analytical distinction between implicit and explicit media frames, and by introducing an automated method for analysing diachronic changes of implicit frames. In particular, we apply a semantic maps method to a case study on the newspaper debate about artificial sweeteners, published in The New York Times (NYT) between 1980 and 2006. Our results show that the analysis of semantic changes enables us to filter out the dynamics of implicit frames, and to detect emerging metaphors in public debates. Theoretically, we discuss the relation between implicit frames in public debates and codification of information in scientific discourses, and suggest further avenues for research interested in the automated analysis of frame changes and trends in public debates.
Title: A Formal Framework of Virtual Organisations as Agent Societies
Abstract: We propose a formal framework that supports a model of agent-based Virtual Organisations (VOs) for service grids and provides an associated operational model for the creation of VOs. The framework is intended to be used for describing different service grid applications based on multiple agents and, as a result, it abstracts away from any realisation choices of the service grid application, the agents involved in supporting the applications, and their interactions. Within the proposed framework VOs are seen as emerging from societies of agents, where agents are abstractly characterised by goals and the roles they can play within VOs. In turn, VOs are abstractly characterised by the agents participating in them with specific roles, as well as the workflow of services and corresponding contracts suitable for achieving the goals of the participating agents. We illustrate the proposed framework with an earth observation scenario.
Title: X-Armed Bandits
Abstract: We consider a generalization of stochastic bandits where the set of arms, $\mathcal{X}$, is allowed to be a generic measurable space and the mean-payoff function is "locally Lipschitz" with respect to a dissimilarity function that is known to the decision maker. Under this condition we construct an arm selection policy, called HOO (hierarchical optimistic optimization), with improved regret bounds compared to previous results for a large class of problems. In particular, our results imply that if $\mathcal{X}$ is the unit hypercube in a Euclidean space and the mean-payoff function has a finite number of global maxima around which the behavior of the function is locally continuous with a known smoothness degree, then the expected regret of HOO is bounded up to a logarithmic factor by $\sqrt{n}$, i.e., the rate of growth of the regret is independent of the dimension of the space. We also prove the minimax optimality of our algorithm when the dissimilarity is a metric. Our basic strategy has quadratic computational complexity as a function of the number of time steps and does not rely on the doubling trick. We also introduce a modified strategy, which relies on the doubling trick but runs in linearithmic time. Both results are improvements with respect to previous approaches.
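A compact (and deliberately inefficient) sketch of the HOO strategy on X = [0, 1] is given below; the smoothness parameters nu1 and rho are assumed known, per the abstract, and their default values here are arbitrary:

```python
# Hedged sketch of HOO on [0, 1]: a binary tree of intervals, optimistic
# B-values, one leaf expanded per round. B-values are recomputed naively,
# so this is for illustration, not efficiency.
import math, random

class Node:
    def __init__(self, lo, hi, depth):
        self.lo, self.hi, self.depth = lo, hi, depth
        self.n, self.mean, self.children = 0, 0.0, []

def b_value(node, t, nu1, rho):
    if node.n == 0:
        return float('inf')                        # unexplored: maximally optimistic
    u = node.mean + math.sqrt(2 * math.log(t) / node.n) + nu1 * rho ** node.depth
    if node.children:
        u = min(u, max(b_value(c, t, nu1, rho) for c in node.children))
    return u

def hoo(payoff, horizon, nu1=1.0, rho=0.5):
    root = Node(0.0, 1.0, 0)
    for t in range(1, horizon + 1):
        path, node = [root], root
        while node.children:                       # descend along maximal B-values
            node = max(node.children, key=lambda c: b_value(c, t, nu1, rho))
            path.append(node)
        mid = (node.lo + node.hi) / 2              # expand the chosen leaf
        node.children = [Node(node.lo, mid, node.depth + 1),
                         Node(mid, node.hi, node.depth + 1)]
        r = payoff(random.uniform(node.lo, node.hi))  # play an arm in its region
        for p in path:                             # update statistics on the path
            p.n += 1
            p.mean += (r - p.mean) / p.n
    return root
```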
Title: On Bayesian Data Analysis
Abstract: This introduction to Bayesian statistics presents the main concepts as well as the principal reasons advocated in favour of Bayesian modelling. We cover the various approaches to prior determination as well as the basic asymptotic arguments in favour of using Bayes estimators. The testing aspects of Bayesian inference are also examined in detail.
Title: MM Algorithms for Minimizing Nonsmoothly Penalized Objective Functions
Abstract: In this paper, we propose a general class of algorithms for optimizing an extensive variety of nonsmoothly penalized objective functions that satisfy certain regularity conditions. The proposed framework utilizes the majorization-minimization (MM) algorithm as its core optimization engine. The resulting algorithms rely on iterated soft-thresholding, implemented componentwise, allowing for fast, stable updating that avoids the need for any high-dimensional matrix inversion. We establish a local convergence theory for this class of algorithms under weaker assumptions than previously considered in the statistical literature. We also demonstrate the exceptional effectiveness of new acceleration methods, originally proposed for the EM algorithm, in this class of problems. Simulation results and a microarray data example are provided to demonstrate the algorithm's capabilities and versatility.
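For the special case of a lasso-type penalized least-squares objective, the componentwise iterated soft-thresholding update takes the following form; the fixed step size 1/L is one standard majorization choice, not the paper's full generality:

```python
# Minimal sketch, assuming a lasso penalty: each MM step majorizes the
# quadratic loss, and the minimizer of the surrogate is a componentwise
# soft-threshold, with no matrix inversion needed.
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def mm_lasso(X, y, lam, n_iter=500):
    """Iterated soft-thresholding for 0.5*||y - X b||^2 + lam*||b||_1."""
    beta = np.zeros(X.shape[1])
    L = np.linalg.norm(X, 2) ** 2     # majorization constant (spectral norm^2)
    for _ in range(n_iter):
        grad = X.T @ (X @ beta - y)   # gradient of the smooth part
        beta = soft_threshold(beta - grad / L, lam / L)
    return beta
```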
Title: Janus: Automatic Ontology Builder from XSD Files
Abstract: The construction of a reference ontology for a large domain still remains a hard human task. The process is sometimes assisted by software tools that facilitate information extraction from a textual corpus. Despite the widespread use of XML Schema files on the internet, and especially in the B2B domain, tools that offer a complete semantic analysis of XML schemas are rare. In this paper we introduce Janus, a tool for automatically building a reference knowledge base starting from XML Schema files. Janus also provides different useful views to simplify B2B application integration.
Title: "Additivity" versus "Maxitivity" at the heart of the paradoxical and efficient nature of Statistics
Abstract: Unlike Probability Theory, which is based on additivity, Statistical Inference seems to hesitate between "Additivity" and a so-called "Maxitivity" approach. After a brief overview of three types of principles for any (parametric) statistical theory and a proof that these principles are mutually exclusive, the paper shows that two kinds of support measures are conceivable, an additive one and a maxitive one (based on maximization operators). Unfortunately, neither of them is able to cope with the ignorance part of the statistical experiment and, at the same time, with the partial information given through the structure of the data. To conclude, the author promotes the combined use of both approaches, as an efficient middle-of-the-road position for the statistician.
Title: Trajectory Clustering and an Application to Airspace Monitoring
Abstract: This paper presents a framework aimed at monitoring the behavior of aircraft in a given airspace. Nominal trajectories are determined and learned using data-driven methods. Standard procedures are used by air traffic controllers (ATC) to guide aircraft, ensure the safety of the airspace, and maximize runway occupancy. Even though standard procedures are used by ATC, control of the aircraft remains with the pilots, leading to a large variability in the observed flight patterns. Two methods to identify typical operations and their variability from recorded radar tracks are presented. This knowledge base is then used to monitor the conformance of current operations against operations previously identified as standard. A tool called AirTrajectoryMiner is presented, aimed at monitoring the instantaneous health of the airspace in real time. The airspace is "healthy" when all aircraft are flying according to the nominal procedures. A measure of complexity is introduced, measuring the conformance of current flights to nominal flight patterns. When an aircraft does not conform, the complexity increases, as more attention from ATC is required to ensure safe separation between aircraft.
Title: Computing Networks: A General Framework to Contrast Neural and Swarm Cognitions
Abstract: This paper presents the Computing Networks (CNs) framework. CNs are used to generalize neural and swarm architectures. Artificial neural networks, ant colony optimization, particle swarm optimization, and realistic biological models are used as examples of instantiations of CNs. The description of these architectures as CNs allows their comparison. Their differences and similarities allow the identification of properties that enable neural and swarm architectures to perform complex computations and exhibit complex cognitive abilities. In this context, the most relevant characteristic of CNs is the existence of multiple dynamical and functional scales. The relationship between multiple dynamical and functional scales and adaptation, cognition (of brains and swarms) and computation is discussed.
Title: Distilled Sensing: Adaptive Sampling for Sparse Detection and Estimation
Abstract: Adaptive sampling results in dramatic improvements in the recovery of sparse signals in white Gaussian noise. A sequential adaptive sampling-and-refinement procedure called Distilled Sensing (DS) is proposed and analyzed. DS is a form of multi-stage experimental design and testing. Because of the adaptive nature of the data collection, DS can detect and localize far weaker signals than possible from non-adaptive measurements. In particular, reliable detection and localization (support estimation) using non-adaptive samples is possible only if the signal amplitudes grow logarithmically with the problem dimension. Here it is shown that using adaptive sampling, reliable detection is possible provided the amplitude exceeds a constant, and localization is possible when the amplitude exceeds any arbitrarily slowly growing function of the dimension.
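The multi-stage distillation idea can be sketched in a few lines; thresholding observations at zero and using a fixed number of stages follow the spirit rather than the letter of the procedure:

```python
# Hedged sketch of multi-stage distilled sensing: measure the surviving
# coordinates, discard those whose noisy observation falls below zero, and
# spend the remaining budget on what survives.
import numpy as np

def distilled_sensing(signal, n_stages=5, noise_sd=1.0, seed=0):
    """Return indices retained as candidate support of a sparse positive signal."""
    rng = np.random.default_rng(seed)
    alive = np.arange(signal.size)          # coordinates still being measured
    for _ in range(n_stages):
        obs = signal[alive] + noise_sd * rng.standard_normal(alive.size)
        alive = alive[obs > 0]              # distill: drop unpromising coordinates
    return alive
```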
Title: Performance Comparisons of PSO based Clustering
Abstract: In this paper we investigate the performance of Particle Swarm Optimization (PSO) based clustering on a few real-world data sets and one artificial data set. Performance is measured by two metrics, namely quantization error and inter-cluster distance. The K-means clustering algorithm is first implemented for all data sets, and its results form the basis of comparison for the PSO-based approaches. We explore different variants of PSO, such as gbest, lbest ring, lbest von Neumann and hybrid PSO, for comparison purposes. The results reveal that the PSO-based clustering algorithms perform better than K-means on all data sets.
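Of the two metrics, the quantization error is the less standard; one common definition, which I assume here, is the mean over clusters of the average distance of members to their centroid:

```python
# Hedged sketch of the quantization-error metric (assumed definition: mean
# over non-empty clusters of the average member-to-centroid distance).
import numpy as np

def quantization_error(data, centroids):
    d = np.linalg.norm(data[:, None, :] - centroids[None], axis=2)
    nearest = d.argmin(axis=1)                 # assign each point to a cluster
    errs = [d[nearest == k, k].mean() for k in range(len(centroids))
            if np.any(nearest == k)]
    return np.mean(errs)
```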
Title: Kannada Character Recognition System: A Review
Abstract: Intensive research has been done on optical character recognition (OCR), and a large number of articles have been published on this topic during the last few decades. Many commercial OCR systems are now available on the market, but most of these systems work for Roman, Chinese, Japanese and Arabic characters. There is no sufficient body of work on Indian language character recognition, especially for the Kannada script, one of the 12 major scripts in India. This paper presents a review of existing work on printed Kannada script and its results. The characteristics of the Kannada script and of the Kannada Character Recognition System (KCR) are discussed in detail. Finally, fusion at the classifier level is proposed to increase the recognition accuracy.
Title: Threshold Based Indexing of Commercial Shoe Print to Create Reference and Recovery Images
Abstract: Shoe prints are an important piece of crime scene evidence that is often overlooked, since criminals are usually unaware of the need to mask them. In this paper we use image processing techniques to process reference shoe images so that the shoe print impressions available on the commercial market can be indexed and searched in a database. This is achieved by first converting the commercially available images to grayscale, then applying image enhancement and restoration techniques, and finally performing image segmentation to store the segmented parameters as indices in the database. We use the histogram method for image enhancement, inverse filtering for image restoration, and the threshold method for indexing, with the global threshold serving as the index of the shoe print. The paper describes this method, and simulation results are included to validate it.
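A sketch of the indexing pipeline using OpenCV (the library choice and the use of Otsu's method to pick the global threshold are my assumptions; the inverse-filter restoration step is omitted):

```python
# Hedged sketch: grayscale conversion, histogram equalization for
# enhancement, and a global (Otsu) threshold whose value serves as the
# database index key. Restoration by inverse filtering is not shown.
import cv2

def index_shoe_print(path):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)   # grayscale conversion
    enhanced = cv2.equalizeHist(gray)               # histogram-based enhancement
    thresh, segmented = cv2.threshold(enhanced, 0, 255,
                                      cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return thresh, segmented                        # index value, segmented image
```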
Title: Locally adaptive image denoising by a statistical multiresolution criterion
Abstract: We demonstrate how one can choose the smoothing parameter in image denoising by a statistical multiresolution criterion, both globally and locally. Using inhomogeneous diffusion and total variation regularization as examples for localized regularization schemes, we present an efficient method for locally adaptive image denoising. As expected, the smoothing parameter serves as an edge detector in this framework. Numerical examples illustrate the usefulness of our approach. We also present an application in confocal microscopy.
Title: $\alpha$-Discounting Multi-Criteria Decision Making ($\alpha$-D MCDM)
Abstract: In this book we introduce a new procedure called the \alpha-Discounting Method for Multi-Criteria Decision Making (\alpha-D MCDM), which is an alternative to, and extension of, Saaty's Analytical Hierarchy Process (AHP). It works for any number of preferences that can be transformed into a system of homogeneous linear equations. A degree of consistency (and implicitly a degree of inconsistency) of a decision-making problem is defined. \alpha-D MCDM is afterwards generalized to a set of preferences that can be transformed into a system of linear and/or non-linear, homogeneous and/or non-homogeneous, equations and/or inequalities. The general idea of \alpha-D MCDM is to assign non-null positive parameters \alpha_1, \alpha_2, ..., \alpha_p to the coefficients in the right-hand side of each preference, diminishing or increasing them in order to transform the linear homogeneous system of equations, which has only the null solution, into a system having a particular non-null solution. After finding the general solution of this system, the principle used to assign particular values to all the parameters \alpha is the second important part of \alpha-D, yet to be investigated more deeply in the future. In the current book we propose the Fairness Principle, i.e. each coefficient should be discounted by the same percentage (we think this is fair: not showing favoritism or unfairness to any coefficient), but the reader can propose other principles. For consistent decision-making problems with pairwise comparisons, the \alpha-Discounting Method together with the Fairness Principle gives the same result as AHP. But for weakly inconsistent decision-making problems, \alpha-Discounting together with the Fairness Principle gives a different result from AHP. Many consistent, weakly inconsistent, and strongly inconsistent examples are given in this book.
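To make the mechanics concrete, here is a small worked example of my own devising (not taken from the book), applying \alpha-Discounting with the Fairness Principle to three pairwise preferences:

```latex
% Toy example (my own illustration, not from the book). Take the
% inconsistent preferences x = 2y, y = 3z, x = 5z (inconsistent, since
% 2 * 3 = 6 differs from 5). Discount each right-hand side:
\[
  x = 2\alpha_1\, y, \qquad y = 3\alpha_2\, z, \qquad x = 5\alpha_3\, z,
\]
% a homogeneous system with a non-null solution iff 6\alpha_1\alpha_2 = 5\alpha_3.
% The Fairness Principle sets \alpha_1 = \alpha_2 = \alpha_3 = \alpha, so
\[
  6\alpha^2 = 5\alpha \;\Longrightarrow\; \alpha = \tfrac{5}{6},
\]
% and taking z = 1 yields the priority vector (x, y, z) = (25/6, 5/2, 1).
```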
Title: Genetic algorithm for robotic telescope scheduling
Abstract: This work was inspired by the author's experience with telescope scheduling. The author's long-term goal is to develop and further extend software for an autonomous observatory. The software shall provide users with all the facilities they need to take scientific images of the night sky, cooperate with other autonomous observatories, and possibly more. This work shows how a genetic algorithm can be used for the scheduling of a single observatory, as well as of a network of observatories.
Title: Constraint solvers: An empirical evaluation of design decisions
Abstract: This paper presents an evaluation of the design decisions made in four state-of-the-art constraint solvers: Choco, ECLiPSe, Gecode, and Minion. To assess the impact of design decisions, instances of the five problem classes n-Queens, Golomb Ruler, Magic Square, Social Golfers, and Balanced Incomplete Block Design are modelled and solved with each solver. The results of the experiments are not meant to give an indication of the performance of a solver, but rather to investigate what influence the choice of algorithms and data structures has. The analysis of the impact of the design decisions focuses on the different approaches to memory management, behaviour with increasing problem size, and specialised algorithms for specific types of variables. It also briefly considers other, less significant decisions.
Title: Dominion -- A constraint solver generator
Abstract: This paper proposes a design for a system to generate constraint solvers that are specialised for specific problem models. It describes the design in detail and gives preliminary experimental results showing the feasibility and effectiveness of the approach.
Title: Classifying the typefaces of the Gutenberg 42-line bible
Abstract: We have measured the dissimilarities among several printed characters of a single page in the Gutenberg 42-line bible, and we prove statistically the existence of several different matrices from which the metal types were constructed. This is in contrast with the prevailing theory, which states that only one matrix per character was used in the printing process of Gutenberg's greatest work. The main mathematical tool for this purpose is cluster analysis, combined with a statistical test for outliers. We carry out the research with two letters, i and a. In the first case, an exact clustering method is employed; in the second, with more specimens to be classified, we resort to an approximate agglomerative clustering method. The results show that the letters form clusters according to their shape, with significant shape differences among clusters, and allow us to conclude, with a very small probability of error, that the metal types used to print them were indeed cast from several different matrices. Mathematics Subject Classification: 62H30
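The approximate agglomerative step on a precomputed dissimilarity matrix between letter specimens could be realized as below; the average-linkage choice and the cut height are illustrative assumptions:

```python
# Hedged sketch of agglomerative clustering on a precomputed dissimilarity
# matrix between letter specimens, using SciPy's hierarchical clustering.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def cluster_specimens(dissimilarity_matrix, cut_height):
    condensed = squareform(dissimilarity_matrix, checks=False)
    tree = linkage(condensed, method='average')   # agglomerative merge tree
    return fcluster(tree, t=cut_height, criterion='distance')  # cluster labels
```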
Title: Estimation error for blind Gaussian time series prediction
Abstract: We tackle the issue of the blind prediction of a Gaussian time series. For this, we construct a projection operator built by plugging an empirical covariance estimate into a Schur complement decomposition of the projector. This operator is then used to compute the predictor. Rates of convergence of the estimates are given.
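A minimal sketch of such a plug-in predictor for one-step-ahead prediction: the empirical autocovariances enter a Yule-Walker system whose solution gives the projection coefficients, which corresponds to the Schur-complement-based projection the abstract describes (the lag order p is assumed chosen by the user):

```python
# Hedged sketch of the plug-in predictor: empirical autocovariances plugged
# into a Yule-Walker system; the solution gives the projection coefficients.
import numpy as np
from scipy.linalg import toeplitz, solve

def blind_predict(x, p):
    """One-step-ahead prediction of a stationary Gaussian series."""
    n, mu = len(x), x.mean()
    gamma = np.array([np.dot(x[:n - k] - mu, x[k:] - mu) / n
                      for k in range(p + 1)])          # empirical autocovariances
    coef = solve(toeplitz(gamma[:p]), gamma[1:p + 1])  # projection coefficients
    past = x[:-p - 1:-1] - mu                          # last p values, newest first
    return mu + coef @ past
```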
Title: Logical Evaluation of Consciousness: For Incorporating Consciousness into Machine Architecture
Abstract: Machine consciousness is the study of consciousness from a biological, philosophical, mathematical and physical perspective, and the design of a model that can fit into a programmable system architecture. The prime objective of the study is to make the system architecture behave consciously, as a biological model does. The present work develops a feasible definition of consciousness that characterizes it with four parameters, i.e., parasitic, symbiotic, self-referral and reproduction. It also develops a biologically inspired consciousness architecture comprising the following layers: a quantum layer, a cellular layer, an organ layer and a behavioral layer, and traces the characteristics of consciousness at each layer. Finally, the work outlines a physical and algorithmic architecture for devising a system that can behave consciously.
Title: Some considerations on how the human brain must be arranged in order to make its replication in a thinking machine possible
Abstract: For most of my life, I have earned my living as a computer vision professional busy with image processing tasks and problems. In the computer vision community there is a widespread belief that artificial vision systems faithfully replicate human vision abilities, or at least very closely mimic them. It was a great surprise to me when one day I realized that computer and human vision have next to nothing in common. The former is occupied with extensive data processing, carrying out massive pixel-based calculations, while the latter is busy with meaningful information processing, concerned with smart object-based manipulations. The gap between the two is insurmountable. To resolve this confusion, I had to go back and re-evaluate the vision phenomenon itself, defining more carefully what visual information is and how to treat it properly. In this work I have not been, as is usually accepted, biologically inspired. On the contrary, I have drawn my inspiration from a pure mathematical theory, Kolmogorov's complexity theory. The results of my work have already been published elsewhere. So the objective of this paper is to try to apply the insights gained in the course of this enterprise to the more general case of information processing in the human brain and the challenging issue of human intelligence.
Title: Dendritic Cells for SYN Scan Detection
Abstract: Artificial immune systems have previously been applied to the problem of intrusion detection. The aim of this research is to develop an intrusion detection system based on the function of Dendritic Cells (DCs). DCs are antigen presenting cells and key to the activation of the human immune system, behaviour which has been abstracted to form the Dendritic Cell Algorithm (DCA). In algorithmic terms, individual DCs perform multi-sensor data fusion, asynchronously correlating the fused data signals with a secondary data stream. The aggregate output of a population of cells is analysed and forms the basis of an anomaly detection system. In this paper the DCA is applied to the detection of outgoing port scans using TCP SYN packets. Results show that detection can be achieved with the DCA, yet some false positives can be encountered when simultaneously scanning and using other network services. Suggestions are made for using adaptive signals to alleviate this uncovered problem.
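The per-cell signal-fusion step can be sketched as a set of weighted sums; the weight values below are illustrative placeholders, not the published DCA weight matrix:

```python
# Hedged sketch of a single DC's signal-fusion step. The weights are
# illustrative placeholders, NOT the published DCA weight matrix; only the
# structure (weighted sums of PAMP, danger and safe signals) follows the
# algorithm's description above.
def fuse_signals(pamp, danger, safe):
    """Return (costimulation, semi-mature, mature) interim output signals."""
    csm = 2 * pamp + 1 * danger + 2 * safe          # drives cell maturation
    semi_mature = 3 * safe                          # evidence of normality
    mature = 2 * pamp + 1 * danger - 3 * safe       # evidence of anomaly
    return csm, semi_mature, mature
```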
Title: Multivariate Granger Causality and Generalized Variance
Abstract: Granger causality analysis is a popular method for inference on directed interactions in complex systems of many variables. A shortcoming of the standard framework for Granger causality is that it only allows for examination of interactions between single (univariate) variables within a system, perhaps conditioned on other variables. However, interactions do not necessarily take place between single variables, but may occur among groups, or "ensembles", of variables. In this study we establish a principled framework for Granger causality in the context of causal interactions among two or more multivariate sets of variables. Building on Geweke's seminal 1982 work, we offer new justifications for one particular form of multivariate Granger causality based on the generalized variances of residual errors. Taken together, our results support a comprehensive and theoretically consistent extension of Granger causality to the multivariate case. Treated individually, they highlight several specific advantages of the generalized variance measure, which we illustrate using applications in neuroscience as an example. We further show how the measure can be used to define "partial" Granger causality in the multivariate context and we also motivate reformulations of "causal density" and "Granger autonomy". Our results are directly applicable to experimental data and promise to reveal new types of functional relations in complex systems, neural and otherwise.
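The generalized-variance measure alluded to above can be sketched as a difference of log-determinants of residual covariances from restricted and full VAR regressions; the one-lag default and plain least-squares fitting are my simplifications:

```python
# Hedged sketch of the generalized-variance form of multivariate Granger
# causality: the drop in log det of the residual covariance of Y when the
# past of X is added to the regression. Y and X are (time, dim) arrays.
import numpy as np

def gc_generalized_variance(Y, X, p=1):
    T = len(Y)
    lagged = lambda Z: np.hstack([Z[p - k - 1:T - k - 1] for k in range(p)])
    target = Y[p:]

    def resid_cov(design):
        beta, *_ = np.linalg.lstsq(design, target, rcond=None)
        resid = target - design @ beta
        return np.atleast_2d(np.cov(resid.T))

    restricted = resid_cov(lagged(Y))                      # Y's own past only
    full = resid_cov(np.hstack([lagged(Y), lagged(X)]))    # joint past of Y and X
    return np.linalg.slogdet(restricted)[1] - np.linalg.slogdet(full)[1]
```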