Title: Simulated annealing for weighted polygon packing
|
Abstract: In this paper we present a new algorithm for a layout optimization problem: this concerns the placement of weighted polygons inside a circular container, the two objectives being to minimize imbalance of mass and to minimize the radius of the container. This problem carries real practical significance in industrial applications (such as the design of satellites), as well as being of significant theoretical interest. Previous work has dealt with circular or rectangular objects, but here we deal with the more realistic case where objects may be represented as polygons and the polygons are allowed to rotate. We present a solution based on simulated annealing and first test it on instances with known optima. Our results show that the algorithm obtains container radii that are close to optimal. We also compare our method with existing algorithms for the (special) rectangular case. Experimental results show that our approach outperforms these methods in terms of solution quality.
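As an illustration of the kind of search such an approach builds on, here is a minimal simulated annealing loop; the `cost` and `perturb` callables are hypothetical placeholders (a real implementation would compute the container radius and mass imbalance from the polygon poses and propose overlap-free translations and rotations):

```python
import math, random

def anneal(poses, cost, perturb, t0=1.0, t_min=1e-4, alpha=0.95, iters_per_temp=200):
    """Generic simulated annealing loop (sketch).
    poses   : initial placement, e.g. a list of (x, y, theta) per polygon
    cost    : hypothetical objective, e.g. w1 * container_radius + w2 * imbalance
    perturb : hypothetical move generator (translate/rotate one polygon)
    """
    best = cur = poses
    best_c = cur_c = cost(cur)
    t = t0
    while t > t_min:
        for _ in range(iters_per_temp):
            cand = perturb(cur)
            c = cost(cand)
            # Accept improvements always, uphill moves with Boltzmann probability.
            if c < cur_c or random.random() < math.exp((cur_c - c) / t):
                cur, cur_c = cand, c
                if c < best_c:
                    best, best_c = cand, c
        t *= alpha  # geometric cooling schedule
    return best, best_c

# Toy usage with a stand-in objective (real use would plug in polygon geometry):
rnd = random.Random(0)
sol, c = anneal([0.0], cost=lambda p: (p[0] - 2.0) ** 2,
                perturb=lambda p: [p[0] + rnd.gauss(0, 0.5)])
print(sol, c)  # converges near [2.0]
```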
|
Title: Determining the Unithood of Word Sequences using a Probabilistic Approach
|
Abstract: Most research related to unithood was conducted as part of a larger effort for the determination of termhood. Consequently, novelties are rare in this small sub-field of term extraction. In addition, existing work was mostly empirically motivated and derived. We propose a new probabilistically-derived measure, independent of any influence of termhood, that provides dedicated measures to gather linguistic evidence from parsed text and statistical evidence from the Google search engine for the measurement of unithood. Our comparative study using 1,825 test cases against an existing empirically-derived function revealed an improvement in terms of precision, recall and accuracy.
|
Title: Determining the Unithood of Word Sequences using Mutual Information and Independence Measure
|
Abstract: Most works related to unithood were conducted as part of a larger effort for the determination of termhood. Consequently, the number of independent studies that examine the notion of unithood and produce dedicated techniques for measuring it is extremely small. We propose a new approach, independent of any influence of termhood, that provides dedicated measures to gather linguistic evidence from parsed text and statistical evidence from the Google search engine for the measurement of unithood. Our evaluations revealed a precision of 98.68% and a recall of 91.82%, with an accuracy of 95.42%, in measuring the unithood of 1,005 test cases.
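The statistical evidence in such measures is typically a co-occurrence association score. As a minimal sketch, here is pointwise mutual information computed from hypothetical search-engine page counts; this illustrates the idea, not the paper's exact measure:

```python
import math

def pmi(count_xy, count_x, count_y, n):
    """Pointwise mutual information of a word pair from page counts.
    n is the assumed total number of indexed pages; all counts are illustrative."""
    p_xy = count_xy / n
    p_x, p_y = count_x / n, count_y / n
    return math.log2(p_xy / (p_x * p_y))

# Illustrative numbers only: a strongly associated pair scores well above 0.
print(pmi(count_xy=120_000, count_x=2_000_000, count_y=1_500_000, n=10_000_000_000))
```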
|
Title: Distribution of complexities in the Vai script
|
Abstract: In the paper, we analyze the distribution of complexities in the Vai script, an indigenous syllabic writing system from Liberia. It is found that the uniformity hypothesis for complexities fails for this script. The models using Poisson distribution for the number of components and hyper-Poisson distribution for connections provide good fits in the case of the Vai script.
|
Title: Enhanced Integrated Scoring for Cleaning Dirty Texts
|
Abstract: An increasing number of approaches for ontology engineering from text are gearing towards the use of online sources such as company intranets and the World Wide Web. Despite this rise, little work exists on preprocessing and cleaning dirty texts from online sources. This paper presents an enhancement of an Integrated Scoring for Spelling error correction, Abbreviation expansion and Case restoration (ISSAC). ISSAC is implemented as part of a text preprocessing phase in an ontology engineering system. New evaluations performed on the enhanced ISSAC using 700 chat records reveal an improved accuracy of 98% as compared to 96.5% and 71% based on the use of only basic ISSAC and of Aspell, respectively.
|
Title: Estimating the Parameters of Binomial and Poisson Distributions via Multistage Sampling
|
Abstract: In this paper, we develop a new class of sampling schemes for estimating parameters of binomial and Poisson distributions. Without any prior information about the unknown parameters, our sampling schemes rigorously guarantee prescribed levels of precision and confidence.
|
Title: Three New Complexity Results for Resource Allocation Problems
|
Abstract: We prove the following results for task allocation of indivisible resources: - The problem of finding a leximin-maximal resource allocation is in P if the agents have max-utility functions and atomic demands. - Deciding whether a resource allocation is Pareto-optimal is coNP-complete for agents with (1-)additive utility functions. - Deciding whether there exists a Pareto-optimal and envy-free resource allocation is Sigma_2^p-complete for agents with (1-)additive utility functions.
|
Title: A multi-resolution, non-parametric, Bayesian framework for identification of spatially-varying model parameters
|
Abstract: This paper proposes a hierarchical, multi-resolution framework for the identification of model parameters and their spatial variability from noisy measurements of the response or output. Such parameters are frequently encountered in PDE-based models and correspond to quantities such as density or pressure fields, elasto-plastic moduli and internal variables in solid mechanics, conductivity fields in heat diffusion problems, permeability fields in fluid flow through porous media, etc. The proposed model has all the advantages of traditional Bayesian formulations, such as the ability to produce measures of confidence for the inferences made, and provides not only predictive estimates but also quantitative measures of the predictive uncertainty. In contrast to existing approaches, it utilizes a parsimonious, non-parametric formulation that favors sparse representations and whose complexity can be determined from the data. The proposed framework is non-intrusive and makes use of a sequence of forward solvers operating at various resolutions. As a result, inexpensive, coarse solvers are used to identify the most salient features of the unknown field(s), which are subsequently enriched by invoking solvers operating at finer resolutions. This leads to significant computational savings, particularly in problems involving computationally demanding forward models, as well as improvements in accuracy. The framework relies on a novel, adaptive scheme based on Sequential Monte Carlo sampling, which is embarrassingly parallelizable and circumvents issues with slow mixing encountered in Markov Chain Monte Carlo schemes.
|
Title: A New Upper Bound on the Capacity of a Class of Primitive Relay Channels
|
Abstract: We obtain a new upper bound on the capacity of a class of discrete memoryless relay channels. For this class of relay channels, the relay observes an i.i.d. sequence $T$, which is independent of the channel input $X$. The channel is described by a set of probability transition functions $p(y|x,t)$ for all $(x,t,y)\in \mathcal{X}\times\mathcal{T}\times\mathcal{Y}$. Furthermore, a noiseless link of finite capacity $R_0$ exists from the relay to the receiver. Although the capacity for these channels is not known in general, the capacity of a subclass of these channels, namely when $T=g(X,Y)$ for some deterministic function $g$, was obtained in [1], where it was shown to be equal to the cut-set bound. Another instance where the capacity was obtained was in [2], where the channel output $Y$ can be written as $Y=X\oplus Z$, where $\oplus$ denotes modulo-$m$ addition, $Z$ is independent of $X$, $|\mathcal{X}|=|\mathcal{Z}|=m$, and $T$ is some stochastic function of $Z$. The compress-and-forward (CAF) achievability scheme [3] was shown to be capacity achieving in both cases. Using our upper bound we recover the capacity results of [1] and [2]. We also obtain the capacity of a class of channels which does not fall into either of the classes studied in [1] and [2]. For this class of channels, the CAF scheme is shown to be optimal, but the capacity is strictly less than the cut-set bound for certain values of $R_0$. We also evaluate our outer bound for a particular relay channel with binary multiplicative states and binary additive noise, for which the channel is given as $Y=TX+N$. We show that our upper bound is strictly better than the cut-set upper bound for certain values of $R_0$, but it lies strictly above the rates yielded by the CAF achievability scheme.
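For reference, the cut-set bound that the abstract repeatedly compares against takes the following standard form for a primitive relay channel with a rate-$R_0$ noiseless relay-receiver link (this is the classical bound, not the paper's new one):

```latex
% Standard cut-set upper bound for the primitive relay channel:
C \le \max_{p(x)} \min\{\, I(X;Y,T),\; I(X;Y) + R_0 \,\}
```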
|
Title: Class-Specific Tests of Spatial Segregation Based on Nearest Neighbor Contingency Tables
|
Abstract: The spatial interaction between two or more classes (or species) has important consequences in many fields and might cause multivariate clustering patterns such as segregation or association. The spatial pattern of segregation occurs when members of a class tend to be found near members of the same class (i.e., conspecifics), while association occurs when members of a class tend to be found near members of the other class or classes. These patterns can be tested using a nearest neighbor contingency table (NNCT). The null hypothesis is randomness in the nearest neighbor (NN) structure, which may result from -- among other patterns -- random labeling (RL) or complete spatial randomness (CSR) of points from two or more classes (called the CSR independence pattern, henceforth). In this article, we consider Dixon's class-specific tests of segregation and introduce a new class-specific test, which is based on a new decomposition of Dixon's overall chi-square segregation statistic. We demonstrate that the tests we consider provide information on different aspects of the spatial interaction between the classes and that they are conditional under the CSR independence pattern, but not under the RL pattern. We analyze the distributional properties and prove the consistency of these tests, and we compare their empirical significance levels (Type I error rates) and empirical power estimates using Monte Carlo simulations. We demonstrate that the new class-specific tests have performance comparable to that of the currently available NNCT-based tests in terms of Type I error and power estimates. For illustrative purposes, we use three example data sets. We also provide guidelines for using these tests.
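A minimal sketch of how an NNCT is built, followed by a naive Pearson chi-square on the table; note that this naive test ignores the dependence between NN pairs that Dixon's tests are designed to handle, so it is illustrative only:

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.stats import chi2

def nnct(points, labels):
    """Nearest neighbor contingency table: entry (i, j) counts points of class i
    whose nearest neighbor belongs to class j."""
    tree = cKDTree(points)
    _, nn = tree.query(points, k=2)        # k=2: nearest non-self neighbor
    classes = np.unique(labels)
    table = np.zeros((len(classes), len(classes)), dtype=int)
    for a, b in zip(labels, labels[nn[:, 1]]):
        table[np.searchsorted(classes, a), np.searchsorted(classes, b)] += 1
    return table

# Naive overall test: Pearson chi-square on the NNCT against independence.
# NOTE: ignores the NN-pair dependence that Dixon's variance corrections handle.
rng = np.random.default_rng(0)
pts = rng.random((200, 2))
labs = rng.integers(0, 2, 200)
t = nnct(pts, labs)
expected = t.sum(1, keepdims=True) * t.sum(0, keepdims=True) / t.sum()
stat = ((t - expected) ** 2 / expected).sum()
print(t, "p ~", chi2.sf(stat, df=(t.shape[0] - 1) ** 2))
```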
|
Title: Stiffness Analysis Of Multi-Chain Parallel Robotic Systems
|
Abstract: The paper presents a new stiffness modelling method for multi-chain parallel robotic manipulators with flexible links and compliant actuating joints. In contrast to other works, the method involves an FEA-based link stiffness evaluation and employs a new solution strategy for the kinetostatic equations, which allows computing the stiffness matrix for singular postures and taking into account the influence of internal forces. The advantages of the developed technique are confirmed by application examples dealing with the stiffness analysis of the Orthoglide manipulator.
|
Title: Bias-Variance Techniques for Monte Carlo Optimization: Cross-validation for the CE Method
|
Abstract: In this paper, we examine the CE method in the broad context of Monte Carlo Optimization (MCO) and Parametric Learning (PL), a type of machine learning. A well-known overarching principle used to improve the performance of many PL algorithms is the bias-variance tradeoff. This tradeoff has been used to improve PL algorithms ranging from Monte Carlo estimation of integrals, to linear estimation, to general statistical estimation. Moreover, as described in prior work, MCO is very closely related to PL. Owing to this similarity, the bias-variance tradeoff affects MCO performance, just as it does PL performance. In this article, we exploit the bias-variance tradeoff to enhance the performance of MCO algorithms. We use cross-validation, a technique based on the bias-variance tradeoff, to significantly improve the performance of the Cross Entropy (CE) method, which is an MCO algorithm. In previous work we have confirmed that other PL techniques improve the performance of other MCO algorithms. We conclude that the many techniques pioneered in PL could be investigated as ways to improve MCO algorithms in general, and the CE method in particular.
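For concreteness, here is the plain Gaussian CE method for continuous minimization, i.e. the baseline algorithm whose performance the paper improves with cross-validation; all parameters here are illustrative:

```python
import numpy as np

def cross_entropy_min(f, dim, n_samples=100, elite_frac=0.1, iters=50, seed=0):
    """Plain Gaussian Cross Entropy method (no cross-validation): sample,
    keep the elite fraction, refit the sampling distribution to the elites."""
    rng = np.random.default_rng(seed)
    mu, sigma = np.zeros(dim), np.ones(dim) * 5.0
    n_elite = max(1, int(n_samples * elite_frac))
    for _ in range(iters):
        x = rng.normal(mu, sigma, size=(n_samples, dim))
        elite = x[np.argsort([f(xi) for xi in x])[:n_elite]]
        mu, sigma = elite.mean(0), elite.std(0) + 1e-8  # jitter avoids collapse
    return mu

print(cross_entropy_min(lambda x: ((x - 3.0) ** 2).sum(), dim=2))  # ~ [3, 3]
```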
|
Title: HIV with contact-tracing: a case study in Approximate Bayesian Computation
|
Abstract: Missing data is a recurrent issue in epidemiology, where the infection process may be only partially observed. Approximate Bayesian Computation (ABC), an alternative to data imputation methods such as Markov chain Monte Carlo integration, is proposed for making inference in epidemiological models. It is a likelihood-free method that relies exclusively on numerical simulations. ABC consists of computing a distance between simulated and observed summary statistics and weighting the simulations according to this distance. We propose an original extension of ABC to path-valued summary statistics, corresponding to the cumulated number of detections as a function of time. For a standard compartmental model with Susceptible, Infectious and Recovered individuals (SIR), we show that the posterior distributions obtained with ABC and MCMC are similar. In a refined SIR model well suited to the HIV contact-tracing data in Cuba, we perform a comparison between ABC with full and binned detection times. For the Cuban data, we evaluate the efficiency of the detection system and predict the evolution of the HIV-AIDS epidemic. In particular, the percentage of undetected infectious individuals is found to be of the order of 40%.
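A minimal ABC rejection sketch on a toy SIR model: the final epidemic size stands in for the paper's path-valued cumulated-detection statistic, and all parameter values are illustrative:

```python
import numpy as np

def sir_final_size(beta, gamma, n=100, i0=1, rng=None):
    """Toy Gillespie SIR (embedded jump chain); returns the final epidemic size,
    a crude summary statistic standing in for a cumulated-detections path."""
    rng = rng or np.random.default_rng()
    s, i, r = n - i0, i0, 0
    while i > 0:
        inf_rate, rec_rate = beta * s * i / n, gamma * i
        if rng.random() < inf_rate / (inf_rate + rec_rate):
            s, i = s - 1, i + 1
        else:
            i, r = i - 1, r + 1
    return r

# ABC: simulate under prior draws, keep parameters whose summary is close to data.
rng = np.random.default_rng(1)
obs = sir_final_size(beta=1.5, gamma=1.0, rng=rng)   # pretend this is the data
draws = [(rng.uniform(0.5, 3.0), rng.uniform(0.5, 2.0)) for _ in range(5000)]
eps = 5
posterior = [(b, g) for b, g in draws
             if abs(sir_final_size(b, g, rng=rng) - obs) <= eps]
print(len(posterior), np.mean([b for b, _ in posterior]))
```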
|
Title: Large Scale Variational Inference and Experimental Design for Sparse Generalized Linear Models
|
Abstract: Many problems of low-level computer vision and image processing, such as denoising, deconvolution, tomographic reconstruction or super-resolution, can be addressed by maximizing the posterior distribution of a sparse linear model (SLM). We show how higher-order Bayesian decision-making problems, such as optimizing image acquisition in magnetic resonance scanners, can be addressed by querying the SLM posterior covariance, unrelated to the density's mode. We propose a scalable algorithmic framework, with which SLM posteriors over full, high-resolution images can be approximated for the first time, solving a variational optimization problem which is convex iff posterior mode finding is convex. These methods successfully drive the optimization of sampling trajectories for real-world magnetic resonance imaging through Bayesian experimental design, which has not been attempted before. Our methodology provides new insight into similarities and differences between sparse reconstruction and approximate Bayesian inference, and has important implications for compressive sensing of real-world images.
|
Title: A Principal Component Analysis for Trees
|
Abstract: The active field of Functional Data Analysis (about understanding the variation in a set of curves) has been recently extended to Object Oriented Data Analysis, which considers populations of more general objects. A particularly challenging extension of this set of ideas is to populations of tree-structured objects. We develop an analog of Principal Component Analysis for trees, based on the notion of tree-lines, and propose numerically fast (linear time) algorithms to solve the resulting optimization problems. The solutions we obtain are used in the analysis of a data set of 73 individuals, where each data object is a tree of blood vessels in one person's brain.
|
Title: Inference for the limiting cluster size distribution of extreme values
|
Abstract: Any limiting point process for the time normalized exceedances of high levels by a stationary sequence is necessarily compound Poisson under appropriate long range dependence conditions. Typically exceedances appear in clusters. The underlying Poisson points represent the cluster positions and the multiplicities correspond to the cluster sizes. In the present paper we introduce estimators of the limiting cluster size probabilities, which are constructed through a recursive algorithm. We derive estimators of the extremal index which plays a key role in determining the intensity of cluster positions. We study the asymptotic properties of the estimators and investigate their finite sample behavior on simulated data.
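For comparison, the classical runs estimator of the extremal index (not the recursive estimator introduced in the paper) can be written in a few lines:

```python
import numpy as np

def runs_extremal_index(x, u, r):
    """Classical runs estimator of the extremal index theta: a cluster ends when
    an exceedance of u is followed by r consecutive non-exceedances."""
    exc = x > u
    idx = np.flatnonzero(exc)
    if len(idx) == 0:
        return np.nan
    ends = sum(1 for i in idx if not exc[i + 1 : i + 1 + r].any())
    return ends / len(idx)

rng = np.random.default_rng(2)
# AR(1)-style dependent series: exceedances cluster, so theta < 1.
z = rng.standard_normal(100_000)
x = np.empty_like(z); x[0] = z[0]
for t in range(1, len(z)):
    x[t] = 0.8 * x[t - 1] + z[t]
print(runs_extremal_index(x, u=np.quantile(x, 0.99), r=10))
```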
|
Title: Generalised linear mixed model analysis via sequential Monte Carlo sampling
|
Abstract: We present a sequential Monte Carlo sampler algorithm for the Bayesian analysis of generalised linear mixed models (GLMMs). These models support a variety of interesting regression-type analyses, but performing inference is often extremely difficult, even when using the Bayesian approach combined with Markov chain Monte Carlo (MCMC). The Sequential Monte Carlo sampler (SMC) is a new and general method for producing samples from posterior distributions. In this article we demonstrate use of the SMC method for performing inference for GLMMs. We demonstrate the effectiveness of the method on both simulated and real data, and find that sequential Monte Carlo is a competitive alternative to the available MCMC techniques.
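A minimal likelihood-tempering SMC sampler of the generic kind described here, with user-supplied prior sampler and log-likelihood; the MH rejuvenation step assumes an approximately flat prior for brevity, and the paper's GLMM implementation is considerably more elaborate:

```python
import numpy as np

def smc_sampler(logprior_sample, loglik, n=1000, betas=np.linspace(0, 1, 21),
                step=0.2, seed=0):
    """Likelihood-tempering SMC: reweight from pi_{t-1} to pi_t ~ prior * lik^beta_t,
    resample, then rejuvenate with a random-walk Metropolis step."""
    rng = np.random.default_rng(seed)
    theta = logprior_sample(rng, n)              # particles from the prior
    ll = np.array([loglik(t) for t in theta])
    for b0, b1 in zip(betas[:-1], betas[1:]):
        w = np.exp((b1 - b0) * (ll - ll.max()))  # incremental weights, stabilized
        idx = rng.choice(n, n, p=w / w.sum())    # multinomial resampling
        theta, ll = theta[idx], ll[idx]
        prop = theta + step * rng.standard_normal(theta.shape)
        ll_prop = np.array([loglik(t) for t in prop])
        # MH accept under the tempered target (flat-prior simplification).
        acc = np.log(rng.random(n)) < b1 * (ll_prop - ll)
        theta[acc], ll[acc] = prop[acc], ll_prop[acc]
    return theta

# Toy use: posterior of a normal mean under a diffuse prior, data mean ~ 2.
data = np.random.default_rng(3).normal(2.0, 1.0, 50)
post = smc_sampler(lambda rng, n: rng.normal(0, 10, (n, 1)),
                   lambda t: -0.5 * ((data - t[0]) ** 2).sum())
print(post.mean())
```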
|
Title: On-the-fly Macros
|
Abstract: We present a domain-independent algorithm that computes macros in a novel way. Our algorithm computes macros "on-the-fly" for a given set of states and does not require previously learned or inferred information, nor prior domain knowledge. The algorithm is used to define new domain-independent tractable classes of classical planning that are proved to include and .
|
Title: Une grammaire formelle du créole martiniquais pour la génération automatique
|
Abstract: In this article, some first elements of a computational modelling of the grammar of the Martiniquese French Creole dialect are presented. The sources of inspiration for the modelling are the functional description given by Damoiseau (1984) and Pinalie and Bernabé's (1999) grammar manual. Building on earlier work in text generation (Vaillant, 1997), a unification grammar formalism, namely Tree Adjoining Grammars (TAG), and a modelling of lexical functional categories based on syntactic and semantic properties are used to implement a grammar of Martiniquese Creole in a prototype text generation system. One of the main applications of the system could be its use as a software tool supporting the learning of Creole as a second language.
|
Title: A Layered Grammar Model: Using Tree-Adjoining Grammars to Build a Common Syntactic Kernel for Related Dialects
|
Abstract: This article describes the design of a common syntactic description for the core grammar of a group of related dialects. The common description does not rely on an abstract sub-linguistic structure like a metagrammar: it consists of a single FS-LTAG where the actual specific language is included as one of the attributes in the set of attribute types defined for the features. When the lang attribute is instantiated, the selected subset of the grammar is equivalent to the grammar of one dialect. When it is not, we have a model of a hybrid multidialectal linguistic system. This principle is used for a group of creole languages of the West-Atlantic area, namely the French-based Creoles of Haiti, Guadeloupe, Martinique and French Guiana.
|
Title: Analyse spectrale des textes: détection automatique des frontières de langue et de discours
|
Abstract: We propose a theoretical framework within which information on the vocabulary of a given corpus can be inferred on the basis of statistical information gathered on that corpus. Inferences can be made on the categories of the words in the vocabulary and on their syntactic properties within particular languages. Based on the same statistical data, it is possible to build matrices of syntagmatic similarity (bigram transition matrices) or paradigmatic similarity (the probability for any pair of words to share common contexts). When clustered with respect to their syntagmatic similarity, words tend to group into sublanguage vocabularies; when clustered with respect to their paradigmatic similarity, they group into syntactic or semantic classes. Our experiments have explored the first of these two possibilities. The results are interpreted within a Markov chain model of the corpus's generative process(es): we show that the results of a spectral analysis of the transition matrix can be interpreted as probability distributions of words within clusters. This method yields a soft clustering of the vocabulary into sublanguages that contribute to the generation of heterogeneous corpora. As an application, we show how multilingual texts can be visually segmented into linguistically homogeneous segments. Our method is specifically useful in the case of related languages that happen to be mixed in corpora.
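A toy version of the syntagmatic pipeline: build a bigram transition matrix from a stream mixing two artificial "sublanguages", then read the soft separation off the second eigenvector of the (transposed) transition matrix. The corpus and vocabulary are purely illustrative:

```python
import numpy as np

def transition_matrix(tokens):
    """Row-stochastic word bigram matrix: P[i, j] = P(next = j | current = i)."""
    vocab = sorted(set(tokens))
    ix = {w: k for k, w in enumerate(vocab)}
    c = np.zeros((len(vocab), len(vocab)))
    for a, b in zip(tokens, tokens[1:]):
        c[ix[a], ix[b]] += 1
    return vocab, c / np.maximum(c.sum(1, keepdims=True), 1)

# Two artificial 'sublanguages' mixed in one stream; their words should separate
# along the second eigenvector (the first eigenvector is the trivial one).
rng = np.random.default_rng(4)
stream = []
for _ in range(400):
    lang = rng.choice(["ab", "xy"])
    stream += list(rng.choice(list(lang), 20))
vocab, P = transition_matrix(stream)
vals, vecs = np.linalg.eig(P.T)
order = np.argsort(-vals.real)
second = vecs[:, order[1]].real
print(dict(zip(vocab, np.round(second, 2))))  # opposite signs ~ different sublanguages
```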
|
Title: Soft Uncoupling of Markov Chains for Permeable Language Distinction: A New Algorithm
|
Abstract: Without prior knowledge, distinguishing different languages may be a hard task, especially when their borders are permeable. We develop an extension of spectral clustering -- a powerful unsupervised classification toolbox -- that is shown to resolve accurately the task of soft language distinction. At the heart of our approach, we replace the usual hard membership assignment of spectral clustering with a soft, probabilistic assignment, which also has the advantage of bypassing a well-known complexity bottleneck of the method. Furthermore, our approach relies on a novel, convenient construction of a Markov chain out of a corpus. Extensive experiments with a readily available system clearly display the potential of the method, which yields a visually appealing soft distinction of the languages that together make up a corpus.
|
Title: Blind Cognitive MAC Protocols
|
Abstract: We consider the design of cognitive Medium Access Control (MAC) protocols enabling an unlicensed (secondary) transmitter-receiver pair to communicate over the idle periods of a set of licensed channels, i.e., the primary network. The objective is to maximize data throughput while maintaining the synchronization between secondary users and avoiding interference with licensed (primary) users. No statistical information about the primary traffic is assumed to be available a priori to the secondary user. We investigate two distinct sensing scenarios. In the first, the secondary transmitter is capable of sensing all the primary channels, whereas it senses one channel only in the second scenario. In both cases, we propose MAC protocols that efficiently learn the statistics of the primary traffic online. Our simulation results demonstrate that the proposed blind protocols asymptotically achieve the throughput obtained when prior knowledge of primary traffic statistics is available.
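A toy sketch of the single-channel-sensing idea: estimate per-channel idle probabilities online and sense the channel currently believed most idle, with occasional exploration. This is an illustrative bandit-style learner, not the paper's protocol:

```python
import numpy as np

rng = np.random.default_rng(5)
true_idle = np.array([0.2, 0.6, 0.9])     # unknown primary idle probabilities
counts = np.ones(3)                        # Laplace-smoothed online estimates
idles = np.ones(3)
throughput = 0
for t in range(10_000):
    # Epsilon-greedy: mostly sense the channel currently believed most idle.
    ch = rng.integers(3) if rng.random() < 0.05 else int(np.argmax(idles / counts))
    idle = rng.random() < true_idle[ch]    # sensing outcome this slot
    counts[ch] += 1
    idles[ch] += idle
    throughput += idle                     # transmit only on sensed-idle slots
print(idles / counts, throughput / 10_000)
```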
|
Title: A Gaussian Belief Propagation Solver for Large Scale Support Vector Machines
|
Abstract: Support vector machines (SVMs) are an extremely successful class of classification and regression algorithms. Building an SVM entails solving a constrained convex quadratic programming problem, which is quadratic in the number of training samples. We introduce an efficient parallel implementation of a support vector regression solver, based on the Gaussian Belief Propagation (GaBP) algorithm. In this paper, we demonstrate that methods from the complex systems domain can be utilized to perform efficient distributed computation. We compare the proposed algorithm to previously proposed distributed and single-node SVM solvers. Our comparison shows that the proposed algorithm is just as accurate as these solvers, while being significantly faster, especially for large datasets. We demonstrate scalability of the proposed algorithm to up to 1,024 computing nodes and hundreds of thousands of data points using an IBM Blue Gene supercomputer. As far as we know, our work is the largest parallel implementation of belief propagation ever done, demonstrating the applicability of this algorithm for large scale distributed computing systems.
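The core primitive here is GaBP as an iterative solver for a linear system $Ax=b$. A minimal dense scalar-GaBP sketch follows (it converges, e.g., for diagonally dominant $A$; the paper's parallel SVM solver is far more elaborate):

```python
import numpy as np

def gabp(A, b, iters=50):
    """Scalar Gaussian Belief Propagation for A x = b (A symmetric, e.g.
    diagonally dominant). Messages are (precision, potential) pairs."""
    n = len(b)
    P = np.zeros((n, n))   # P[i, j]: precision of message i -> j
    h = np.zeros((n, n))   # h[i, j]: potential of message i -> j
    for _ in range(iters):
        for i in range(n):
            for j in range(n):
                if i == j or A[i, j] == 0:
                    continue
                # Aggregate node prior and all incoming messages except j's.
                Pi = A[i, i] + P[:, i].sum() - P[j, i]
                hi = b[i] + h[:, i].sum() - h[j, i]
                P[i, j] = -A[i, j] ** 2 / Pi
                h[i, j] = -A[i, j] * hi / Pi
    Pb = A.diagonal() + P.sum(0)     # belief precisions
    hb = b + h.sum(0)                # belief potentials
    return hb / Pb                   # posterior means = solution of A x = b

A = np.array([[4.0, 1.0, 0.0], [1.0, 4.0, 1.0], [0.0, 1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])
print(gabp(A, b), np.linalg.solve(A, b))  # should agree closely
```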
|
Title: A global physician-oriented medical information system
|
Abstract: We propose to improve medical decision making and reduce global health care costs by employing a free Internet-based medical information system with two main target groups: practicing physicians and medical researchers. After acquiring patients' consent, physicians enter medical histories, physiological data and symptoms or disorders into the system; an integrated expert system can then assist in diagnosis and statistical software provides a list of the most promising treatment options and medications, tailored to the patient. Physicians later enter information about the outcomes of the chosen treatments, data the system uses to optimize future treatment recommendations. Medical researchers can analyze the aggregate data to compare various drugs or treatments in defined patient populations on a large scale.
|
Title: Characterizing 1-Dof Henneberg-I graphs with efficient configuration spaces
|
Abstract: We define and study exact, efficient representations of realization spaces of a natural class of underconstrained 2D Euclidean Distance Constraint Systems (EDCS) or Frameworks based on 1-dof Henneberg-I graphs. Each representation corresponds to a choice of parameters and yields a different parametrized configuration space. Our notion of efficiency is based on the algebraic complexities of sampling the configuration space and of obtaining a realization from the sample (parametrized) configuration. Significantly, we give purely combinatorial characterizations that capture (i) the class of graphs that have efficient configuration spaces and (ii) the possible choices of representation parameters that yield efficient configuration spaces for a given graph. Our results automatically yield an efficient algorithm for sampling realizations, without missing extreme or boundary realizations. In addition, our results formally show that our definition of efficient configuration space is robust and that our characterizations are tight. We choose the class of 1-dof Henneberg-I graphs in order to take the next step in a systematic and graded program of combinatorial characterizations of efficient configuration spaces. In particular, the results presented here are the first characterizations that go beyond graphs that have connected and convex configuration spaces.
|
Title: A note on conditional Akaike information for Poisson regression with random effects
|
Abstract: A popular model selection approach for generalized linear mixed-effects models is the Akaike information criterion, or AIC. Prior work has pointed out the distinction between marginal and conditional inference, depending on the focus of research. The conditional AIC was derived for the linear mixed-effects model and later generalized. We show that a similar strategy extends to Poisson regression with random effects, where the conditional AIC can be obtained based on our observations. Simulation studies demonstrate the use of the criterion.
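For orientation, in the linear mixed model literature the conditional AIC takes the following general form, where the likelihood is conditioned on the predicted random effects and the penalty uses the effective degrees of freedom $\rho$ (for the linear case, the trace of the hat matrix); the abstract's contribution is the analogous quantity for the Poisson case:

```latex
% General form of the conditional AIC for mixed models:
% \hat{\beta} are estimated fixed effects, \hat{b} predicted random effects,
% and \rho the effective degrees of freedom.
\mathrm{cAIC} = -2 \log f(y \mid \hat{\beta}, \hat{b}) + 2\rho
```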
|
Title: Visualization Optimization : Application to the RoboCup Rescue Domain
|
Abstract: In this paper we demonstrate the use of intelligent optimization methodologies for the visualization optimization of virtual/simulated environments. The problem of automatically selecting an optimized set of views that best describes an ongoing simulation of a virtual environment is addressed in the context of the RoboCup Rescue Simulation domain. A generic architecture for optimization is proposed and described. We outline possible extensions of this architecture and argue how several problems within the fields of Interactive Rendering and Visualization can benefit from it.
|
Title: Modeling of Social Transitions Using Intelligent Systems
|
Abstract: In this study, we present two new hybrid intelligent systems, SONFIS and SORST, which combine three prominent intelligent computing and approximate reasoning methods: the Self-Organizing feature Map (SOM), the Neuro-Fuzzy Inference System, and Rough Set Theory (RST). We show how our algorithms can be construed as a model of government-society interactions, in which the government adopts various states of behavior: solid (absolute) or flexible. The transition of society from order to disorder, driven by changes in connectivity parameters (noise), is then inferred.
|
Title: On two-sided p-values for non-symmetric distributions
|
Abstract: Two-sided statistical tests and p-values are well defined only when the test statistic in question has a symmetric distribution. A new two-sided p-value called the conditional p-value $P_C$ is introduced here. It is closely related to the doubled p-value and has an intuitive appeal. Its use is advocated for both continuous and discrete distributions. An important advantage of this p-value is that equivalent one-sided tests are transformed into $P_C$-equivalent two-sided tests. It is compared to the widely used doubled and minimum likelihood p-values. Examples include the variance test, the binomial test and Fisher's exact test.
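The two comparators are easy to state concretely. Since the abstract does not define $P_C$ itself, here is a sketch of the doubled and minimum-likelihood two-sided p-values for a binomial test with an asymmetric null:

```python
from scipy.stats import binom

def doubled_p(x, n, p0):
    """Doubled p-value: twice the smaller tail probability (capped at 1)."""
    return min(1.0, 2 * min(binom.cdf(x, n, p0), binom.sf(x - 1, n, p0)))

def min_likelihood_p(x, n, p0):
    """Minimum-likelihood p-value: total probability of all outcomes no more
    likely than the observed one."""
    px = binom.pmf(x, n, p0)
    return sum(binom.pmf(k, n, p0) for k in range(n + 1)
               if binom.pmf(k, n, p0) <= px + 1e-12)

# Asymmetric null (p0 = 0.1), observed 5 successes out of 20: the two
# conventions give different answers, which motivates a principled definition.
print(doubled_p(5, 20, 0.1), min_likelihood_p(5, 20, 0.1))
```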
|
Title: Non-Negative Matrix Factorization, Convexity and Isometry
|
Abstract: In this paper we explore avenues for improving the reliability of dimensionality reduction methods such as Non-Negative Matrix Factorization (NMF) as interpretive exploratory data analysis tools. We first explore the difficulties of the optimization problem underlying NMF, showing for the first time that non-trivial NMF solutions always exist and that the optimization problem is actually convex, by using the theory of Completely Positive Factorization. We subsequently explore four novel approaches to finding globally-optimal NMF solutions using various ideas from convex optimization. We then develop a new method, isometric NMF (isoNMF), which preserves non-negativity while also providing an isometric embedding, simultaneously achieving two properties which are helpful for interpretation. Though it results in a more difficult optimization problem, we show experimentally that the resulting method is scalable and even achieves more compact spectra than standard NMF.
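For reference, standard multiplicative-update NMF is the baseline analyzed here; isoNMF adds the isometric-embedding objective on top of this. A minimal sketch:

```python
import numpy as np

def nmf(V, r, iters=500, seed=0):
    """Standard multiplicative-update NMF (Lee & Seung): non-negativity is
    preserved because the updates are elementwise multiplications."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W, H = rng.random((n, r)) + 0.1, rng.random((r, m)) + 0.1
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-12)
        W *= (V @ H.T) / (W @ H @ H.T + 1e-12)
    return W, H

V = np.abs(np.random.default_rng(6).random((20, 30)))
W, H = nmf(V, r=5)
print(np.linalg.norm(V - W @ H) / np.linalg.norm(V))  # relative reconstruction error
```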
|
Title: Faster and better: a machine learning approach to corner detection
|
Abstract: The repeatability and efficiency of a corner detector determine how likely it is to be useful in a real-world application. Repeatability is important because the same scene viewed from different positions should yield features which correspond to the same real-world 3D locations [Schmid et al 2000]. Efficiency is important because it determines whether the detector, combined with further processing, can operate at frame rate. Three advances are described in this paper. First, we present a new heuristic for feature detection and, using machine learning, derive a feature detector from it which can fully process live PAL video using less than 5% of the available processing time. By comparison, most other detectors cannot even operate at frame rate (Harris detector 115%, SIFT 195%). Second, we generalize the detector, allowing it to be optimized for repeatability with little loss of efficiency. Third, we carry out a rigorous comparison of corner detectors based on the above repeatability criterion applied to 3D scenes. We show that, despite being principally constructed for speed, on these stringent tests our heuristic detector significantly outperforms existing feature detectors. Finally, the comparison demonstrates that using machine learning produces significant improvements in repeatability, yielding a detector that is both very fast and of very high quality.
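The heuristic in question generalizes the segment test. A naive (unlearned) version of that test, using the usual 16-pixel Bresenham circle, looks as follows; the paper's contribution is the machine-learned decision tree that evaluates this kind of criterion quickly:

```python
import numpy as np

# Offsets of the 16-pixel Bresenham circle of radius 3 used by segment tests.
CIRCLE = [(0,3),(1,3),(2,2),(3,1),(3,0),(3,-1),(2,-2),(1,-3),
          (0,-3),(-1,-3),(-2,-2),(-3,-1),(-3,0),(-3,1),(-2,2),(-1,3)]

def segment_test(img, y, x, t=20, n=9):
    """Naive segment test: the pixel is a corner if n contiguous circle pixels
    are all brighter than I+t or all darker than I-t (wrapping around)."""
    c = int(img[y, x])
    ring = np.array([int(img[y + dy, x + dx]) for dx, dy in CIRCLE])
    for sign in (ring > c + t, ring < c - t):
        runs = np.concatenate([sign, sign])  # doubled to handle wraparound
        best = cur = 0
        for v in runs:
            cur = cur + 1 if v else 0
            best = max(best, cur)
        if best >= n:
            return True
    return False

img = np.zeros((20, 20), dtype=np.uint8)
img[8:, 8:] = 255                            # a bright quadrant corner at (8, 8)
print(segment_test(img, 8, 8))               # True: a long dark arc surrounds it
```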
|
Title: Bayesian evidence for finite element model updating
|
Abstract: This paper considers the problem of model selection within the context of finite element model updating. Given that a number of FEM updating models, with different updating parameters, can be designed, this paper proposes using the Bayesian evidence statistic to assess the probability of each updating model. This then makes it possible to evaluate the need for alternative updating parameters in the updating of the initial FE model. The model evidences are compared using the Bayes factor, which is the ratio of evidences. The Jeffreys scale is used to determine the differences between the models. The Bayesian evidence is calculated by integrating the likelihood of the data given the model and its parameters over the a priori model parameter space using the new nested sampling algorithm. The nested sampling algorithm samples this likelihood distribution by using a hard likelihood-value constraint on the sampling region, while providing the posterior samples of the updating model parameters as a by-product. This method is used to calculate the evidence of a number of plausible finite element models.
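A toy nested sampling estimate of the evidence, showing the hard likelihood-value constraint in action; naive rejection sampling from the prior stands in for a real constrained sampler, and all parameters are illustrative:

```python
import numpy as np

def nested_sampling(loglike, prior_sample, n_live=200, iters=1000, seed=0):
    """Toy nested sampling estimate of the evidence Z = integral of L(t) dpi(t).
    At each step the worst live point is replaced by a prior draw satisfying
    the hard constraint L > L_worst (here via naive rejection sampling)."""
    rng = np.random.default_rng(seed)
    live = prior_sample(rng, n_live)
    ll = np.array([loglike(t) for t in live])
    z, x_prev = 0.0, 1.0
    for i in range(1, iters + 1):
        worst = np.argmin(ll)
        x = np.exp(-i / n_live)            # expected prior-volume shrinkage
        z += np.exp(ll[worst]) * (x_prev - x)
        x_prev = x
        while True:                         # rejection step enforces L > L_worst
            cand = prior_sample(rng, 1)[0]
            if loglike(cand) > ll[worst]:
                break
        live[worst], ll[worst] = cand, loglike(cand)
    return z + np.exp(ll).mean() * x_prev   # remaining live-point contribution

# Toy check: N(0,1) likelihood under a U(-5,5) prior, so Z = 1/10.
z = nested_sampling(lambda t: -0.5 * t**2 - 0.5 * np.log(2 * np.pi),
                    lambda rng, n: rng.uniform(-5, 5, n))
print(z)  # ~ 0.1
```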
|