Abstract: This paper unifies and extends results on a class of multivariate Extreme Value (EV) models studied by Hougaard, Crowder, and Tawn. In these models both unconditional and conditional distributions are EV, and all lower-dimensional marginals and maxima belong to the class. This leads to substantial economies of understanding, analysis and prediction. One interpretation of the models is as size mixtures of EV distributions, where the mixing is by positive stable distributions. A second interpretation is as exponential-stable location mixtures (for Gumbel) or as power-stable scale mixtures (for non-Gumbel EV distributions). A third interpretation is through a Peaks over Thresholds model with a positive stable intensity. The mixing variables are used as a modeling tool and for better understanding and model checking. We study extreme value analogues of components of variance models, and new time series, spatial, and continuous parameter models for extreme values. The results are applied to data from a pitting corrosion investigation.
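A minimal simulation sketch of the size-mixture interpretation, under the assumption of logistic (Gumbel-type) dependence with parameter alpha in (0, 1); the positive stable draw uses Kanter's method, and all function names are illustrative:

```python
import numpy as np

def positive_stable(alpha, size, rng):
    """Kanter's method: S with Laplace transform E[exp(-t*S)] = exp(-t**alpha)."""
    u = rng.uniform(0.0, np.pi, size)
    e = rng.exponential(1.0, size)
    a = (np.sin(alpha * u) / np.sin(u)) ** (alpha / (1 - alpha)) \
        * np.sin((1 - alpha) * u) / np.sin(u)
    return (a / e) ** ((1 - alpha) / alpha)

def mv_gumbel_logistic(alpha, d, n, rng):
    """Size mixture: given S, the X_j = alpha*(log S + Gumbel_j) are jointly EV
    with logistic dependence parameter alpha; each margin is standard Gumbel."""
    s = positive_stable(alpha, (n, 1), rng)
    g = rng.gumbel(size=(n, d))
    return alpha * (np.log(s) + g)

rng = np.random.default_rng(0)
x = mv_gumbel_logistic(alpha=0.5, d=3, n=10_000, rng=rng)
print(x.mean(), x.std())  # roughly 0.577 (Euler-Mascheroni) and pi/sqrt(6)
```

One can check the construction directly: P(X_j <= x_j for all j | S) = exp(-S * sum_j exp(-x_j/alpha)), and averaging over S with the stable Laplace transform gives the logistic EV joint distribution exp(-(sum_j exp(-x_j/alpha))^alpha).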
Title: Robust model selection in generalized linear models
Abstract: In this paper, we extend to generalized linear models (including logistic and other binary regression models, Poisson regression and gamma regression models) the robust model selection methodology developed by Mueller and Welsh (2005; JASA) for linear regression models. As in Mueller and Welsh (2005), we combine a robust penalized measure of fit to the sample with a robust measure of out-of-sample predictive ability which is estimated using a post-stratified m-out-of-n bootstrap. A key idea is that the method can be used to compare different estimators (robust and nonrobust) as well as different models. Even when specialized back to linear regression models, the methodology presented in this paper improves on that of Mueller and Welsh (2005). In particular, we use a new bias-adjusted bootstrap estimator which avoids the need to centre the explanatory variables and to include an intercept in every model. We also use more sophisticated arguments than Mueller and Welsh (2005) to establish an essential monotonicity condition.
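A rough sketch of the m-out-of-n bootstrap idea, specialized to the linear case with a robust fit and loss (the Huber estimator is a stand-in; the paper's penalized criterion and post-stratification are not reproduced here):

```python
import numpy as np
from sklearn.linear_model import HuberRegressor

def m_out_of_n_prediction_error(X, y, features, m, B=200, seed=0):
    """Estimate out-of-sample absolute prediction error for the model using
    the columns in `features`: refit a robust estimator on B subsamples of
    size m < n and predict the held-out rows."""
    rng = np.random.default_rng(seed)
    n = len(y)
    errors = []
    for _ in range(B):
        idx = rng.choice(n, size=m, replace=False)
        out = np.setdiff1d(np.arange(n), idx)
        fit = HuberRegressor().fit(X[np.ix_(idx, features)], y[idx])
        pred = fit.predict(X[np.ix_(out, features)])
        errors.append(np.mean(np.abs(y[out] - pred)))  # robust loss
    return np.mean(errors)

# Compare candidate models by their estimated predictive ability, e.g.:
# best = min(candidate_feature_sets,
#            key=lambda f: m_out_of_n_prediction_error(X, y, f, m=50))
```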
Title: Variable importance in binary regression trees and forests
Abstract: We characterize and study variable importance (VIMP) and pairwise variable associations in binary regression trees. A key component involves the node mean squared error for a quantity we refer to as a maximal subtree. The theory naturally extends from single trees to ensembles of trees and applies to methods like random forests. This is useful because very little theory exists about the properties of importance values from random forests, even though they are widely used to screen variables, for example to filter high-throughput genomic data in Bioinformatics.
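For context, a minimal example of the kind of forest importance values the theory addresses (scikit-learn's impurity-based and permutation-based VIMP; the maximal-subtree analysis itself is the paper's contribution and is not shown):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=500, n_features=10, n_informative=3,
                           random_state=0)
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Two common VIMP estimates for screening variables:
print(forest.feature_importances_)                       # impurity-based
result = permutation_importance(forest, X, y, n_repeats=10, random_state=0)
print(result.importances_mean)                           # permutation-based
```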
Title: Proof nets for display logic
Abstract: This paper explores several extensions of proof nets for the Lambek calculus in order to handle the different connectives of display logic in a natural way. The new proof net calculus handles some recent additions to the Lambek vocabulary such as Galois connections and Grishin interactions. It concludes with an exploration of the generative capacity of the Lambek-Grishin calculus, presenting an embedding of lexicalized tree adjoining grammars into the Lambek-Grishin calculus.
Title: A Compact Self-organizing Cellular Automata-based Genetic Algorithm
Abstract: A Genetic Algorithm (GA) is proposed in which each member of the population can exchange schemata only with its neighbors according to a rule. The rule methodology and the neighborhood structure employ elements from Cellular Automata (CA) strategies. Each member of the GA population is assigned to a cell and crossover takes place only between adjacent cells, according to the predefined rule. Although combinations of CA and GA approaches have appeared previously, here we rely on the inherent self-organizing features of CA, rather than on parallelism. This conceptual shift directs us toward the evolution of compact populations containing only a handful of members. We find that the resulting algorithm can search the design space more efficiently than traditional GA strategies due to its ability to exploit mutations within this compact self-organizing population. Consequently, premature convergence is avoided and the final results are often more accurate. In order to reinforce the superior mutation capability, a re-initialization strategy is also implemented. Ten test functions and two benchmark structural engineering truss design problems are examined in order to demonstrate the performance of the method.
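A toy sketch of the core idea under simplifying assumptions (ring neighborhood, crossover restricted to the adjacent cell, a deliberately compact population, local replacement); the paper's CA update rules are richer than this:

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):                       # simple test function to minimize
    return np.sum(x ** 2)

POP, DIM, GENS = 8, 10, 300          # deliberately compact population
pop = rng.uniform(-5, 5, (POP, DIM))

for _ in range(GENS):
    fitness = np.array([sphere(ind) for ind in pop])
    new_pop = pop.copy()
    for i in range(POP):
        j = (i + 1) % POP            # ring neighborhood: adjacent cell only
        w = rng.random(DIM)
        child = w * pop[i] + (1 - w) * pop[j]            # neighbor crossover
        child += rng.normal(0, 0.1, DIM) * (rng.random(DIM) < 0.2)  # mutation
        if sphere(child) < fitness[i]:                   # local replacement
            new_pop[i] = child
    pop = new_pop

print(min(sphere(ind) for ind in pop))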
Title: Inverse Sampling for Nonasymptotic Sequential Estimation of Bounded Variable Means
Abstract: In this paper, we consider the nonasymptotic sequential estimation of means of random variables bounded between zero and one. We have rigorously demonstrated that, in order to guarantee prescribed relative precision and confidence level, it suffices to continue sampling until the sample sum is no less than a certain bound and then take the average of the samples as an estimate for the mean of the bounded random variable. We have developed an explicit formula and a bisection search method for the determination of this bound on the sample sum, without any knowledge of the bounded variable. Moreover, we have derived bounds for the distribution of the sample size. In the special case of Bernoulli random variables, we have established analytical and numerical methods to further reduce the bound on the sample sum and thus improve the efficiency of sampling. Furthermore, fallacies in existing results are detected and analyzed.
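A minimal sketch of the stopping rule; the bound `gamma` on the sample sum, whose determination via an explicit formula and bisection search is the paper's contribution, is treated here as given:

```python
import numpy as np

def inverse_sampling_mean(draw, gamma, rng):
    """Keep sampling X_i in [0, 1] until the sample sum reaches `gamma`
    (chosen from the prescribed relative precision and confidence level),
    then return the sample mean and the (random) sample size."""
    total, n = 0.0, 0
    while total < gamma:
        total += draw(rng)
        n += 1
    return total / n, n

rng = np.random.default_rng(0)
est, n = inverse_sampling_mean(lambda r: r.uniform(0, 0.6), gamma=50.0, rng=rng)
print(est, n)   # estimate of the true mean (0.3 here) and the sample size
```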
Title: The Method of Normalized Correlations - A Fast Alternative to Maximum Likelihood Estimation for Random Processes and Isotropic Random Fields with Short-Range Dependence
Abstract: This paper has been withdrawn by the authors, due to the copyright policy of the journal it was submitted to.
Title: Comparing the notions of optimality in CP-nets, strategic games and soft constraints
Abstract: The notion of optimality naturally arises in many areas of applied mathematics and computer science concerned with decision making. Here we consider this notion in the context of three formalisms used for different purposes in reasoning about multi-agent systems: strategic games, CP-nets, and soft constraints. To relate the notions of optimality in these formalisms we introduce a natural qualitative modification of the notion of a strategic game. We then show that the optimal outcomes of a CP-net are exactly the Nash equilibria of such games. This allows us to use the techniques of game theory to search for optimal outcomes of CP-nets and, vice versa, to use techniques developed for CP-nets to search for Nash equilibria of the considered games. Then, we relate the notion of optimality used in the area of soft constraints to that used in a generalization of strategic games, called graphical games. In particular, we prove that for a natural class of soft constraints that includes weighted constraints, every optimal solution is both a Nash equilibrium and a Pareto-efficient joint strategy. For a natural mapping in the other direction we show that Pareto-efficient joint strategies coincide with the optimal solutions of soft constraints.
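Since optimal CP-net outcomes correspond to pure Nash equilibria of the associated games, game-theoretic search techniques apply directly; a brute-force enumeration of pure equilibria in a finite game (illustrative only, not the paper's construction) looks like:

```python
import itertools
import numpy as np

def pure_nash_equilibria(payoffs):
    """Enumerate pure Nash equilibria of a finite strategic game.
    payoffs[p] is an array giving player p's payoff at each joint strategy."""
    shape = payoffs[0].shape
    equilibria = []
    for joint in itertools.product(*map(range, shape)):
        stable = True
        for p in range(len(payoffs)):
            for dev in range(shape[p]):
                alt = list(joint)
                alt[p] = dev
                if payoffs[p][tuple(alt)] > payoffs[p][joint]:
                    stable = False
                    break
            if not stable:
                break
        if stable:
            equilibria.append(joint)
    return equilibria

# Prisoner's dilemma: the unique pure equilibrium is mutual defection (1, 1).
row = np.array([[3, 0], [5, 1]])
print(pure_nash_equilibria([row, row.T]))
```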
Title: Image Classification Using SVMs: One-against-One Vs One-against-All
Abstract: Support Vector Machines (SVMs) are a relatively new supervised classification technique for the land cover mapping community. They have their roots in Statistical Learning Theory and have gained prominence because they are robust and accurate and are effective even when using a small training sample. By their nature SVMs are essentially binary classifiers; however, they can be adapted to handle the multiple classification tasks common in remote sensing studies. The two approaches commonly used are the One-Against-One (1A1) and One-Against-All (1AA) techniques. In this paper, these approaches are evaluated with respect to their impact and implications for land cover mapping. The main finding from this research is that whereas the 1AA technique is more predisposed to yielding unclassified and mixed pixels, the resulting classification accuracy is not significantly different from that of the 1A1 approach. It is the authors' conclusion, therefore, that ultimately the choice of technique adopted boils down to personal preference and the uniqueness of the dataset at hand.
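Both decompositions are a one-line change in scikit-learn; a small sketch on a stand-in dataset (the paper's data are remote sensing imagery):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.multiclass import OneVsOneClassifier, OneVsRestClassifier
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# 1A1 trains k(k-1)/2 pairwise classifiers; 1AA trains k class-vs-rest ones.
for name, clf in [("1A1", OneVsOneClassifier(SVC(kernel="rbf"))),
                  ("1AA", OneVsRestClassifier(SVC(kernel="rbf")))]:
    clf.fit(X_tr, y_tr)
    print(name, clf.score(X_te, y_te))
```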
Title: Statistical Inference for Disordered Sphere Packings
Abstract: Sphere packings are essential to the development of physical models for powders, composite materials, and the atomic structure of the liquid state. There is a strong scientific need to be able to assess the fit of packing models to data, but this is complicated by the lack of formal probabilistic models for packings. Without formal models, simulation algorithms and collections of physical objects must be used as models. Identification of common aspects of different realizations of the same packing process requires the use of new descriptive statistics, many of which have yet to be developed. Model assessment will require the use of large samples of independent and identically distributed realizations, rather than the large single stationary realizations found in conventional spatial statistics. The development of procedures for model assessment will resemble the development of thermodynamic models, and will be based on much exploration and experimentation rather than on extensions of established statistical methods.
Title: How to realize "a sense of humour" in computers?
Abstract: The computer model of a "sense of humour" suggested previously [arXiv:0711.2058, 0711.2061, 0711.2270] is raised to the level of a realistic algorithm.
Title: An Integral Measure of Aging/Rejuvenation for Repairable and Non-repairable Systems
Abstract: This paper introduces a simple index that helps to assess the degree of aging or rejuvenation of a (non)repairable system. The index ranges from -1 to 1 and is negative for the class of decreasing failure rate distributions (or deteriorating point processes) and is positive for the increasing failure rate distributions (or improving point processes). The introduced index is distribution free.
Title: A Game-Theoretic Analysis of Updating Sets of Probabilities
Abstract: We consider how an agent should update her uncertainty when it is represented by a set $\P$ of probability distributions and the agent observes that a random variable $X$ takes on value $x$, given that the agent makes decisions using the minimax criterion, perhaps the best-studied and most commonly used criterion in the literature. We adopt a game-theoretic framework, where the agent plays against a bookie, who chooses some distribution from $\P$. We consider two reasonable games that differ in what the bookie knows when he makes his choice. Anomalies that have been observed before, like time inconsistency, can be understood as arising because different games are being played, against bookies with different information. We characterize the important special cases in which the optimal decision rules according to the minimax criterion amount to either conditioning or simply ignoring the information. Finally, we consider the relationship between conditioning and calibration when uncertainty is described by sets of probabilities.
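The minimax criterion itself is easy to state concretely; a minimal sketch for a finite set of distributions and actions (the bookie picks the worst distribution for whatever the agent does):

```python
import numpy as np

def minimax_action(loss, prior_set):
    """Choose the action minimizing worst-case expected loss over a finite
    set of distributions. loss[a, s]: loss of action a in state s;
    prior_set[k, s]: the k-th distribution over states."""
    expected = loss @ prior_set.T        # shape (actions, distributions)
    worst = expected.max(axis=1)         # bookie's best response per action
    return int(worst.argmin()), float(worst.min())

loss = np.array([[0.0, 1.0],             # action 0: bets on state 0
                 [0.4, 0.4]])            # action 1: a hedge
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
print(minimax_action(loss, P))           # the hedging action wins under minimax
```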
Title: Confidence intervals in regression utilizing prior information
Abstract: We consider a linear regression model with regression parameter beta=(beta_1,...,beta_p) and independent and identically N(0,sigma^2) distributed errors. Suppose that the parameter of interest is theta = a^T beta where a is a specified vector. Define the parameter tau = c^T beta - t where the vector c and the number t are specified and a and c are linearly independent. Also suppose that we have uncertain prior information that tau = 0. We present a new frequentist 1-alpha confidence interval for theta that utilizes this prior information. We require this confidence interval to (a) have endpoints that are continuous functions of the data and (b) coincide with the standard 1-alpha confidence interval when the data strongly contradict this prior information. This interval is optimal in the sense that it has minimum weighted average expected length, where the largest weight is given to this expected length when tau = 0. This minimization leads to an interval with the following desirable properties: its expected length (a) is relatively small when the prior information about tau is correct and (b) has a maximum value that is not too large. The following problem will be used to illustrate the application of this new confidence interval. Consider a 2-by-2 factorial experiment with 20 replicates. Suppose that the parameter of interest theta is a specified simple effect and that we have uncertain prior information that the two-factor interaction is zero. Our aim is to find a frequentist 0.95 confidence interval for theta that utilizes this prior information.
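For reference, a sketch of the standard 1-alpha interval that the new interval must revert to when the data contradict the prior information (with X the design matrix, e.g. of the 2-by-2 factorial, and a picking out the simple effect):

```python
import numpy as np
from scipy import stats

def standard_ci(X, y, a, alpha=0.05):
    """Standard 1-alpha interval for theta = a^T beta in y = X beta + eps,
    eps ~ N(0, sigma^2 I). The paper's interval modifies this baseline to
    exploit the uncertain prior information tau = c^T beta - t = 0."""
    n, p = X.shape
    XtX_inv = np.linalg.inv(X.T @ X)
    beta_hat = XtX_inv @ X.T @ y
    resid = y - X @ beta_hat
    s2 = resid @ resid / (n - p)          # unbiased variance estimate
    theta_hat = a @ beta_hat
    se = np.sqrt(s2 * (a @ XtX_inv @ a))
    t_crit = stats.t.ppf(1 - alpha / 2, n - p)
    return theta_hat - t_crit * se, theta_hat + t_crit * se
```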
Title: Computer model validation with functional output
Abstract: A key question in the evaluation of computer models is "Does the computer model adequately represent reality?" A six-step process for computer model validation is set out in Bayarri et al. [Technometrics 49 (2007) 138--154] (and briefly summarized below), based on comparison of computer model runs with field data of the process being modeled. The methodology is particularly suited to treating the major issues associated with the validation process: quantifying multiple sources of error and uncertainty in computer models; combining multiple sources of information; and being able to adapt to different, but related scenarios. Two complications that frequently arise in practice are the need to deal with highly irregular functional data and the need to acknowledge and incorporate uncertainty in the inputs. We develop methodology to deal with both complications. A key part of the approach utilizes a wavelet representation of the functional data, applies a hierarchical version of the scalar validation methodology to the wavelet coefficients, and transforms back, to ultimately compare computer model output with field output. The generality of the methodology is only limited by the capability of a combination of computational tools and the appropriateness of decompositions of the sort (wavelets) employed here. The methods and analyses we present are illustrated with a test bed dynamic stress analysis for a particular engineering system.
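A schematic of the wavelet part of the pipeline, using PyWavelets on synthetic stand-in curves (the hierarchical Bayesian treatment of the coefficients is the paper's core and is not reproduced here):

```python
import numpy as np
import pywt  # PyWavelets

# Hypothetical field measurement and computer-model run on a common grid.
t = np.linspace(0, 1, 1024)
rng = np.random.default_rng(0)
field = np.sin(40 * t) * np.exp(-3.0 * t) + 0.05 * rng.normal(size=t.size)
model = np.sin(40 * t) * np.exp(-3.2 * t)

# Wavelet representation of the irregular functional output ...
c_field = pywt.wavedec(field, "db4", level=5)
c_model = pywt.wavedec(model, "db4", level=5)

# ... compare coefficient by coefficient (the paper applies a hierarchical
# scalar validation model to these), then transform back to the data scale.
discrepancy = [f - m for f, m in zip(c_field, c_model)]
recon = pywt.waverec(discrepancy, "db4")
print(float(np.abs(recon).max()))
```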
Title: Morphological annotation of Korean with Directly Maintainable Resources
Abstract: This article describes an exclusively resource-based method of morphological annotation of written Korean text. Korean is an agglutinative language. Our annotator is designed to process text before the operation of a syntactic parser. In its present state, it annotates one-stem words only. The output is a graph of morphemes annotated with accurate linguistic information. The granularity of the tagset is 3 to 5 times higher than that of usual tagsets. A comparison with a reference annotated corpus showed that it achieves 89% recall without any corpus training. The language resources used by the system are lexicons of stems, transducers of suffixes and transducers of generation of allomorphs. All can be easily updated, which allows users to control the evolution of the performance of the system. It has been claimed that morphological annotation of Korean text could only be performed by a morphological analysis module accessing a lexicon of morphemes. We show that it can also be performed directly with a lexicon of words and without applying morphological rules at annotation time, which speeds up annotation to 1,210 words/s. The lexicon of words is obtained from the maintainable language resources through a fully automated compilation process.
Title: Translating OWL and Semantic Web Rules into Prolog: Moving Toward Description Logic Programs
Abstract: To appear in Theory and Practice of Logic Programming (TPLP), 2008. We are researching the interaction between the rule and ontology layers of the Semantic Web by comparing two options: 1) using OWL and its rule extension SWRL to develop an integrated ontology/rule language, and 2) layering rules on top of an ontology with RuleML and OWL. Toward this end, we are developing the SWORIER system, which enables efficient automated reasoning on ontologies and rules by translating all of them into Prolog and adding a set of general rules that properly capture the semantics of OWL. We have also enabled the user to make dynamic changes on the fly, at run time. This work addresses several of the concerns expressed in previous work, such as negation, complementary classes, disjunctive heads, and cardinality, and it discusses alternative approaches for dealing with inconsistencies in the knowledge base. In addition, for efficiency, we implemented techniques called extensionalization, avoiding reanalysis, and code minimization.
Title: Lexicon management and standard formats
Abstract: International standards for lexicon formats are in preparation. To a certain extent, the proposed formats converge with prior results of standardization projects. However, their adequacy for (i) lexicon management and (ii) lexicon-driven applications has been little debated in the past, nor is it debated as part of the present standardization effort. We examine these issues. IGM has developed XML formats compatible with the emerging international standards, and we report experimental results on large-coverage lexica.
Title: In memoriam Maurice Gross
Abstract: Maurice Gross (1934-2001) was both a great linguist and a pioneer in natural language processing. This article is written in homage to his memory.
Title: A resource-based Korean morphological annotation system
Abstract: We describe a resource-based method of morphological annotation of written Korean text. Korean is an agglutinative language. The output of our system is a graph of morphemes annotated with accurate linguistic information. The language resources used by the system can be easily updated, which allows users to control the evolution of the performance of the system. We show that morphological annotation of Korean text can be performed directly with a lexicon of words and without morphological rules.
Title: Graphes paramétrés et outils de lexicalisation (Parameterized graphs and lexicalization tools)
Abstract: Shifting to a lexicalized grammar reduces the number of parsing errors and improves application results. However, such an operation affects a syntactic parser in all its aspects. One of our research objectives is to design a realistic model for grammar lexicalization. We carried out experiments for which we used a grammar with a very simple content and formalism, and a very informative syntactic lexicon, the lexicon-grammar of French elaborated by the LADL. Lexicalization was performed by applying the parameterized-graph approach. Our results tend to show that most information in the lexicon-grammar can be transferred into a grammar and exploited successfully for the syntactic parsing of sentences.
Title: Evaluation of a Grammar of French Determiners
Abstract: Existing syntactic grammars of natural languages, even with far from complete coverage, are complex objects. Assessments of the quality of parts of such grammars are useful for the validation of their construction. We evaluated the quality of a grammar of French determiners that takes the form of a recursive transition network. Applying this local grammar gives deeper syntactic information than chunking or the information available in treebanks. We performed the evaluation by comparison with a corpus independently annotated with information on determiners. We obtained 86% precision and 92% recall on text not tagged for parts of speech.
Title: Clustering with Transitive Distance and K-Means Duality
Abstract: Recent spectral clustering methods are a popular and powerful technique for data clustering. These methods need to solve the eigenproblem, whose computational complexity is $O(n^3)$, where $n$ is the number of data samples. In this paper, a non-eigenproblem-based clustering method is proposed to deal with the clustering problem. Its performance is comparable to that of the spectral clustering algorithms but it is more efficient, with computational complexity $O(n^2)$. We show that with a transitive distance and an observed property, called K-means duality, our algorithm can be used to handle data sets with complex cluster shapes, multi-scale clusters, and noise. Moreover, no parameters except the number of clusters need to be set in our algorithm.
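One common reading of a transitive distance is the minimax path distance: the smallest possible value of the largest edge over all paths joining two points. Treating that as the intended definition (an assumption), a compact sketch is below; it runs in O(n^3), whereas the paper's own algorithm achieves O(n^2):

```python
import numpy as np
from scipy.spatial.distance import cdist
from sklearn.cluster import KMeans

def transitive_distance(X):
    """All-pairs minimax path distance via a Floyd-Warshall-style update
    on the (min, max) semiring; simple but O(n^3)."""
    D = cdist(X, X)
    for k in range(len(X)):
        D = np.minimum(D, np.maximum(D[:, k][:, None], D[k, :][None, :]))
    return D

# Two concentric rings: Euclidean K-means fails, but K-means on the rows of
# the transitive distance matrix (loosely invoking the K-means duality
# observation) separates them.
rng = np.random.default_rng(0)
angles = rng.uniform(0, 2 * np.pi, 200)
radii = np.where(rng.random(200) < 0.5, 1.0, 3.0)
X = np.c_[radii * np.cos(angles), radii * np.sin(angles)]
X += rng.normal(0, 0.05, X.shape)

labels = KMeans(n_clusters=2, n_init=10, random_state=0) \
    .fit_predict(transitive_distance(X))
```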
Title: Very strict selectional restrictions
Abstract: We discuss the characteristics and behaviour of two parallel classes of verbs in two Romance languages, French and Portuguese. Examples of these verbs are Port. abater [gado] and Fr. abattre [bétail], both meaning "slaughter [cattle]". In both languages, the definition of the class of verbs includes several features: they have only one essential complement, which is a direct object; the nominal distribution of the complement is very limited, i.e., few nouns can be selected as head nouns of the complement, although this selection is not restricted to a single noun, as would be the case for verbal idioms such as Fr. monter la garde "mount guard". We excluded from the class constructions which are reductions of more complex constructions, e.g. Port. afinar [instrumento] com "tune [instrument] with".
Title: Bayesian Shrinkage Variable Selection
Abstract: Withdrawn due to extensions and submission as another paper.
Title: Derivations of Normalized Mutual Information in Binary Classifications
Abstract: This correspondence studies a basic problem of classification: how to evaluate different classifiers. Although conventional performance indexes, such as accuracy, are commonly used in classifier selection or evaluation, information-based criteria, such as mutual information, are becoming popular in feature/model selection. In this work, we propose to assess classifiers in terms of normalized mutual information (NI), which is novel and well defined in a compact range for classifier evaluation. We derive closed-form relations of normalized mutual information with respect to accuracy, precision, and recall in binary classifications. By exploring the relations among them, we reveal that NI is actually a set of nonlinear functions, with a concordant power-exponent form, of each performance index. The relations can also be expressed with respect to precision and recall, or to false alarm and hitting rate (recall).
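A direct computation of one such NI: the mutual information between true and predicted labels, normalized by the entropy of the true labels (several normalizations exist; which one the paper adopts is not assumed here):

```python
import numpy as np

def normalized_mutual_information(y_true, y_pred):
    """NI = I(T; Y) / H(T) computed from the 2x2 confusion matrix."""
    joint = np.zeros((2, 2))
    for t, p in zip(y_true, y_pred):
        joint[t, p] += 1
    joint /= joint.sum()
    pt, py = joint.sum(axis=1), joint.sum(axis=0)   # marginals
    mi = sum(joint[i, j] * np.log(joint[i, j] / (pt[i] * py[j]))
             for i in range(2) for j in range(2) if joint[i, j] > 0)
    h = -sum(p * np.log(p) for p in pt if p > 0)
    return mi / h

y_true = [0, 0, 0, 1, 1, 1, 1, 1]
y_pred = [0, 0, 1, 1, 1, 1, 0, 1]
print(normalized_mutual_information(y_true, y_pred))
```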
Title: Outilex, plate-forme logicielle de traitement de textes écrits (Outilex, a software platform for written text processing)
Abstract: The Outilex software platform, which will be made available to research, development and industry, comprises software components implementing all the fundamental operations of written text processing: processing without lexicons, exploitation of lexicons and grammars, and language resource management. All data are structured in XML formats, and also in more compact formats, either readable or binary, whenever necessary; the required format converters are included in the platform. The grammar formats allow for combining statistical approaches with resource-based approaches. Manually constructed lexicons for French and English, originating from the LADL and of substantial coverage, will be distributed with the platform under the LGPL-LR license.
Title: Let's get the student into the driver's seat
Abstract: Speaking a language and achieving proficiency in another one is a highly complex process which requires the acquisition of various kinds of knowledge and skills, like the learning of words, rules and patterns and their connection to communicative goals (intentions), the usual starting point. To help the learner acquire these skills we propose an enhanced, electronic version of an age-old method: pattern drills (henceforth PDs). While highly regarded in the fifties, PDs have become unpopular since then, partially because of their lack of grounding (natural context) and their rigidity. Despite these shortcomings we do believe in the virtues of this approach, at least with regard to the acquisition of basic linguistic reflexes or skills (automatisms) necessary to survive in the new language. Of course, the method needs improvement, and we will show here how this can be achieved. Unlike tapes or books, computers are open media, allowing for dynamic changes that take users' performances and preferences into account. Building an electronic version of PDs amounts to building an open resource, accommodatable to the users' ever-changing needs.
Title: On the Analytic Wavelet Transform
Abstract: An exact and general expression for the analytic wavelet transform of a real-valued signal is constructed, resolving the time-dependent effects of non-negligible amplitude and frequency modulation. The analytic signal is first locally represented as a modulated oscillation, demodulated by its own instantaneous frequency, and then Taylor-expanded at each point in time. The terms in this expansion, called the instantaneous modulation functions, are time-varying functions which quantify, at increasingly higher orders, the local departures of the signal from a uniform sinusoidal oscillation. Closed-form expressions for these functions are found in terms of Bell polynomials and derivatives of the signal's instantaneous frequency and bandwidth. The analytic wavelet transform is shown to depend upon the interaction between the signal's instantaneous modulation functions and frequency-domain derivatives of the wavelet, inducing a hierarchy of departures of the transform away from a perfect representation of the signal. The form of these deviation terms suggests a set of conditions for matching the wavelet properties to suit the variability of the signal, in which case our expressions simplify considerably. One may then quantify the time-varying bias associated with signal estimation via wavelet ridge analysis, and choose wavelets to minimize this bias.
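The first step, representing the signal as a modulated oscillation and demodulating it, is easy to reproduce with an off-the-shelf analytic signal; a sketch on a synthetic chirp (scipy's hilbert returns x + iH[x]):

```python
import numpy as np
from scipy.signal import hilbert

fs = 1000.0
t = np.arange(0, 2, 1 / fs)
x = (1 + 0.3 * t) * np.cos(2 * np.pi * (20 * t + 5 * t ** 2))  # modulated oscillation

xa = hilbert(x)                                   # analytic signal x + iH[x]
amplitude = np.abs(xa)
phase = np.unwrap(np.angle(xa))
inst_freq = np.gradient(phase, t) / (2 * np.pi)   # instantaneous frequency (Hz)

# Demodulate around a reference time by the local instantaneous frequency;
# near t0 the result is roughly constant, and its local Taylor coefficients
# correspond to the modulation structure the paper expands to higher orders.
t0 = 1000
local = xa * np.exp(-2j * np.pi * inst_freq[t0] * (t - t[t0]))
print(amplitude[t0], inst_freq[t0])               # about 1.3 and 30 Hz here
```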
Title: Periodic Chandrasekhar recursions
Abstract: This paper extends the Chandrasekhar-type recursions due to Morf, Sidhu, and Kailath ["Some new algorithms for recursive estimation in constant, linear, discrete-time systems," IEEE Trans. Autom. Control 19 (1974) 315-323] to the case of periodic time-varying state-space models. We show that the S-lagged increments of the one-step prediction error covariance satisfy certain recursions from which we derive some algorithms for linear least squares estimation for periodic state-space models. The proposed recursions may have potential computational advantages over the Kalman filter and, in particular, the periodic Riccati difference equation.
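For orientation, a sketch of the baseline the recursions are compared against: the one-step prediction error covariance propagated through the Riccati difference equation of a periodic state-space model (the Chandrasekhar-type recursions themselves are the paper's contribution):

```python
import numpy as np

def periodic_riccati(A, C, Q, R, P0, n_steps):
    """Prediction error covariance via the Riccati difference equation for a
    periodic state-space model; A, C, Q, R are lists of length S (the period)."""
    S = len(A)
    P, out = P0, [P0]
    for t in range(n_steps):
        a, c, q, r = A[t % S], C[t % S], Q[t % S], R[t % S]
        innov_cov = c @ P @ c.T + r
        K = a @ P @ c.T @ np.linalg.inv(innov_cov)       # Kalman gain
        P = a @ P @ a.T + q - K @ innov_cov @ K.T
        out.append(P)
    return out

# Scalar example with period S = 2; P settles into a period-2 solution.
A = [np.array([[0.9]]), np.array([[0.5]])]
C = [np.eye(1), np.eye(1)]
Q = [np.eye(1) * 0.1] * 2
R = [np.eye(1) * 0.2] * 2
Ps = periodic_riccati(A, C, Q, R, np.eye(1), 20)
print(Ps[-2], Ps[-1])
```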
Title: Knowware: the third star after Hardware and Software
Abstract: This book proposes to separate knowledge from software and to make it a commodity called knowware. The architecture, representation and function of knowware are discussed. The principles of knowware engineering and its three life cycle models (the furnace model, the crystallization model and the spiral model) are proposed and analyzed. Techniques of software/knowware co-engineering are introduced. A software component whose knowledge is replaced by knowware is called mixware. An object- and component-oriented development schema of mixware is introduced. In particular, the tower model and ladder model for mixware development are proposed and discussed. Finally, knowledge service and knowware-based Web service are introduced and compared with Web service. In summary, knowware, software and hardware should be considered as three equally important underpinnings of the IT industry. Ruqian Lu is a professor of computer science at the Institute of Mathematics, Academy of Mathematics and System Sciences. He is a fellow of the Chinese Academy of Sciences. His research interests include artificial intelligence, knowledge engineering and knowledge-based software engineering. He has published more than 100 papers and 10 books. He has won two first-class awards from the Academia Sinica and a national second-class prize from the Ministry of Science and Technology. He has also won the sixth Hua Loo-keng Mathematics Prize.
Title: Covariance and PCA for Categorical Variables
Abstract: Covariances for categorical variables are defined using a regular simplex expression for categories. The method follows the variance definition by Gini, and it gives the covariance as a solution of simultaneous equations. The calculated results give reasonable values for test data. A method of principal component analysis (RS-PCA) is also proposed using regular simplex expressions, which allows easy interpretation of the principal components. The proposed methods are applied to the variable selection problem for categorical data, using the USCensus1990 data, and give an appropriate criterion for this variable selection problem.
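A sketch of the regular simplex coding the definition rests on; the paper obtains its covariance by solving simultaneous equations, so the trace reduction below is only a hypothetical stand-in to make the coding concrete:

```python
import numpy as np

def regular_simplex(k):
    """Vertices of a regular simplex for a k-category variable: k unit
    vectors in R^(k-1) with equal pairwise distances, built by centering
    and orthonormalizing the indicator coding."""
    E = np.eye(k) - 1.0 / k                 # centered indicator coding
    U, s, _ = np.linalg.svd(E)
    V = U[:, : k - 1] * s[: k - 1]          # k points in R^(k-1)
    return V / np.linalg.norm(V[0])         # scale vertices to unit length

def categorical_cov(x, y):
    """Rough stand-in: embed each variable by its simplex vertices and
    reduce the cross-covariance matrix to a scalar via its trace."""
    cx = np.unique(x, return_inverse=True)[1]
    cy = np.unique(y, return_inverse=True)[1]
    X = regular_simplex(cx.max() + 1)[cx]
    Y = regular_simplex(cy.max() + 1)[cy]
    X -= X.mean(axis=0)
    Y -= Y.mean(axis=0)
    return np.trace(X.T @ Y) / (len(x) - 1)

x = np.array(["a", "a", "b", "b", "c", "c"])
y = np.array(["u", "u", "v", "v", "w", "w"])
print(categorical_cov(x, y))   # large positive: perfectly associated variables
```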
Title: Valence extraction using EM selection and co-occurrence matrices
Abstract: This paper discusses two new procedures for extracting verb valences from raw texts, with an application to the Polish language. The first novel technique, the EM selection algorithm, performs unsupervised disambiguation of valence frame forests, obtained by applying a non-probabilistic deep grammar parser and some post-processing to the text. The second new idea concerns filtering of incorrect frames detected in the parsed text and is motivated by an observation that verbs which take similar arguments tend to have similar frames. This phenomenon is described in terms of newly introduced co-occurrence matrices. Using co-occurrence matrices, we split filtering into two steps. The list of valid arguments is first determined for each verb, whereas the pattern according to which the arguments are combined into frames is computed in the following stage. Our best extracted dictionary reaches an $F$-score of 45%, compared to an $F$-score of 39% for the standard frame-based BHT filtering.
Title: The Second Law as a Cause of the Evolution
Abstract: It is a common belief that in any environment where life is possible, life will be generated. Here it is suggested that the cause of the spontaneous generation of complex systems is probability-driven processes. Based on equilibrium thermodynamics, it is argued that in low occupation number statistical systems, the second law of thermodynamics yields an increase of thermal entropy and a canonic energy distribution. However, in high occupation number statistical systems, the same law, for the same reasons, yields an increase of information and a Benford's law/power-law energy distribution. It is therefore plausible that the heat death is not necessarily the end of the universe.