Title: Concept-based Recommendations for Internet Advertisement
Abstract: The problem of detecting terms that can be interesting to the advertiser is considered. If a company has already bought some advertising terms which describe certain services, it is reasonable to find out which terms have been bought by competing companies. Some of these terms can then be recommended to the company as future advertising terms. The goal of this work is to propose more interpretable recommendations based on Formal Concept Analysis (FCA) and association rules.
Title: Chemical Power for Microscopic Robots in Capillaries
Abstract: The power available to microscopic robots (nanorobots) that oxidize bloodstream glucose while aggregated in circumferential rings on capillary walls is evaluated with a numerical model using axial symmetry and time-averaged release of oxygen from passing red blood cells. Robots about one micron in size can produce up to several tens of picowatts, in steady-state, if they fully use oxygen reaching their surface from the blood plasma. Robots with pumps and tanks for onboard oxygen storage could collect oxygen to support burst power demands two to three orders of magnitude larger. We evaluate effects of oxygen depletion and local heating on surrounding tissue. These results give the power constraints when robots rely entirely on ambient available oxygen and identify aspects of the robot design significantly affecting available power. More generally, our numerical model provides an approach to evaluating robot design choices for nanomedicine treatments in and near capillaries.
Title: A Novel Two-Stage Dynamic Decision Support based Optimal Threat Evaluation and Defensive Resource Scheduling Algorithm for Multi Air-borne threats
Abstract: This paper presents a novel two-stage flexible dynamic decision support based optimal threat evaluation and defensive resource scheduling algorithm for multi-target air-borne threats. The algorithm provides flexibility and optimality by swapping between two objective functions, i.e. the preferential and subtractive defense strategies, as and when required. To further enhance the solution quality, it outlines and divides the critical parameters used in Threat Evaluation and Weapon Assignment (TEWA) into three broad categories (Triggering, Scheduling and Ranking parameters). The proposed algorithm uses a variant of the many-to-many Stable Marriage Algorithm (SMA) to solve the Threat Evaluation (TE) and Weapon Assignment (WA) problem. In the TE stage, threat ranking and threat-asset pairing are performed. Stage two is based on a new flexible dynamic weapon scheduling algorithm, allowing multiple engagements using a shoot-look-shoot strategy, to compute near-optimal solutions for a range of scenarios. The analysis part of this paper presents the strengths and weaknesses of the proposed algorithm over an alternative greedy algorithm as applied to different offline scenarios.
Title: A new approach for digit recognition based on hand gesture analysis
Abstract: We present in this paper a new approach for hand gesture analysis that allows digit recognition. The analysis is based on extracting a set of features from a hand image and then combining them by using an induction graph. The most important features we extract from each image are the finger locations, their heights and the distance between each pair of fingers. Our approach consists of three steps: (i) hand detection and localization, (ii) finger extraction and (iii) feature identification and combination for digit recognition. Each input image is assumed to contain only one person, and we apply a fuzzy classifier to identify the skin pixels. In the finger extraction step, we attempt to remove all the hand components except the fingers; this process is based on hand anatomy properties. The final step consists of representing a histogram of the detected fingers in order to extract the features that will be used for digit recognition. The approach is invariant to scale, rotation and translation of the hand. Some experiments have been undertaken to show the effectiveness of the proposed approach.
Title: Towards the Patterns of Hard CSPs with Association Rule Mining
Abstract: The hardness of finite domain Constraint Satisfaction Problems (CSPs) is a very important research area in the Constraint Programming (CP) community. However, this problem has not yet attracted much attention from researchers in the association rule mining community. As a popular data mining technique, association rule mining has an extremely wide application area and has already been applied successfully in many interdisciplinary fields. In this paper, we study association rule mining techniques and propose a cascaded approach to extract interesting patterns of hard CSPs. As far as we know, this is the first time this problem has been investigated with data mining techniques. Specifically, we generate random CSPs and collect their characteristics by solving all the CSP instances, and then apply data mining techniques to this data set to discover interesting patterns in the hardness of the randomly generated CSPs.
Title: Statistical Analysis of Privacy and Anonymity Guarantees in Randomized Security Protocol Implementations
Abstract: Security protocols often use randomization to achieve probabilistic non-determinism. This non-determinism, in turn, is used in obfuscating the dependence of observable values on secret data. Since the correctness of security protocols is very important, formal analysis of security protocols has been widely studied in the literature. Randomized security protocols have also been analyzed using formal techniques such as process-calculi and probabilistic model checking. In this paper, we consider the problem of validating implementations of randomized protocols. Unlike previous approaches which treat the protocol as a white box, our approach tries to verify an implementation provided as a black box. Our goal is to infer the secrecy guarantees provided by a security protocol through statistical techniques. We learn the probabilistic dependency of the observable outputs on the secret inputs using a Bayesian network. This is then used to approximate the leakage of the secret. In order to evaluate the accuracy of our statistical approach, we compare our technique with the probabilistic model checking technique on two examples: the crowds protocol and the dining cryptographers' protocol.
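The abstract does not spell out the estimator; as a hedged illustration of the general black-box idea (not the paper's exact Bayesian-network construction), information leakage can be approximated by the empirical mutual information between the secret inputs and the observable outputs gathered from repeated runs:

```python
from collections import Counter
from math import log2

def mutual_information(samples):
    """Plug-in estimate of I(secret; observable) in bits from (secret, observable) pairs.

    A rough proxy for leakage estimated from black-box runs of the protocol;
    the paper's Bayesian-network approach models the dependency more finely.
    """
    n = len(samples)
    joint = Counter(samples)
    p_s = Counter(s for s, _ in samples)
    p_o = Counter(o for _, o in samples)
    mi = 0.0
    for (s, o), c in joint.items():
        p_so = c / n
        mi += p_so * log2(p_so / ((p_s[s] / n) * (p_o[o] / n)))
    return mi

# Hypothetical usage: run the implementation many times, record
# (secret_input, observable_output) pairs, then estimate the leakage:
# leakage_bits = mutual_information(observed_pairs)
```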
Title: Non-Parametric Bayesian Areal Linguistics
Abstract: We describe a statistical model over linguistic areas and phylogeny. Our model recovers known areas and identifies a plausible hierarchy of areal features. The use of areas improves genetic reconstruction of languages both qualitatively and quantitatively according to a variety of metrics. We model linguistic areas by a Pitman-Yor process and linguistic phylogeny by Kingman's coalescent.
Title: General combination rules for qualitative and quantitative beliefs
Abstract: Martin and Osswald have recently proposed many generalizations of combination rules on quantitative beliefs in order to manage the conflict and to consider the specificity of the responses of the experts. Since the experts usually express themselves in natural language with linguistic labels, Smarandache and Dezert have introduced a mathematical framework for dealing directly with qualitative beliefs as well. In this paper we recall some elements of our previous work and propose new combination rules, developed for the fusion of both qualitative and quantitative beliefs.
Title: Comments on "A new combination of evidence based on compromise" by K. Yamada
Abstract: Comments on ``A new combination of evidence based on compromise'' by K. Yamada
Title: Explicit probabilistic models for databases and networks
Abstract: Recent work in data mining and related areas has highlighted the importance of the statistical assessment of data mining results. Crucial to this endeavour is the choice of a non-trivial null model for the data, to which the found patterns can be contrasted. The most influential null models proposed so far are defined in terms of invariants of the null distribution. Such null models can be used by computation intensive randomization approaches in estimating the statistical significance of data mining results. Here, we introduce a methodology to construct non-trivial probabilistic models based on the maximum entropy (MaxEnt) principle. We show how MaxEnt models allow for the natural incorporation of prior information. Furthermore, they satisfy a number of desirable properties of previously introduced randomization approaches. Lastly, they also have the benefit that they can be represented explicitly. We argue that our approach can be used for a variety of data types. However, for concreteness, we have chosen to demonstrate it in particular for databases and networks.
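For reference (the abstract itself gives no formulas), the generic maximum entropy model under expectation constraints takes the familiar exponential-family form; the constraint functions $f_i$ here stand in for whatever prior information is incorporated:

\[
p(x) \;=\; \frac{1}{Z(\lambda)} \exp\!\Big(\sum_i \lambda_i f_i(x)\Big),
\qquad
Z(\lambda) \;=\; \sum_x \exp\!\Big(\sum_i \lambda_i f_i(x)\Big),
\]

where the multipliers $\lambda_i$ are chosen so that $\sum_x p(x) f_i(x)$ matches the prescribed values. This explicit form is what allows such null models to be represented and sampled directly.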
Title: Unsupervised Search-based Structured Prediction
Abstract: We describe an adaptation and application of a search-based structured prediction algorithm "Searn" to unsupervised learning problems. We show that it is possible to reduce unsupervised learning to supervised learning and demonstrate a high-quality unsupervised shift-reduce parsing model. We additionally show a close connection between unsupervised Searn and expectation maximization. Finally, we demonstrate the efficacy of a semi-supervised extension. The key idea that enables this is an application of the predict-self idea for unsupervised learning.
Title: Testing for white noise under unknown dependence and its applications to goodness-of-fit for time series models
Abstract: Testing for white noise has been well studied in the literature of econometrics and statistics. For most of the proposed test statistics, such as the well-known Box-Pierce test statistic with fixed lag truncation number, the asymptotic null distributions are obtained under independent and identically distributed assumptions and may not be valid for dependent white noise. Due to the recent popularity of conditional heteroscedastic models (e.g., GARCH models), which imply nonlinear dependence with zero autocorrelation, there is a need to understand the asymptotic properties of the existing test statistics under unknown dependence. In this paper, we show that the asymptotic null distribution of the Box-Pierce test statistic with general weights still holds under unknown weak dependence so long as the lag truncation number grows at an appropriate rate with increasing sample size. Further applications to diagnostic checking of ARMA and FARIMA models with dependent white noise errors are also addressed. Our results go beyond earlier ones by allowing non-Gaussian and conditional heteroscedastic errors in the ARMA and FARIMA models and provide theoretical support for some empirical findings reported in the literature.
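For concreteness, the classical Box-Pierce statistic with lag truncation number $m$ for a series of length $n$ is

\[
Q_m \;=\; n \sum_{k=1}^{m} \hat{\rho}_k^{\,2},
\]

where $\hat{\rho}_k$ is the lag-$k$ sample autocorrelation; under i.i.d. noise $Q_m$ is asymptotically $\chi^2_m$. The paper studies how statistics of this type, with general weights and an $m$ growing with $n$, behave when the white noise is dependent.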
Title: High Dimensional Nonlinear Learning using Local Coordinate Coding
Abstract: This paper introduces a new method for semi-supervised learning on high dimensional nonlinear manifolds, which includes a phase of unsupervised basis learning and a phase of supervised function learning. The learned bases provide a set of anchor points to form a local coordinate system, such that each data point $x$ on the manifold can be locally approximated by a linear combination of its nearby anchor points, with the linear weights offering a local-coordinate coding of $x$. We show that a high dimensional nonlinear function can be approximated by a global linear function with respect to this coding scheme, and the approximation quality is ensured by the locality of such coding. The method turns a difficult nonlinear learning problem into a simple global linear learning problem, which overcomes some drawbacks of traditional local learning methods. The work also gives a theoretical justification to the empirical success of some biologically-inspired models using sparse coding of sensory data, since a local coding scheme must be sufficiently sparse. However, sparsity does not always satisfy locality conditions, and can thus possibly lead to suboptimal results. The properties and performances of the method are empirically verified on synthetic data, handwritten digit classification, and object recognition tasks.
Title: Restricted Global Grammar Constraints
Abstract: We investigate the global GRAMMAR constraint over restricted classes of context free grammars like deterministic and unambiguous context-free grammars. We show that detecting disentailment for the GRAMMAR constraint in these cases is as hard as parsing an unrestricted context free grammar. We also consider the class of linear grammars and give a propagator that runs in quadratic time. Finally, to demonstrate the use of linear grammars, we show that a weighted linear GRAMMAR constraint can efficiently encode the EDITDISTANCE constraint, and a conjunction of the EDITDISTANCE constraint and the REGULAR constraint.
Title: Multiple Hypothesis Testing in Pattern Discovery
Abstract: The problem of multiple hypothesis testing arises when more than one hypothesis must be tested simultaneously for statistical significance. This is a very common situation in many data mining applications. For instance, assessing simultaneously the significance of all frequent itemsets of a single dataset entails a host of hypotheses, one for each itemset. A multiple hypothesis testing method is needed to control the number of false positives (Type I error). Our contribution in this paper is to extend the multiple hypothesis testing framework to be used with a generic data mining algorithm. We provide a method that provably controls the family-wise error rate (FWER, the probability of at least one false positive) in the strong sense. We evaluate the performance of our solution on both real and generated data. The results show that our method controls the FWER while maintaining the power of the test.
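As a baseline point of reference only (the paper's contribution is a framework wrapped around a generic data mining algorithm, not this closed-form correction), the classical Holm step-down procedure also controls the FWER in the strong sense:

```python
def holm_bonferroni(p_values, alpha=0.05):
    """Holm step-down correction: strong control of the family-wise error rate.

    Shown as the textbook baseline against which resampling-based
    multiple-testing methods for pattern discovery are usually compared.
    """
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])   # ascending p-values
    reject = [False] * m
    for rank, i in enumerate(order):
        if p_values[i] <= alpha / (m - rank):
            reject[i] = True
        else:
            break          # step-down: stop at the first non-rejection
    return reject
```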
Title: Online Reinforcement Learning for Dynamic Multimedia Systems
Abstract: In our previous work, we proposed a systematic cross-layer framework for dynamic multimedia systems, which allows each layer to make autonomous and foresighted decisions that maximize the system's long-term performance, while meeting the application's real-time delay constraints. The proposed solution solved the cross-layer optimization offline, under the assumption that the multimedia system's probabilistic dynamics were known a priori. In practice, however, these dynamics are unknown a priori and therefore must be learned online. In this paper, we address this problem by allowing the multimedia system layers to learn, through repeated interactions with each other, to autonomously optimize the system's long-term performance at run-time. We propose two reinforcement learning algorithms for optimizing the system under different design constraints: the first algorithm solves the cross-layer optimization in a centralized manner, and the second solves it in a decentralized manner. We analyze both algorithms in terms of their required computation, memory, and inter-layer communication overheads. After noting that the proposed reinforcement learning algorithms learn too slowly, we introduce a complementary accelerated learning algorithm that exploits partial knowledge about the system's dynamics in order to dramatically improve the system's performance. In our experiments, we demonstrate that decentralized learning can perform as well as centralized learning, while enabling the layers to act autonomously. Additionally, we show that existing application-independent reinforcement learning algorithms, and existing myopic learning algorithms deployed in multimedia systems, perform significantly worse than our proposed application-aware and foresighted learning methods.
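The abstract does not state the update rules themselves; as a generic point of reference, a standard single-agent Q-learning update, of which the proposed centralized, decentralized, and accelerated layered algorithms can be seen as application-aware variants, is

\[
Q(s, a) \;\leftarrow\; Q(s, a) + \alpha \Big[ r + \gamma \max_{a'} Q(s', a') - Q(s, a) \Big],
\]

with learning rate $\alpha$, discount factor $\gamma$, reward $r$, and next state $s'$.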
Title: Relative Density of the Random r-Factor Proximity Catch Digraph for Testing Spatial Patterns of Segregation and Association (Technical Report)
Abstract: Statistical pattern classification methods based on data-random graphs were introduced recently. In this approach, a random directed graph is constructed from the data using the relative positions of the data points from various classes. Different random graphs result from different definitions of the proximity region associated with each data point and different graph statistics can be employed for data reduction. The approach used in this article is based on a parameterized family of proximity maps determining an associated family of data-random digraphs. The relative arc density of the digraph is used as the summary statistic, providing an alternative to the domination number employed previously. An important advantage of the relative arc density is that, properly re-scaled, it is a U-statistic, facilitating analytic study of its asymptotic distribution using standard U-statistic central limit theory. The approach is illustrated with an application to the testing of spatial patterns of segregation and association. Knowledge of the asymptotic distribution allows evaluation of the Pitman and Hodges-Lehmann asymptotic efficacy, and selection of the proximity map parameter to optimize efficacy. Notice that the approach presented here also has the advantage of validity for data in any dimension.
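In the standard sense assumed here (the abstract does not restate the definition), the relative arc density of a digraph $D=(V,A)$ on $n$ vertices is

\[
\rho(D) \;=\; \frac{|A|}{n(n-1)},
\]

i.e. the number of arcs divided by the maximum possible number of arcs; properly re-scaled, this is the U-statistic whose asymptotic normality the approach exploits.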
Title: Relative Edge Density of the Underlying Graphs Based on Proportional-Edge Proximity Catch Digraphs for Testing Bivariate Spatial Patterns (Technical Report)
Abstract: The use of data-random graphs in statistical testing of spatial patterns was introduced recently. In this approach, a random directed graph is constructed from the data using the relative positions of the points from various classes. Different random graphs result from different definitions of the proximity region associated with each data point, and different graph statistics can be employed for pattern testing. The approach used in this article is based on underlying graphs of a family of data-random digraphs which is determined by a family of parameterized proximity maps. The relative edge density of the AND- and OR-underlying graphs is used as the summary statistic, providing an alternative to the relative arc density and domination number of the digraph employed previously. Properly scaled, the relative edge density of the underlying graphs is a U-statistic, facilitating analytic study of its asymptotic distribution using standard U-statistic central limit theory. The approach is illustrated with an application to the testing of bivariate spatial clustering patterns of segregation and association. Knowledge of the asymptotic distribution allows evaluation of the Pitman asymptotic efficiency, hence selection of the proximity map parameter to optimize efficiency. Asymptotic efficiency and Monte Carlo simulation analysis indicate that the AND-underlying version is better (in terms of power and efficiency) for the segregation alternative, while the OR-underlying version is better for the association alternative. The approach presented here is also valid for data in higher dimensions.
Title: Query Significance in Databases via Randomizations
Abstract: Many sorts of structured data are commonly stored in a multi-relational format of interrelated tables. Under this relational model, exploratory data analysis can be done by using relational queries. As an example, in the Internet Movie Database (IMDb) a query can be used to check whether the average rank of action movies is higher than the average rank of drama movies. We consider the problem of assessing whether the results returned by such a query are statistically significant or just a random artifact of the structure in the data. Our approach is based on randomizing the tables occurring in the queries and repeating the original query on the randomized tables. It turns out that there is no unique way of randomizing in multi-relational data. We propose several randomization techniques, study their properties, and show how to find out which queries or hypotheses about our data result in statistically significant information. We give results on real and generated data and show how the significance of some queries varies between different randomizations.
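A minimal sketch of the randomize-and-requery idea follows; the `randomize` argument stands for one of the several table randomization schemes proposed in the paper and is a hypothetical placeholder here, as are the other names:

```python
def empirical_p_value(tables, query, randomize, n_samples=1000):
    """Estimate the significance of a query result by randomization.

    `query` maps the database (e.g. a dict of tables) to a real-valued
    statistic, such as the difference between the mean ranks of action and
    drama movies in IMDb.  `randomize` returns a randomized copy of the
    tables while preserving whatever structure the chosen scheme fixes.
    """
    observed = query(tables)
    hits = 0
    for _ in range(n_samples):
        hits += query(randomize(tables)) >= observed
    return (hits + 1) / (n_samples + 1)   # add-one smoothing avoids a zero p-value
```

Because different randomization schemes fix different invariants, the same query can receive different p-values under different schemes, which is the point the abstract makes about significance varying between randomizations.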
Title: A-Collapsibility of Distribution Dependence and Quantile Regression Coefficients
Abstract: The Yule-Simpson paradox notes that an association between random variables can be reversed when averaged over a background variable. Cox and Wermuth (2003) introduced the concept of distribution dependence between two random variables X and Y, and developed two dependence conditions, each of which guarantees that reversal cannot occur. Ma, Xie and Geng (2006) studied the collapsibility of distribution dependence over a background variable W, under a rather strong homogeneity condition. Collapsibility ensures the association remains the same for conditional and marginal models, so that Yule-Simpson reversal cannot occur. In this paper, we investigate a more general condition for avoiding effect reversal: A-collapsibility. The conditions of Cox and Wermuth imply A-collapsibility, without assuming homogeneity. In fact, we show that, when W is a binary variable, collapsibility is equivalent to A-collapsibility plus homogeneity, and A-collapsibility is equivalent to the conditions of Cox and Wermuth. Recently, Cox (2007) extended Cochran's result on regression coefficients of conditional and marginal models to quantile regression coefficients. The conditions of Cox and Wermuth are sufficient for A-collapsibility of quantile regression coefficients. If the conditional distribution of W, given Y = y and X = x, belongs to a one-dimensional natural exponential family, they are also necessary. Some applications of A-collapsibility include the analysis of contingency tables, linear regression models and quantile regression models.
Title: Entropic Priors and Bayesian Model Selection
Abstract: We demonstrate that the principle of maximum relative entropy (ME), used judiciously, can ease the specification of priors in model selection problems. The resulting effect is that models that make sharp predictions are disfavoured, weakening the usual Bayesian "Occam's Razor". This is illustrated with a simple example involving what Jaynes called a "sure thing" hypothesis. Jaynes' resolution of the situation involved introducing a large number of alternative "sure thing" hypotheses that were possible before we observed the data. However, in more complex situations, it may not be possible to explicitly enumerate large numbers of alternatives. The entropic priors formalism produces the desired result without modifying the hypothesis space or requiring explicit enumeration of alternatives; all that is required is a good model for the prior predictive distribution for the data. This idea is illustrated with a simple rigged-lottery example, and we outline how this idea may help to resolve a recent debate amongst cosmologists: is dark energy a cosmological constant, or has it evolved with time in some way? And how shall we decide, when the data are in?
Title: A Novel Two-Staged Decision Support based Threat Evaluation and Weapon Assignment Algorithm, Asset-based Dynamic Weapon Scheduling using Artificial Intelligence Techniques
Abstract: Surveillance control and reporting (SCR) systems for air threats play an important role in the defense of a country. An SCR system handles air and ground situation management/processing along with information fusion, communication, coordination, simulation and other critical defense-oriented tasks. Threat Evaluation and Weapon Assignment (TEWA) sits at the core of an SCR system. In such a system, maximal or near-maximal utilization of constrained resources is of extreme importance. Manual TEWA systems cannot provide optimality because of various limitations, e.g. a surface-to-air missile (SAM) can fire from a distance of 5 km, but manual TEWA systems are constrained by human vision range and other factors. Current TEWA systems usually work on a target-by-target basis using some type of greedy algorithm, thus affecting the optimality of the solution and failing in multi-target scenarios. This paper relates to a novel two-staged flexible dynamic decision support based optimal threat evaluation and weapon assignment algorithm for multi-target air-borne threats.
Title: Coherent frequentism
Abstract: By representing the range of fair betting odds according to a pair of confidence set estimators, dual probability measures on parameter space called frequentist posteriors secure the coherence of subjective inference without any prior distribution. The closure of the set of expected losses corresponding to the dual frequentist posteriors constrains decisions without arbitrarily forcing optimization under all circumstances. This decision theory reduces to those that maximize expected utility when the pair of frequentist posteriors is induced by an exact or approximate confidence set estimator or when an automatic reduction rule is applied to the pair. In such cases, the resulting frequentist posterior is coherent in the sense that, as a probability distribution of the parameter of interest, it satisfies the axioms of the decision-theoretic and logic-theoretic systems typically cited in support of the Bayesian posterior. Unlike the p-value, the confidence level of an interval hypothesis derived from such a measure is suitable as an estimator of the indicator of hypothesis truth since it converges in sample-space probability to 1 if the hypothesis is true or to 0 otherwise under general conditions.
Title: Multi-Label MRF Optimization via Least Squares s-t Cuts
Abstract: There are many applications of graph cuts in computer vision, e.g. segmentation. We present a novel method to reformulate the NP-hard, k-way graph partitioning problem as an approximate minimal s-t graph cut problem, for which a globally optimal solution is found in polynomial time. Each non-terminal vertex in the original graph is replaced by a set of ceil(log_2(k)) new vertices. The original graph edges are replaced by new edges connecting the new vertices to each other and to only two terminal nodes, source s and sink t. The weights of the new edges are obtained using a novel least squares solution approximating the constraints of the initial k-way setup. The minimal s-t cut labels each new vertex with a binary (s vs t) "Gray" encoding, which is then decoded into a decimal label number that assigns each of the original vertices to one of k classes. We analyze the properties of the approximation and present quantitative as well as qualitative segmentation results.
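The decoding step mentioned in the abstract can be sketched directly; the snippet below is only an illustration of standard Gray-code decoding, not the paper's cut construction or its least squares weighting:

```python
def gray_label_to_class(bits):
    """Decode per-vertex s/t assignments (a Gray code word, MSB first) into a class index.

    Each original vertex receives ceil(log2(k)) binary s-vs-t decisions from
    the minimal s-t cut; interpreting them as a Gray code and converting back
    to an integer yields the class label in {0, ..., k-1}.
    """
    value = 0
    prev = 0
    for g in bits:            # g in {0, 1}: 0 = source side, 1 = sink side
        prev ^= g             # Gray -> binary: b_i = b_{i-1} XOR g_i
        value = (value << 1) | prev
    return value

# Example: with k = 4 classes there are two new vertices per original vertex;
# the Gray word [1, 1] decodes to binary [1, 0], i.e. class 2.
```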
Title: A new model of artificial neuron: cyberneuron and its use
Abstract: This article describes a new type of artificial neuron, which the authors call the "cyberneuron". Unlike classical models of artificial neurons, this type of neuron uses table substitution instead of multiplying input values by weights. This significantly increases the information capacity of a single neuron and also greatly simplifies the learning process. An example of using the "cyberneuron" for the task of detecting computer viruses is considered.
Title: An Iterative Fingerprint Enhancement Algorithm Based on Accurate Determination of Orientation Flow
Abstract: We describe an algorithm to enhance and binarize a fingerprint image. The algorithm is based on accurate determination of orientation flow of the ridges of the fingerprint image by computing variance of the neighborhood pixels around a pixel in different directions. We show that an iterative algorithm which captures the mutual interdependence of orientation flow computation, enhancement and binarization gives very good results on poor quality images.
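The abstract describes the orientation estimate only in words; a minimal sketch of one plausible reading (sample gray values along lines through a window in several directions and take the direction of minimum variance as the ridge orientation) is given below. The iterative coupling with enhancement and binarization is not reproduced, and all names are illustrative:

```python
import numpy as np

def ridge_orientation(window, n_dirs=16):
    """Estimate the local ridge orientation of a square fingerprint patch.

    Pixel values vary least along a ridge, so the direction whose line
    samples have minimum variance is taken as the orientation flow.
    """
    h, w = window.shape
    cy, cx = h // 2, w // 2
    radius = min(cy, cx) - 1
    best_theta, best_var = 0.0, np.inf
    for theta in np.linspace(0, np.pi, n_dirs, endpoint=False):
        ts = np.arange(-radius, radius + 1)
        ys = np.clip((cy + ts * np.sin(theta)).round().astype(int), 0, h - 1)
        xs = np.clip((cx + ts * np.cos(theta)).round().astype(int), 0, w - 1)
        var = window[ys, xs].var()     # spread of gray values along this direction
        if var < best_var:
            best_theta, best_var = theta, var
    return best_theta                   # orientation in radians, in [0, pi)
```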
Title: Degenerate neutrality creates evolvable fitness landscapes
Abstract: Understanding how systems can be designed to be evolvable is fundamental to research in optimization, evolution, and complex systems science. Many researchers have thus recognized the importance of evolvability, i.e. the ability to find new variants of higher fitness, in the fields of biological evolution and evolutionary computation. Recent studies by Ciliberti et al (Proc. Nat. Acad. Sci., 2007) and Wagner (Proc. R. Soc. B., 2008) propose a potentially important link between the robustness and the evolvability of a system. In particular, it has been suggested that robustness may actually lead to the emergence of evolvability. Here we study two design principles, redundancy and degeneracy, for achieving robustness and we show that they have a dramatically different impact on the evolvability of the system. In particular, purely redundant systems are found to have very little evolvability while systems with degeneracy, i.e. distributed robustness, can be orders of magnitude more evolvable. These results offer insights into the general principles for achieving evolvability and may prove to be an important step forward in the pursuit of evolvable representations in evolutionary computation.
Title: Evidence of coevolution in multi-objective evolutionary algorithms
Abstract: This paper demonstrates that simple yet important characteristics of coevolution can occur in evolutionary algorithms when only a few conditions are met. We find that interaction-based fitness measurements such as fitness (linear) ranking allow for a form of coevolutionary dynamics that is observed when 1) changes are made in what solutions are able to interact during the ranking process and 2) evolution takes place in a multi-objective environment. This research contributes to the study of simulated evolution in at least two ways. First, it establishes a broader relationship between coevolution and multi-objective optimization than has been previously considered in the literature. Second, it demonstrates that the preconditions for coevolutionary behavior are weaker than previously thought. In particular, our model indicates that direct cooperation or competition between species is not required for coevolution to take place. Moreover, our experiments provide evidence that environmental perturbations can drive coevolutionary processes; a conclusion that mirrors arguments put forth in dual phase evolution theory. In the discussion, we briefly consider how our results may shed light onto this and other recent theories of evolution.
Title: Survival of the flexible: explaining the recent dominance of nature-inspired optimization within a rapidly evolving world
Abstract: Although researchers often comment on the rising popularity of nature-inspired meta-heuristics (NIM), there has been a paucity of data to directly support the claim that NIM are growing in prominence compared to other optimization techniques. This study presents evidence that the use of NIM is not only growing, but indeed appears to have surpassed mathematical optimization techniques (MOT) in several important metrics related to academic research activity (publication frequency) and commercial activity (patenting frequency). Motivated by these findings, this article discusses some of the possible origins of this growing popularity. I review different explanations for NIM popularity and discuss why some of these arguments remain unsatisfying. I argue that a compelling and comprehensive explanation should directly account for the manner in which most NIM success has actually been achieved, e.g. through hybridization and customization to different problem environments. By taking a problem lifecycle perspective, this paper offers a fresh look at the hypothesis that nature-inspired meta-heuristics derive much of their utility from being flexible. I discuss global trends within the business environments where optimization algorithms are applied and I speculate that highly flexible algorithm frameworks could become increasingly popular within our diverse and rapidly changing world.
Title: The Self-Organization of Interaction Networks for Nature-Inspired Optimization
Abstract: Over the last decade, significant progress has been made in understanding complex biological systems; however, there have been few attempts at incorporating this knowledge into nature-inspired optimization algorithms. In this paper, we present a first attempt at incorporating some of the basic structural properties of complex biological systems which are believed to be necessary preconditions for system qualities such as robustness. In particular, we focus on two important conditions missing in Evolutionary Algorithm populations: a self-organized definition of locality and interaction epistasis. We demonstrate that these two features, when combined, provide algorithm behaviors not observed in the canonical Evolutionary Algorithm or in Evolutionary Algorithms with structured populations such as the Cellular Genetic Algorithm. The most noticeable change in algorithm behavior is an unprecedented capacity for sustainable coexistence of genetically distinct individuals within a single population. This capacity for sustained genetic diversity is not imposed on the population but instead emerges as a natural consequence of the dynamics of the system.
Title: Strategic Positioning in Tactical Scenario Planning
Abstract: Capability planning problems are pervasive throughout many areas of human interest with prominent examples found in defense and security. Planning provides a unique context for optimization that has not been explored in great detail and involves a number of interesting challenges which are distinct from traditional optimization research. Planning problems demand solutions that can satisfy a number of competing objectives on multiple scales related to robustness, adaptiveness, risk, etc. The scenario method is a key approach for planning. Scenarios can be defined for long-term as well as short-term plans. This paper introduces computational scenario-based planning problems and proposes ways to accommodate strategic positioning within the tactical planning domain. We demonstrate the methodology in a resource planning problem that is solved with a multi-objective evolutionary algorithm. Our discussion and results highlight the fact that scenario-based planning is naturally framed within a multi-objective setting. However, the conflicting objectives occur on different system levels rather than within a single system alone. This paper also contends that planning problems are of vital interest in many human endeavors and that Evolutionary Computation may be well positioned for this problem domain.
Title: Bounding the Probability of Error for High Precision Recognition
Abstract: We consider models for which it is important, early in processing, to estimate some variables with high precision, but perhaps at relatively low rates of recall. If some variables can be identified with near certainty, then they can be conditioned upon, allowing further inference to be done efficiently. Specifically, we consider optical character recognition (OCR) systems that can be bootstrapped by identifying a subset of correctly translated document words with very high precision. This "clean set" is subsequently used as document-specific training data. While many current OCR systems produce measures of confidence for the identity of each letter or word, thresholding these confidence values, even at very high values, still produces some errors. We introduce a novel technique for identifying a set of correct words with very high precision. Rather than estimating posterior probabilities, we bound the probability that any given word is incorrect under very general assumptions, using an approximate worst case analysis. As a result, the parameters of the model are nearly irrelevant, and we are able to identify a subset of words, even in noisy documents, of which we are highly confident. On our set of 10 documents, we are able to identify about 6% of the words on average without making a single error. This ability to produce word lists with very high precision allows us to use a family of models which depends upon such clean word lists.
Title: Error analysis for circle fitting algorithms
Abstract: We study the problem of fitting circles (or circular arcs) to data points observed with errors in both variables. A detailed error analysis for all popular circle fitting methods -- geometric fit, Kasa fit, Pratt fit, and Taubin fit -- is presented. Our error analysis goes deeper than the traditional expansion to the leading order. We obtain higher order terms, which show exactly why and by how much circle fits differ from each other. Our analysis allows us to construct a new algebraic (non-iterative) circle fitting algorithm that outperforms all the existing methods, including the (previously regarded as unbeatable) geometric fit.
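Of the methods analyzed, the Kasa fit is the simplest to state; the sketch below is the textbook algebraic version for context, not the new fitting algorithm proposed in the paper:

```python
import numpy as np

def kasa_circle_fit(x, y):
    """Classical Kasa algebraic circle fit.

    Solves the linear least squares problem
        x^2 + y^2 + D*x + E*y + F ~= 0
    and converts (D, E, F) to centre and radius.
    """
    A = np.column_stack([x, y, np.ones_like(x)])
    b = -(x ** 2 + y ** 2)
    (D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
    cx, cy = -D / 2.0, -E / 2.0
    r = np.sqrt(cx ** 2 + cy ** 2 - F)
    return cx, cy, r

# Example: noisy points on a circle of radius 2 centred at (1, -1).
# t = np.linspace(0, 2 * np.pi, 50)
# cx, cy, r = kasa_circle_fit(1 + 2*np.cos(t) + 0.01*np.random.randn(50),
#                             -1 + 2*np.sin(t) + 0.01*np.random.randn(50))
```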