Abstract: This paper examines the use of a hierarchical coevolutionary genetic algorithm under different partnering strategies. Cascading clusters of sub-populations are built from the bottom up, with higher-level sub-populations optimising larger parts of the problem. Hence higher-level sub-populations potentially search a larger search space with a lower resolution whilst lower-level sub-populations search a smaller search space with a higher resolution. The effects of different partner selection schemes amongst the sub-populations on solution quality are examined for two constrained optimisation problems. We examine a number of recombination partnering strategies in the construction of higher-level individuals and a number of related schemes for evaluating sub-solutions. It is shown that partnering strategies that exploit problem-specific knowledge are superior and can counter inappropriate (sub)fitness measurements.
|
Title: A Recommender System based on Idiotypic Artificial Immune Networks
|
Abstract: The immune system is a complex biological system with a highly distributed, adaptive and self-organising nature. This paper presents an Artificial Immune System (AIS) that exploits some of these characteristics and is applied to the task of film recommendation by Collaborative Filtering (CF). Natural evolution and in particular the immune system have not been designed for classical optimisation. However, for this problem, we are not interested in finding a single optimum. Rather we intend to identify a sub-set of good matches on which recommendations can be based. It is our hypothesis that an AIS built on two central aspects of the biological immune system will be an ideal candidate to achieve this: Antigen-antibody interaction for matching and idiotypic antibody-antibody interaction for diversity. Computational results are presented in support of this conjecture and compared to those found by other CF techniques.
|
Title: Idiotypic Immune Networks in Mobile Robot Control
|
Abstract: Jerne's idiotypic network theory postulates that the immune response involves inter-antibody stimulation and suppression as well as matching to antigens. The theory has proved the most popular Artificial Immune System (AIS) model for incorporation into behavior-based robotics, but guidelines for implementing idiotypic selection are scarce. Furthermore, the direct effects of employing the technique have not been demonstrated in the form of a comparison with non-idiotypic systems. This paper aims to address these issues. A method for integrating an idiotypic AIS network with a Reinforcement Learning (RL) based control system is described, and the mechanisms underlying antibody stimulation and suppression are explained in detail. Some hypotheses that account for the network advantage are put forward and tested using three systems with increasing idiotypic complexity: the basic RL system, a simplified hybrid AIS-RL system that implements idiotypic selection independently of derived concentration levels, and a full hybrid AIS-RL scheme. The test bed takes the form of a simulated Pioneer robot that is required to navigate through maze worlds, detecting and tracking door markers.
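The stimulation and suppression dynamics mentioned above are often modelled in the AIS literature with a Farmer-style concentration update. The sketch below is a generic, hedged illustration of that idea only; the antibody set, matching matrices and rate constants are invented, and it is not the paper's hybrid AIS-RL controller.

```python
import numpy as np

def idiotypic_update(conc, antigen_affinity, stim, supp,
                     k_stim=1.0, k_supp=0.5, k_death=0.1, dt=0.05):
    """One Euler step of a Farmer-style idiotypic network.

    conc[i]             -- concentration of antibody i
    antigen_affinity[i] -- match strength between antibody i and the current antigen
    stim[i, j]          -- how much antibody j stimulates antibody i
    supp[i, j]          -- how much antibody j suppresses antibody i
    """
    inter_stim = stim @ conc          # stimulation received from other antibodies
    inter_supp = supp @ conc          # suppression received from other antibodies
    dcdt = conc * (antigen_affinity + k_stim * inter_stim
                   - k_supp * inter_supp - k_death)
    return np.clip(conc + dt * dcdt, 0.0, None)

# Toy example: three antibodies (behaviours), one antigen (environment state).
rng = np.random.default_rng(0)
conc = np.ones(3) / 3
affinity = np.array([0.9, 0.2, 0.1])          # antibody 0 matches the antigen best
stim = rng.uniform(0, 0.3, (3, 3))
supp = rng.uniform(0, 0.3, (3, 3))

for _ in range(200):
    conc = idiotypic_update(conc, affinity, stim, supp)

selected = int(np.argmax(conc))               # behaviour with highest concentration wins
print("final concentrations:", conc.round(3), "selected antibody:", selected)
```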
|
Title: Eye-Tracking Evolutionary Algorithm to minimize user's fatigue in IEC applied to Interactive One-Max problem
|
Abstract: In this paper, we describe a new algorithm that combines an eye-tracker with Interactive Evolutionary Computation in order to minimize user fatigue during the evaluation process. The approach is then applied to the Interactive One-Max optimization problem.
|
Title: Node discovery in a networked organization
|
Abstract: In this paper, I present a method to solve a node discovery problem in a networked organization. Covert nodes are nodes that are not observable directly: they affect social interactions but do not appear in the surveillance logs that record the participants of those interactions. Discovering the covert nodes is defined as identifying the suspicious logs in which the covert nodes would appear if they became overt. A mathematical model is developed for the maximum likelihood estimation of the network behind the social interactions and for the identification of the suspicious logs. Precision, recall, and F-measure characteristics are demonstrated with a dataset generated from a real organization and with computationally synthesized datasets. The performance is close to the theoretical limit for covert nodes in networks of any topology and size, provided the ratio of the number of observations to the number of possible communication patterns is large.
|
Title: Robustness and Regularization of Support Vector Machines
|
Abstract: We consider regularized support vector machines (SVMs) and show that they are precisely equivalent to a new robust optimization formulation. We show that this equivalence of robust optimization and regularization has implications for both algorithms and analysis. In terms of algorithms, the equivalence suggests more general SVM-like algorithms for classification that explicitly build in protection against noise while at the same time controlling overfitting. On the analysis front, the equivalence of robustness and regularization provides a robust optimization interpretation for the success of regularized SVMs. We use this new robustness interpretation of SVMs to give a new proof of consistency of (kernelized) SVMs, thus establishing robustness as the reason regularized SVMs generalize well.
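The flavour of the robustness-regularization link can be checked numerically on a single sample (a sketch under simple assumptions, not the paper's theorem): the worst-case hinge loss over an l2-bounded input perturbation equals the nominal hinge loss with the margin shifted by rho times the norm of w. The data and radius below are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
w, b = rng.normal(size=5), 0.3
x, y = rng.normal(size=5), 1.0
rho = 0.4                                    # radius of the perturbation ball

def hinge(w, b, x, y):
    return max(0.0, 1.0 - y * (w @ x + b))

# Worst-case perturbation for the hinge loss is delta = -rho * y * w / ||w||, so
# sup_{||delta|| <= rho} hinge(w, b, x + delta, y) = max(0, 1 - y(w.x + b) + rho*||w||).
delta_worst = -rho * y * w / np.linalg.norm(w)
lhs = hinge(w, b, x + delta_worst, y)
rhs = max(0.0, 1.0 - y * (w @ x + b) + rho * np.linalg.norm(w))
print(lhs, rhs)                              # the two values coincide (up to rounding)

# Random perturbations inside the ball never exceed the worst case.
for _ in range(1000):
    d = rng.normal(size=5)
    d *= rho * rng.uniform() / np.linalg.norm(d)
    assert hinge(w, b, x + d, y) <= lhs + 1e-9
```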
|
Title: Multiagent Approach for the Representation of Information in a Decision Support System
|
Abstract: In an emergency situation, the actors need assistance allowing them to react swiftly and efficiently. With this in mind, we present in this paper a decision support system that aims to prepare actors for a crisis situation through decision-making support. The global architecture of this system is presented in the first part. We then focus on the part of the system designed to represent the information describing the current situation. This part consists of a multiagent system made of factual agents. Each agent carries a semantic feature and aims to represent one part of the situation. The agents evolve through their interactions, comparing their semantic features using proximity measures and specific ontologies.
|
Title: Local Polynomial Estimation for Sensitivity Analysis on Models With Correlated Inputs
|
Abstract: Sensitivity indices for models whose inputs are not independent are estimated by local polynomial techniques. Two original estimators based on local polynomial smoothers are proposed. Both have good theoretical properties, which are exhibited and also illustrated through analytical examples. They are used to carry out a sensitivity analysis on a real case of a kinetic model with correlated parameters.
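As a rough, simplified illustration of the general approach (not the authors' estimators): a first-order index Var(E[Y|X_i])/Var(Y) can be estimated by smoothing Y against X_i with a local linear (degree-one local polynomial) fit and taking the variance of the fitted values. The test function, kernel and bandwidth below are arbitrary.

```python
import numpy as np

def local_linear_fit(x, y, x0, h):
    """Local polynomial (degree 1) estimate of E[Y | X = x0] with a Gaussian kernel."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)
    sw = np.sqrt(w)
    X = np.column_stack([np.ones_like(x), x - x0])
    beta, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)
    return beta[0]                          # intercept = fitted value at x0

rng = np.random.default_rng(2)
n = 1000
x1 = rng.normal(size=n)                     # correlated inputs: X2 depends on X1
x2 = 0.6 * x1 + 0.8 * rng.normal(size=n)
y = x1 + 0.5 * x2 ** 2 + 0.1 * rng.normal(size=n)

def first_order_index(xi, y, h=0.3):
    m = np.array([local_linear_fit(xi, y, x0, h) for x0 in xi])
    return m.var() / y.var()                # Var(E[Y|X_i]) / Var(Y)

print("S1 ~", round(first_order_index(x1, y), 3))
print("S2 ~", round(first_order_index(x2, y), 3))
```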
|
Title: Reinforcement Learning by Value Gradients
|
Abstract: The concept of the value-gradient is introduced and developed in the context of reinforcement learning. It is shown that, by learning value-gradients, exploration or stochastic behaviour is no longer needed to find locally optimal trajectories. This is the main motivation for using value-gradients, and it is argued that learning value-gradients is the actual objective of any value-function learning algorithm for control problems. It is also argued that learning value-gradients is significantly more efficient than learning just the values, and this argument is supported in experiments by efficiency gains of several orders of magnitude in several problem domains. Once value-gradients are introduced into learning, several analyses become possible. For example, a surprising equivalence between a value-gradient learning algorithm and a policy-gradient learning algorithm is proven, and this provides a robust convergence proof for control problems using a value function with a general function approximator.
|
Title: A new stochastic process to model Heart Rate series during exhaustive run and an estimator of its fractality parameter
|
Abstract: In order to interpret and explain the behaviour of physiological signals, it can be interesting to find some constants among the fluctuations of these data over the whole effort or during different stages of the race (which can be detected using a change-point detection method). Several recent papers have proposed the long-range dependence (Hurst) parameter as such a constant. However, their results raise two main problems. Firstly, the DFA method is usually applied for estimating this parameter. Such a method does not provide the most efficient estimator and, moreover, it is not at all robust, even in the case of smooth trends. Secondly, this method often gives estimated Hurst parameters larger than 1, which is the largest possible value for long-memory stationary processes. In this article we propose solutions for both problems and we define a new model allowing such estimated parameters.
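For context, the DFA estimator criticised above proceeds roughly as sketched below (a generic textbook-style version, not the estimator proposed in this paper): integrate the series, detrend it within windows of increasing size using a polynomial fit, and regress the log RMS fluctuation on the log window size.

```python
import numpy as np

def dfa_exponent(x, scales=(8, 16, 32, 64, 128), order=1):
    """Detrended Fluctuation Analysis: slope of log F(n) versus log n."""
    y = np.cumsum(x - np.mean(x))           # integrated (profile) series
    flucts = []
    for n in scales:
        n_seg = len(y) // n
        rms = []
        for k in range(n_seg):
            seg = y[k * n:(k + 1) * n]
            t = np.arange(n)
            trend = np.polyval(np.polyfit(t, seg, order), t)
            rms.append(np.sqrt(np.mean((seg - trend) ** 2)))
        flucts.append(np.mean(rms))
    slope, _ = np.polyfit(np.log(scales), np.log(flucts), 1)
    return slope                            # ~0.5 for white noise, >0.5 for persistent series

rng = np.random.default_rng(3)
print("white noise alpha ~", round(dfa_exponent(rng.normal(size=4000)), 2))
```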
|
Title: Variable selection for the multicategory SVM via adaptive sup-norm regularization
|
Abstract: The Support Vector Machine (SVM) is a popular classification paradigm in machine learning and has achieved great success in real applications. However, the standard SVM cannot select variables automatically, and therefore its solution typically utilizes all the input variables without discrimination. This makes it difficult to identify important predictor variables, which is often one of the primary goals in data analysis. In this paper, we propose two novel types of regularization in the context of the multicategory SVM (MSVM) for simultaneous classification and variable selection. The MSVM generally requires estimation of multiple discriminating functions and applies the argmax rule for prediction. For each individual variable, we propose to characterize its importance by the sup-norm of its coefficient vector associated with the different functions, and then to minimize the MSVM hinge loss function subject to a penalty on the sum of the sup-norms. To further improve the sup-norm penalty, we propose adaptive regularization, which allows different weights to be imposed on different variables according to their relative importance. Both types of regularization automate variable selection in the process of building classifiers, and lead to sparse multi-classifiers with enhanced interpretability and improved accuracy, especially for high dimensional, low sample size data. One big advantage of the sup-norm penalty is its easy implementation via standard linear programming. Several simulated examples and one real gene data analysis demonstrate the outstanding performance of the adaptive sup-norm penalty in various data settings.
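The penalty itself is easy to state; the sketch below shows how it would be evaluated for a fitted coefficient matrix. The coefficients and the adaptive weights are made up for illustration, and the actual fitting by linear programming is not shown.

```python
import numpy as np

# Coefficient matrix B of a 3-class MSVM with 3 input variables:
# B[j, c] is the coefficient of variable j in the discriminating function of class c.
B = np.array([[ 0.8, -0.5,  0.1],    # variable 1: clearly useful
              [ 0.0,  0.0,  0.0],    # variable 2: irrelevant (all-zero row)
              [ 0.2, -0.1,  0.3]])   # variable 3: weakly useful

# Sup-norm penalty: for each variable, the largest absolute coefficient across
# the discriminating functions, summed over variables.
sup_norms = np.abs(B).max(axis=1)
plain_penalty = sup_norms.sum()

# Adaptive version: weight each variable inversely to an initial importance
# estimate (here a made-up initial fit), so less important variables are
# penalised more heavily.
initial_fit_sup = np.array([0.9, 0.05, 0.4])
weights = 1.0 / initial_fit_sup
adaptive_penalty = float(np.sum(weights * sup_norms))

print("sup-norms per variable:", sup_norms)
print("plain penalty:", plain_penalty, "adaptive penalty:", round(adaptive_penalty, 3))
```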
|
Title: Preferred extensions as stable models
|
Abstract: Given an argumentation framework AF, we introduce a mapping function that constructs a disjunctive logic program P such that the preferred extensions of AF correspond to the stable models of P, after intersecting each stable model with the relevant atoms. The given mapping function is of polynomial size w.r.t. AF. In particular, we identify a direct relationship between the minimal models of a propositional formula and the preferred extensions of an argumentation framework by working on a representation of the defeated arguments. Then we show how to infer the preferred extensions of an argumentation framework by using UNSAT algorithms and disjunctive stable model solvers. The relevance of this result is that we define a direct relationship between one of the most satisfactory argumentation semantics and one of the most successful approaches to non-monotonic reasoning, i.e., logic programming with the stable model semantics.
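To make the argumentation side concrete: for a small framework, the preferred extensions can be computed directly as the subset-maximal admissible sets. The brute-force sketch below (exponential, suitable only for tiny frameworks) merely illustrates the semantics that the proposed translation captures; it is not the paper's mapping to disjunctive logic programs.

```python
from itertools import combinations

def preferred_extensions(args, attacks):
    """Preferred extensions = subset-maximal admissible sets of an AF."""
    attacks = set(attacks)

    def conflict_free(S):
        return not any((a, b) in attacks for a in S for b in S)

    def defends(S, a):
        # Every attacker of a is attacked by some member of S.
        return all(any((d, b) in attacks for d in S)
                   for (b, t) in attacks if t == a)

    def admissible(S):
        return conflict_free(S) and all(defends(S, a) for a in S)

    adm = [set(c) for r in range(len(args) + 1)
           for c in combinations(args, r) if admissible(set(c))]
    return [S for S in adm if not any(S < T for T in adm)]   # keep maximal ones

# AF where a and b attack each other and both attack c.
args = ["a", "b", "c"]
attacks = [("a", "b"), ("b", "a"), ("a", "c"), ("b", "c")]
print(preferred_extensions(args, attacks))    # [{'a'}, {'b'}]
```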
|
Title: Recorded Step Directional Mutation for Faster Convergence
|
Abstract: Two meta-evolutionary optimization strategies described in this paper accelerate the convergence of evolutionary programming algorithms while still retaining much of their ability to deal with multi-modal problems. The strategies, called directional mutation and recorded step in this paper, can operate independently, but together they greatly enhance the ability of evolutionary programming algorithms to deal with fitness landscapes characterized by long narrow valleys. The directional mutation aspect of this combined method uses correlated meta-mutation but does not introduce a full covariance matrix. These new methods are thus much more economical in terms of storage for problems of high dimensionality. Additionally, directional mutation is rotationally invariant, which is a substantial advantage over self-adaptive methods that use a single variance per coordinate on problems whose natural orientation is not aligned with the coordinate axes.
|
Title: Artificial Immune Systems Tutorial
|
Abstract: The biological immune system is a robust, complex, adaptive system that defends the body from foreign pathogens. It is able to categorize all cells (or molecules) within the body as self-cells or non-self cells. It does this with the help of a distributed task force that has the intelligence to take action from both a local and a global perspective, using a network of chemical messengers for communication. There are two major branches of the immune system. The innate immune system is an unchanging mechanism that detects and destroys certain invading organisms, whilst the adaptive immune system responds to previously unknown foreign cells and builds a response to them that can remain in the body over a long period of time. This remarkable information processing biological system has caught the attention of computer science in recent years. A novel computational intelligence technique inspired by immunology has emerged, called Artificial Immune Systems. Several concepts from the immune system have been extracted and applied to the solution of real world science and engineering problems. In this tutorial, we briefly describe the immune system metaphors that are relevant to existing Artificial Immune Systems methods. We then show illustrative real-world problems suitable for Artificial Immune Systems and give a step-by-step algorithm walkthrough for one such problem. A comparison of Artificial Immune Systems to other well-known algorithms, areas for future work, tips & tricks, and a list of resources round the tutorial off. It should be noted that, as Artificial Immune Systems is still a young and evolving field, there is not yet a fixed algorithm template, and hence actual implementations might differ somewhat from the examples given here.
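The tutorial's own walkthrough is not reproduced here, but the self/non-self categorization described above is most commonly illustrated with the negative selection algorithm; the sketch below is a generic version of that algorithm with invented data and thresholds.

```python
import numpy as np

rng = np.random.default_rng(4)

# "Self" data: normal behaviour clustered in one region of feature space.
self_points = rng.normal(loc=0.3, scale=0.05, size=(200, 2))

def generate_detectors(self_points, n_detectors=300, self_radius=0.1):
    """Negative selection: keep only random detectors that do NOT match self."""
    detectors = []
    while len(detectors) < n_detectors:
        d = rng.uniform(0, 1, size=2)
        if np.min(np.linalg.norm(self_points - d, axis=1)) > self_radius:
            detectors.append(d)
    return np.array(detectors)

def is_nonself(x, detectors, detect_radius=0.08):
    """A sample is flagged as non-self if any detector covers it."""
    return bool(np.min(np.linalg.norm(detectors - x, axis=1)) < detect_radius)

detectors = generate_detectors(self_points)
print(is_nonself(np.array([0.31, 0.29]), detectors))   # near self -> usually False
print(is_nonself(np.array([0.85, 0.90]), detectors))   # far from self -> usually True
```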
|
Title: Reflective visualization and verbalization of unconscious preference
|
Abstract: A new method is presented, that can help a person become aware of his or her unconscious preferences, and convey them to others in the form of verbal explanation. The method combines the concepts of reflection, visualization, and verbalization. The method was tested in an experiment where the unconscious preferences of the subjects for various artworks were investigated. In the experiment, two lessons were learned. The first is that it helps the subjects become aware of their unconscious preferences to verbalize weak preferences as compared with strong preferences through discussion over preference diagrams. The second is that it is effective to introduce an adjustable factor into visualization to adapt to the differences in the subjects and to foster their mutual understanding.
|
Title: Analysis of boosting algorithms using the smooth margin function
|
Abstract: We introduce a useful tool for analyzing boosting algorithms called the ``smooth margin function,'' a differentiable approximation of the usual margin for boosting algorithms. We present two boosting algorithms based on this smooth margin, ``coordinate ascent boosting'' and ``approximate coordinate ascent boosting,'' which are similar to Freund and Schapire's AdaBoost algorithm and Breiman's arc-gv algorithm. We give convergence rates to the maximum margin solution for both of our algorithms and for arc-gv. We then study AdaBoost's convergence properties using the smooth margin function. We precisely bound the margin attained by AdaBoost when the edges of the weak classifiers fall within a specified range. This shows that a previous bound proved by R\"atsch and Warmuth is exactly tight. Furthermore, we use the smooth margin to capture explicit properties of AdaBoost in cases where cyclic behavior occurs.
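As a quick illustration of the object under study (a hedged sketch of one common form of the smooth margin, not the paper's analysis): with unnormalized margins m_i and total weak-classifier weight ||lambda||_1, the quantity -ln(sum_i exp(-m_i)) / ||lambda||_1 lower-bounds the usual minimum normalized margin and approaches it as ||lambda||_1 grows.

```python
import numpy as np

def smooth_margin(margins, lam_l1):
    """One common form of the smooth margin: a normalized soft-min of the margins."""
    return -np.log(np.sum(np.exp(-margins))) / lam_l1

def min_margin(margins, lam_l1):
    return margins.min() / lam_l1

rng = np.random.default_rng(5)
n_examples = 50
base = rng.uniform(0.1, 0.4, size=n_examples)     # per-example normalized margins

for lam_l1 in (5.0, 50.0, 500.0):
    margins = base * lam_l1                        # unnormalized margins grow with ||lambda||_1
    g = smooth_margin(margins, lam_l1)
    mu = min_margin(margins, lam_l1)
    # Sandwich: mu - ln(n)/||lambda||_1 <= smooth margin <= mu
    assert mu - np.log(n_examples) / lam_l1 - 1e-12 <= g <= mu + 1e-12
    print(f"||lambda||_1={lam_l1:6.1f}  smooth margin={g:.4f}  min margin={mu:.4f}")
```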
|
Title: Combinatorial Explorations in Su-Doku
|
Abstract: Su-Doku, a popular combinatorial puzzle, provides an excellent testbench for heuristic explorations. Several interesting questions arise from its deceptively simple set of rules. How many distinct Su-Doku grids are there? How to find a solution to a Su-Doku puzzle? Is there a unique solution to a given Su-Doku puzzle? What is a good estimation of a puzzle's difficulty? What is the minimum puzzle size (the number of "givens")? This paper explores how these questions are related to the well-known alldifferent constraint which emerges in a wide variety of Constraint Satisfaction Problems (CSP) and compares various algorithmic approaches based on different formulations of Su-Doku.
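A compact way to see the alldifferent connection: every row, column and box is an alldifferent constraint over nine cells, and plain backtracking that respects those constraints already solves typical puzzles. The sketch below is a generic solver of this kind, not one of the specific formulations compared in the paper.

```python
def solve(grid):
    """Backtracking Su-Doku solver; grid is a 9x9 list of lists, 0 = empty cell."""
    def candidates(r, c):
        # alldifferent over the row, the column and the 3x3 box containing (r, c)
        used = set(grid[r]) | {grid[i][c] for i in range(9)}
        br, bc = 3 * (r // 3), 3 * (c // 3)
        used |= {grid[i][j] for i in range(br, br + 3) for j in range(bc, bc + 3)}
        return [v for v in range(1, 10) if v not in used]

    empties = [(r, c) for r in range(9) for c in range(9) if grid[r][c] == 0]
    if not empties:
        return True
    # choose the most constrained empty cell first (fewest candidates)
    r, c = min(empties, key=lambda rc: len(candidates(*rc)))
    for v in candidates(r, c):
        grid[r][c] = v
        if solve(grid):
            return True
        grid[r][c] = 0
    return False

puzzle = [[5,3,0,0,7,0,0,0,0],
          [6,0,0,1,9,5,0,0,0],
          [0,9,8,0,0,0,0,6,0],
          [8,0,0,0,6,0,0,0,3],
          [4,0,0,8,0,3,0,0,1],
          [7,0,0,0,2,0,0,0,6],
          [0,6,0,0,0,0,2,8,0],
          [0,0,0,4,1,9,0,0,5],
          [0,0,0,0,8,0,0,7,9]]
print(solve(puzzle))
for row in puzzle:
    print(row)
```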
|
Title: Grammar-Based Random Walkers in Semantic Networks
|
Abstract: Semantic networks qualify the meaning of an edge relating any two vertices. Determining which vertices are most "central" in a semantic network is difficult because one relationship type may be deemed subjectively more important than another. For this reason, research into semantic network metrics has focused primarily on context-based rankings (i.e. user prescribed contexts). Moreover, many of the current semantic network metrics rank semantic associations (i.e. directed paths between two vertices) and not the vertices themselves. This article presents a framework for calculating semantically meaningful primary eigenvector-based metrics such as eigenvector centrality and PageRank in semantic networks using a modified version of the random walker model of Markov chain analysis. Random walkers, in the context of this article, are constrained by a grammar, where the grammar is a user defined data structure that determines the meaning of the final vertex ranking. The ideas in this article are presented within the context of the Resource Description Framework (RDF) of the Semantic Web initiative.
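A minimal sketch of the underlying idea (an illustration, not the article's grammar formalism): restrict the walker to a user-chosen set of edge types, build the corresponding transition matrix, and run PageRank-style power iteration on it. The toy RDF-like graph and the predicate whitelists below are invented.

```python
import numpy as np

# Typed edges of a toy semantic network: (subject, predicate, object).
triples = [
    ("alice", "knows", "bob"), ("bob", "knows", "carol"),
    ("carol", "knows", "alice"), ("alice", "authored", "paper1"),
    ("bob", "authored", "paper1"), ("paper1", "cites", "paper2"),
]
nodes = sorted({s for s, _, o in triples} | {o for _, _, o in triples})
idx = {n: i for i, n in enumerate(nodes)}

def grammar_pagerank(triples, allowed_predicates, d=0.85, iters=100):
    """PageRank over only the edges whose predicate the 'grammar' allows."""
    n = len(nodes)
    A = np.zeros((n, n))
    for s, p, o in triples:
        if p in allowed_predicates:
            A[idx[s], idx[o]] = 1.0
    out = A.sum(axis=1, keepdims=True)
    P = np.where(out > 0, A / np.where(out == 0, 1, out), 1.0 / n)  # dangling nodes jump uniformly
    r = np.full(n, 1.0 / n)
    for _ in range(iters):
        r = (1 - d) / n + d * (r @ P)
    return dict(zip(nodes, r.round(3)))

# Ranking by social links only, versus by authorship/citation links only.
print(grammar_pagerank(triples, {"knows"}))
print(grammar_pagerank(triples, {"authored", "cites"}))
```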
|
Title: Extension of the SAEM algorithm for nonlinear mixed models with two levels of random effects
|
Abstract: This article focuses on parameter estimation of multi-level nonlinear mixed effects models (MNLMEMs). These models are used to analyze data presenting multiple hierarchical levels of grouping (cluster data, clinical trials with several observation periods, ...). The variability of the individual parameters of the regression function is thus decomposed into a between-subject variability and higher levels of variability (for example, within-subject variability). We propose maximum likelihood estimates of the parameters of those MNLMEMs with two levels of random effects, using an extension of the SAEM-MCMC algorithm. The extended SAEM algorithm is split into an explicit direct EM algorithm and a stochastic EM part. Compared to the original algorithm, additional sufficient statistics have to be approximated by relying on the conditional distribution of the second level of random effects. This estimation method is evaluated on pharmacokinetic cross-over simulated trials mimicking theophylline concentration data. Results obtained on those datasets with either the SAEM algorithm or the FOCE algorithm (implemented in the nlme function of the R software) are compared: the biases and RMSEs of almost all the SAEM estimates are smaller than the FOCE ones. Finally, we apply the extended SAEM algorithm to analyze the pharmacokinetic interaction of tenofovir on atazanavir, a novel protease inhibitor, from the ANRS 107-Puzzle 2 study. A significant decrease of the area under the curve of atazanavir is found in patients receiving both treatments.
|
Title: A hierarchical eigenmodel for pooled covariance estimation
|
Abstract: While a set of covariance matrices corresponding to different populations are unlikely to be exactly equal they can still exhibit a high degree of similarity. For example, some pairs of variables may be positively correlated across most groups, while the correlation between other pairs may be consistently negative. In such cases much of the similarity across covariance matrices can be described by similarities in their principal axes, the axes defined by the eigenvectors of the covariance matrices. Estimating the degree of across-population eigenvector heterogeneity can be helpful for a variety of estimation tasks. Eigenvector matrices can be pooled to form a central set of principal axes, and to the extent that the axes are similar, covariance estimates for populations having small sample sizes can be stabilized by shrinking their principal axes towards the across-population center. To this end, this article develops a hierarchical model and estimation procedure for pooling principal axes across several populations. The model for the across-group heterogeneity is based on a matrix-valued antipodally symmetric Bingham distribution that can flexibly describe notions of ``center'' and ``spread'' for a population of orthonormal matrices.
|
Title: Binary Decision Diagrams for Affine Approximation
|
Abstract: Selman and Kautz's work on ``knowledge compilation'' established how approximation (strengthening and/or weakening) of a propositional knowledge-base can be used to speed up query processing, at the expense of completeness. In this classical approach, querying uses Horn over- and under-approximations of a given knowledge-base, which is represented as a propositional formula in conjunctive normal form (CNF). Along with the class of Horn functions, one could imagine other Boolean function classes that might serve the same purpose, owing to attractive deduction-computational properties similar to those of the Horn functions. Indeed, Zanuttini has suggested that the class of affine Boolean functions could be useful in knowledge compilation and has presented an affine approximation algorithm. Since CNF is awkward for presenting affine functions, Zanuttini considers both a sets-of-models representation and the use of modulo 2 congruence equations. In this paper, we propose an algorithm based on reduced ordered binary decision diagrams (ROBDDs). This leads to a representation which is more compact than the sets of models and, once we have established some useful properties of affine Boolean functions, a more efficient algorithm.
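For readers unfamiliar with the class: a non-empty set of models is affine, i.e. the solution set of a system of modulo 2 congruence equations, exactly when it is closed under the coordinate-wise XOR of any three of its members. The small check below illustrates only that characterization; it does not involve ROBDDs and is not the algorithm of the paper.

```python
from itertools import product

def is_affine(models):
    """A non-empty model set is affine iff it is closed under m1 XOR m2 XOR m3."""
    models = set(models)
    return all(
        tuple(a ^ b ^ c for a, b, c in zip(m1, m2, m3)) in models
        for m1 in models for m2 in models for m3 in models
    )

# Solution set of the congruence x1 + x2 + x3 = 1 (mod 2): affine.
affine_set = [m for m in product((0, 1), repeat=3) if sum(m) % 2 == 1]
# Models of the Horn clause (x1 AND x2) -> x3: not affine.
horn_set = [m for m in product((0, 1), repeat=3) if (not (m[0] and m[1])) or m[2]]

print(is_affine(affine_set))   # True
print(is_affine(horn_set))     # False
```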
|
Title: Effects of High-Order Co-occurrences on Word Semantic Similarities
|
Abstract: A computational model of the construction of word meaning through exposure to texts is built in order to simulate the effects of co-occurrence values on word semantic similarities, paragraph by paragraph. Semantic similarity is here viewed as association. It turns out that the similarity between two words W1 and W2 strongly increases with a co-occurrence, decreases with the occurrence of W1 without W2 or W2 without W1, and slightly increases with high-order co-occurrences. Therefore, operationalizing similarity as a frequency of co-occurrence probably introduces a bias: first, there are cases in which there is similarity without co-occurrence and, second, the frequency of co-occurrence overestimates similarity.
|
Title: Support Vector Machine Classification with Indefinite Kernels
|
Abstract: We propose a method for support vector machine classification using indefinite kernels. Instead of directly minimizing or stabilizing a nonconvex loss function, our algorithm simultaneously computes support vectors and a proxy kernel matrix used in forming the loss. This can be interpreted as a penalized kernel learning problem where indefinite kernel matrices are treated as noisy observations of a true Mercer kernel. Our formulation keeps the problem convex, and relatively large problems can be solved efficiently using projected gradient or analytic center cutting plane methods. We compare the performance of our technique with other methods on several classic data sets.
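The paper's joint optimization of support vectors and proxy kernel is not reproduced here. The sketch below shows only the simplest way of producing a proxy Mercer kernel from an indefinite similarity matrix, namely clipping negative eigenvalues, offered as a hedged illustration of the "proxy kernel" idea rather than as the proposed method.

```python
import numpy as np

def clip_to_psd(K):
    """Nearest PSD proxy (in Frobenius norm) obtained by zeroing negative eigenvalues."""
    K = 0.5 * (K + K.T)                       # symmetrize first
    eigval, eigvec = np.linalg.eigh(K)
    eigval_clipped = np.clip(eigval, 0.0, None)
    return (eigvec * eigval_clipped) @ eigvec.T

# An indefinite "similarity" matrix (e.g. from a non-metric similarity score).
K = np.array([[ 1.0,  0.9, -0.8],
              [ 0.9,  1.0,  0.2],
              [-0.8,  0.2,  1.0]])
print("original eigenvalues:", np.round(np.linalg.eigvalsh(K), 3))

K_psd = clip_to_psd(K)
print("proxy eigenvalues:   ", np.round(np.linalg.eigvalsh(K_psd), 3))
# K_psd can now be passed to any standard (Mercer-kernel) SVM implementation.
```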
|
Title: Parts-of-Speech Tagger Errors Do Not Necessarily Degrade Accuracy in Extracting Information from Biomedical Text
|
Abstract: A recent study reported the development of Muscorian, a generic text processing tool for extracting protein-protein interactions from text that achieved performance comparable to biomedical-specific text processing tools. This result was unexpected, since potential errors from a series of text analysis processes are likely to adversely affect the outcome of the entire process. Most biomedical entity relationship extraction tools have used a biomedical-specific parts-of-speech (POS) tagger, as errors in POS tagging are likely to affect subsequent semantic analysis of the text, such as shallow parsing. This study evaluates POS tagging accuracy and explores whether comparable performance is obtained when a generic POS tagger, MontyTagger, is used in place of MedPost, a tagger trained on biomedical text. Our results demonstrate that MontyTagger, Muscorian's POS tagger, has a POS tagging accuracy of 83.1% when tested on biomedical text. Replacing MontyTagger with MedPost did not result in a significant improvement in entity relationship extraction from text: precision was 55.6% for MontyTagger versus 56.8% for MedPost on directional relationships, and 86.1% for MontyTagger compared to 81.8% for MedPost on non-directional relationships. This is unexpected, as poor POS tagging by MontyTagger would be expected to affect the outcome of the information extraction. An analysis of POS tagging errors demonstrated that 78.5% of tagging errors are compensated by shallow parsing. Thus, despite 83.1% tagging accuracy, MontyTagger has a functional tagging accuracy of 94.6%.
|
Title: Permeability Analysis based on information granulation theory
|
Abstract: This paper describes the application of information granulation theory to the analysis of "lugeon data". Crisp and fuzzy granules are obtained using a combination of a Self Organizing Map (SOM) and a Neuro-Fuzzy Inference System (NFIS). The balancing of crisp granules and fuzzy sub-granules within the non-fuzzy information (the initial granulation) is carried out in an open-close iteration. Stability of the algorithm is guaranteed using two criteria: simplicity of the rules and a suitable adaptive threshold on the error level. In another part of the paper, rough set theory (RST) is employed for approximate analysis. The proposed methods are validated on a large data set of in-situ permeability measurements in rock masses at the Shivashan dam, Iran. The implementation of the proposed algorithm on the lugeon data set shows that the suggested method can be applied to the approximate analysis of permeability.
|
Title: Graphical Estimation of Permeability Using RST&NFIS
|
Abstract: This paper pursues some applications of Rough Set Theory (RST) and a neuro-fuzzy model to the analysis of "lugeon data". The data are first scaled using a Self Organizing Map (SOM) as a pre-processing step, and the dominant rules are then elicited by RST. Based on these rules, variations of permeability at the different levels of the Shivashan dam, Iran, are highlighted. Another analysis of the data is then carried out using a combination of SOM and an adaptive Neuro-Fuzzy Inference System (NFIS). Finally, a brief comparison between the results obtained by RST and by SOM-NFIS (briefly, SONFIS) is given.
|
Title: Testing Consistency of Two Histograms
|
Abstract: Several approaches to testing the hypothesis that two histograms are drawn from the same distribution are investigated. We note that single-sample continuous distribution tests may be adapted to this two-sample grouped data situation. The difficulty of not having a fully-specified null hypothesis is an important consideration in the general case, and care is required in estimating probabilities with ``toy'' Monte Carlo simulations. The performance of several common tests is compared; no single test performs best in all situations.
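As a generic illustration of the kind of test being compared (the paper endorses no single test as best): one common form of the two-sample binned chi-square statistic, which allows the two histograms to have different total counts. The bin contents below are invented, and the chi-square approximation degrades for sparsely populated bins.

```python
import numpy as np
from scipy.stats import chi2

def two_histogram_chi2(n1, n2):
    """Two-sample binned chi-square test; n1, n2 are the bin counts of the two histograms."""
    n1, n2 = np.asarray(n1, float), np.asarray(n2, float)
    N1, N2 = n1.sum(), n2.sum()
    mask = (n1 + n2) > 0                            # skip empty bins
    stat = np.sum((np.sqrt(N2 / N1) * n1[mask] - np.sqrt(N1 / N2) * n2[mask]) ** 2
                  / (n1[mask] + n2[mask]))
    dof = int(mask.sum())                           # a common convention when N1 != N2
    return stat, chi2.sf(stat, dof)

h1 = [12, 30, 45, 28, 10, 3]
h2 = [20, 55, 90, 60, 22, 5]                        # roughly the same shape, larger sample
stat, p = two_histogram_chi2(h1, h2)
print(f"chi2 = {stat:.2f}, p-value = {p:.3f}")
```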
|
Title: A practical procedure to find matching priors for frequentist inference
|
Abstract: In this manuscript, we present a practical way to find the matching priors proposed by Welch & Peers (1963) and Peers (1965). We investigate the use of saddlepoint approximations combined with matching priors and obtain p-values for the test of an interest parameter in the presence of nuisance parameters. The advantage of our procedure is the flexibility of choosing different initial conditions, so that one can adjust the performance of the test. Two examples have been studied, with coverage verified via Monte Carlo simulation. One relates to the ratio of two exponential means, and the other relates to the logistic regression model. In particular, we are interested in small-sample settings.
|
Title: Application of Rough Set Theory to Analysis of Hydrocyclone Operation
|
Abstract: This paper describes the application of rough set theory to the analysis of hydrocyclone operation. Using a Self Organizing Map (SOM) as a preprocessing step, the best crisp granules of the data are obtained. Then, using a combination of SOM and rough set theory (RST), called SORST, the dominant rules in the information table obtained from laboratory tests are extracted. Based on these rules, an approximate estimation of the decision attribute is carried out. Finally, a brief comparison of this method with the SOM-NFIS system (briefly, SONFIS) is highlighted.
|
Title: Agent-Based Perception of an Environment in an Emergency Situation
|
Abstract: We are interested in the problem of developing multiagent systems for risk detection and emergency response in an uncertain and partially perceived environment. The evaluation of the current situation passes through three stages inside the multiagent system. First, the situation is represented in a dynamic way; second, the situation is characterised; and finally, it is compared with other similar known situations. In this paper, we present an information model of an observed environment, which we have applied to the RoboCupRescue Simulation System. Information coming from the environment is formatted according to a taxonomy and described using semantic features. The latter are defined according to a fine-grained ontology of the domain and are managed by factual agents whose aim is to represent the current situation dynamically.
|
Title: An Artificial Immune System as a Recommender System for Web Sites
|
Abstract: Artificial Immune Systems have been used successfully to build recommender systems for film databases. In this research, an attempt is made to extend this idea to web site recommendation. A collection of more than 1000 individuals' web profiles (alternatively called preferences, favourites, or bookmarks files) will be used. URLs will be classified using the DMOZ (Directory Mozilla) database of the Open Directory Project as our ontology. This classification, rather than the actual addresses, will then be used as the data for the Artificial Immune System. The first attempt will involve using a simple classification code number coupled with the number of pages within that classification code. However, this implementation does not make use of the hierarchical tree-like structure of DMOZ. Consideration will then be given to the construction of a similarity measure for web profiles that makes use of this hierarchical information to build a better-informed Artificial Immune System.
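One plausible shape for such a hierarchy-aware measure, sketched with invented category paths and an invented weighting (the abstract only announces that a measure of this kind will be constructed): score a pair of DMOZ categories by the depth of their shared prefix, and score two profiles by averaging each category's best match in the other profile.

```python
def category_similarity(cat_a, cat_b):
    """Similarity of two DMOZ-style paths = shared prefix depth / max depth."""
    a, b = cat_a.split("/"), cat_b.split("/")
    shared = 0
    for x, y in zip(a, b):
        if x != y:
            break
        shared += 1
    return shared / max(len(a), len(b))

def profile_similarity(profile_a, profile_b):
    """Average, over categories of A, of the best-matching category in B (asymmetric)."""
    return sum(max(category_similarity(ca, cb) for cb in profile_b)
               for ca in profile_a) / len(profile_a)

alice = ["Computers/Artificial_Intelligence/Machine_Learning",
         "Recreation/Outdoors/Hiking"]
bob = ["Computers/Artificial_Intelligence/Neural_Networks",
       "Computers/Programming/Languages/Python"]

print(category_similarity(alice[0], bob[0]))    # 2/3: shared prefix of depth 2
print(round(profile_similarity(alice, bob), 3))
```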
|
Title: Explicit Learning: an Effort towards Human Scheduling Algorithms
|
Abstract: Scheduling problems are generally NP-hard combinatorial problems, and a lot of research has been done to solve these problems heuristically. However, most of the previous approaches are problem-specific and research into the development of a general scheduling algorithm is still in its infancy. Mimicking the natural evolutionary process of the survival of the fittest, Genetic Algorithms (GAs) have attracted much attention in solving difficult scheduling problems in recent years. Some obstacles exist when using GAs: there is no canonical mechanism to deal with constraints, which are commonly met in most real-world scheduling problems, and small changes to a solution are difficult. To overcome both difficulties, indirect approaches have been presented (in [1] and [2]) for nurse scheduling and driver scheduling, where GAs are used by mapping the solution space, and separate decoding routines then build solutions to the original problem.
|
Title: Symmetry Breaking for Maximum Satisfiability
|
Abstract: Symmetries are intrinsic to many combinatorial problems, including Boolean Satisfiability (SAT) and Constraint Programming (CP). In SAT, the identification of symmetry breaking predicates (SBPs) is a well-known and often effective technique for solving hard problems. The identification of SBPs in SAT has seen significant improvements in recent years, resulting in more compact SBPs and more effective algorithms. The identification of SBPs has also been applied to pseudo-Boolean (PB) constraints, showing that symmetry breaking can also be an effective technique for PB constraints. This paper extends the application of SBPs further, and shows that SBPs can be identified and used in Maximum Satisfiability (MaxSAT) as well as in its most well-known variants, including partial MaxSAT, weighted MaxSAT and weighted partial MaxSAT. As with SAT and PB, symmetry breaking predicates for MaxSAT and its variants are shown to be effective for a representative number of problem domains, making it possible to solve problem instances that current state-of-the-art MaxSAT solvers could not otherwise solve.
|
Title: On the Influence of Selection Operators on Performances in Cellular Genetic Algorithms
|
Abstract: In this paper, we study the influence of selective pressure on the performance of cellular genetic algorithms. Cellular genetic algorithms are genetic algorithms in which the population is embedded on a toroidal grid. This structure slows down the propagation of the best-so-far individual and helps keep potentially good solutions in the population. We present two strategies for reducing the selective pressure in order to slow down the propagation of the best solution even further. We experiment with these strategies on a hard optimization problem, the quadratic assignment problem, and we show that for both strategies there is a value of the control parameter that gives the best performance. This optimal value cannot be explained by the selective pressure alone, whether measured by takeover time or by diversity evolution. This study leads us to conclude that tools other than selective pressure measures alone are needed to explain the performance of cellular genetic algorithms.
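To make the setting concrete, the sketch below shows the core loop of a cellular GA in which selection is restricted to a von Neumann neighbourhood on a toroidal grid. The problem (one-max), the neighbourhood and the parameter values are illustrative only and do not correspond to the paper's two pressure-reducing strategies.

```python
import numpy as np

rng = np.random.default_rng(6)
GRID, L = 10, 40                                 # 10x10 toroidal grid, 40-bit one-max
pop = rng.integers(0, 2, size=(GRID, GRID, L))

def fitness(ind):
    return int(ind.sum())                        # one-max: count of ones

def neighbours(i, j):
    # von Neumann neighbourhood on a torus (including the cell itself)
    return [((i + di) % GRID, (j + dj) % GRID)
            for di, dj in ((0, 0), (1, 0), (-1, 0), (0, 1), (0, -1))]

def local_tournament(nbrs):
    # binary tournament restricted to the neighbourhood: this is the local selection
    k1, k2 = rng.choice(len(nbrs), size=2, replace=False)
    c1, c2 = nbrs[k1], nbrs[k2]
    return c1 if fitness(pop[c1]) >= fitness(pop[c2]) else c2

for generation in range(100):
    new_pop = pop.copy()
    for i in range(GRID):
        for j in range(GRID):
            nbrs = neighbours(i, j)
            a, b = pop[local_tournament(nbrs)], pop[local_tournament(nbrs)]
            cut = int(rng.integers(1, L))        # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            child = child ^ (rng.random(L) < 1.0 / L).astype(child.dtype)  # bit-flip mutation
            if fitness(child) >= fitness(pop[i, j]):                       # replace if not worse
                new_pop[i, j] = child
    pop = new_pop

print("best fitness:", max(fitness(pop[i, j]) for i in range(GRID) for j in range(GRID)))
```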
|