| text (string, lengths 17–3.36M) | source (string, lengths 3–333) | __index_level_0__ (int64, 0–518k) |
|---|---|---|
Pharmacovigilance databases consist of large numbers of case reports involving drugs and adverse events (AEs). Some methods are routinely applied to highlight all signals, i.e. all statistically significant associations between a drug and an AE. These methods are appropriate for the verification of more complex relationships involving one or several drugs and AEs (e.g. syndromes or interactions), but do not address their identification. We propose a method for the extraction of such relationships based on Formal Concept Analysis (FCA) combined with disproportionality measures. This method identifies all sets of drugs and AEs that are potential signals, syndromes or interactions. Compared with a previous disproportionality analysis performed without FCA, the addition of FCA was more efficient at identifying false positives related to concomitant drugs.
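A minimal sketch, in Python, of one standard disproportionality measure, the proportional reporting ratio (PRR); the abstract does not name the measures the authors use, so PRR stands in here for that family, and the counts and the conventional threshold of 2 are purely illustrative.

```python
# Proportional reporting ratio (PRR), a standard disproportionality
# measure in pharmacovigilance, computed from a 2x2 table of reports:
#                 AE present   AE absent
#   this drug         a            b
#   other drugs       c            d
def prr(a: int, b: int, c: int, d: int) -> float:
    """PRR = [a / (a + b)] / [c / (c + d)]."""
    return (a / (a + b)) / (c / (c + d))

# Illustrative counts: 20 of 1000 reports for the drug mention the AE,
# against 200 of 100000 reports for all other drugs.
print(prr(20, 980, 200, 99800))  # 10.0, above the usual threshold of 2
```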
|
Mining for adverse drug events with formal concept analysis
| 500
|
Domain experts should provide relevant domain knowledge to an Intelligent Tutoring System (ITS) so that it can guide a learner during problem-solving learning activities. However, for many ill-defined domains, the domain knowledge is hard to define explicitly. In previous works, we showed how sequential pattern mining can be used to extract a partial problem space from logged user interactions, and how it can support tutoring services during problem-solving exercises. This article describes an extension of this approach to extract a problem space that is richer and better adapted for supporting tutoring services. We combined sequential pattern mining with (1) dimensional pattern mining, (2) time intervals, (3) the automatic clustering of valued actions and (4) closed sequence mining. Some tutoring services have been implemented and an experiment has been conducted in a tutoring system.
|
A Knowledge Discovery Framework for Learning Task Models from User
Interactions in Intelligent Tutoring Systems
| 501
|
In this paper we propose the CTS (Conscious Tutoring System) technology, a biologically plausible cognitive agent based on human brain functions. This agent is capable of learning and remembering events and any related information, such as corresponding procedures, stimuli and their emotional valences. Our proposed episodic memory and episodic learning mechanisms are closer to the current multiple-trace theory in neuroscience, by which they are inspired [5], than other mechanisms incorporated in cognitive agents, because in our model emotions play a role in the encoding and remembering of events. This allows the agent to improve its behavior by remembering previously selected behaviors, which are influenced by its emotional mechanism. Moreover, the architecture incorporates a realistic memory consolidation process based on a data mining algorithm.
|
How Emotional Mechanism Helps Episodic Learning in a Cognitive Agent
| 502
|
Consumers of mass media must have a comprehensive, balanced and plural selection of news to get an unbiased perspective, but achieving this goal can be very challenging, laborious and time consuming. The development of news stories over time, their (in)consistency, and the differing levels of coverage across media outlets are challenges that a conscientious reader has to overcome in order to alleviate bias. In this paper we present an intelligent agent framework currently facilitating analysis of the main sources of on-line news in El Salvador. We show how existing tools for text analysis and Web 2.0 technologies can be combined, with minimal manual intervention, to help individuals in their rational decision process, while holding media outlets accountable for their work.
|
Alleviating Media Bias Through Intelligent Agent Blogging
| 503
|
We study the logic of comparative concept similarity $\CSL$ introduced by Sheremet, Tishkovsky, Wolter and Zakharyaschev to capture a form of qualitative similarity comparison. In this logic we can formulate assertions of the form "objects A are more similar to B than to C". The semantics of this logic is defined by structures equipped with distance functions evaluating the similarity degree of objects. We consider here the particular case of the semantics induced by \emph{minspaces}, the latter being distance spaces where the minimum of a set of distances always exists. It turns out that the semantics over arbitrary minspaces can be equivalently specified in terms of preferential structures, typical of conditional logics. We first give a direct axiomatisation of this logic over minspaces. We next define a decision procedure in the form of a tableaux calculus. Both the calculus and the axiomatisation take advantage of the reformulation of the semantics in terms of preferential structures.
|
Comparative concept similarity over Minspaces: Axiomatisation and
Tableaux Calculus
| 504
|
Data mining algorithms are now able to deal efficiently with huge amounts of data. Various kinds of patterns may be discovered and may have a great impact on the general development of knowledge. In many domains, end users may want to have their data mined by data mining tools in order to extract patterns that could impact their business. Nevertheless, those users are often overwhelmed by the large quantity of patterns extracted in such a situation. Moreover, privacy or commercial concerns may prevent the users from mining the data themselves. Thus, the users may not have the possibility of performing many experiments integrating various constraints in order to focus on the specific patterns they would like to extract. Post-processing of patterns may be an answer to this drawback. In this paper we therefore present a framework that allows end users to manage collections of patterns. We propose to use an efficient data structure on which algebraic operators may be applied in order to retrieve or access patterns in pattern bases.
|
A Model for Managing Collections of Patterns
| 505
|
Empirical evidence suggests that hashing is an effective strategy for dimensionality reduction and practical nonparametric estimation. In this paper we provide exponential tail bounds for feature hashing and show that the interaction between random subspaces is negligible with high probability. We demonstrate the feasibility of this approach with experimental results for a new use case -- multitask learning with hundreds of thousands of tasks.
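For readers unfamiliar with the hashing trick, the following is a minimal sketch of signed feature hashing; the concrete hash functions and the way the multitask (personalized) features are formed are illustrative choices, not the paper's exact construction.

```python
import hashlib

def _h(s: str) -> int:
    # Stand-in hash; any well-mixed hash function works here.
    return int(hashlib.md5(s.encode()).hexdigest(), 16)

def hash_features(tokens, dim=2**16):
    """Hashing trick: project token counts into a fixed-size vector.
    An independent +/-1 sign hash makes collisions cancel in
    expectation, the property behind the paper's tail bounds."""
    v = [0.0] * dim
    for tok in tokens:
        v[_h(tok) % dim] += 1.0 if _h(tok + "#sign") % 2 == 0 else -1.0
    return v

# Simplified rendering of the multitask idea: hash global features and
# (task, token) features into the same shared space.
user, tokens = "u42", ["cheap", "pills", "now"]
x_global = hash_features(tokens)
x_user = hash_features([f"{user}_{t}" for t in tokens])
```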
|
Feature Hashing for Large Scale Multitask Learning
| 506
|
We propose a new extended format to represent constraint networks using XML. This format allows us to represent constraints defined either in extension or in intension. It also allows us to reference global constraints. Any instance of the problems CSP (Constraint Satisfaction Problem), QCSP (Quantified CSP) and WCSP (Weighted CSP) can be represented using this format.
|
XML Representation of Constraint Networks: Format XCSP 2.1
| 507
|
The present work consisted in developing a board game. There are the traditional ones (Monopoly, Cluedo, etc.), but those which interest us leave less to chance (luck) than to strategy, as in chess. Kalah is an old African game; its rules are simple, but the strategies required are very complex to implement. Of course, they rest on a strongly mathematical basis, as in the film "Rain Man", where one can see that gambling can be played with strategies based on mathematical theories. Artificial Intelligence gives a machine the ability "to think" and therefore allows it to make decisions. In our work, we use it to give the computer the means to choose its best move.
|
The Semantics of Kalah Game
| 508
|
1-Nearest Neighbor with the Dynamic Time Warping (DTW) distance is one of the most effective classifiers in the time series domain. Since global constraints were introduced in the speech community, many global constraint models have been proposed, including the Sakoe-Chiba (S-C) band, the Itakura Parallelogram, and the Ratanamahatana-Keogh (R-K) band. The R-K band is a general global constraint model that can effectively represent any global constraint with arbitrary shape and size. However, we need a good learning algorithm to discover the most suitable set of R-K bands, and the current R-K band learning algorithm still suffers from an 'overfitting' phenomenon. In this paper, we propose two new learning algorithms, i.e., a band boundary extraction algorithm and an iterative learning algorithm. The band boundary extraction is calculated from the bound of all possible warping paths in each class, and the iterative learning is adapted from the original R-K band learning. We also use the Silhouette index, a well-known clustering validation technique, as a heuristic function, and the lower bounding function LB_Keogh to enhance the prediction speed. Twenty datasets from the Workshop and Challenge on Time Series Classification, held in conjunction with SIGKDD 2007, are used to evaluate our approach.
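A minimal sketch of the DTW recurrence under a Sakoe-Chiba band, the simplest of the global constraints named above; an R-K band generalizes this by letting the band width vary along the sequence. The toy series are illustrative.

```python
import math

def dtw_sakoe_chiba(a, b, window):
    """DTW distance restricted to a Sakoe-Chiba band of half-width
    `window` around the diagonal of the warping matrix."""
    n, m = len(a), len(b)
    D = [[math.inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        lo, hi = max(1, i - window), min(m, i + window)
        for j in range(lo, hi + 1):
            cost = (a[i - 1] - b[j - 1]) ** 2
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return math.sqrt(D[n][m])

print(dtw_sakoe_chiba([0, 1, 2, 3], [0, 1, 1, 2, 3], window=1))  # 0.0
```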
|
Learning DTW Global Constraint for Time Series Classification
| 509
|
We propose Range and Roots, two common patterns useful for specifying a wide range of counting and occurrence constraints. We design specialised propagation algorithms for these two patterns. Counting and occurrence constraints specified using these patterns thus directly inherit a propagation algorithm. To illustrate the capabilities of the Range and Roots constraints, we specify a number of global constraints taken from the literature. Preliminary experiments demonstrate that propagating counting and occurrence constraints using these two patterns leads to only a small loss in performance compared to specialised global constraints, and is competitive with alternative decompositions using elementary constraints.
|
Range and Roots: Two Common Patterns for Specifying and Propagating
Counting and Occurrence Constraints
| 510
|
The management and combination of uncertain, imprecise, fuzzy and even paradoxical or highly conflicting sources of information has always been, and still remains today, of primary importance for the development of reliable modern information systems involving artificial reasoning. In this introduction, we present a survey of our recent theory of plausible and paradoxical reasoning, known as Dezert-Smarandache Theory (DSmT), developed for dealing with imprecise, uncertain and conflicting sources of information. We focus our presentation on the foundations of DSmT and on its most important rules of combination, rather than on browsing the specific applications of DSmT available in the literature. Several simple examples are given throughout this presentation to show the efficiency and the generality of this new approach.
|
An introduction to DSmT
| 511
|
When mathematicians present proofs they usually adapt their explanations to their didactic goals and to the (assumed) knowledge of their addressees. Modern automated theorem provers, in contrast, present proofs usually at a fixed level of detail (also called granularity). Often these presentations are neither intended nor suitable for human use. A challenge therefore is to develop user- and goal-adaptive proof presentation techniques that obey common mathematical practice. We present a flexible and adaptive approach to proof presentation that exploits machine learning techniques to extract a model of the specific granularity of proof examples and employs this model for the automated generation of further proofs at an adapted level of granularity.
|
Granularity-Adaptive Proof Presentation
| 512
|
Symmetry is an important factor in solving many constraint satisfaction problems. One common type of symmetry is when we have symmetric values. In a recent series of papers, we have studied methods to break value symmetries. Our results identify computational limits on eliminating value symmetry. For instance, we prove that pruning all symmetric values is NP-hard in general. Nevertheless, experiments show that much value symmetry can be broken in practice. These results may be useful to researchers in planning, scheduling and other areas as value symmetry occurs in many different domains.
|
Breaking Value Symmetry
| 513
|
An attractive mechanism to specify global constraints in rostering and other domains is via formal languages. For instance, the Regular and Grammar constraints specify constraints in terms of the languages accepted by an automaton and a context-free grammar respectively. Taking advantage of the fixed length of the constraint, we give an algorithm to transform a context-free grammar into an automaton. We then study the use of minimization techniques to reduce the size of such automata and speed up propagation. We show that minimizing such automata after they have been unfolded and domains initially reduced can give automata that are more compact than minimizing before unfolding and reducing. Experimental results show that such transformations can improve the size of rostering problems that we can 'model and run'.
|
Reformulating Global Grammar Constraints
| 514
|
We propose a new family of constraints which combine together lexicographical ordering constraints for symmetry breaking with other common global constraints. We give a general purpose propagator for this family of constraints, and show how to improve its complexity by exploiting properties of the included global constraints.
|
Combining Symmetry Breaking and Global Constraints
| 515
|
We present an online method for estimating the cost of solving SAT problems. Modern SAT solvers present several challenges to estimate search cost including non-chronological backtracking, learning and restarts. Our method uses a linear model trained on data gathered at the start of search. We show the effectiveness of this method using random and structured problems. We demonstrate that predictions made in early restarts can be used to improve later predictions. We also show that we can use such cost estimations to select a solver from a portfolio.
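As a sketch of the kind of model involved, the following fits a one-feature linear predictor by ordinary least squares; the paper's actual features and training protocol are not reproduced here, and the numbers are made up.

```python
def fit_line(xs, ys):
    """Ordinary least squares for y ~ a*x + b; the paper's model is
    linear in several search features, this sketch uses just one."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

# Hypothetical: an early-search statistic vs. final log solving cost,
# observed on already-solved instances.
early = [2.1, 2.9, 3.4, 4.0, 4.8]
log_cost = [3.0, 4.1, 4.9, 5.7, 6.9]
a, b = fit_line(early, log_cost)
print(f"predicted log-cost at 3.0: {a * 3.0 + b:.2f}")
```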
|
Online Estimation of SAT Solving Runtime
| 516
|
In this work, we deal with the question of modeling programming exercises for novices in an e-learning scenario. Our purpose is to identify basic requirements, raise some key questions and propose potential answers from a conceptual perspective. Presented as a general picture, we hypothetically situate our work in a context where e-learning instructional material needs to be adapted to form part of an introductory Computer Science (CS) e-learning course at the CS1 level. We have in mind a potential course which aims at improving novices' skills and knowledge of the essentials of programming by using e-learning based approaches in connection (at least conceptually) with a general host framework like Activemath (www.activemath.org). Our elaboration covers contextual and, particularly, cognitive elements, preparing the terrain for eventual research stages in a derived project, as indicated. We concentrate our main efforts on reasoning mechanisms about exercise complexity that can eventually offer tool support for the task of exercise authoring. We base our requirements analysis on our own perception of the exercise subsystem provided by Activemath, especially within the domain reasoner area. We enrich the analysis by bringing to the discussion several relevant contextual elements from CS1 courses, their definition and implementation. Concerning cognitive models and exercises, we build upon the principles of Bloom's Taxonomy as a relatively standardized basis and use them as a framework for the study and analysis of complexity in basic programming exercises. Our analysis includes requirements for the domain reasoner which are necessary for the exercise analysis. We propose for such a purpose a three-layered conceptual model considering exercise evaluation, programming and metaprogramming.
|
On Requirements for Programming Exercises from an E-learning Perspective
| 517
|
Successful management of emotional stimuli is a pivotal issue concerning Affective Computing (AC) and the related research. As a subfield of Artificial Intelligence, AC is concerned not only with the design of computer systems and the accompanying hardware that can recognize, interpret, and process human emotions, but also with the development of systems that can trigger human emotional response in an ordered and controlled manner. This requires the maximum attainable precision and efficiency in the extraction of data from emotionally annotated databases. While these databases do use keywords or tags to describe the semantic content, they provide neither the flexibility nor the leverage needed to efficiently extract the pertinent emotional content. Therefore, we propose the introduction of ontologies as a new paradigm for the description of emotionally annotated data. The ability to select and sequence data based on their semantic attributes is vital for any study involving metadata, semantics and ontological sorting, like the Semantic Web or the Social Semantic Desktop, and the approach described in the paper facilitates reuse in these areas as well.
|
Tagging multimedia stimuli with ontologies
| 518
|
To model combinatorial decision problems involving uncertainty and probability, we introduce scenario based stochastic constraint programming. Stochastic constraint programs contain both decision variables, which we can set, and stochastic variables, which follow a discrete probability distribution. We provide a semantics for stochastic constraint programs based on scenario trees. Using this semantics, we can compile stochastic constraint programs down into conventional (non-stochastic) constraint programs. This allows us to exploit the full power of existing constraint solvers. We have implemented this framework for decision making under uncertainty in stochastic OPL, a language which is based on the OPL constraint modelling language [Hentenryck et al., 1999]. To illustrate the potential of this framework, we model a wide range of problems in areas as diverse as portfolio diversification, agricultural planning and production/inventory management.
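A toy illustration of the scenario semantics: once the stochastic variables are expanded into explicitly weighted scenarios, the stochastic program becomes an ordinary optimization over the decision variables. The one-decision production problem below is hypothetical and solved by brute force.

```python
def best_decision(decisions, scenarios, utility):
    """Brute-force version of the compilation idea: each scenario is a
    (probability, stochastic-value) pair, and the compiled deterministic
    problem maximizes the probability-weighted sum of utilities."""
    return max(decisions,
               key=lambda d: sum(p * utility(d, s) for p, s in scenarios))

# Choose a production quantity q in 0..4; demand s is 1 or 3 with equal
# probability; utility = revenue minus production cost.
scenarios = [(0.5, 1), (0.5, 3)]
utility = lambda q, s: 4 * min(q, s) - 1.5 * q
print(best_decision(range(5), scenarios, utility))  # 3
```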
|
Stochastic Constraint Programming: A Scenario-Based Approach
| 519
|
To model combinatorial decision problems involving uncertainty and probability, we introduce stochastic constraint programming. Stochastic constraint programs contain both decision variables (which we can set) and stochastic variables (which follow a probability distribution). They combine together the best features of traditional constraint satisfaction, stochastic integer programming, and stochastic satisfiability. We give a semantics for stochastic constraint programs, and propose a number of complete algorithms and approximation procedures. Finally, we discuss a number of extensions of stochastic constraint programming to relax various assumptions like the independence between stochastic variables, and compare with other approaches for decision making under uncertainty.
|
Stochastic Constraint Programming
| 520
|
Often user interfaces of theorem proving systems focus on assisting particularly trained and skilled users, i.e., proof experts. As a result, the systems are difficult to use for non-expert users. This paper describes a paper and pencil HCI experiment, in which (non-expert) students were asked to make suggestions for a GUI for an interactive system for mathematical proofs. They had to explain the usage of the GUI by applying it to construct a proof sketch for a given theorem. The evaluation of the experiment provides insights for the interaction design for non-expert users and the needs and wants of this user group.
|
Designing a GUI for Proofs - Evaluation of an HCI Experiment
| 521
|
The Ouroboros Model is a new conceptual proposal for an algorithmic structure for efficient data processing in living beings as well as for artificial agents. Its central feature is a general repetitive loop where one iteration cycle sets the stage for the next. Sensory input activates data structures (schemata) with similar constituents encountered before, thus expectations are kindled. This corresponds to the highlighting of empty slots in the selected schema, and these expectations are compared with the actually encountered input. Depending on the outcome of this consumption analysis different next steps like search for further data or a reset, i.e. a new attempt employing another schema, are triggered. Monitoring of the whole process, and in particular of the flow of activation directed by the consumption analysis, yields valuable feedback for the optimum allocation of attention and resources including the selective establishment of useful new memory entries.
|
Flow of Activity in the Ouroboros Model
| 522
|
In a certain number of situations, human cognitive functioning is difficult to represent with classical artificial intelligence structures. Such a difficulty arises in polyneuropathy diagnosis, which is based on the spatial distribution of lesions along the nerve fibres, together with the synthesis of several partial diagnoses. Faced with this problem while building an expert system (NEUROP), we developed a heterogeneous knowledge representation associating a finite automaton with first-order logic. A number of knowledge representation problems raised by the features of electromyography tests are examined in this study, and the expert system architecture allowing such knowledge modeling is laid out.
|
Heterogeneous knowledge representation using a finite automaton and
first order logic: a case study in electromyography
| 523
|
In this paper a new dynamic subsumption technique for Boolean CNF formulae is proposed. It exploits simple and sufficient conditions to detect, during conflict analysis, clauses from the original formula that can be reduced by subsumption. During the learnt clause derivation, at each step of the resolution process, we simply check for backward subsumption between the current resolvent and the clauses from the original formula encoded in the implication graph. Our approach gives rise to a strong and dynamic simplification technique that exploits learning to eliminate literals from the original clauses. Experimental results show that the integration of our dynamic subsumption approach within the state-of-the-art SAT solvers Minisat and Rsat achieves interesting improvements, particularly on crafted instances.
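The test at the heart of such techniques is clause subsumption, i.e. set inclusion between clauses. A minimal sketch with clauses as sets of DIMACS-style signed literals; the integration into conflict analysis and the implication graph is not shown.

```python
def subsumes(c1: frozenset, c2: frozenset) -> bool:
    """c1 subsumes c2 when c1 is a subset of c2: every assignment
    satisfying c1 satisfies c2, so c2 is redundant given c1."""
    return c1 <= c2

# The intermediate resolvent {1, -3} subsumes the original clause
# {1, 2, -3}, which can therefore be strengthened during learning.
resolvent = frozenset({1, -3})
original = frozenset({1, 2, -3})
print(subsumes(resolvent, original))  # True
```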
|
Learning for Dynamic subsumption
| 524
|
Today, science has a powerful tool for the description of reality: numbers. However, the concept of number did not appear immediately; let us try to trace its evolution. Numbers emerged from the need for accurate estimates of quantities, in order to permit the comparison of objects. If one observes how many times a day a person uses numbers and how many times they compare, it becomes evident that comparison is used much more frequently. However, comparison is not possible without two opposite basic standards. Thus, to introduce the concept of comparison, one must have two opposing standards; in turn, the operation of comparison is necessary to introduce the concept of number. Arguably, the scientific description of reality is impossible without the concept of opposites. This paper analyzes the concept of opposites as the basis for the introduction of the principle of development.
|
Principle of development
| 525
|
Social Network Analysis (SNA) tries to understand and exploit the key features of social networks in order to manage their life cycle and predict their evolution. Increasingly popular Web 2.0 sites are forming huge social networks. Classical methods from social network analysis have been applied to such online networks. In this paper, we propose leveraging semantic web technologies to merge and exploit the best features of each domain. We present how to facilitate and enhance the analysis of online social networks, exploiting the power of semantic social network analysis.
|
Semantic Social Network Analysis
| 526
|
We apply proof-theoretic techniques in Answer Set Programming. The main results include: 1. A characterization of the continuity properties of the Gelfond-Lifschitz operator for logic programs. 2. A propositional characterization of the stable models of logic programs (without referring to loop formulas).
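For reference, the Gelfond-Lifschitz construction behind both results: a set of atoms is a stable model iff it is the least model of the reduct of the program with respect to it. A minimal sketch for ground normal programs follows.

```python
def reduct(program, M):
    """GL reduct: drop rules whose negative body intersects M, then
    delete the negative literals of the remaining rules.
    A rule is a triple (head, positive_body, negative_body)."""
    return [(h, pos) for (h, pos, neg) in program if not set(neg) & M]

def least_model(pos_program):
    """Least model of a negation-free program by fixpoint iteration."""
    M, changed = set(), True
    while changed:
        changed = False
        for h, pos in pos_program:
            if set(pos) <= M and h not in M:
                M.add(h)
                changed = True
    return M

def is_stable(program, M):
    return least_model(reduct(program, M)) == M

# p :- not q.  q :- not p.  The two stable models are {p} and {q}.
prog = [("p", [], ["q"]), ("q", [], ["p"])]
print(is_stable(prog, {"p"}), is_stable(prog, {"q"}), is_stable(prog, set()))
```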
|
An Application of Proof-Theory in Answer Set Programming
| 527
|
We show that some common and important global constraints like ALL-DIFFERENT and GCC can be decomposed into simple arithmetic constraints on which we achieve bound or range consistency, and in some cases even greater pruning. These decompositions can be easily added to new solvers. They also provide other constraints with access to the state of the propagator by sharing of variables. Such sharing can be used to improve propagation between constraints. We report experiments with our decomposition in a pseudo-Boolean solver.
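One decomposition in this spirit encodes the Hall-interval condition behind bound consistency: no interval of values may be required by more variables than it contains. The sketch below is only a feasibility check over such intervals, not a propagator.

```python
def alldiff_interval_check(domains):
    """For every value interval [l, u], at most u - l + 1 variables may
    have their whole domain inside it; otherwise ALL-DIFFERENT fails.
    Interval endpoints can be restricted to domain bounds."""
    bounds = sorted({b for d in domains for b in (min(d), max(d))})
    for l in bounds:
        for u in bounds:
            if l > u:
                continue
            inside = sum(1 for d in domains if l <= min(d) and max(d) <= u)
            if inside > u - l + 1:
                return False
    return True

print(alldiff_interval_check([{1, 2}, {1, 2}, {1, 2, 3}]))          # True
print(alldiff_interval_check([{1, 2}, {1, 2}, {1, 2}, {1, 2, 3}]))  # False
```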
|
Decompositions of All Different, Global Cardinality and Related
Constraints
| 528
|
To model combinatorial decision problems involving uncertainty and probability, we extend the stochastic constraint programming framework proposed in [Walsh, 2002] along a number of important dimensions (e.g. to multiple chance constraints and to a range of new objectives). We also provide a new (but equivalent) semantics based on scenarios. Using this semantics, we can compile stochastic constraint programs down into conventional (nonstochastic) constraint programs. This allows us to exploit the full power of existing constraint solvers. We have implemented this framework for decision making under uncertainty in stochastic OPL, a language which is based on the OPL constraint modelling language [Hentenryck et al., 1999]. To illustrate the potential of this framework, we model a wide range of problems in areas as diverse as finance, agriculture and production.
|
Scenario-based Stochastic Constraint Programming
| 529
|
Many real life optimization problems contain both hard and soft constraints, as well as qualitative conditional preferences. However, there is no single formalism to specify all three kinds of information. We therefore propose a framework, based on both CP-nets and soft constraints, that handles both hard and soft constraints as well as conditional preferences efficiently and uniformly. We study the complexity of testing the consistency of preference statements, and show how soft constraints can faithfully approximate the semantics of conditional preference statements whilst improving the computational complexity.
|
Reasoning about soft constraints and conditional preferences: complexity
results and approximation techniques
| 530
|
We identify a new and important global (or non-binary) constraint. This constraint ensures that the values taken by two vectors of variables, when viewed as multisets, are ordered. This constraint is useful for a number of different applications including breaking symmetry and fuzzy constraint satisfaction. We propose and implement an efficient linear time algorithm for enforcing generalised arc consistency on such a multiset ordering constraint. Experimental results on several problem domains show considerable promise.
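The underlying order is easy to state: view each vector as a multiset by sorting it in non-increasing order, then compare lexicographically. A tiny checker on fixed values (not the paper's propagation algorithm, which reasons over variable domains):

```python
def multiset_leq(xs, ys):
    """Multiset ordering: sort both vectors in non-increasing order and
    compare the results lexicographically."""
    return sorted(xs, reverse=True) <= sorted(ys, reverse=True)

# {1, 1, 2} <= {1, 2, 2} as multisets, regardless of variable order.
print(multiset_leq([2, 1, 1], [1, 2, 2]))  # True
print(multiset_leq([3, 1, 1], [1, 2, 2]))  # False
```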
|
Multiset Ordering Constraints
| 531
|
We relate tag clouds to other forms of visualization, including planar or reduced dimensionality mapping, and Kohonen self-organizing maps. Using a modified tag cloud visualization, we incorporate other information into it, including text sequence and most pertinent words. Our notion of word pertinence goes beyond just word frequency and instead takes a word in a mathematical sense as located at the average of all of its pairwise relationships. We capture semantics through context, taken as all pairwise relationships. Our domain of application is that of filmscript analysis. The analysis of filmscripts, always important for cinema, is experiencing a major gain in importance in the context of television. Our objective in this work is to visualize the semantics of filmscript, and beyond filmscript any other partially structured, time-ordered, sequence of text segments. In particular we develop an innovative approach to plot characterization.
|
Tag Clouds for Displaying Semantics: The Case of Filmscripts
| 532
|
The paper proposes an analysis of some existing ontologies, in order to point out ways to resolve semantic heterogeneity in information systems. The authors highlight the tasks in a Knowledge Acquisition System and identify aspects related to the addition of new information to an intelligent system. A solution is proposed, as a combination of ontology reasoning services and natural language generation. A multi-agent system will be conceived with an extractor agent, a reasoner agent and a competence management agent.
|
Considerations on Construction Ontologies
| 533
|
We have been developing a system for recognising human activity given a symbolic representation of video content. The input of our system is a set of time-stamped short-term activities detected on video frames. The output of our system is a set of recognised long-term activities, which are pre-defined temporal combinations of short-term activities. The constraints on the short-term activities that, if satisfied, lead to the recognition of a long-term activity, are expressed using a dialect of the Event Calculus. We illustrate the expressiveness of the dialect by showing the representation of several typical complex activities. Furthermore, we present a detailed evaluation of the system through experimentation on a benchmark dataset of surveillance videos.
|
A Logic Programming Approach to Activity Recognition
| 534
|
People have to make important decisions within a time frame. Hence, it is imperative to employ means or strategies to aid effective decision making. Consequently, Economic Intelligence (EI) has emerged as a field to aid strategic and timely decision making in organizations. In the course of attaining this goal, it is indispensable to provide for the conservation of the intellectual resources invested in the process of decision making. These intellectual resources are nothing other than the knowledge of the actors as well as that of the various processes for effecting decision making. Knowledge has been recognized as a strategic economic resource for enhancing productivity and a key for innovation in any organization or community. Thus, its adequate management, with cognizance of its temporal properties, is highly indispensable. The temporal properties of knowledge refer to the date and time (known as the timestamp) at which knowledge is created, as well as the duration or interval between related pieces of knowledge. This paper focuses on the need for a user-centered knowledge management approach as well as the exploitation of the associated temporal properties. Our perspective on knowledge is with respect to decision-problem projects in EI. Our hypothesis is that the ability to reason about temporal properties when exploiting knowledge in EI projects should foster timely decision making through the generation of useful inferences from available and reusable knowledge for a new project.
|
Knowledge Management in Economic Intelligence with Reasoning on Temporal
Attributes
| 535
|
I discuss the (ontologies_and_ontological_knowledge_bases / formal_methods_and_theories) duality and its category theory extensions as a step toward a solution to Knowledge-Based Systems Theory. In particular I focus on the example of the design of elements of ontologies and ontological knowledge bases for the following three electronic courses: Foundations of Research Activities, Virtual Modeling of Complex Systems and Introduction to String Theory.
|
Toward a Category Theory Design of Ontological Knowledge Bases
| 536
|
Mnesors are defined as elements of a semimodule over the min-plus integers. This two-sorted structure is able to merge the graduation properties of vectors with the idempotent properties of Boolean values, which makes it appropriate for hybrid systems. We apply it to the control of an inverted pendulum and design a fully logical controller, that is, one without the usual algebra of real numbers.
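For concreteness, the min-plus (tropical) scalar operations over which such a semimodule is built; the pendulum controller itself is not sketched here.

```python
# Min-plus integers: "addition" is min (idempotent, Boolean-like),
# "multiplication" is ordinary + (graded, vector-like); infinity is
# the neutral element of the idempotent sum.
INF = float("inf")

def t_add(a, b):
    return min(a, b)

def t_mul(a, b):
    return a + b

print(t_add(3, 3))    # 3 -- idempotence: a (+) a = a
print(t_mul(2, 3))    # 5 -- graduation: scalars shift magnitudes
print(t_add(INF, 4))  # 4 -- INF acts as zero for (+)
```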
|
Mnesors for automatic control
| 537
|
We consider the following sequential decision problem. Given a set of items of unknown utility, we need to select one of as high a utility as possible (``the selection problem''). Measurements (possibly noisy) of item values prior to selection are allowed, at a known cost. The goal is to optimize the overall sequential decision process of measurements and selection. Value of information (VOI) is a well-known scheme for selecting measurements, but the intractability of the problem typically leads to using myopic VOI estimates. In the selection problem, myopic VOI frequently badly underestimates the value of information, leading to inferior sensing plans. We relax the strict myopic assumption into a scheme we term semi-myopic, providing a spectrum of methods that can improve the performance of sensing plans. In particular, we propose the efficiently computable method of ``blinkered'' VOI, and examine theoretical bounds for special cases. Empirical evaluation of ``blinkered'' VOI in the selection problem with normally distributed item values shows that it performs much better than pure myopic VOI.
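A minimal Monte-Carlo sketch of myopic VOI for the selection problem, under the simplifying and purely illustrative assumption that a measurement reveals the item's value exactly; the paper's blinkered VOI looks beyond this single step, and its noisy-measurement setting requires a posterior update instead.

```python
import random

def myopic_voi(means, tau, i, cost, n=50000, rng=random.Random(1)):
    """Expected improvement of the final selection from (perfectly)
    observing item i, whose unknown value is modelled as
    N(means[i], tau^2), minus the measurement cost."""
    best_other = max(m for j, m in enumerate(means) if j != i)
    prior_best = max(means)
    gain = sum(max(best_other, rng.gauss(means[i], tau)) - prior_best
               for _ in range(n)) / n
    return gain - cost

means = [1.0, 0.9, 0.2]
print(myopic_voi(means, tau=0.5, i=1, cost=0.01))
```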
|
Semi-Myopic Sensing Plans for Value Optimization
| 538
|
There are several well-known justifications for conditioning as the appropriate method for updating a single probability measure, given an observation. However, there is a significant body of work arguing for sets of probability measures, rather than single measures, as a more realistic model of uncertainty. Conditioning still makes sense in this context--we can simply condition each measure in the set individually, then combine the results--and, indeed, it seems to be the preferred updating procedure in the literature. But how justified is conditioning in this richer setting? Here we show, by considering an axiomatic account of conditioning given by van Fraassen, that the single-measure and sets-of-measures cases are very different. We show that van Fraassen's axiomatization for the former case is nowhere near sufficient for updating sets of measures. We give a considerably longer (and not as compelling) list of axioms that together force conditioning in this setting, and describe other update methods that are allowed once any of these axioms is dropped.
|
Updating Sets of Probabilities
| 539
|
This paper presents a novel two-stage flexible dynamic decision support based optimal threat evaluation and defensive resource scheduling algorithm for multi-target air-borne threats. The algorithm provides flexibility and optimality by swapping between two objective functions, i.e. the preferential and subtractive defense strategies as and when required. To further enhance the solution quality, it outlines and divides the critical parameters used in Threat Evaluation and Weapon Assignment (TEWA) into three broad categories (Triggering, Scheduling and Ranking parameters). Proposed algorithm uses a variant of many-to-many Stable Marriage Algorithm (SMA) to solve Threat Evaluation (TE) and Weapon Assignment (WA) problem. In TE stage, Threat Ranking and Threat-Asset pairing is done. Stage two is based on a new flexible dynamic weapon scheduling algorithm, allowing multiple engagements using shoot-look-shoot strategy, to compute near-optimal solution for a range of scenarios. Analysis part of this paper presents the strengths and weaknesses of the proposed algorithm over an alternative greedy algorithm as applied to different offline scenarios.
|
A Novel Two-Stage Dynamic Decision Support based Optimal Threat
Evaluation and Defensive Resource Scheduling Algorithm for Multi Air-borne
threats
| 540
|
Martin and Osswald \cite{Martin07} have recently proposed many generalizations of combination rules on quantitative beliefs in order to manage the conflict and to take into account the specificity of the experts' responses. Since experts usually express themselves in natural language with linguistic labels, Smarandache and Dezert \cite{Li07} have introduced a mathematical framework for also dealing directly with qualitative beliefs. In this paper we recall some elements of our previous works and propose new combination rules, developed for the fusion of both qualitative and quantitative beliefs.
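As background, the purely quantitative conjunctive combination that such rules generalize: the mass falling on the empty set is the conflict which the generalized rules then redistribute. The focal elements and masses below are illustrative.

```python
from itertools import product

def conjunctive(m1, m2):
    """Conjunctive combination of two basic belief assignments given as
    {frozenset focal element: mass}; the mass assigned to frozenset()
    measures the conflict between the two sources."""
    out = {}
    for (A, x), (B, y) in product(m1.items(), m2.items()):
        out[A & B] = out.get(A & B, 0.0) + x * y
    return out

A, B = frozenset({"a"}), frozenset({"b"})
m1 = {A: 0.7, A | B: 0.3}
m2 = {B: 0.6, A | B: 0.4}
print(conjunctive(m1, m2))  # conflict mass 0.42 on frozenset()
```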
|
General combination rules for qualitative and quantitative beliefs
| 541
|
Surveillance control and reporting (SCR) systems for air threats play an important role in the defense of a country. An SCR system performs air and ground situation management/processing along with information fusion, communication, coordination, simulation and other critical defense oriented tasks. Threat Evaluation and Weapon Assignment (TEWA) sits at the core of an SCR system. In such a system, maximal or near-maximal utilization of constrained resources is of extreme importance. Manual TEWA systems cannot provide optimality because of various limitations, e.g. a surface-to-air missile (SAM) can fire from a distance of 5 km, but manual TEWA systems are constrained by human vision range and other factors. Current TEWA systems usually work on a target-by-target basis using some type of greedy algorithm, thus affecting the optimality of the solution and failing in multi-target scenarios. This paper presents a novel two-staged flexible dynamic decision support based optimal threat evaluation and weapon assignment algorithm for multi-target air-borne threats.
|
A Novel Two-Staged Decision Support based Threat Evaluation and Weapon
Assignment Algorithm, Asset-based Dynamic Weapon Scheduling using Artificial
Intelligence Techniques
| 542
|
Collective graphical models exploit inter-instance associative dependence to output more accurate labelings. However, existing models support only a limited kind of associativity, which restricts accuracy gains. This paper makes two major contributions. First, we propose a general collective inference framework that biases data instances to agree on a set of {\em properties} of their labelings. Agreement is encouraged through symmetric clique potentials. We show that rich properties lead to bigger gains, and present a systematic inference procedure for a large class of such properties. The procedure performs message passing on the cluster graph, where property-aware messages are computed with cluster-specific algorithms. This provides an inference-only solution for domain adaptation. Our experiments on bibliographic information extraction illustrate significant test error reduction over unseen domains. Our second major contribution consists of algorithms for computing outgoing messages from clique clusters with symmetric clique potentials. Our algorithms are exact for arbitrary symmetric potentials on binary labels and for max-like and majority-like potentials on multiple labels. For majority potentials, we also provide an efficient Lagrangian relaxation based algorithm that compares favorably with the exact algorithm. We present a 13/15-approximation algorithm for the NP-hard Potts potential, with runtime sub-quadratic in the clique size. In contrast, the best previously known guarantee for graphs with Potts potentials is only 1/2. We empirically show that our method for Potts potentials is an order of magnitude faster than the best alternatives, and that our Lagrangian relaxation based algorithm for majority potentials beats the best applicable heuristic, ICM.
|
Generalized Collective Inference with Symmetric Clique Potentials
| 543
|
In this paper, we propose a first-order ontology for generalized stratified order structure. We then classify the models of the theory using model-theoretic techniques. An ontology mapping from this ontology to the core theory of Process Specification Language is also discussed.
|
Modelling Concurrent Behaviors in the Process Specification Language
| 544
|
The article presents a study of rather simple local search heuristics for the single machine total weighted tardiness problem (SMTWTP), namely hill-climbing and Variable Neighborhood Search. In particular, we revisit these approaches for the SMTWTP as there appears to be a lack of appropriate/challenging benchmark instances in this case. The obtained results are impressive indeed. Only a few instances remain unsolved, and even those are approximated within 1% of the optimal/best known solutions. Our experiments support the claim that metaheuristics for the SMTWTP are very likely to lead to good results, and that, before refining search strategies, more work must be done on the proposition of benchmark data. Some recommendations for the construction of such data sets are derived from our investigations.
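A minimal hill-climbing sketch for the SMTWTP over random pairwise swaps; the four-job instance is made up, not one of the benchmark instances discussed.

```python
import random

def twt(seq, p, w, d):
    """Total weighted tardiness of a job sequence on one machine."""
    t, total = 0, 0
    for j in seq:
        t += p[j]
        total += w[j] * max(0, t - d[j])
    return total

def hillclimb(seq, p, w, d, iters=10000, seed=0):
    """First-improvement hill-climbing over random pairwise swaps."""
    rng = random.Random(seed)
    best = twt(seq, p, w, d)
    for _ in range(iters):
        i, j = rng.sample(range(len(seq)), 2)
        seq[i], seq[j] = seq[j], seq[i]
        cost = twt(seq, p, w, d)
        if cost < best:
            best = cost
        else:
            seq[i], seq[j] = seq[j], seq[i]  # undo the non-improving swap
    return seq, best

p, w, d = [3, 2, 4, 1], [2, 1, 3, 2], [4, 6, 5, 3]  # times, weights, due dates
print(hillclimb(list(range(4)), p, w, d))
```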
|
The Single Machine Total Weighted Tardiness Problem - Is it (for
Metaheuristics) a Solved Problem?
| 545
|
The article describes the proposition and application of a local search metaheuristic for multi-objective optimization problems. It is based on two main principles of heuristic search: intensification through variable neighborhoods, and diversification through perturbations and successive iterations in favorable regions of the search space. The concept is successfully tested on permutation flow shop scheduling problems under multiple objectives and compared to other local search approaches. While the obtained results are encouraging in terms of their quality, another positive attribute of the approach is its simplicity, as it requires the setting of only very few parameters.
|
Improvements for multi-objective flow shop scheduling by Pareto Iterated
Local Search
| 546
|
This paper discusses "computational" systems capable of "computing" functions not computable by predefined Turing machines if the systems are not isolated from their environment. Roughly speaking, these systems can change their finite descriptions by interacting with their environment.
|
Beyond Turing Machines
| 547
|
I propose that pattern recognition, memorization and processing are key concepts that can form a set of principles for the theoretical modeling of the mind's function. Most of the questions about the functioning of the mind can be answered by descriptive modeling and definitions derived from these principles. An understandable definition of consciousness can be drawn from the assumption that a pattern recognition system can recognize its own patterns of activity. The principles, descriptive modeling and definitions can be a basis for theoretical and applied research in the cognitive sciences, particularly in artificial intelligence studies.
|
Pattern Recognition Theory of Mind
| 548
|
Restart strategies are an important factor in the performance of conflict-driven Davis Putnam style SAT solvers. Selecting a good restart strategy for a problem instance can enhance the performance of a solver. Inspired by recent success applying machine learning techniques to predict the runtime of SAT solvers, we present a method which uses machine learning to boost solver performance through a smart selection of the restart strategy. Based on easy to compute features, we train both a satisfiability classifier and runtime models. We use these models to choose between restart strategies. We present experimental results comparing this technique with the most commonly used restart strategies. Our results demonstrate that machine learning is effective in improving solver performance.
|
Restart Strategy Selection using Machine Learning Techniques
| 549
|
We present two different methods for estimating the cost of solving SAT problems. The methods focus on the online behaviour of the backtracking solver, as well as the structure of the problem. Modern SAT solvers present several challenges for estimating search cost, including coping with non-chronological backtracking, learning and restarts. Our first method adapts an existing algorithm for estimating the size of a search tree to deal with these challenges. We then suggest a second method that uses a linear model trained on data gathered online at the start of search. We compare the effectiveness of these two methods using random and structured problems. We also demonstrate that predictions made in early restarts can be used to improve later predictions. We conclude by showing that the cost of solving a set of problems can be reduced by selecting a solver from a portfolio based on such cost estimations.
|
Online Search Cost Estimation for SAT Solvers
| 550
|
Classification is the basis of cognition. Unlike other solutions, this study approaches it from the viewpoint of outliers. We present an expanding algorithm to detect outliers in univariate datasets, together with its underlying foundation. The expanding algorithm runs in a holistic way, making it a rather robust solution. Synthetic and real data experiments show its power. Furthermore, an application to multi-class problems leads to the introduction of the oscillator algorithm. The corresponding results imply the potential for wide use of the expanding algorithm.
|
On Classification from Outlier View
| 551
|
We consider a sequence of repeated interactions between an agent and an environment. Uncertainty about the environment is captured by a probability distribution over a space of hypotheses, which includes all computable functions. Given a utility function, we can evaluate the expected utility of any computational policy for interaction with the environment. After making some plausible assumptions (and maybe one not-so-plausible assumption), we show that if the utility function is unbounded, then the expected utility of any policy is undefined.
|
Convergence of Expected Utility for Universal AI
| 552
|
This study describes the application of some approximate reasoning methods to the analysis of hydrocyclone performance. Crisp and fuzzy granules are obtained using combinations of the Self-Organizing Map (SOM) with a Neuro-Fuzzy Inference System (NFIS), called SONFIS, and with Rough Set Theory (RST), called SORST. The balancing of crisp granules and non-crisp granules can be implemented in a close-open iteration. Using different criteria, and based on the granulation level, a balance point (interval) or a pseudo-balance point is estimated. The proposed methods are validated on the hydrocyclone data set.
|
Knowledge Discovery of Hydrocyclone's Circuit Based on SONFIS and SORST
| 553
|
When implementing a propagator for a constraint, one must decide about variants: When implementing min, should one also implement max? Should one implement linear constraints both with unit and non-unit coefficients? Constraint variants are ubiquitous: implementing them requires considerable (if not prohibitive) effort and decreases maintainability, but will deliver better performance than resorting to constraint decomposition. This paper shows how to use views to derive perfect propagator variants. A model for views and derived propagators is introduced. Derived propagators are proved to be indeed perfect in that they inherit essential properties such as correctness and domain and bounds consistency. Techniques for systematically deriving propagators such as transformation, generalization, specialization, and type conversion are developed. The paper introduces an implementation architecture for views that is independent of the underlying constraint programming system. A detailed evaluation of views implemented in Gecode shows that derived propagators are efficient and that views often incur no overhead. Without views, Gecode would either require 180 000 rather than 40 000 lines of propagator code, or would lack many efficient propagator variants. Compared to 8 000 lines of code for views, the reduction in code for propagators yields a 1750% return on investment.
|
View-based Propagator Derivation
| 554
|
The explorative mind-map is a dynamic framework that emerges automatically from the input it gets. It is unlike a verificative modeling system, where existing (human) thoughts are placed and connected together. In this regard, explorative mind-maps change their size continuously, are adaptive with connectionist cells inside, process data input incrementally, and offer many possibilities to interact with the user through an appropriate communication interface. With respect to a cognitively motivated situation like a conversation between partners, mind-maps become interesting as they are able to process stimulating signals whenever they occur. If these signals are close to an own understanding of the world, then the conversational partner automatically becomes more trusted than if the signals match the own knowledge scheme only partially or not at all. In this (position) paper, we therefore motivate explorative mind-maps as a cognitive engine and propose them as a decision support engine to foster trust.
|
A Cognitive Mind-map Framework to Foster Trust
| 555
|
To capture the uncertainty of information or knowledge in information systems, various information granulations, also known as knowledge granulations, have been proposed. Recently, several axiomatic definitions of information granulation have been introduced. In this paper, we try to improve these axiomatic definitions and give a universal construction of information granulation by relating information granulations to a class of functions of multiple variables. We show that the improved axiomatic definition has some concrete information granulations from the literature as instances.
|
An improved axiomatic definition of information granulation
| 556
|
Current research on qualitative spatial representation and reasoning mainly focuses on one single aspect of space. In real world applications, however, multiple spatial aspects are often involved simultaneously. This paper investigates problems arising in reasoning with combined topological and directional information. We use the RCC8 algebra and the Rectangle Algebra (RA) for expressing topological and directional information respectively. We give examples to show that the bipath-consistency algorithm BIPATH is incomplete for solving even basic RCC8 and RA constraints. If topological constraints are taken from some maximal tractable subclasses of RCC8, and directional constraints are taken from a subalgebra, termed DIR49, of RA, then we show that BIPATH is able to separate topological constraints from directional ones. This means, given a set of hybrid topological and directional constraints from the above subclasses of RCC8 and RA, we can transfer the joint satisfaction problem in polynomial time to two independent satisfaction problems in RCC8 and RA. For general RA constraints, we give a method to compute solutions that satisfy all topological constraints and approximately satisfy each RA constraint to any prescribed precision.
|
Reasoning with Topological and Directional Spatial Information
| 557
|
Direction relations between extended spatial objects are important commonsense knowledge. Recently, Goyal and Egenhofer proposed a formal model, known as Cardinal Direction Calculus (CDC), for representing direction relations between connected plane regions. CDC is perhaps the most expressive qualitative calculus for directional information, and has attracted increasing interest from areas such as artificial intelligence, geographical information science, and image retrieval. Given a network of CDC constraints, the consistency problem is deciding if the network is realizable by connected regions in the real plane. This paper provides a cubic algorithm for checking consistency of basic CDC constraint networks, and proves that reasoning with CDC is in general an NP-Complete problem. For a consistent network of basic CDC constraints, our algorithm also returns a 'canonical' solution in cubic time. This cubic algorithm is also adapted to cope with cardinal directions between possibly disconnected regions, in which case currently the best algorithm is of time complexity O(n^5).
|
Reasoning about Cardinal Directions between Extended Objects
| 558
|
In this paper, we address the problem of generating preferred plans by combining the procedural control knowledge specified by Hierarchical Task Networks (HTNs) with rich qualitative user preferences. The outcome of our work is a language for specifying user preferences, tailored to HTN planning, together with a provably optimal preference-based planner, HTNPLAN, that is implemented as an extension of SHOP2. To compute preferred plans, we propose an approach based on forward-chaining heuristic search. Our heuristic uses an admissible evaluation function measuring the satisfaction of preferences over partial plans. Our empirical evaluation demonstrates the effectiveness of our HTNPLAN heuristics. We prove our approach sound and optimal with respect to the plans it generates by appealing to a situation calculus semantics of our preference language and of HTN planning. While our implementation builds on SHOP2, the language and techniques proposed here are relevant to a broad range of HTN planners.
|
On Planning with Preferences in HTN
| 559
|
We study the notion of informedness in a client-consultant setting. Using a software simulator, we examine the extent to which it pays off for consultants to provide their clients with advice that is well-informed, or with advice that is merely meant to appear to be well-informed. The latter strategy is beneficial in that it costs less resources to keep up-to-date, but carries the risk of a decreased reputation if the clients discover the low level of informedness of the consultant. Our experimental results indicate that under different circumstances, different strategies yield the optimal results (net profit) for the consultants.
|
Assessing the Impact of Informedness on a Consultant's Profit
| 560
|
We describe in this article a multiagent urban traffic simulation, as we believe individual-based modeling is necessary to encompass the complex influence the actions of an individual vehicle can have on the overall flow of vehicles. We first describe how we build a graph description of the network from purely geometric data, ESRI shapefiles. We then explain how we include traffic-related data in this graph. We continue with the model of the vehicle agents: origin and destination, driving behavior, multiple lanes, crossroads, and interactions with the other vehicles in day-to-day, "ordinary" traffic. We conclude with the presentation of the resulting simulation of this model on the Rouen agglomeration.
|
A multiagent urban traffic simulation Part I: dealing with the ordinary
| 561
|
2007 saw the first international congress on the "square of oppositions". A first attempt to structure debate using n-opposition theory was presented along with the results of a first experiment on the web. Our proposal in this paper is to define relations between arguments through a structure of opposition (the square of oppositions is one such structure). We will try to answer the following questions: How to organize debates on the web 2.0? How to structure them in a logical way? What is the role of n-opposition theory in this context? We present in this paper the results of three experiments (Betapolitique 2007, ECAP 2008, Intermed 2008).
|
n-Opposition theory to structure debates
| 562
|
We propose Interactive Differential Evolution (IDE) based on paired comparisons for reducing user fatigue, and evaluate its convergence speed in comparison with Interactive Genetic Algorithms (IGA) and tournament IGA. User interface and convergence performance are two key factors for reducing Interactive Evolutionary Computation (IEC) user fatigue. Unlike IGA and conventional IDE, users of the proposed IDE and of tournament IGA do not need to compare all individuals with each other, but only pairs of individuals, which largely decreases user fatigue. In this paper, we design a pseudo-IEC user and evaluate the other factor, IEC convergence performance, using IEC simulators, and show that our proposed IDE converges significantly faster than IGA and tournament IGA; i.e., our proposal is superior to the others from both the user interface and the convergence performance points of view.
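A sketch of one IDE generation with DE/rand/1/bin variation where selection is a paired comparison; the pseudo-user preferring vectors near a hidden target is a stand-in for the human in the loop, in the spirit of the paper's simulator (all concrete details here are assumptions).

```python
import random

def ide_step(pop, prefer, F=0.8, CR=0.9, rng=random.Random(0)):
    """One Interactive DE generation: build each trial vector by
    DE/rand/1/bin, then keep trial or parent according to a single
    paired comparison `prefer(trial, parent)`."""
    dim = len(pop[0])
    new_pop = []
    for i, x in enumerate(pop):
        a, b, c = rng.sample([p for k, p in enumerate(pop) if k != i], 3)
        jrand = rng.randrange(dim)
        trial = [a[j] + F * (b[j] - c[j])
                 if rng.random() < CR or j == jrand else x[j]
                 for j in range(dim)]
        new_pop.append(trial if prefer(trial, x) else x)
    return new_pop

# Pseudo-user: prefers candidates closer to a hidden target vector.
target = [0.3, -0.5]
dist = lambda u: sum((ui - ti) ** 2 for ui, ti in zip(u, target))
prefer = lambda u, v: dist(u) < dist(v)
pop = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(8)]
for _ in range(50):
    pop = ide_step(pop, prefer)
```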
|
Paired Comparisons-based Interactive Differential Evolution
| 563
|
This paper describes the application of information granulation theory to the back analysis of the southeast wall of the Jeffrey mine, Quebec. Crisp and rough granules are obtained using a combination of the Self-Organizing Map (SOM) and Rough Set Theory (RST). The balancing of crisp granules and sub-rough granules is rendered in a close-open iteration. The combination of hard and soft computing, namely the finite difference method (FDM) and computational intelligence, and the ability to take missing information into account are the two main benefits of the proposed method. As a practical example, a reverse analysis of the failure of the southeast wall of the Jeffrey mine is accomplished.
|
Back analysis based on SOM-RST system
| 564
|
Fault diagnosis has become a very important area of research during the last decade due to the advancement of mechanical and electrical systems in industries. The automobile is a crucial field where fault diagnosis is given special attention. Due to the increasing complexity of and newly added features in vehicles, a comprehensive study has to be performed in order to achieve an appropriate diagnosis model. A diagnosis system is capable of identifying the faults of a system by investigating the observable effects (or symptoms). The system categorizes the fault into a diagnosis class and identifies a probable cause based on the supplied fault symptoms. Fault categorization and identification are done using similarity matching techniques. The development of diagnosis classes is done by making use of previous experience, knowledge or information within an application area. The necessary information may come from several sources of knowledge, such as system analysis. In this paper, similarity matching techniques for fault diagnosis in automotive infotainment applications are discussed.
|
Similarity Matching Techniques for Fault Diagnosis in Automotive
Infotainment Electronics
| 565
|
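The abstract does not fix a particular similarity measure, so the sketch below uses Jaccard similarity between symptom sets as one plausible instantiation; the diagnosis classes and symptoms are hypothetical examples, not taken from the paper.

```python
def jaccard(a, b):
    """Set-overlap similarity in [0, 1]."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

# Hypothetical diagnosis classes for an infotainment unit.
classes = {
    "radio antenna fault": {"no FM reception", "AM noisy", "display ok"},
    "amplifier fault": {"no sound", "display ok", "radio tunes"},
    "head unit power fault": {"no sound", "display blank", "no boot"},
}

def diagnose(observed):
    """Match observed symptoms to the most similar diagnosis class."""
    return max(classes, key=lambda c: jaccard(observed, classes[c]))

print(diagnose({"no sound", "display blank"}))  # -> "head unit power fault"
```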
Diverse recommendation techniques have been already proposed and encapsulated into several e-business applications, aiming to perform a more accurate evaluation of the existing information and accordingly augment the assistance provided to the users involved. This paper reports on the development and integration of a recommendation module in an agent-based transportation transactions management system. The module is built according to a novel hybrid recommendation technique, which combines the advantages of collaborative filtering and knowledge-based approaches. The proposed technique and supporting module assist customers in considering in detail alternative transportation transactions that satisfy their requests, as well as in evaluating completed transactions. The related services are invoked through a software agent that constructs the appropriate knowledge rules and performs a synthesis of the recommendation policy.
|
Performing Hybrid Recommendation in Intermodal Transportation-the
FTMarket System's Recommendation Module
| 566
|
We study decompositions of NVALUE, a global constraint that can be used to model a wide range of problems where values need to be counted. Whilst decomposition typically hinders propagation, we identify one decomposition that maintains a global view, as enforcing bound consistency on the decomposition achieves bound consistency on the original global NVALUE constraint. Such decompositions offer the prospect of advanced solving techniques like nogood learning and impact-based branching heuristics. They may also help SAT and IP solvers take advantage of the propagation of global constraints.
|
Decomposition of the NVALUE constraint
| 567
|
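The abstract does not spell out the decomposition here; as one common illustration (an assumption on my part, not necessarily the decomposition the paper identifies), NVALUE can be channelled through Boolean "value occurs" flags. A ground checker makes the idea concrete:

```python
from itertools import product

def nvalue_decomposed(xs, values):
    """NVALUE via the occurrence decomposition:
    B_v <-> exists i. x_i = v, and N = sum_v B_v."""
    used = {v: int(any(x == v for x in xs)) for v in values}
    return sum(used.values())

# Ground check: the decomposition agrees with NVALUE on all assignments.
values = [1, 2, 3]
for xs in product(values, repeat=3):
    assert nvalue_decomposed(xs, values) == len(set(xs))
```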
Symmetry is an important feature of many constraint programs. We show that any symmetry acting on a set of symmetry breaking constraints can be used to break symmetry. Different symmetries pick out different solutions in each symmetry class. We use these observations in two methods for eliminating symmetry from a problem. These methods are designed to have many of the advantages of symmetry breaking methods that post static symmetry breaking constraint without some of the disadvantages. In particular, the two methods prune the search space using fast and efficient propagation of posted constraints, whilst reducing the conflict between symmetry breaking and branching heuristics. Experimental results show that the two methods perform well on some standard benchmarks.
|
Symmetries of Symmetry Breaking Constraints
| 568
|
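As a concrete instance of static symmetry breaking, the classical lex-leader scheme posts X <=_lex sigma(X) for each symmetry sigma; the observation above is that composing such a set of constraints with any symmetry yields another sound set, selecting different representatives. A minimal ground-level sketch, assuming full variable interchangeability on three variables:

```python
from itertools import permutations

def lex_leader_ok(x, sigma):
    """Static symmetry breaking: keep x only if x <=_lex sigma(x)."""
    return list(x) <= [x[i] for i in sigma]

# Full variable symmetry on 3 variables; each symmetry class of
# assignments keeps exactly its lexicographically least member.
syms = list(permutations(range(3)))
sols = [x for x in permutations([0, 1, 2])
        if all(lex_leader_ok(x, s) for s in syms)]
print(sols)  # [(0, 1, 2)]
```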
Fuzzy constraints are a popular approach to handle preferences and over-constrained problems in scenarios where one needs to be cautious, such as in medical or space applications. We consider here fuzzy constraint problems where some of the preferences may be missing. This models, for example, settings where agents are distributed and have privacy issues, or where there is an ongoing preference elicitation process. In this setting, we study how to find a solution which is optimal irrespective of the missing preferences. In the process of finding such a solution, we may elicit preferences from the user if necessary. However, our goal is to ask the user as little as possible. We define a combined solving and preference elicitation scheme with a large number of different instantiations, each corresponding to a concrete algorithm which we compare experimentally. We compute both the number of elicited preferences and the "user effort", which may be larger, as it contains all the preference values the user has to compute to be able to respond to the elicitation requests. While the number of elicited preferences is important when the concern is to communicate as little information as possible, the user effort measures also the hidden work the user has to do to be able to communicate the elicited preferences. Our experimental results show that some of our algorithms are very good at finding a necessarily optimal solution while asking the user for only a very small fraction of the missing preferences. The user effort is also very small for the best algorithms. Finally, we test these algorithms on hard constraint problems with possibly missing constraints, where the aim is to find feasible solutions irrespective of the missing constraints.
|
Elicitation strategies for fuzzy constraint problems with missing
preferences: algorithms and experimental studies
| 569
|
We propose new filtering algorithms for the SEQUENCE constraint and some extensions of the SEQUENCE constraint based on network flows. We enforce domain consistency on the SEQUENCE constraint in $O(n^2)$ time down a branch of the search tree. This improves upon the best existing domain consistency algorithm by a factor of $O(\log n)$. The flows used in these algorithms are derived from a linear program. Some of them differ from the flows used to propagate global constraints like GCC since the domains of the variables are encoded as costs on the edges rather than capacities. Such flows are efficient for maintaining bounds consistency over large domains and may be useful for other global constraints.
|
Flow-Based Propagators for the SEQUENCE and Related Global Constraints
| 570
|
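SEQUENCE(l, u, q) requires every window of q consecutive 0/1 variables to contain between l and u ones. The checker below uses the partial-sum view (each window sum is a difference of prefix sums), a standard way to look at the constraint; the paper's flow-based propagators are more sophisticated than this sketch, which only fixes the semantics.

```python
def sequence_ok(xs, l, u, q):
    """Ground checker for SEQUENCE(l, u, q) over 0/1 variables, via
    prefix sums: S[i] = x_0 + ... + x_{i-1}, window sum = S[i+q] - S[i]."""
    s = [0]
    for x in xs:
        s.append(s[-1] + x)
    return all(l <= s[i + q] - s[i] <= u for i in range(len(xs) - q + 1))

assert sequence_ok([1, 0, 1, 0, 1, 0], l=1, u=2, q=3)
assert not sequence_ok([1, 1, 1, 0, 0, 0], l=1, u=2, q=3)
```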
We introduce the weighted CFG constraint and propose a propagation algorithm that enforces domain consistency in $O(n^3|G|)$ time. We show that this algorithm can be decomposed into a set of primitive arithmetic constraints without hindering propagation.
|
The Weighted CFG Constraint
| 571
|
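A weighted CFG constraint bounds the total weight of a parse of the sequence. The sketch below is only a ground-level illustration: a weighted CYK pass computing the minimum parse weight for a grammar in Chomsky normal form, not the paper's domain-consistency propagator (the grammar and weights are made up).

```python
import math

def min_weight_parse(word, unary, binary, start="S"):
    """Weighted CYK: unary[(A, a)] and binary[(A, B, C)] give rule weights;
    returns the minimum total weight of a parse of `word` from `start`."""
    n = len(word)
    best = [[{} for _ in range(n + 1)] for _ in range(n + 1)]
    for i, a in enumerate(word):                       # terminal rules
        for (A, sym), w in unary.items():
            if sym == a:
                best[i][i + 1][A] = min(best[i][i + 1].get(A, math.inf), w)
    for span in range(2, n + 1):                       # longer spans
        for i in range(n - span + 1):
            end = i + span
            for k in range(i + 1, end):
                for (A, B, C), w in binary.items():
                    if B in best[i][k] and C in best[k][end]:
                        cand = w + best[i][k][B] + best[k][end][C]
                        if cand < best[i][end].get(A, math.inf):
                            best[i][end][A] = cand
    return best[0][n].get(start, math.inf)

unary = {("A", "a"): 1, ("B", "b"): 2}
binary = {("S", "A", "B"): 0}
print(min_weight_parse("ab", unary, binary))  # 3
```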
Many models in the natural and social sciences consist of sets of interacting entities whose intensity of interaction decreases with distance. This often leads to structures of interest composed of dense packs of entities. Fast Multipole Methods (FMM) are a family of methods developed to speed up the computation of such models. We propose a method that builds upon FMM to detect and model the dense structures of these systems.
|
Building upon Fast Multipole Methods to Detect and Model Organizations
| 572
|
In Probabilistic Risk Management (PRM), risk is characterized by two quantities: the magnitude (or severity) of the adverse consequences that can potentially result from the given activity or action, and the likelihood of occurrence of these adverse consequences. But a risk seldom exists in isolation: chains of consequences must be examined, as the outcome of one risk can increase the likelihood of other risks. Systemic theory must therefore complement classic PRM. Indeed, these chains are composed of many different elements, all of which may have a critical importance at many different levels. Furthermore, when urban catastrophes are envisioned, space and time constraints are key determinants of the workings and dynamics of these chains of catastrophes: models must include a correct spatial topology of the studied risk. Finally, the literature insists on the importance that small events can have on risk at a greater scale: urban risk management models thus belong to self-organized criticality theory. We chose multiagent systems to incorporate this property in our model: the behavior of a single agent can transform the dynamics of large groups of them.
|
A multiagent urban traffic simulation. Part II: dealing with the
extraordinary
| 573
|
Constrained Optimum Path (COP) problems appear in many real-life applications, especially on communication networks. Some of these problems have been considered and solved by specific techniques which are usually difficult to extend. In this paper, we introduce a novel modeling for solving some COPs by local search. The modeling features compositionality, modularity and reuse, and strengthens the benefits of Constraint-Based Local Search. We also apply the modeling to the edge-disjoint paths problem (EDP). We show that side constraints can easily be added in the model. Computational results show the significance of the approach.
|
A Local Search Modeling for Constrained Optimum Paths Problems (Extended
Abstract)
| 574
|
Using constraint-based local search, we effectively model and efficiently solve the problem of balancing the traffic demands on portions of the European airspace while ensuring that their capacity constraints are satisfied. The traffic demand of a portion of airspace is the hourly number of flights planned to enter it, and its capacity is the upper bound on this number under which air-traffic controllers can work. Currently, the only form of demand-capacity balancing we allow is ground holding, that is the changing of the take-off times of not yet airborne flights. Experiments with projected European flight plans of the year 2030 show that already this first form of demand-capacity balancing is feasible without incurring too much total delay and that it can lead to a significantly better demand-capacity balance.
|
Dynamic Demand-Capacity Balancing for Air Traffic Management Using
Constraint-Based Local Search: First Results
| 575
|
Stochastic local search (SLS) has been an active field of research in the last few years, with new techniques and procedures being developed at an astonishing rate. SLS has traditionally been associated with satisfiability solving, that is, finding a solution for a given problem instance, as its intrinsic nature does not address unsatisfiable problems. Unsatisfiable instances were therefore commonly solved using backtrack search solvers. For this reason, in the late 90s Selman, Kautz and McAllester proposed a challenge to use local search instead to prove unsatisfiability. More recently, two SLS solvers, Ranger and Gunsat, have been developed, which are able to prove unsatisfiability despite being SLS solvers. In this paper, we first compare Ranger with Gunsat and then propose to improve Ranger's performance using some of Gunsat's techniques, namely unit propagation look-ahead and extended resolution.
|
On Improving Local Search for Unsatisfiability
| 576
|
This article introduces SatHyS (SAT HYbrid Solver), a novel hybrid approach for propositional satisfiability. It combines local search with a conflict driven clause learning (CDCL) scheme. Each time the local search part reaches a local minimum, the CDCL component is launched. For SAT problems it behaves like a tabu list, whereas for UNSAT ones the CDCL part tries to focus on a minimal unsatisfiable sub-formula (MUS). Experimental results show good performances on many classes of SAT instances from the last SAT competitions.
|
Integrating Conflict Driven Clause Learning to Local Search
| 577
|
In this paper, we investigate the hybridization of constraint programming and local search techniques within a large neighbourhood search scheme for solving highly constrained nurse rostering problems. A crucial part of large neighbourhood search is the selection of the fragment (the neighbourhood, i.e. the set of variables) to be relaxed and re-optimized iteratively. The success of the large neighbourhood search depends on the adequacy of the identified neighbourhood with regard to the problematic part of the solution assignment, and on the choice of the neighbourhood size. We investigate three strategies for choosing fragments of different sizes within the large neighbourhood search scheme. The first two strategies are tailored to the problem properties. The third strategy is more general, using the cost of soft constraint violations and their propagation as the indicator for choosing the variables added to the fragment. The three strategies are analyzed and compared on a benchmark nurse rostering problem. Promising results demonstrate the potential for future work on the hybrid approach.
|
A Constraint-directed Local Search Approach to Nurse Rostering Problems
| 578
|
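A generic skeleton of the scheme described above; the pluggable `pick_fragment` is exactly the design choice the paper compares (random vs. problem-tailored vs. cost-guided strategies), and `reoptimize` stands in for the CP search over the freed variables. Both callbacks, and the toy instantiation, are hypothetical placeholders rather than a rostering model.

```python
import random

def lns(solution, cost, reoptimize, pick_fragment, iters=200):
    """Generic large neighbourhood search: repeatedly free a fragment of
    variables and re-optimize it while the rest of the solution stays fixed."""
    best = dict(solution)
    for _ in range(iters):
        fragment = pick_fragment(best)           # the crucial design choice
        candidate = reoptimize(best, fragment)   # e.g. CP search on the fragment
        if cost(candidate) <= cost(best):
            best = candidate
    return best

# Toy instantiation: minimize the sum of integer variables;
# re-optimization greedily zeroes the freed fragment.
def random_fragment(sol, size=3):
    return random.sample(list(sol), size)

def greedy_reopt(sol, fragment):
    out = dict(sol)
    for v in fragment:
        out[v] = 0
    return out

start = {f"x{i}": random.randint(0, 9) for i in range(10)}
print(lns(start, lambda s: sum(s.values()), greedy_reopt, random_fragment))
```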
This paper presents a new method and a constraint-based objective function to solve two problems related to the design of optical telecommunication networks, namely the Synchronous Optical Network Ring Assignment Problem (SRAP) and the Intra-ring Synchronous Optical Network Design Problem (IDP). These network topology problems can be represented as a graph partitioning with capacity constraints as shown in previous works. We present here a new objective function and a new local search algorithm to solve these problems. Experiments conducted in Comet allow us to compare our method to previous ones and show that we obtain better results.
|
Sonet Network Design Problems
| 579
|
We explore the use of the Cell Broadband Engine (Cell/BE for short) for combinatorial optimization applications: we present a parallel version of a constraint-based local search algorithm that has been implemented on a multiprocessor BladeCenter machine with twin Cell/BE processors (total of 16 SPUs per blade). This algorithm was chosen because it fits the Cell/BE architecture very well and requires neither shared memory nor communication between processors, while retaining a compact memory footprint. We study the performance on several large optimization benchmarks and show that the approach achieves mostly linear speedups, sometimes even super-linear ones. This is possible because the parallel implementation may explore different parts of the search space simultaneously and therefore converge faster towards the best sub-space and thus towards a solution. Besides yielding speedups, the resulting times exhibit a much smaller variance, which benefits applications where a timely reply is critical.
|
Parallel local search for solving Constraint Problems on the Cell
Broadband Engine (Preliminary Results)
| 580
|
We explore the idea of using finite automata to implement new constraints for local search (this is already a successful technique in constraint-based global search). We show how it is possible to maintain incrementally the violations of a constraint and its decision variables from an automaton that describes a ground checker for that constraint. We establish the practicality of our approach on real-life personnel rostering problems, and show that it is competitive with the approach of [Pralong, 2007].
|
Toward an automaton Constraint for Local Search
| 581
|
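One way to obtain a violation degree from a ground-checker automaton is the minimum number of variable changes needed to reach an accepted string, computable by dynamic programming over (position, state) pairs. The paper maintains such information incrementally; the sketch below recomputes it from scratch for clarity.

```python
import math

def violation(dfa, accepting, start, word, alphabet):
    """Min number of positions of `word` that must change so the DFA accepts.
    dfa[(state, symbol)] -> next state."""
    dist = {start: 0}                       # state -> fewest changes so far
    for ch in word:
        nxt = {}
        for state, d in dist.items():
            for sym in alphabet:
                cost = d + (sym != ch)      # pay 1 to substitute a symbol
                s2 = dfa[(state, sym)]
                if cost < nxt.get(s2, math.inf):
                    nxt[s2] = cost
        dist = nxt
    return min((d for s, d in dist.items() if s in accepting), default=math.inf)

# Example checker: an even number of 'b's (state 0 even/accepting, 1 odd).
dfa = {(0, 'a'): 0, (0, 'b'): 1, (1, 'a'): 1, (1, 'b'): 0}
print(violation(dfa, {0}, 0, "abb", "ab"))  # 0: already accepted
print(violation(dfa, {0}, 0, "ab",  "ab"))  # 1: one change suffices
```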
LSCS is a satellite workshop of the international conference on principles and practice of Constraint Programming (CP), since 2004. It is devoted to local search techniques in constraint satisfaction, and focuses on all aspects of local search techniques, including: design and implementation of new algorithms, hybrid stochastic-systematic search, reactive search optimization, adaptive search, modeling for local-search, global constraints, flexibility and robustness, learning methods, and specific applications.
|
Proceedings 6th International Workshop on Local Search Techniques in
Constraint Satisfaction
| 582
|
In this paper, the behavior of three combination rules for temporal/sequential attribute data fusion for target type estimation is analyzed. The comparative analysis is based on: Dempster's fusion rule, proposed in Dempster-Shafer Theory; the Proportional Conflict Redistribution rule no. 5 (PCR5), proposed in Dezert-Smarandache Theory; and one alternative class fusion rule, connecting the combination rules for information fusion with particular fuzzy operators, focusing on the t-norm based Conjunctive rule as an analog of the ordinary conjunctive rule and the t-conorm based Disjunctive rule as an analog of the ordinary disjunctive rule. The way different t-conorm and t-norm functions within the TCN fusion rule influence target type estimation performance is studied and evaluated.
|
Tracking object's type changes with fuzzy based fusion rule
| 583
|
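A simplified sketch of a t-norm based conjunctive combination of two basic belief assignments (here with min as the t-norm). The actual TCN rule handles conflict redistribution more carefully than the plain normalization used here, so treat this only as an illustration of where the t-norm enters; the example masses are made up.

```python
def tcn_conjunctive(m1, m2, tnorm=min):
    """T-norm based conjunctive combination of two basic belief assignments
    over the same frame; focal elements are frozensets of hypotheses."""
    combined, conflict = {}, 0.0
    for a, w1 in m1.items():
        for b, w2 in m2.items():
            c = a & b
            w = tnorm(w1, w2)               # t-norm replaces the product
            if c:
                combined[c] = combined.get(c, 0.0) + w
            else:
                conflict += w               # mass on the empty set
    total = sum(combined.values())
    return {c: w / total for c, w in combined.items()} if total else {}

m1 = {frozenset({"fighter"}): 0.7, frozenset({"cargo"}): 0.3}
m2 = {frozenset({"fighter"}): 0.4, frozenset({"fighter", "cargo"}): 0.6}
print(tcn_conjunctive(m1, m2))
```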
This paper proposes the application of particle swarm optimization (PSO) to the problem of finite element model (FEM) selection. This problem arises when a choice of the best model for a system has to be made from a set of competing models, each developed a priori from engineering judgment. PSO is a population-based stochastic search algorithm inspired by the behaviour of biological entities in nature when they are foraging for resources. Each potentially correct model is represented as a particle that exhibits both individualistic and group behaviour. Each particle moves within the model search space looking for the best solution by updating the parameter values that define it. The most important step in the particle swarm algorithm is the method of representing models, which should take into account the number, location and variables of the parameters to be updated. One example structural system is used to show the applicability of PSO to finding an optimal FEM. An optimal model is defined as the model that has the least number of updated parameters and the smallest parameter variation from the mean material properties. Two different objective functions are used to compare the performance of the PSO algorithm.
|
Finite element model selection using Particle Swarm Optimization
| 584
|
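A minimal global-best PSO loop for reference; in the paper each particle encodes a candidate FEM (its updated parameters) and the objectives are problem-specific, whereas the toy objective below is just a stand-in quadratic.

```python
import random

def pso(objective, dim, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Minimize `objective` with a basic global-best particle swarm."""
    pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # personal bests
    gbest = min(pbest, key=objective)           # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if objective(pos[i]) < objective(pbest[i]):
                pbest[i] = pos[i][:]
        gbest = min(pbest, key=objective)
    return gbest

print(pso(lambda x: sum(v * v for v in x), dim=3))  # typically near the origin
```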
Motivated by Zadeh's paradigm of computing with words rather than numbers, several formal models of computing with words have recently been proposed. These models are based on automata and thus are not well-suited for concurrent computing. In this paper, we incorporate the well-known model of concurrent computing, Petri nets, together with fuzzy set theory and thereby establish a concurrency model of computing with words--fuzzy Petri nets for computing with words (FPNCWs). The new feature of such fuzzy Petri nets is that the labels of transitions are some special words modeled by fuzzy sets. By employing the methodology of fuzzy reasoning, we give a faithful extension of an FPNCW which makes it possible for computing with more words. The language expressiveness of the two formal models of computing with words, fuzzy automata for computing with words and FPNCWs, is compared as well. A few small examples are provided to illustrate the theoretical development.
|
A Fuzzy Petri Nets Model for Computing With Words
| 585
|
Modelling emotion has become a challenge nowadays. Therefore, several models have been produced in order to express human emotional activity. However, only a few of them are currently able to express the close relationship existing between emotion and cognition. An appraisal-coping model is presented here, with the aim to simulate the emotional impact caused by the evaluation of a particular situation (appraisal), along with the consequent cognitive reaction intended to face the situation (coping). This model is applied to the "Cascades" problem, a small arithmetical exercise designed for ten-year-old pupils. The goal is to create a model corresponding to a child's behaviour when solving the problem using his own strategies.
|
Emotion: Appraisal-coping model for the "Cascades" problem
| 586
|
Modeling emotion has become a challenge nowadays. Therefore, several models have been produced in order to express human emotional activity. However, only a few of them are currently able to express the close relationship existing between emotion and cognition. An appraisal-coping model is presented here, with the aim to simulate the emotional impact caused by the evaluation of a particular situation (appraisal), along with the consequent cognitive reaction intended to face the situation (coping). This model is applied to the "Cascades" problem, a small arithmetical exercise designed for ten-year-old pupils. The goal is to create a model corresponding to a child's behavior when solving the problem using his own strategies.
|
Emotion : modèle d'appraisal-coping pour le problème des Cascades
| 587
|
Rough set theory, a mathematical tool to deal with inexact or uncertain knowledge in information systems, has originally described the indiscernibility of elements by equivalence relations. Covering rough sets are a natural extension of classical rough sets by relaxing the partitions arising from equivalence relations to coverings. Recently, some topological concepts such as neighborhood have been applied to covering rough sets. In this paper, we further investigate the covering rough sets based on neighborhoods by approximation operations. We show that the upper approximation based on neighborhoods can be defined equivalently without using neighborhoods. To analyze the coverings themselves, we introduce unary and composition operations on coverings. A notion of homomorphism is provided to relate two covering approximation spaces. We also examine the properties of approximations preserved by the operations and homomorphisms, respectively.
|
Covering rough sets based on neighborhoods: An approach without using
neighborhoods
| 588
|
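A small sketch of the central objects: the neighborhood N(x) as the intersection of all covering blocks containing x, and the neighborhood-based lower and upper approximations. Several non-equivalent variants of these operators exist in the covering rough set literature; this follows one common choice and is not necessarily the paper's exact definition.

```python
def neighborhood(x, cover):
    """N(x) = intersection of all covering blocks containing x
    (assumes `cover` really covers the universe, so x is in some block)."""
    blocks = [k for k in cover if x in k]
    out = set(blocks[0])
    for k in blocks[1:]:
        out &= k
    return out

def lower(target, universe, cover):
    return {x for x in universe if neighborhood(x, cover) <= target}

def upper(target, universe, cover):
    return {x for x in universe if neighborhood(x, cover) & target}

U = {1, 2, 3, 4}
C = [{1, 2}, {2, 3}, {3, 4}]   # a covering that is not a partition
X = {1, 2}
print(lower(X, U, C), upper(X, U, C))
```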
In Pawlak's rough set theory, a set is approximated by a pair of lower and upper approximations. To measure numerically the roughness of an approximation, Pawlak introduced a quantitative measure of roughness by using the ratio of the cardinalities of the lower and upper approximations. Although the roughness measure is effective, it has the drawback of not being strictly monotonic with respect to the standard ordering on partitions. Recently, some improvements have been made by taking into account the granularity of partitions. In this paper, we approach the roughness measure in an axiomatic way. After axiomatically defining roughness measure and partition measure, we provide a unified construction of roughness measure, called strong Pawlak roughness measure, and then explore the properties of this measure. We show that the improved roughness measures in the literature are special instances of our strong Pawlak roughness measure and introduce three more strong Pawlak roughness measures as well. The advantage of our axiomatic approach is that some properties of a roughness measure follow immediately as soon as the measure satisfies the relevant axiomatic definition.
|
An axiomatic approach to the roughness measure of rough sets
| 589
|
Adaptation has long been considered as the Achilles' heel of case-based reasoning since it requires some domain-specific knowledge that is difficult to acquire. In this paper, two strategies are combined in order to reduce the knowledge engineering cost induced by the adaptation knowledge (CA) acquisition task: CA is learned from the case base by means of knowledge discovery techniques, and the CA acquisition sessions are triggered opportunistically, i.e., at problem-solving time.
|
Opportunistic Adaptation Knowledge Discovery
| 590
|
Real-time heuristic search algorithms are suitable for situated agents that need to make their decisions in constant time. Since the original work by Korf nearly two decades ago, numerous extensions have been suggested. One of the most intriguing extensions is the idea of backtracking, wherein the agent decides to return to a previously visited state as opposed to moving forward greedily. This idea has been empirically shown to have a significant impact on various performance measures. The studies have been carried out in particular empirical testbeds with specific real-time search algorithms that use backtracking. Consequently, the extent to which the trends observed are characteristic of backtracking in general is unclear. In this paper, we present the first entirely theoretical study of backtracking in real-time heuristic search. In particular, we present upper bounds on the solution cost that are, respectively, exponential and linear in a parameter regulating the amount of backtracking. The results hold for a wide class of real-time heuristic search algorithms that includes many existing algorithms as a small subclass.
|
On Backtracking in Real-time Heuristic Search
| 591
|
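A schematic sketch in the spirit of the algorithms analyzed: LRTA*-style value updates plus an optional backtracking move taken when the heuristic was raised by more than a threshold T (the parameter regulating the amount of backtracking). It is meant to show where that parameter enters, not to reproduce any specific published algorithm.

```python
def rt_search(start, goal, neighbors, h, T=0):
    """Schematic real-time search with backtracking: update h, then move
    back to the predecessor whenever h rose by more than T, else forward."""
    path, trail, cur = [start], [], start
    while cur != goal:
        succ = min(neighbors(cur), key=lambda s: 1 + h[s])  # unit edge costs
        raised = 1 + h[succ] - h[cur]
        if raised > 0:
            h[cur] = 1 + h[succ]            # LRTA*-style learning step
        if raised > T and trail:
            cur = trail.pop()               # backtracking move
        else:
            trail.append(cur)
            cur = succ
        path.append(cur)
    return path

nbrs = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
h = {n: 0 for n in nbrs}                    # admissible zero heuristic
print(rt_search(0, 3, lambda s: nbrs[s], h))
```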
This paper presents several novel generalization bounds for the problem of learning kernels based on the analysis of the Rademacher complexity of the corresponding hypothesis sets. Our bound for learning kernels with a convex combination of p base kernels has only a log(p) dependency on the number of kernels, p, which is considerably more favorable than the previous best bound given for the same problem. We also give a novel bound for learning with a linear combination of p base kernels with an L_2 regularization whose dependency on p is only in p^{1/4}.
|
New Generalization Bounds for Learning Kernels
| 592
|
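For orientation, margin-based bounds of this kind typically take the following schematic shape; this is only the shape, with placeholder constants of my own, not the paper's exact statement.

```latex
% Schematic shape only (placeholder constants, not the paper's statement):
% with probability at least 1 - \delta over a sample of size m, for every
% hypothesis h induced by a convex combination of p base kernels,
R(h) \;\le\; \widehat{R}_{\rho}(h)
      \;+\; O\!\left(\sqrt{\frac{\log p}{\rho^{2}\, m}}\right)
      \;+\; O\!\left(\sqrt{\frac{\log(1/\delta)}{m}}\right)
```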
This paper formulates a necessary and sufficient condition for a generic graph matching problem to be equivalent to the maximum vertex and edge weight clique problem in a derived association graph. The consequences of this result are threefold: first, the condition is general enough to cover a broad range of practical graph matching problems; second, a proof to establish equivalence between graph matching and clique search reduces to showing that a given graph matching problem satisfies the proposed condition; and third, the result sets the scene for generic continuous solutions for a broad range of graph matching problems. To illustrate the mathematical framework, we apply it to a number of graph matching problems, including the problem of determining the graph edit distance.
|
A Necessary and Sufficient Condition for Graph Matching Being Equivalent
to the Maximum Weight Clique Problem
| 593
|
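The classical (unweighted) association-graph construction behind this equivalence: vertices are candidate node correspondences, and two correspondences are adjacent iff they are conflict-free and preserve (non-)adjacency, so that maximum cliques correspond to maximum common induced subgraph isomorphisms. The paper's contribution concerns when a weighted version of such a reduction is exact; the sketch shows only the classical construction.

```python
from itertools import combinations

def association_graph(g1, g2):
    """Vertices are pairs (i, j); two pairs are adjacent iff they are
    conflict-free and preserve (non-)adjacency across the two graphs.
    Graphs are given as adjacency dicts: node -> set of neighbours."""
    verts = [(i, j) for i in g1 for j in g2]
    edges = set()
    for (i, j), (k, l) in combinations(verts, 2):
        if i != k and j != l and ((k in g1[i]) == (l in g2[j])):
            edges.add(((i, j), (k, l)))
    return verts, edges

# Two triangles: every correspondence is consistent, so cliques of size 3
# in the association graph correspond to the 6 isomorphisms.
g1 = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
g2 = {"a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b"}}
verts, edges = association_graph(g1, g2)
print(len(verts), len(edges))  # 9 18
```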
This paper extends k-means algorithms from the Euclidean domain to the domain of graphs. To recompute the centroids, we apply subgradient methods for solving the optimization-based formulation of the sample mean of graphs. To accelerate the k-means algorithm for graphs without trading computational time against solution quality, we avoid unnecessary graph distance calculations by exploiting the triangle inequality of the underlying distance metric, following Elkan's k-means algorithm proposed in \cite{Elkan03}. In experiments we show that the accelerated k-means algorithm is faster than the standard k-means algorithm for graphs provided there is a cluster structure in the data.
|
Elkan's k-Means for Graphs
| 594
|
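The triangle-inequality pruning at the heart of Elkan-style acceleration: if d(c_best, c) >= 2 d(x, c_best), then c cannot be closer to x than c_best, so d(x, c) need not be computed. The argument uses only the metric axioms, which is what makes it applicable to graph distances, where each distance evaluation is expensive. The sketch below shows just a pruned assignment step with a pluggable metric, not the full algorithm with maintained upper and lower bounds.

```python
def assign_with_pruning(points, centroids, dist):
    """Assign each point to its nearest centroid, skipping distance
    computations ruled out by the triangle inequality (Elkan-style)."""
    cc = [[dist(a, b) for b in centroids] for a in centroids]
    labels, computed = [], 0
    for x in points:
        best, d_best = 0, dist(x, centroids[0])
        computed += 1
        for j in range(1, len(centroids)):
            if cc[best][j] >= 2 * d_best:
                continue                    # pruned: no distance call needed
            d = dist(x, centroids[j])
            computed += 1
            if d < d_best:
                best, d_best = j, d
        labels.append(best)
    return labels, computed

pts = [(0.0,), (0.1,), (9.9,), (10.0,)]
cents = [(0.0,), (10.0,)]
dist = lambda a, b: abs(a[0] - b[0])
print(assign_with_pruning(pts, cents, dist))  # ([0, 0, 1, 1], 6): 2 of 8 pruned
```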
Traditional staging is based on a formal approach to similarity relying on dramaturgical ontologies and instantiation variations. Inspired by interactive data mining, which suggests different approaches, we give an overview of computer science and theater research using computers as partners of the actor to escape the a priori specification of roles.
|
Similarité en intension vs en extension : à la croisée de
l'informatique et du théâtre
| 595
|
We address the problem of belief change in (nonmonotonic) logic programming under answer set semantics. Unlike previous approaches to belief change in logic programming, our formal techniques are analogous to those of distance-based belief revision in propositional logic. In developing our results, we build upon the model theory of logic programs furnished by SE models. Since SE models provide a formal, monotonic characterisation of logic programs, we can adapt techniques from the area of belief revision to belief change in logic programs. We introduce methods for revising and merging logic programs, respectively. For the former, we study both subset-based revision as well as cardinality-based revision, and we show that they satisfy the majority of the AGM postulates for revision. For merging, we consider operators following arbitration merging and IC merging, respectively. We also present encodings for computing the revision as well as the merging of logic programs within the same logic programming framework, giving rise to a direct implementation of our approach in terms of off-the-shelf answer set solvers. These encodings reflect in turn the fact that our change operators do not increase the complexity of the base formalism.
|
A general approach to belief change in answer set programming
| 596
|
Nearly 15 years ago, a set of qualitative spatial relations between oriented straight line segments (dipoles) was suggested by Schlieder. This work received substantial interest amongst the qualitative spatial reasoning community. However, it turned out to be difficult to establish a sound constraint calculus based on these relations. In this paper, we present the results of a new investigation into dipole constraint calculi which uses algebraic methods to derive sound results on the composition of relations and other properties of dipole calculi. Our results are based on a condensed semantics of the dipole relations. In contrast to the points that are normally used, dipoles are extended and have an intrinsic direction. Both features are important properties of natural objects. This allows for a straightforward representation of prototypical reasoning tasks for spatial agents. As an example, we show how to generate survey knowledge from local observations in a street network. The example illustrates the fast constraint-based reasoning capabilities of the dipole calculus. We integrate our results into two reasoning tools which are publicly available.
|
Oriented Straight Line Segment Algebra: Qualitative Spatial Reasoning
about Oriented Objects
| 597
|
In this paper we give a thorough presentation of a model proposed by Tononi et al. for modeling \emph{integrated information}, i.e. how much information is generated in a system transitioning from one state to the next by the causal interaction of its parts, \emph{above and beyond} the information given by the sum of its parts. We also provide a more general formulation of such a model, independent of the time chosen for the analysis and of the uniformity of the probability distribution at the initial time instant. Finally, we prove that integrated information is null for disconnected systems.
|
On a Model for Integrated Information
| 598
|
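For orientation, one common presentation of the quantities involved has the following shape; this is a schematic only, with a placeholder normalization K_P, and the paper's generalized formulation (which relaxes the choice of time and of the initial distribution) may differ.

```latex
% Schematic only: effective information of a partition P = {M^1,...,M^r}
% of system X, and integrated information as its minimum over partitions.
\mathrm{ei}(X; P, x) \;=\; D_{\mathrm{KL}}\!\left( p(X_0 \mid X_1 = x)
      \,\middle\|\, \prod_{k=1}^{r} p\!\left(M^k_0 \mid M^k_1 = x^{k}\right) \right),
\qquad
\varphi(x) \;=\; \min_{P}\; \frac{\mathrm{ei}(X; P, x)}{K_P}
```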
Vector quantization (VQ) is a lossy data compression technique from signal processing which is restricted to feature vectors and therefore inapplicable to combinatorial structures. This contribution presents a theoretical foundation of graph quantization (GQ) that extends VQ to the domain of attributed graphs. We present the necessary Lloyd-Max conditions for optimality of a graph quantizer and consistency results for optimal GQ design based on empirical distortion measures and stochastic optimization. These results statistically justify existing clustering algorithms in the domain of graphs. The proposed approach provides a template of how to link structural pattern recognition methods other than GQ to statistical pattern recognition.
|
Graph Quantization
| 599
|