Columns: text (string, lengths 17 to 3.36M), source (string, lengths 3 to 333), __index_level_0__ (int64, values 0 to 518k)
This paper presents a fault classification method which makes use of a Takagi-Sugeno neuro-fuzzy model and pseudomodal energies calculated from the vibration signals of cylindrical shells. The calculation of pseudomodal energies, for the purposes of condition monitoring, has previously been found to be an accurate method of extracting features from vibration signals. This calculation is therefore used to extract features from vibration signals obtained from a diverse population of cylindrical shells. Some of the cylinders in the population have faults in different substructures. The pseudomodal energies calculated from the vibration signals are then used as inputs to a neuro-fuzzy model. A leave-one-out cross-validation process is used to test the performance of the model. It is found that the neuro-fuzzy model is able to classify faults with an accuracy of 91.62%, which is higher than that of the previously used multilayer perceptron.
Fault Classification using Pseudomodal Energies and Neuro-fuzzy modelling
400
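A minimal sketch of the leave-one-out evaluation protocol described in the abstract above, assuming a matrix X of pseudomodal-energy features and fault labels y; a scikit-learn MLP stands in for the Takagi-Sugeno neuro-fuzzy model, which is not specified here, and the data are synthetic.

```python
# Leave-one-out cross-validation sketch; the real study uses vibration
# measurements from a population of cylindrical shells.
import numpy as np
from sklearn.model_selection import LeaveOneOut
from sklearn.neural_network import MLPClassifier

def leave_one_out_accuracy(X, y):
    correct = 0
    for train_idx, test_idx in LeaveOneOut().split(X):
        model = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
        model.fit(X[train_idx], y[train_idx])
        correct += int(model.predict(X[test_idx])[0] == y[test_idx][0])
    return correct / len(y)

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 8))          # 8 hypothetical pseudomodal-energy features
y = rng.integers(0, 3, size=40)       # hypothetical fault classes
print(leave_one_out_accuracy(X, y))
```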
This paper presents bushing condition monitoring frameworks that use multi-layer perceptron (MLP), radial basis function (RBF) and support vector machine (SVM) classifiers. The first level of the framework determines whether the bushing is faulty, while the second level determines the type of fault. The diagnostic gases in the bushings are analyzed using dissolved gas analysis. The MLP gives superior performance to the SVM and RBF in terms of accuracy and training time. In addition, an on-line bushing condition monitoring approach, which is able to adapt to newly acquired data, is introduced. This approach is able to accommodate new classes that are introduced by incoming data and is implemented using an incremental learning algorithm based on the MLP. The testing accuracy improved from 67.5% to 95.8% as new data were introduced, and from 60% to 95.3% as new conditions were introduced. On average, the confidence value of the framework in its decisions was 0.92.
On-Line Condition Monitoring using Computational Intelligence
401
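A minimal sketch of the two-level diagnostic framework described above, assuming DGA feature vectors X with a faulty/healthy label and a fault-type label; the MLP classifier is used for both levels, and all variable names and classes are hypothetical.

```python
# Level 1: faulty vs. healthy. Level 2: type of fault, trained on faulty cases only.
import numpy as np
from sklearn.neural_network import MLPClassifier

def train_two_level(X, y_faulty, y_type):
    level1 = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
    level1.fit(X, y_faulty)
    faulty = y_faulty == 1
    level2 = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
    level2.fit(X[faulty], y_type[faulty])
    return level1, level2

def diagnose(level1, level2, x):
    x = np.asarray(x).reshape(1, -1)
    if level1.predict(x)[0] == 0:
        return "healthy"
    return level2.predict(x)[0]

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 7))           # 7 hypothetical DGA gas features
y_faulty = rng.integers(0, 2, size=60)
y_type = rng.integers(1, 4, size=60)   # 3 hypothetical fault types
l1, l2 = train_two_level(X, y_faulty, y_type)
print(diagnose(l1, l2, X[0]))
```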
This paper overviews the basic principles and recent advances in the emerging field of Quantum Computation (QC), highlighting its potential application to Artificial Intelligence (AI). The paper provides a very brief introduction to basic QC issues like quantum registers, quantum gates and quantum algorithms and then it presents references, ideas and research guidelines on how QC can be used to deal with some basic AI problems, such as search and pattern matching, as soon as quantum computers become widely available.
The Road to Quantum Artificial Intelligence
402
Cluster matching by permuting cluster labels is important in many clustering contexts such as cluster validation and cluster ensemble techniques. The classic approach is to minimize the Euclidean distance between two cluster solutions, which induces inappropriate stability in certain settings. We therefore present the truematch algorithm, which introduces two improvements, best explained in the crisp case. First, instead of maximizing the trace of the cluster crosstable, we propose to maximize a chi-square transformation of this crosstable. Thus, the trace will not be dominated by the cells with the largest counts but by the cells with the most non-random observations, taking the marginals into account. Second, we suggest a probabilistic component in order to break ties and to make the matching algorithm truly random on random data. The truematch algorithm is designed as a building block of the truecluster framework and scales in polynomial time. First simulation results confirm that the truematch algorithm gives more consistent truecluster results for unequal cluster sizes. Free R software is available.
Truecluster matching
403
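A small sketch of the crosstable-matching idea described above: permute the labels of one clustering so that the trace of a chi-square-transformed crosstable is maximised. The signed cell contribution and the omission of the probabilistic tie-breaking step are simplifications relative to the published truematch algorithm.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_clusterings(labels_a, labels_b, k):
    # Crosstable of the two labelings.
    table = np.zeros((k, k))
    for a, b in zip(labels_a, labels_b):
        table[a, b] += 1
    # Chi-square-style transformation: deviation from the independence expectation,
    # so large cells only count if they exceed what the marginals alone would predict.
    expected = table.sum(1, keepdims=True) * table.sum(0, keepdims=True) / table.sum()
    score = np.sign(table - expected) * (table - expected) ** 2 / np.maximum(expected, 1e-12)
    # Label permutation maximising the trace of the transformed table.
    rows, cols = linear_sum_assignment(-score)
    return {int(b): int(a) for a, b in zip(rows, cols)}   # maps labels_b -> labels_a

labels_a = [0, 0, 1, 1, 2, 2, 2]
labels_b = [1, 1, 2, 2, 0, 0, 0]
print(match_clusterings(labels_a, labels_b, k=3))   # {1: 0, 2: 1, 0: 2}
```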
Semantic network research has seen a resurgence from its early history in the cognitive sciences with the inception of the Semantic Web initiative. The Semantic Web effort has brought forth an array of technologies that support the encoding, storage, and querying of the semantic network data structure at the world stage. Currently, the popular conception of the Semantic Web is that of a data modeling medium where real and conceptual entities are related in semantically meaningful ways. However, new models have emerged that explicitly encode procedural information within the semantic network substrate. With these new technologies, the Semantic Web has evolved from a data modeling medium to a computational medium. This article provides a classification of existing computational modeling efforts and the requirements of supporting technologies that will aid in the further growth of this burgeoning domain.
Modeling Computations in a Semantic Network
404
This paper describes a system capable of semi-automatically filling an XML template from free texts in the clinical domain (practice guidelines). The XML template includes semantic information not explicitly encoded in the text (pairs of conditions and actions/recommendations). Therefore, there is a need to compute the exact scope of conditions over text sequences expressing the required actions. We present a system developed for this task. We show that it yields good performance when applied to the analysis of French practice guidelines.
Automatically Restructuring Practice Guidelines using the GEM DTD
405
Representing and reasoning about qualitative temporal information is an essential part of many artificial intelligence tasks. Many models have been proposed in the literature for representing such temporal information; all derive from a point-based or an interval-based framework. One fundamental reasoning task that arises in applications of these frameworks is given by the following scheme: given possibly indefinite and incomplete knowledge of the binary relationships between some temporal objects, find the consistent scenarios between all these objects. All these models require transitive tables -- or, similarly, inference rules -- for solving such tasks. We have defined an alternative model, S-languages, to represent qualitative temporal information, based on only two relations, \emph{precedence} and \emph{simultaneity}. In this paper, we show how this model makes it possible to avoid transitive tables or inference rules when handling this kind of problem.
Temporal Reasoning without Transitive Tables
406
This paper is a survey of a large number of informal definitions of ``intelligence'' that the authors have collected over the years. Naturally, compiling a complete list would be impossible as many definitions of intelligence are buried deep inside articles and books. Nevertheless, the 70-odd definitions presented here are, to the authors' knowledge, the largest and most well referenced collection there is.
A Collection of Definitions of Intelligence
407
Web semantic access in specific domains calls for specialized search engines with enhanced semantic querying and indexing capacities, which pertain both to information retrieval (IR) and to information extraction (IE). A rich linguistic analysis is required either to identify the relevant semantic units to index and weight them according to linguistically specific statistical distributions, or as the basis of an information extraction process. Recent developments make Natural Language Processing (NLP) techniques reliable enough to process large collections of documents and to enrich them with semantic annotations. This paper focuses on the design and development of a text processing platform, Ogmios, which has been developed in the ALVIS project. The Ogmios platform exploits existing NLP modules and resources, which may be tuned to specific domains, and produces linguistically annotated documents. We show how the three constraints of genericity, domain semantic awareness and performance can all be handled together.
A Robust Linguistic Platform for Efficient and Domain specific Web Content Analysis
408
We consider the problem of finding an n-agent joint-policy for the optimal finite-horizon control of a decentralized Pomdp (Dec-Pomdp). This is a problem of very high complexity (NEXP-hard for n >= 2). In this paper, we propose a new mathematical programming approach for the problem. Our approach is based on two ideas: First, we represent each agent's policy in the sequence-form and not in the tree-form, thereby obtaining a very compact representation of the set of joint-policies. Second, using this compact representation, we solve this problem as an instance of combinatorial optimization for which we formulate a mixed integer linear program (MILP). The optimal solution of the MILP directly yields an optimal joint-policy for the Dec-Pomdp. Computational experience shows that formulating and solving the MILP requires significantly less time than existing algorithms on benchmark Dec-Pomdp problems. For example, the multi-agent tiger problem for horizon 4 is solved in 72 seconds with the MILP, whereas existing algorithms require several hours to solve it.
Mixed Integer Linear Programming For Exact Finite-Horizon Planning In Decentralized Pomdps
409
In this paper, we employ a Probabilistic Neural Network (PNN) with image and data processing techniques to implement a general-purpose automated leaf recognition algorithm. 12 leaf features are extracted and orthogonalized into 5 principal variables which constitute the input vector of the PNN. The PNN is trained on 1800 leaves to classify 32 kinds of plants with an accuracy greater than 90%. Compared with other approaches, our algorithm is an accurate artificial intelligence approach which is fast in execution and easy to implement.
A Leaf Recognition Algorithm for Plant Classification Using Probabilistic Neural Network
410
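A compact sketch of the pipeline described above (12 features, PCA to 5 principal variables, PNN classification), with the PNN written as a Gaussian Parzen-window classifier; the leaf-image feature extraction itself is omitted and the data below are synthetic.

```python
import numpy as np
from sklearn.decomposition import PCA

class PNN:
    def __init__(self, sigma=0.5):
        self.sigma = sigma

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.patterns_ = {c: X[y == c] for c in self.classes_}
        return self

    def predict(self, X):
        preds = []
        for x in X:
            # Averaged Gaussian kernel response of each class's pattern layer.
            scores = [np.exp(-np.sum((P - x) ** 2, axis=1) / (2 * self.sigma ** 2)).mean()
                      for P in (self.patterns_[c] for c in self.classes_)]
            preds.append(self.classes_[int(np.argmax(scores))])
        return np.array(preds)

rng = np.random.default_rng(0)
X, y = rng.normal(size=(90, 12)), rng.integers(0, 3, size=90)   # stand-in leaf data
Z = PCA(n_components=5).fit_transform(X)        # 12 features -> 5 principal variables
print(PNN(sigma=0.8).fit(Z, y).predict(Z[:5]))
```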
When Kurt Goedel laid the foundations of theoretical computer science in 1931, he also introduced essential concepts of the theory of Artificial Intelligence (AI). Although much of subsequent AI research has focused on heuristics, which still play a major role in many practical AI applications, in the new millennium AI theory has finally become a full-fledged formal science, with important optimality results for embodied agents living in unknown environments, obtained through a combination of theory a la Goedel and probability theory. Here we look back at important milestones of AI history, mention essential recent results, and speculate about what we may expect from the next 25 years, emphasizing the significance of the ongoing dramatic hardware speedups, and discussing Goedel-inspired, self-referential, self-improving universal problem solvers.
2006: Celebrating 75 years of AI - History and Outlook: the Next 25 Years
411
In this paper we extend the new family of (quantitative) Belief Conditioning Rules (BCR) recently developed in the Dezert-Smarandache Theory (DSmT) to their qualitative counterpart for belief revision. Since the revision of quantitative as well as qualitative belief assignments given the occurrence of a new event (the conditioning constraint) can be done in many possible ways, we present here only what we consider the most appealing Qualitative Belief Conditioning Rules (QBCR), which allow belief to be revised directly with words and linguistic labels and thus avoid the introduction of ad hoc translations of qualitative beliefs into quantitative ones for solving the problem.
Qualitative Belief Conditioning Rules (QBCR)
412
Many systems can be described in terms of networks of discrete elements and their various relationships to one another. A semantic network, or multi-relational network, is a directed labeled graph consisting of a heterogeneous set of entities connected by a heterogeneous set of relationships. Semantic networks serve as a promising general-purpose modeling substrate for complex systems. Various standardized formats and tools are now available to support practical, large-scale semantic network models. First, the Resource Description Framework (RDF) offers a standardized semantic network data model that can be further formalized by ontology modeling languages such as RDF Schema (RDFS) and the Web Ontology Language (OWL). Second, the recent introduction of highly performant triple-stores (i.e. semantic network databases) allows semantic network models on the order of $10^9$ edges to be efficiently stored and manipulated. RDF and its related technologies are currently used extensively in the domains of computer science, digital library science, and the biological sciences. This article will provide an introduction to RDF/RDFS/OWL and an examination of its suitability to model discrete element complex systems.
Using RDF to Model the Structure and Process of Systems
413
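A small illustration of the RDF/RDFS modelling discussed above, using the rdflib library; the namespace and entities are invented for the example.

```python
from rdflib import Graph, Namespace, Literal, RDF, RDFS

EX = Namespace("http://example.org/system#")
g = Graph()
g.bind("ex", EX)

# Schema layer (RDFS): a class of entities and a relationship between them.
g.add((EX.Component, RDF.type, RDFS.Class))
g.add((EX.dependsOn, RDF.type, RDF.Property))
g.add((EX.dependsOn, RDFS.domain, EX.Component))
g.add((EX.dependsOn, RDFS.range, EX.Component))

# Instance layer: a tiny heterogeneous network of typed entities and labeled edges.
g.add((EX.pump, RDF.type, EX.Component))
g.add((EX.valve, RDF.type, EX.Component))
g.add((EX.pump, EX.dependsOn, EX.valve))
g.add((EX.pump, RDFS.label, Literal("feed pump")))

# Query the semantic network with SPARQL.
for row in g.query("SELECT ?a ?b WHERE { ?a ex:dependsOn ?b }", initNs={"ex": EX}):
    print(row.a, "dependsOn", row.b)
```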
This paper deals with enriched qualitative belief functions for reasoning under uncertainty and for combining information expressed in natural language through linguistic labels. In this work, two possible enrichments (quantitative and/or qualitative) of linguistic labels are considered and operators (addition, multiplication, division, etc.) for dealing with them are proposed and explained. We denote them $qe$-operators, $qe$ standing for "qualitative-enriched" operators. These operators can be seen as a direct extension of the classical qualitative operators ($q$-operators) proposed recently in the Dezert-Smarandache Theory of plausible and paradoxist reasoning (DSmT). The $q$-operators are also justified in detail in this paper. The quantitative enrichment of a linguistic label is a numerical supporting degree in $[0,\infty)$, while the qualitative enrichment takes its values in a finite ordered set of linguistic values. Quantitative enrichment is less precise than qualitative enrichment, but it is expected to be closer to what human experts can easily provide when expressing linguistic labels with supporting degrees. Two simple examples are given to show how the fusion of qualitative-enriched belief assignments can be done.
Enrichment of Qualitative Beliefs for Reasoning under Uncertainty
414
We try to perform a geometrization of psychology by representing mental states, "ideas", by points of a metric space, the "mental space". The evolution of ideas is described by dynamical systems in the metric mental space. We apply the mental space approach to modeling flows of unconscious and conscious information in the human brain. In a series of models, Models 1-4, we consider cognitive systems with increasing complexity of psychological behavior determined by the structure of flows of ideas. Since our models are in fact models of the AI type, one immediately recognizes that they can be used for the creation of AI systems, which we call psycho-robots, exhibiting important elements of the human psyche. The creation of such psycho-robots may be a useful improvement to domestic robots. At the moment domestic robots are merely simple working devices (e.g. vacuum cleaners or lawn mowers). However, in the future one can expect demand for systems which would be able not only to perform simple work tasks, but would also have elements of a human self-developing psyche. Such an AI-psyche could play an important role both in relations between psycho-robots and their owners and in relations between psycho-robots. Since the presence of a huge number of psycho-complexes is an essential characteristic of human psychology, it would be interesting to model them in the AI framework.
Toward Psycho-robots
415
In this paper we study cellular automata (CAs) that perform the computational Majority task. This task is a good example of what the phenomenon of emergence in complex systems is. We take an interest in the reasons that make this particular fitness landscape a difficult one. The first goal is to study the landscape as such, and thus it is ideally independent from the actual heuristics used to search the space. However, a second goal is to understand the features a good search technique for this particular problem space should possess. We statistically quantify in various ways the degree of difficulty of searching this landscape. Due to neutrality, investigations based on sampling techniques on the whole landscape are difficult to conduct. So, we go exploring the landscape from the top. Although it has been proved that no CA can perform the task perfectly, several efficient CAs for this task have been found. Exploiting similarities between these CAs and symmetries in the landscape, we define the Olympus landscape which is regarded as the ''heavenly home'' of the best local optima known (blok). Then we measure several properties of this subspace. Although it is easier to find relevant CAs in this subspace than in the overall landscape, there are structural reasons that prevent a searcher from finding overfitted CAs in the Olympus. Finally, we study dynamics and performance of genetic algorithms on the Olympus in order to confirm our analysis and to find efficient CAs for the Majority problem with low computational cost.
Fitness landscape of the cellular automata majority problem: View from the Olympus
416
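A minimal sketch of the density-classification (majority) task for one-dimensional binary cellular automata discussed above, using the well-known hand-designed Gacs-Kurdyumov-Levin (GKL) rule as the CA; the paper itself studies the fitness landscape of evolved rules rather than this particular one.

```python
import numpy as np

def gkl_step(s):
    # GKL rule: a 0-cell takes the majority of itself and its neighbours 1 and 3
    # to the left; a 1-cell, of itself and its neighbours 1 and 3 to the right.
    left = lambda k: np.roll(s, k)
    right = lambda k: np.roll(s, -k)
    maj0 = (s + left(1) + left(3)) >= 2
    maj1 = (s + right(1) + right(3)) >= 2
    return np.where(s == 0, maj0, maj1).astype(int)

def classifies_majority(s0, steps=300):
    target = int(s0.mean() > 0.5)      # correct answer: the majority symbol
    s = s0.copy()
    for _ in range(steps):
        s = gkl_step(s)
    return np.all(s == target)         # True if the CA relaxed to the correct uniform state

rng = np.random.default_rng(1)
ics = [rng.integers(0, 2, size=149) for _ in range(100)]
print(sum(classifies_majority(s) for s in ics) / 100)   # fraction of ICs classified correctly
```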
This paper introduces the concept of fitness cloud as an alternative way to visualize and analyze search spaces to that given by the geographic notion of fitness landscape. It is argued that the fitness cloud concept overcomes several deficiencies of the landscape representation. Our analysis is based on the correlation between the fitness of solutions and the fitnesses of nearest solutions according to some neighborhood. We focus on the behavior of local search heuristics, such as hill climbers, on the well-known NK fitness landscape. In both cases the fitness vs. fitness correlation is shown to be related to the epistatic parameter K.
Local search heuristics: Fitness Cloud versus Fitness Landscape
417
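A small sketch of the fitness-cloud measurement described above on an NK landscape: for a sample of genotypes, correlate each solution's fitness with the fitness reached by one hill-climbing step; the landscape parameters below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
N, K = 16, 4
neighbours = np.array([rng.choice([j for j in range(N) if j != i], K, replace=False)
                       for i in range(N)])
tables = rng.random((N, 2 ** (K + 1)))          # random contribution tables per locus

def fitness(x):
    total = 0.0
    for i in range(N):
        bits = np.concatenate(([x[i]], x[neighbours[i]]))
        idx = int("".join(map(str, bits)), 2)   # index into locus i's table
        total += tables[i, idx]
    return total / N

def best_neighbour_fitness(x):
    best = -np.inf
    for i in range(N):
        y = x.copy(); y[i] ^= 1                 # one-bit-flip neighbour
        best = max(best, fitness(y))
    return best

xs = [rng.integers(0, 2, size=N) for _ in range(200)]
f = np.array([fitness(x) for x in xs])
fn = np.array([best_neighbour_fitness(x) for x in xs])
print("fitness vs. best-neighbour-fitness correlation:", np.corrcoef(f, fn)[0, 1])
```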
This theoretical work defines a measure of the autocorrelation of evolvability in the context of neutral fitness landscapes. The measure has been studied on the classical MAX-SAT problem. This work highlights a new characteristic of neutral fitness landscapes which makes it possible to design new, adapted metaheuristics.
Measuring the Evolvability Landscape to study Neutrality
418
This paper describes a system capable of semi-automatically filling an XML template from free texts in the clinical domain (practice guidelines). The XML template includes semantic information not explicitly encoded in the text (pairs of conditions and actions/recommendations). Therefore, there is a need to compute the exact scope of conditions over text sequences expressing the required actions. We present in this paper the rules developed for this task. We show that the system yields good performance when applied to the analysis of French practice guidelines.
From Texts to Structured Documents: The Case of Health Practice Guidelines
419
We develop a general framework for MAP estimation in discrete and Gaussian graphical models using Lagrangian relaxation techniques. The key idea is to reformulate an intractable estimation problem as one defined on a more tractable graph, but subject to additional constraints. Relaxing these constraints gives a tractable dual problem, one defined by a thin graph, which is then optimized by an iterative procedure. When this iterative optimization leads to a consistent estimate, one which also satisfies the constraints, then it corresponds to an optimal MAP estimate of the original model. Otherwise there is a ``duality gap'', and we obtain a bound on the optimal solution. Thus, our approach combines convex optimization with dynamic programming techniques applicable for thin graphs. The popular tree-reweighted max-product (TRMP) method may be seen as solving a particular class of such relaxations, where the intractable graph is relaxed to a set of spanning trees. We also consider relaxations to a set of small induced subgraphs, thin subgraphs (e.g. loops), and a connected tree obtained by ``unwinding'' cycles. In addition, we propose a new class of multiscale relaxations that introduce ``summary'' variables. The potential benefits of such generalizations include: reducing or eliminating the ``duality gap'' in hard problems, reducing the number of Lagrange multipliers in the dual problem, and accelerating convergence of the iterative optimization procedure.
Lagrangian Relaxation for MAP Estimation in Graphical Models
420
This paper addresses a method to analyze the covert social network foundation hidden behind a terrorism disaster. The method solves a node discovery problem: discovering a node which plays a relevant role in a social network but has escaped monitoring of the presence and mutual relationships of nodes. The method aims at integrating the expert investigator's prior understanding, insight into the nature of terrorists' social networks derived from complex graph theory, and computational data processing. The social network responsible for the 9/11 attack in 2001 is used in a simulation experiment to evaluate the performance of the method.
Analyzing covert social network foundation behind terrorism disaster
421
Methods to solve a node discovery problem for a social network are presented. Covert nodes refer to nodes which are not observable directly. They transmit influence and affect the resulting collaborative activities among the persons in a social network, but do not appear in the surveillance logs which record the participants of the collaborative activities. Discovering the covert nodes means identifying the suspicious logs where the covert nodes would appear if they became overt. The performance of the methods is demonstrated with a test dataset generated from computationally synthesized networks and a real organization.
Node discovery problem for a social network
422
An empty spot refers to an empty hard-to-fill space which can be found in the records of social interaction, and is the clue to the persons in the underlying social network who do not appear in the records. This contribution addresses the problem of predicting relevant empty spots in social interaction. Homogeneous and inhomogeneous networks are studied as models underlying the social interaction. A heuristic predictor function approach is presented as a new method to address the problem. A simulation experiment is demonstrated over a homogeneous network. Test data in the form of baskets are generated from the simulated communication. The precision of predicting the empty spots is calculated to demonstrate the performance of the presented approach.
Predicting relevant empty spots in social interaction
423
To appear in Theory and Practice of Logic Programming (TPLP), 2008. We are researching the interaction between the rule and the ontology layers of the Semantic Web, by comparing two options: 1) using OWL and its rule extension SWRL to develop an integrated ontology/rule language, and 2) layering rules on top of an ontology with RuleML and OWL. Toward this end, we are developing the SWORIER system, which enables efficient automated reasoning on ontologies and rules, by translating all of them into Prolog and adding a set of general rules that properly capture the semantics of OWL. We have also enabled the user to make dynamic changes on the fly, at run time. This work addresses several of the concerns expressed in previous work, such as negation, complementary classes, disjunctive heads, and cardinality, and it discusses alternative approaches for dealing with inconsistencies in the knowledge base. In addition, for efficiency, we implemented techniques called extensionalization, avoiding reanalysis, and code minimization.
Translating OWL and Semantic Web Rules into Prolog: Moving Toward Description Logic Programs
424
We consider hexagonal cellular automata with immediate cell neighbourhood and three cell-states. Every cell calculates its next state depending on the integral representation of states in its neighbourhood, i.e. how many neighbours are in each state. We employ evolutionary algorithms to breed local transition functions that support mobile localizations (gliders), and characterize the sets of selected functions in terms of quasi-chemical systems. Analysis of the set of evolved functions allows us to speculate that mobile localizations are likely to emerge in quasi-chemical systems with limited diffusion of one reagent, that a small number of molecules is required for amplification of travelling localizations, and that reactions leading to stationary localizations involve relatively equal amounts of the quasi-chemical species. The techniques developed can be applied to cascading signals in nature-inspired spatially extended computing devices, and to phenomenological studies and classification of non-linear discrete systems.
Evolving localizations in reaction-diffusion cellular automata
425
We describe decomposition during search (DDS), an integration of And/Or tree search into propagation-based constraint solvers. The presented search algorithm dynamically decomposes sub-problems of a constraint satisfaction problem into independent partial problems, avoiding redundant work. The paper discusses how DDS interacts with key features that make propagation-based solvers successful: constraint propagation, especially for global constraints, and dynamic search heuristics. We have implemented DDS for the Gecode constraint programming library. Two applications, solution counting in graph coloring and protein structure prediction, exemplify the benefits of DDS in practice.
Decomposition During Search for Propagation-Based Constraint Solvers
426
A fundamental problem in artificial intelligence is that nobody really knows what intelligence is. The problem is especially acute when we need to consider artificial systems which are significantly different to humans. In this paper we approach this problem in the following way: We take a number of well known informal definitions of human intelligence that have been given by experts, and extract their essential features. These are then mathematically formalised to produce a general measure of intelligence for arbitrary machines. We believe that this equation formally captures the concept of machine intelligence in the broadest reasonable sense. We then show how this formal definition is related to the theory of universal optimal learning agents. Finally, we survey the many other tests and definitions of intelligence that have been proposed for machines.
Universal Intelligence: A Definition of Machine Intelligence
427
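The formal measure mentioned in the abstract is not restated verbatim here; as a hedged sketch, assuming it takes the Legg-Hutter form, the measure is the agent's expected performance in every computable environment, weighted by each environment's Kolmogorov complexity:

```latex
% Sketch, under the assumption that the measure follows the Legg-Hutter formulation:
% Upsilon(pi) is agent pi's expected total reward V in environment mu, summed over
% the set E of computable environments and weighted by 2^{-K(mu)}, where K(mu) is
% the Kolmogorov complexity of mu.
\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V_{\mu}^{\pi}
```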
Although the definition and measurement of intelligence is clearly of fundamental importance to the field of artificial intelligence, no general survey of definitions and tests of machine intelligence exists. Indeed few researchers are even aware of alternatives to the Turing test and its many derivatives. In this paper we fill this gap by providing a short survey of the many tests of machine intelligence that have been proposed.
Tests of Machine Intelligence
428
We consider an agent interacting with an unknown environment. The environment is a function which maps natural numbers to natural numbers; the agent's set of hypotheses about the environment contains all such functions which are computable and compatible with a finite set of known input-output pairs, and the agent assigns a positive probability to each such hypothesis. We do not require that this probability distribution be computable, but it must be bounded below by a positive computable function. The agent has a utility function on outputs from the environment. We show that if this utility function is bounded below in absolute value by an unbounded computable function, then the expected utility of any input is undefined. This implies that a computable utility function will have convergent expected utilities iff that function is bounded.
Convergence of Expected Utilities with Algorithmic Probability Distributions
429
In this paper we introduce a new selection scheme in cellular genetic algorithms (cGAs). Anisotropic Selection (AS) promotes diversity and allows accurate control of the selective pressure. First we compare this new scheme with the classical rectangular grid shapes solution according to the selective pressure: we can obtain the same takeover time with the two techniques although the spreading of the best individual is different. We then give experimental results that show to what extent AS promotes the emergence of niches that support low coupling and high cohesion. Finally, using a cGA with anisotropic selection on a Quadratic Assignment Problem we show the existence of an anisotropic optimal value for which the best average performance is observed. Further work will focus on the selective pressure self-adjustment ability provided by this new selection scheme.
Anisotropic selection in cellular genetic algorithms
430
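A minimal sketch of anisotropic neighbour selection on a toroidal grid, as described above: vertical and horizontal von Neumann neighbours are sampled with different probabilities controlled by a single anisotropy parameter. The exact parameterisation below (including the fixed centre-cell probability) is an assumption for illustration, not the paper's definition.

```python
import numpy as np

def pick_neighbour(i, j, rows, cols, alpha, rng):
    # Centre cell plus four von Neumann neighbours on a torus; the vertical/horizontal
    # split of the remaining probability mass is biased by alpha in [0, 1].
    cells = [(i, j), ((i - 1) % rows, j), ((i + 1) % rows, j),
             (i, (j - 1) % cols), (i, (j + 1) % cols)]
    p_centre = 0.2
    p_vert = (1 - p_centre) * alpha / 2
    p_horiz = (1 - p_centre) * (1 - alpha) / 2
    probs = [p_centre, p_vert, p_vert, p_horiz, p_horiz]
    return cells[rng.choice(len(cells), p=probs)]

rng = np.random.default_rng(0)
samples = [pick_neighbour(3, 3, 8, 8, alpha=0.8, rng=rng) for _ in range(10000)]
# With alpha = 0.8, vertical neighbours of cell (3, 3) are chosen far more often than
# horizontal ones, which slows the takeover of the best individual along one axis.
print({c: samples.count(c) for c in set(samples)})
```

alpha = 0.5 recovers the usual isotropic selection, so the selective pressure can be tuned continuously with a single parameter.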
This philosophical paper explores the relation between modern scientific simulations and the future of the universe. We argue that a simulation of an entire universe will result from future scientific activity. This requires us to tackle the challenge of simulating open-ended evolution at all levels in a single simulation. The simulation should encompass not only biological evolution, but also physical evolution (a level below) and cultural evolution (a level above). The simulation would allow us to probe what would happen if we would "replay the tape of the universe" with the same or different laws and initial conditions. We also distinguish between real-world and artificial-world modelling. Assuming that intelligent life could indeed simulate an entire universe, this leads to two tentative hypotheses. Some authors have argued that we may already be in a simulation run by an intelligent entity. Or, if such a simulation could be made real, this would lead to the production of a new universe. This last direction is argued with a careful speculative philosophical approach, emphasizing the imperative to find a solution to the heat death problem in cosmology. The reader is invited to consult Annex 1 for an overview of the logical structure of this paper. -- Keywords: far future, future of science, ALife, simulation, realization, cosmology, heat death, fine-tuning, physical eschatology, cosmological natural selection, cosmological artificial selection, artificial cosmogenesis, selfish biocosm hypothesis, meduso-anthropic principle, developmental singularity hypothesis, role of intelligent life.
The Future of Scientific Simulations: from Artificial Life to Artificial Cosmogenesis
431
In this paper, we describe a new algorithm that combines an eye-tracker with Interactive Evolutionary Computation in order to minimize the fatigue of the user during the evaluation process. The approach is then applied to the interactive One-Max optimization problem.
Eye-Tracking Evolutionary Algorithm to minimize user's fatigue in IEC applied to Interactive One-Max problem
432
In this paper, I present a method to solve a node discovery problem in a networked organization. Covert nodes refer to nodes which are not observable directly. They affect social interactions, but do not appear in the surveillance logs which record the participants of the social interactions. Discovering the covert nodes is defined as identifying the suspicious logs where the covert nodes would appear if they became overt. A mathematical model is developed for the maximum likelihood estimation of the network behind the social interactions and for the identification of the suspicious logs. Precision, recall, and F-measure characteristics are demonstrated with the dataset generated from a real organization and with computationally synthesized datasets. The performance is close to the theoretical limit for any covert nodes in networks of any topology and size if the ratio of the number of observations to the number of possible communication patterns is large.
Node discovery in a networked organization
433
In an emergency situation, the actors need assistance allowing them to react swiftly and efficiently. With this in mind, we present in this paper a decision support system that aims to prepare actors for a crisis situation by providing decision-making support. The global architecture of this system is presented in the first part. Then we focus on the part of this system which is designed to represent the information of the current situation. This part is composed of a multiagent system made of factual agents. Each agent carries a semantic feature and aims to represent a partial view of the situation. The agents develop through their interactions by comparing their semantic features using proximity measures and according to specific ontologies.
Multiagent Approach for the Representation of Information in a Decision Support System
434
A new method is presented, that can help a person become aware of his or her unconscious preferences, and convey them to others in the form of verbal explanation. The method combines the concepts of reflection, visualization, and verbalization. The method was tested in an experiment where the unconscious preferences of the subjects for various artworks were investigated. In the experiment, two lessons were learned. The first is that it helps the subjects become aware of their unconscious preferences to verbalize weak preferences as compared with strong preferences through discussion over preference diagrams. The second is that it is effective to introduce an adjustable factor into visualization to adapt to the differences in the subjects and to foster their mutual understanding.
Reflective visualization and verbalization of unconscious preference
435
This paper describes the application of rough set theory to the analysis of hydrocyclone operation. Using a Self-Organizing Map (SOM) as a preprocessing step, the best crisp granules of the data are obtained. Then, using a combination of SOM and rough set theory (RST), called SORST, the dominant rules of the information table obtained from laboratory tests are extracted. Based on these rules, an approximate estimate of the decision attribute is made. Finally, a brief comparison of this method with the SOM-NFIS system (briefly, SONFIS) is highlighted.
Application of Rough Set Theory to Analysis of Hydrocyclone Operation
436
We are interested in the problem of developing multiagent systems for risk detection and emergency response in an uncertain and partially perceived environment. The evaluation of the current situation passes through three stages inside the multiagent system. First, the situation is represented in a dynamic way. In the second step the situation is characterised, and finally it is compared with other similar known situations. In this paper, we present an information model of an observed environment, which we have applied to the RoboCupRescue Simulation System. Information coming from the environment is formatted according to a taxonomy and using semantic features. The latter are defined thanks to a fine-grained ontology of the domain and are managed by factual agents that aim to represent the current situation dynamically.
Agent-Based Perception of an Environment in an Emergency Situation
437
In this paper, we study the influence of the selective pressure on the performance of cellular genetic algorithms. Cellular genetic algorithms are genetic algorithms where the population is embedded on a toroidal grid. This structure slows down the propagation of the best-so-far individual and allows potentially good solutions to be kept in the population. We present two strategies for reducing the selective pressure in order to slow down the propagation of the best solution even further. We experiment with these strategies on a hard optimization problem, the quadratic assignment problem, and we show that for both there is a value of the control parameter which gives the best performance. This optimal value is not explained by the selective pressure alone, whether measured by takeover time or by diversity evolution. This study leads us to conclude that we need tools other than selective pressure measures to explain the performance of cellular genetic algorithms.
On the Influence of Selection Operators on Performances in Cellular Genetic Algorithms
438
In this study, we introduce the general framework of MAny Connected Intelligent Particles Systems (MACIPS). Connections and interconnections between particles give such a seemingly simple system (a system within a system) complex behavior. The contribution of natural computing, under information granulation theory, is the main topic of this broad framework. Along this line, we organize two algorithms involving a few prominent intelligent computing and approximate reasoning methods: the self-organizing feature map (SOM), the Neuro-Fuzzy Inference System, and Rough Set Theory (RST). Building on this, we show how our algorithms can be viewed as a model of government-society interaction, where the government adopts various modes of behavior: solid (absolute) or flexible. The transition of such a society from order to disorder, driven by changes in the connectivity parameters (noise), is then inferred. In addition, one may find an indirect mapping between financial systems and eventual market fluctuations with MACIPS. Keywords: phase transition, SONFIS, SORST, many connected intelligent particles system, society-government interaction
Phase transition in SONFIS&SORST
439
Affinity propagation clustering (AP) has two limitations: it is hard to know what value of the parameter 'preference' can yield an optimal clustering solution, and oscillations cannot be eliminated automatically if they occur. The adaptive AP method is proposed to overcome these limitations, including adaptive scanning of preferences to search the space of the number of clusters for the optimal clustering solution, adaptive adjustment of damping factors to eliminate oscillations, and adaptive escape from oscillations when the damping adjustment technique fails. Experimental results on simulated and real data sets show that adaptive AP is effective and can outperform AP in the quality of clustering results.
Adaptive Affinity Propagation Clustering
440
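A minimal sketch of the preference-scanning idea described above, using scikit-learn's affinity propagation and a silhouette criterion to pick the best solution; the adaptive damping and oscillation-escape mechanisms of the published method are not reproduced.

```python
import numpy as np
from sklearn.cluster import AffinityPropagation
from sklearn.metrics import silhouette_score

def scan_preferences(X, n_steps=10):
    S = -np.square(X[:, None, :] - X[None, :, :]).sum(-1)   # negative squared distances
    lo, hi = S.min(), np.median(S)                           # range of preferences to scan
    best = None
    for pref in np.linspace(lo, hi, n_steps):
        labels = AffinityPropagation(preference=pref, damping=0.9,
                                     random_state=0).fit_predict(X)
        if len(set(labels)) < 2:                              # skip degenerate solutions
            continue
        score = silhouette_score(X, labels)
        if best is None or score > best[0]:
            best = (score, pref, labels)
    return best

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(c, 0.3, size=(30, 2)) for c in ((0, 0), (3, 3), (0, 4))])
score, pref, labels = scan_preferences(X)
print(f"best preference {pref:.1f} -> {len(set(labels))} clusters (silhouette {score:.2f})")
```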
More than 90% of all coal production in Iranian underground mines comes directly from the longwall mining method. Out-of-seam dilution is one of the essential problems in these mines, since dilution imposes additional mining and milling costs. As a result, recognition of the parameters affecting dilution has a remarkable role in industry. This paper therefore analyzes the influence of 13 parameters (attribute variables) on the decision attribute (dilution value), so that, using two approximate reasoning methods, namely Rough Set Theory (RST) and the Self-Organizing Neuro-Fuzzy Inference System (SONFIS), the best rules on the collected data sets are extracted. A further benefit of these methods is the ability to predict new, unknown cases. The reduced sets (reducts) given by RST have also been obtained. The results obtained with these methods show that the most sensitive variables are thickness of layer, length of stope, rate of advance, number of miners, and type of advancing.
Assessment of effective parameters on dilution using approximate reasoning methods in longwall mining method, Iran coal mines
441
This study surveys the fundamentals of fuzzy block theory and its application to the assessment of stability in underground openings. By introducing fuzzy concepts into key block theory in two ways, the fundamentals of fuzzy block theory are presented. In the indirect combination, by coupling an adaptive Neuro-Fuzzy Inference System (NFIS) with classic block theory, we can extract the parts around a tunnel that may be damaged. In the direct solution, some principles of block theory are rewritten by means of fuzzy facets theory.
Toward Fuzzy block theory
442
This paper describes the application of information granulation theory to the analysis of hydrocyclone performance. Using a combination of a Self-Organizing Map (SOM) and a Neuro-Fuzzy Inference System (NFIS), briefly called SONFIS, crisp and fuzzy granules are obtained. Balancing of crisp granules and sub-fuzzy granules within the non-fuzzy information (initial granulation) is carried out in an open-close iteration. Using two criteria, "simplicity of rules" and "adaptive threshold error level", the stability of the algorithm is guaranteed. The proposed method is validated on the hydrocyclone data set.
Analysis of hydrocyclone performance based on information granulation theory
443
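A from-scratch sketch of the Self-Organizing Map used as the initial crisp-granulation step in this and the related abstracts above; the neuro-fuzzy half of SONFIS is not reproduced, and the input is assumed to be a standardised numeric feature matrix.

```python
import numpy as np

def train_som(X, rows=4, cols=4, epochs=50, lr0=0.5, sigma0=2.0, seed=0):
    rng = np.random.default_rng(seed)
    weights = rng.normal(size=(rows, cols, X.shape[1]))
    grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij"), -1)
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)                 # decaying learning rate
        sigma = sigma0 * (1 - t / epochs) + 0.5     # shrinking neighbourhood radius
        for x in rng.permutation(X):
            d = np.linalg.norm(weights - x, axis=2)
            bmu = np.unravel_index(d.argmin(), d.shape)          # best-matching unit
            h = np.exp(-np.square(grid - np.array(bmu)).sum(-1) / (2 * sigma ** 2))
            weights += lr * h[..., None] * (x - weights)         # neighbourhood update
    return weights

def granule_of(x, weights):
    d = np.linalg.norm(weights - x, axis=2)
    return np.unravel_index(d.argmin(), d.shape)    # grid cell = crisp granule label

X = np.random.default_rng(1).normal(size=(200, 5))  # stand-in for hydrocyclone data
W = train_som(X)
print(granule_of(X[0], W))
```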
In everyday life it happens that a person has to reason about what other people think and how they behave, in order to achieve his goals. In other words, an individual may be required to adapt his behaviour by reasoning about the others' mental state. In this paper we focus on a knowledge representation language derived from logic programming which both supports the representation of mental states of individual communities and provides each with the capability of reasoning about others' mental states and acting accordingly. The proposed semantics is shown to be translatable into stable model semantics of logic programs with aggregates.
Logic programming with social features
444
Many social Web sites allow users to publish content and annotate it with descriptive metadata. In addition to flat tags, some social Web sites have recently begun to allow users to organize their content and metadata hierarchically. The social photosharing site Flickr, for example, allows users to group related photos in sets, and related sets in collections. The social bookmarking site Del.icio.us similarly lets users group related tags into bundles. Although the sites themselves don't impose any constraints on how these hierarchies are used, individuals generally use them to capture relationships between concepts, most commonly broader/narrower relations. Collective annotation of content with hierarchical relations may lead to an emergent classification system, called a folksonomy. While some researchers have explored using tags as evidence for learning folksonomies, we believe that the hierarchical relations described above offer a high-quality source of evidence for this task. We propose a simple approach to aggregate shallow hierarchies created by many distinct Flickr users into a common folksonomy. Our approach uses statistics to determine whether a particular relation should be retained or discarded. The relations are then woven together into larger hierarchies. Although we have not carried out a detailed quantitative evaluation of the approach, it looks very promising since it generates very reasonable, non-trivial hierarchies.
Constructing Folksonomies from User-specified Relations on Flickr
445
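A small sketch of the aggregation approach described above: count how many distinct users asserted each broader/narrower relation, keep relations with enough support that clearly dominate their reverse, and weave the survivors into a parent-to-children hierarchy; the thresholds are illustrative, not the paper's.

```python
from collections import Counter, defaultdict

def build_folksonomy(user_relations, min_users=3, dominance=2.0):
    # user_relations: iterable of (user, broader_term, narrower_term) triples.
    support = Counter()
    for user, broad, narrow in set(user_relations):     # one vote per user per relation
        support[(broad, narrow)] += 1
    hierarchy = defaultdict(set)
    for (broad, narrow), n in support.items():
        reverse = support.get((narrow, broad), 0)
        if n >= min_users and n >= dominance * max(reverse, 1):
            hierarchy[broad].add(narrow)
    return dict(hierarchy)

relations = [("u1", "animal", "dog"), ("u2", "animal", "dog"), ("u3", "animal", "dog"),
             ("u4", "dog", "animal"), ("u1", "animal", "cat"), ("u2", "animal", "cat"),
             ("u5", "animal", "cat")]
print(build_folksonomy(relations))   # {'animal': {'dog', 'cat'}}
```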
We analyze the style and structure of story narrative using the case of film scripts. The practical importance of this is noted, especially the need to have support tools for television movie writing. We use the Casablanca film script, and scripts from six episodes of CSI (Crime Scene Investigation). For analysis of style and structure, we quantify various central perspectives discussed in McKee's book, "Story: Substance, Structure, Style, and the Principles of Screenwriting". Film scripts offer a useful point of departure for exploration of the analysis of more general narratives. Our methodology, using Correspondence Analysis, and hierarchical clustering, is innovative in a range of areas that we discuss. In particular this work is groundbreaking in taking the qualitative analysis of McKee and grounding this analysis in a quantitative and algorithmic framework.
The Structure of Narrative: the Case of Film Scripts
446
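A compact sketch of Correspondence Analysis as used above: factor a cross-tabulation (for instance scenes by terms from a film script) through an SVD of the standardised residuals to obtain low-dimensional coordinates. Script segmentation and term extraction are omitted; the input is assumed to be a non-negative count matrix.

```python
import numpy as np

def correspondence_analysis(counts, n_dims=2):
    P = counts / counts.sum()                     # correspondence matrix
    r, c = P.sum(axis=1), P.sum(axis=0)           # row and column masses
    S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))   # standardised residuals
    U, sv, Vt = np.linalg.svd(S, full_matrices=False)
    row_coords = (U * sv) / np.sqrt(r)[:, None]   # principal coordinates of rows
    col_coords = (Vt.T * sv) / np.sqrt(c)[:, None]
    return row_coords[:, :n_dims], col_coords[:, :n_dims], sv

counts = np.array([[12, 3, 0], [2, 9, 1], [0, 4, 10]], dtype=float)  # toy crosstab
rows, cols, sv = correspondence_analysis(counts)
print(rows.round(2))
```

Hierarchical clustering of the resulting row coordinates (e.g. of scenes) then gives the kind of structural analysis described in the abstract.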
In the last year more than 70,000 people have been brought to UK hospitals with serious injuries. Each time, a clinician has to urgently take a patient through a screening procedure to make a reliable decision on the trauma treatment. Typically, such a procedure comprises around 20 tests; however, the condition of a trauma patient remains very difficult to test properly. What happens if these tests are ambiguously interpreted and the information about the severity of the injury is misleading? A mistake in a decision can be fatal: using a mild treatment can put a patient at risk of dying from posttraumatic shock, while overtreatment can also cause death. How can we reduce the risk of death caused by unreliable decisions? It has been shown that probabilistic reasoning, based on the Bayesian methodology of averaging over decision models, allows clinicians to evaluate the uncertainty in decision making. Based on this methodology, in this paper we aim at selecting the most important screening tests while keeping a high performance. We assume that probabilistic reasoning within the Bayesian methodology allows us to discover new relationships between the screening tests and uncertainty in decisions. In practice, selection of the most informative tests can also reduce the cost of the screening procedure in trauma care centers. In our experiments we use the UK Trauma data to compare the efficiency of the proposed technique in terms of performance. We also compare the uncertainty in decisions in terms of entropy.
Feature Selection for Bayesian Evaluation of Trauma Death Risk
447
We present in this article a new evaluation method for the classification and segmentation of textured images in uncertain environments. In uncertain environments, real classes and boundaries are known with only a partial certainty given by the experts. Most of the time, in the literature, only classification or only segmentation is considered and evaluated. Here, we propose to take into account both the classification and segmentation results according to the certainty given by the experts. We present the results of this method on a fusion of classifiers of sonar images for seabed characterization.
Fusion for Evaluation of Image Classification in Uncertain Environments
448
The investigation of a terrorist attack is a time-critical task. The investigators have a limited time window to diagnose the organizational background of the terrorists, to run down and arrest the wire-pullers, and to take action to prevent or eradicate the terrorist attack. An intuitive interface to visualize the intelligence data set stimulates the investigators' experience and knowledge, and aids their decision-making toward an immediately effective action. This paper presents a computational method to analyze the intelligence data set on the collective actions of the perpetrators of an attack, and to visualize it in the form of a social network diagram which predicts the positions where the wire-pullers conceal themselves.
Intuitive visualization of the intelligence for the run-down of terrorist wire-pullers
449
This paper describes the application of information granulation theory to the design of rock engineering flowcharts. First, an overall flowchart based on information granulation theory is highlighted. Information granulation theory, in crisp (non-fuzzy) or fuzzy form, can take engineering experience (especially when the information is fuzzy, incomplete, or superfluous) or engineering judgment into account at each step of the design procedure, provided suitable modeling instruments are employed. In this manner, and as an extension of soft modeling instruments, crisp and fuzzy granules are obtained from monitored data sets using three combinations of the Self-Organizing Map (SOM), the Neuro-Fuzzy Inference System (NFIS), and Rough Set Theory (RST). The core of our algorithms is the balancing of crisp (rough or non-fuzzy) granules and sub-fuzzy granules within the non-fuzzy information (initial granulation) over open-close iterations. Using different balancing criteria, the best granules (information pockets) are obtained. Our proposed methods are validated on a data set of in-situ permeability of rock masses at the Shivashan dam, Iran.
Rock mechanics modeling based on soft granulation theory
450
The knowledge-based economy forces companies to group together as clusters in order to maintain their competitiveness in the world market. Cluster development relies on two key success factors: knowledge sharing and collaboration between the actors in the cluster. Our study therefore proposes a knowledge management system to support knowledge management activities within a cluster. To achieve the objectives of this study, ontology plays a very important role in the knowledge management process in various ways, such as building reusable and faster knowledge bases and representing knowledge explicitly. However, creating and representing an ontology is difficult for an organization because the sources of knowledge are ambiguous and unstructured. Therefore, the objective of this paper is to propose a methodology to create and represent an ontology for organization development using a knowledge engineering approach. The handicraft cluster in Thailand is used as a case study to illustrate the proposed methodology.
An Ontology-based Knowledge Management System for Industry Clusters
451
Crisis response poses some of the most difficult information technology challenges in crisis management. It requires information- and communication-intensive efforts, utilized for reducing uncertainty, calculating and comparing costs and benefits, and managing resources in a fashion beyond those regularly available to handle routine problems. In this paper, we explore the benefits of artificial intelligence technologies in crisis response. The paper discusses the role of artificial intelligence technologies, namely robotics, ontology and the semantic web, and multi-agent systems, in crisis response.
The Role of Artificial Intelligence Technologies in Crisis Response
452
We present and discuss a mixed conjunctive and disjunctive rule, a generalization of conflict repartition rules, and a combination of these two rules. In belief function theory, one of the major problems is the repartition of conflict, highlighted by Zadeh's famous example. To date, many combination rules have been proposed in order to provide a solution to this problem. Moreover, it can be important to consider the specificity of the experts' responses. In recent years, some unification rules have been proposed. In previous work we have shown the interest of the proportional conflict redistribution rule. Here we propose a mixed combination rule following the proportional conflict redistribution rule modified by a discounting procedure. This rule generalizes many combination rules.
Toward a combination rule to deal with partial conflict and specificity in belief functions theory
453
In this chapter, we present and discuss a new generalized proportional conflict redistribution rule. The Dezert-Smarandache extension of Dempster-Shafer theory has relaunched studies on combination rules, especially for the management of conflict. Many combination rules have been proposed in the last few years. We study here different combination rules and compare them in terms of decision on a didactic example and on generated data. Indeed, in real applications we need a reliable decision, and it is the final result that matters. This chapter shows that a fine proportional conflict redistribution rule is to be preferred for combination in belief function theory.
A new generalization of the proportional conflict redistribution rule stable in terms of decision
454
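A minimal sketch of the two-source PCR5 proportional conflict redistribution rule that this line of work builds on, with focal elements encoded as frozensets; the generalisation discussed in the chapter above is not reproduced.

```python
from collections import defaultdict

def pcr5(m1, m2):
    combined = defaultdict(float)
    for A, a in m1.items():
        for B, b in m2.items():
            inter = A & B
            if inter:
                combined[inter] += a * b          # conjunctive consensus
            elif a + b > 0:
                # Redistribute the partial conflict a*b back onto A and B,
                # proportionally to the masses that produced it.
                combined[A] += a * a * b / (a + b)
                combined[B] += b * b * a / (a + b)
    return dict(combined)

A, B = frozenset({"A"}), frozenset({"B"})
m1 = {A: 0.6, frozenset({"A", "B"}): 0.4}
m2 = {B: 0.7, frozenset({"A", "B"}): 0.3}
print(pcr5(m1, m2))   # masses still sum to 1; the conflict on A and B is redistributed
```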
In recent years, there have been many studies on the problem of the conflict arising from information combination, especially in evidence theory. The solutions for managing the conflict can be summarized into three different approaches: first, we can try to suppress or reduce the conflict before the combination step; second, we can manage the conflict so that it has no influence in the combination step and then take the conflict into account in the decision step; third, we can take the conflict into account in the combination step. The first approach is certainly the best, but it is not always feasible. It is difficult to say which of the second and third approaches is better; what matters most, however, is the results produced in applications. We propose here a new combination rule that distributes the conflict proportionally over the elements giving rise to it. We compare these different combination rules on real data in sonar imagery and radar target classification.
A new combination rule distributing the conflict - Applications in Sonar imagery and Radar target classification
455
When implementing a propagator for a constraint, one must decide about variants: When implementing min, should one also implement max? Should one implement linear equations both with and without coefficients? Constraint variants are ubiquitous: implementing them requires considerable (if not prohibitive) effort and decreases maintainability, but will deliver better performance. This paper shows how to use variable views, previously introduced for an implementation architecture, to derive perfect propagator variants. A model for views and derived propagators is introduced. Derived propagators are proved to be indeed perfect in that they inherit essential properties such as correctness and domain and bounds consistency. Techniques for systematically deriving propagators such as transformation, generalization, specialization, and channeling are developed for several variable domains. We evaluate the massive impact of derived propagators. Without derived propagators, Gecode would require 140000 rather than 40000 lines of code for propagators.
Perfect Derived Propagators
456
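A small sketch of the variable-view idea discussed above: a propagator written once for min(x, y) = z is reused for max by wrapping the variables in a "minus" view, since max(x, y) = -min(-x, -y). The domain representation and the bounds-only propagation are deliberately simplistic and are not Gecode's implementation.

```python
class IntVar:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
    def min(self): return self.lo
    def max(self): return self.hi
    def restrict(self, lo, hi):
        self.lo, self.hi = max(self.lo, lo), min(self.hi, hi)

class MinusView:
    """Presents -x through the same interface as IntVar."""
    def __init__(self, x): self.x = x
    def min(self): return -self.x.max()
    def max(self): return -self.x.min()
    def restrict(self, lo, hi): self.x.restrict(-hi, -lo)

def propagate_min(x, y, z):
    # z = min(x, y): simple bounds propagation.
    z.restrict(min(x.min(), y.min()), min(x.max(), y.max()))
    x.restrict(z.min(), x.max())
    y.restrict(z.min(), y.max())

def propagate_max(x, y, z):
    # Derived propagator: max obtained from min through minus views.
    propagate_min(MinusView(x), MinusView(y), MinusView(z))

x, y, z = IntVar(2, 9), IntVar(4, 7), IntVar(0, 10)
propagate_max(x, y, z)
print((z.lo, z.hi))   # (4, 9): bounds of max(x, y)
```

The derived propagator inherits the behaviour of the one it was derived from, which is the property the paper proves for correctness and consistency in the real setting.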
A serious defect with the Halpern-Pearl (HP) definition of causality is repaired by combining a theory of causality with a theory of defaults. In addition, it is shown that (despite a claim to the contrary) a cause according to the HP condition need not be a single conjunct. A definition of causality motivated by Wright's NESS test is shown to always hold for a single conjunct. Moreover, conditions that hold for all the examples considered by HP are given that guarantee that causality according to (this version) of the NESS test is equivalent to the HP definition.
Defaults and Normality in Causal Structures
457
The classification of textured images assumes that the images are considered in terms of areas with the same texture. In an uncertain environment, it can be better to make an imprecise decision or to reject an area corresponding to an unlearned class. Moreover, the areas that are the classification units may contain more than one texture. These considerations lead us to develop a belief decision model that makes it possible to reject an area as unlearned and to decide on unions and intersections of learned classes. The proposed approach finds its full justification in an application to seabed characterization from sonar images, which serves as an illustration.
Belief decision support and reject for textured images characterization
458
We study two aspects of information semantics: (i) the collection of all relationships, (ii) tracking and spotting anomaly and change. The first is implemented by endowing all relevant information spaces with a Euclidean metric in a common projected space. The second is modelled by an induced ultrametric. A very general way to achieve a Euclidean embedding of different information spaces based on cross-tabulation counts (and from other input data formats) is provided by Correspondence Analysis. From there, the induced ultrametric that we are particularly interested in takes a sequential - e.g. temporal - ordering of the data into account. We employ such a perspective to look at narrative, "the flow of thought and the flow of language" (Chafe). In application to policy decision making, we show how we can focus analysis in a small number of dimensions.
The Correspondence Analysis Platform for Uncovering Deep Structure in Data and Information
459
In this paper we extend Inagaki Weighted Operators fusion rule (WO) in information fusion by doing redistribution of not only the conflicting mass, but also of masses of non-empty intersections, that we call Double Weighted Operators (DWO). Then we propose a new fusion rule Class of Proportional Redistribution of Intersection Masses (CPRIM), which generates many interesting particular fusion rules in information fusion. Both formulas are presented for any number of sources of information. An application and comparison with other fusion rules are given in the last section.
Extension of Inagaki General Weighted Operators and A New Fusion Rule Class of Proportional Redistribution of Intersection Masses
460
In this chapter, we propose a new practical codification of the elements of the Venn diagram in order to manipulate the focal elements easily. To reduce complexity, any constraints must be integrated into the codification from the start. Hence, we only consider a reduced hyper power set $D_r^\Theta$ that can be $2^\Theta$ or $D^\Theta$. We describe all the steps of a general belief function framework. The decision step is studied in particular detail: when we are allowed to decide on intersections of the singletons of the discernment space, no existing decision function is easy to use. Hence, two approaches are proposed: an extension of a previous one, and an approach based on the specificity of the elements on which to decide. The principal goal of this chapter is to provide practical code for a general belief function framework to researchers and users who need belief function theory.
Implementing general belief function framework with a practical codification for low complexity
461
In this paper, we propose, in the Dezert-Smarandache Theory (DSmT) framework, a new probabilistic transformation, called DSmP, in order to build a subjective probability measure from any basic belief assignment defined on any model of the frame of discernment. Several examples are given to show how the DSmP transformation works, and we compare it to the main existing transformations proposed in the literature so far. We show the advantages of DSmP over classical transformations in terms of Probabilistic Information Content (PIC). The direct extension of this transformation for dealing with qualitative belief assignments is also presented.
A new probabilistic transformation of belief mass assignment
462
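For readers unfamiliar with probabilistic transformations of belief masses, the sketch below implements the classical pignistic transformation BetP, the usual baseline that transformations such as DSmP are compared against; it is not the DSmP formula itself, and the frame and masses are invented.

```python
# Sketch: classical pignistic transformation BetP, a standard baseline for
# probabilistic transformations of basic belief assignments. The frame and
# the basic belief assignment below are invented for illustration.

def pignistic(bba):
    """Map a basic belief assignment {frozenset: mass} to a probability
    distribution over singletons by splitting each focal mass equally
    among its elements (assumes no mass on the empty set)."""
    betp = {}
    for focal, mass in bba.items():
        share = mass / len(focal)
        for element in focal:
            betp[element] = betp.get(element, 0.0) + share
    return betp

if __name__ == "__main__":
    A, B, C = "A", "B", "C"
    bba = {
        frozenset({A}): 0.35,
        frozenset({B}): 0.25,
        frozenset({A, B}): 0.20,
        frozenset({A, B, C}): 0.20,
    }
    p = pignistic(bba)
    print({k: round(v, 3) for k, v in p.items()})   # values sum to 1.0
```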
We discuss metacognitive modelling as an enhancement to cognitive modelling and computing. Metacognitive control mechanisms should enable AI systems to self-reflect, reason about their actions, and adapt to new situations. In this respect, we propose implementation details of a knowledge taxonomy and an augmented data mining life cycle which supports a live integration of the obtained models.
On Introspection, Metacognitive Control and Augmented Data Mining Live Cycles
463
Each cognitive science tries to understand a set of cognitive behaviors. The structuring of knowledge about this aspect of nature is far from what one would expect of a science. No universal standard that consistently describes the set of cognitive behaviors has been found so far, and there are many questions about cognitive behaviors for which only the opinions of members of the scientific community are available. This article makes three proposals. The first is to raise with the scientific community the need to unify the cognitive behaviors. The second is to advocate applying Newton's rules of reasoning about nature, from his Philosophiae Naturalis Principia Mathematica, to the cognitive behaviors. The third is to propose a scientific theory, currently under development, that follows the rules established by Newton for making sense of nature and that could explain all the cognitive behaviors.
Towards a unification theory for cognitive behaviors
464
In this article we review standard null-move pruning and introduce our extended version of it, which we call verified null-move pruning. In verified null-move pruning, whenever the shallow null-move search indicates a fail-high, instead of cutting off the search from the current node, the search is continued with reduced depth. Our experiments with verified null-move pruning show that on average, it constructs a smaller search tree with greater tactical strength in comparison to standard null-move pruning. Moreover, unlike standard null-move pruning, which fails badly in zugzwang positions, verified null-move pruning manages to detect most zugzwangs and in such cases conducts a re-search to obtain the correct result. In addition, verified null-move pruning is very easy to implement, and any standard null-move pruning program can use verified null-move pruning by modifying only a few lines of code.
Verified Null-Move Pruning
465
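The control flow of verified null-move pruning can be sketched inside a generic negamax search as follows. This is a simplified, assumption-laden illustration: move generation, evaluation and the exact conditions for attempting a null move are game-specific and stubbed out, and the paper's finer points (such as where verification is re-enabled deeper in the subtree) are omitted.

```python
# Sketch of the control flow of verified null-move pruning inside a generic
# negamax alpha-beta search. The stubs at the bottom stand in for real
# game logic and must be replaced in any actual engine.

R = 3            # null-move depth reduction (a common choice)
INF = 10**9

def search(pos, depth, alpha, beta, verify=True):
    if depth <= 0:
        return evaluate(pos)

    # Null-move attempt: give the opponent a free move at reduced depth.
    if null_move_allowed(pos, depth):
        value = -search(make_null_move(pos), depth - 1 - R, -beta, -beta + 1, verify)
        if value >= beta:
            if not verify:
                return beta            # standard null-move pruning: cut off here
            # Verified null-move pruning: do not cut off; instead continue
            # searching this node with reduced depth to verify the fail-high.
            depth -= 1

    best = -INF
    for move in generate_moves(pos):
        best = max(best, -search(make_move(pos, move), depth - 1, -beta, -alpha, verify))
        alpha = max(alpha, best)
        if alpha >= beta:
            break                      # normal beta cutoff
    return best if best > -INF else evaluate(pos)   # stub fallback: no legal moves

# --- stubs, to be replaced by real game logic -------------------------------
def evaluate(pos): return 0
def null_move_allowed(pos, depth): return depth > R
def generate_moves(pos): return []
def make_move(pos, move): return pos
def make_null_move(pos): return pos
```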
We extend Knuth's 16 Boolean binary logic operators to fuzzy logic and neutrosophic logic binary operators. Then we generalize them to n-ary fuzzy logic and neutrosophic logic operators using the Smarandache codification of the Venn diagram and a defined vector neutrosophic law. In this way, new operators in neutrosophic logic/set/probability are built.
n-ary Fuzzy Logic and Neutrosophic Logic Operators
466
Various local search approaches have recently been applied to machine scheduling problems under multiple objectives. Their foremost consideration is the identification of the set of Pareto optimal alternatives. An important aspect of successfully solving these problems lies in the definition of an appropriate neighbourhood structure. What remains unclear in this context is how interdependencies within the fitness landscape affect the resolution of the problem. The paper presents a study of neighbourhood search operators for multiple objective flow shop scheduling. Experiments have been carried out with twelve different combinations of criteria. To derive exact conclusions, small problem instances, for which the optimal solutions are known, have been chosen. Statistical tests show that no single neighbourhood operator is able to equally identify all Pareto optimal alternatives. Significant improvements, however, have been obtained by hybridising the solution algorithm using a randomised variable neighbourhood search technique.
Randomised Variable Neighbourhood Search for Multi Objective Optimisation
467
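Independently of the particular neighbourhood operator, any such multi-objective local search needs Pareto-dominance bookkeeping. The sketch below shows a dominance test and an archive update for minimisation objectives; the objective vectors are invented and this is not code from the study.

```python
# Sketch: Pareto-dominance test and archive update, the bookkeeping used when
# comparing neighbourhood operators in multi-objective (minimisation) search.
# The objective vectors below are invented for illustration.

def dominates(a, b):
    """True if objective vector a dominates b (all <=, at least one <)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def update_archive(archive, candidate):
    """Insert candidate into the archive of mutually non-dominated vectors."""
    if any(dominates(member, candidate) for member in archive):
        return archive                       # candidate is dominated: discard
    # keep only members not dominated by the candidate, then add it
    return [m for m in archive if not dominates(candidate, m)] + [candidate]

if __name__ == "__main__":
    archive = []
    for point in [(10, 4), (8, 6), (9, 3), (8, 3), (12, 2)]:
        archive = update_archive(archive, point)
    print(sorted(archive))    # [(8, 3), (12, 2)]
```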
The paper describes the proposition and application of a local search metaheuristic for multi-objective optimization problems. It is based on two main principles of heuristic search: intensification through variable neighborhoods, and diversification through perturbations and successive iterations in favorable regions of the search space. The concept is successfully tested on permutation flow shop scheduling problems under multiple objectives. While the obtained results are encouraging in terms of their quality, another positive attribute of the approach is its simplicity, as it requires the setting of only a few parameters. The implementation of the Pareto Iterated Local Search metaheuristic is based on the MOOPPS computer system of local search heuristics for multi-objective scheduling, which was awarded the European Academic Software Award 2002 in Ronneby, Sweden (http://www.easa-award.net/, http://www.bth.se/llab/easa_2002.nsf)
Foundations of the Pareto Iterated Local Search Metaheuristic
468
The article describes an investigation of the effectiveness of genetic algorithms for multi-objective combinatorial optimization (MOCO) by presenting an application to the vehicle routing problem with soft time windows. The work is motivated by the question of whether and how the problem structure influences the effectiveness of different configurations of the genetic algorithm. Computational results are presented for different classes of vehicle routing problems, varying in their coverage with time windows, time window size, distribution, and number of customers. The results are compared with a simple but effective local search approach for multi-objective combinatorial optimization problems.
A Computational Study of Genetic Crossover Operators for Multi-Objective Vehicle Routing Problem with Soft Time Windows
469
The talk describes a general genetic algorithm approach for multiple objective optimization problems. A particular dominance relation between the individuals of the population is used to define a fitness operator, enabling the genetic algorithm to address even problems with efficient, but convex-dominated alternatives. The algorithm is implemented in a multilingual computer program that solves vehicle routing problems with time windows under multiple objectives. The graphical user interface of the program shows the progress of the genetic algorithm, and the main parameters of the approach can be easily modified. In addition, the program provides powerful decision support to the decision maker. The software proved its excellence at the finals of the European Academic Software Award (EASA), held at Keble College, University of Oxford, Great Britain.
Genetic Algorithms for multiple objective vehicle routing
470
The article presents a framework for the resolution of rich vehicle routing problems which are difficult to address with standard optimization techniques. We use local search on the basis of variable neighborhood search for the construction of the solutions, but embed the techniques in a flexible framework that allows the consideration of complex side constraints of the problem such as time windows, multiple depots, heterogeneous fleets, and, in particular, multiple optimization criteria. In order to identify a compromise alternative that meets the requirements of the decision maker, an interactive procedure is integrated in the resolution of the problem, allowing the modification of the preference information articulated by the decision maker. The framework is prototypically implemented in a computer system. First results of test runs on multiple depot vehicle routing problems with time windows are reported.
A framework for the interactive resolution of multi-objective vehicle routing problems
471
The integration of fuzzy set theory and fuzzy logic into scheduling is a rather new development of growing importance for manufacturing applications, and many aspects of it remain unresolved. In the current paper, we investigate an improved local search technique for fuzzy scheduling problems with fitness plateaus, using a multi criteria formulation of the problem. We especially address the problem of job priorities that change over time, as studied at Sherwood Press Ltd, a Nottingham-based printing company that is a collaborator on the project.
Improving Local Search for Fuzzy Scheduling Problems
472
The article proposes a heuristic approximation approach to the bin packing problem under multiple objectives. In addition to the traditional objective of minimizing the number of bins, the heterogeneousness of the elements in each bin is minimized, leading to a biobjective formulation of the problem with a tradeoff between the number of bins and their heterogeneousness. An extension of the Best-Fit approximation algorithm is presented to solve the problem. Experimental investigations have been carried out on benchmark instances of different size, ranging from 100 to 1000 items. Encouraging results have been obtained, showing the applicability of the heuristic approach to the described problem.
Bin Packing Under Multiple Objectives - a Heuristic Approximation Approach
473
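A minimal sketch of how a Best-Fit style insertion rule can be extended with a second criterion is given below. The weighted score combining residual space and heterogeneousness, and the item format, are assumptions for illustration rather than the paper's exact algorithm.

```python
# Sketch: a Best-Fit style insertion heuristic extended with a second
# criterion that penalises heterogeneousness (number of distinct item types
# in a bin). The scoring and the weight `alpha` are illustrative assumptions.

def pack(items, capacity, alpha=0.5):
    """items: list of (size, type). Returns a list of bins."""
    bins = []      # each bin: {"free": remaining capacity, "items": [...], "types": set()}
    for size, typ in items:
        best, best_score = None, None
        for b in bins:
            if b["free"] < size:
                continue
            # Best-Fit component: prefer small residual space after insertion.
            residual = (b["free"] - size) / capacity
            # Heterogeneousness component: prefer bins already holding this type.
            hetero = 0.0 if typ in b["types"] else 1.0
            score = (1 - alpha) * residual + alpha * hetero
            if best_score is None or score < best_score:
                best, best_score = b, score
        if best is None:
            best = {"free": capacity, "items": [], "types": set()}
            bins.append(best)
        best["free"] -= size
        best["items"].append((size, typ))
        best["types"].add(typ)
    return bins

if __name__ == "__main__":
    items = [(4, "a"), (5, "b"), (3, "a"), (6, "b"), (2, "c"), (5, "a")]
    for i, b in enumerate(pack(items, capacity=10)):
        print(f"bin {i}: {b['items']} (free {b['free']})")
```

Varying `alpha` moves the heuristic along the tradeoff between few bins and homogeneous bins, which is the biobjective tension described in the abstract.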
The article presents a local search approach for the solution of timetabling problems in general, with a particular implementation for competition track 3 of the International Timetabling Competition 2007 (ITC 2007). The heuristic search procedure is based on Threshold Accepting to overcome local optima. A stochastic neighborhood is proposed and implemented, randomly removing and reassigning events from the current solution. The overall concept has been obtained incrementally from a series of experiments, which we describe in each (sub)section of the paper. As a result, we successfully derived a potential candidate solution approach for the finals of track 3 of the ITC 2007.
An application of the Threshold Accepting metaheuristic for curriculum based course timetabling
474
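The Threshold Accepting skeleton with a random remove-and-reassign neighbourhood can be sketched as below; the timetabling-specific parts (solution encoding, penalty function, reassignment rule) are replaced by toy stand-ins.

```python
# Sketch: Threshold Accepting with a stochastic "remove and reassign"
# neighbourhood. The timetabling-specific parts are stubbed out with toys.
import copy
import random

def threshold_accepting(solution, cost, perturb, threshold=10.0, decay=0.95,
                        iterations=10000):
    best = current = solution
    best_cost = current_cost = cost(current)
    for _ in range(iterations):
        candidate = perturb(copy.deepcopy(current))
        candidate_cost = cost(candidate)
        # Accept any candidate that is not worse than the current solution
        # by more than the threshold (this is what overcomes local optima).
        if candidate_cost <= current_cost + threshold:
            current, current_cost = candidate, candidate_cost
            if current_cost < best_cost:
                best, best_cost = current, current_cost
        threshold *= decay            # gradually tighten the acceptance band
    return best, best_cost

# --- toy stand-ins for the timetabling-specific parts -----------------------
def toy_cost(assignment):
    # penalise events sharing a slot (a crude stand-in for real penalties)
    slots = list(assignment.values())
    return sum(slots.count(s) - 1 for s in set(slots))

def toy_perturb(assignment):
    # randomly remove a few events and reassign them to random slots
    for event in random.sample(list(assignment), k=min(3, len(assignment))):
        assignment[event] = random.randrange(20)
    return assignment

if __name__ == "__main__":
    random.seed(0)
    init = {e: random.randrange(5) for e in range(30)}   # 30 events, crowded slots
    best, c = threshold_accepting(init, toy_cost, toy_perturb)
    print("final conflict count:", c)
```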
The paper presents a study of local search heuristics in general, and variable neighborhood search in particular, for the resolution of an assignment problem that arises in the practical work of universities: students have to be assigned to scientific topics which are proposed and supported by members of staff. The problem involves optimization under the preferences students may express when applying for certain topics. It can be observed that variable neighborhood search leads to superior results for the tested problem instances. One instance is taken from an actual case, while the others have been generated from the real-world data to support a deeper analysis. An extension of the problem has been formulated by integrating a second objective function that balances the workload of the members of staff while simultaneously maximizing the utility of the students. The algorithmic approach has been prototypically implemented in a computer system. One important aspect in this context is the application of the research to problems of other scientific institutions, and therefore the provision of decision support functionalities.
Variable Neighborhood Search for the University Lecturer-Student Assignment Problem
475
We introduce an extended tableau calculus for answer set programming (ASP). The proof system is based on the ASP tableaux defined in [Gebser&Schaub, ICLP 2006], with an added extension rule. We investigate the power of Extended ASP Tableaux both theoretically and empirically. We study the relationship of Extended ASP Tableaux with the Extended Resolution proof system defined by Tseitin for sets of clauses, and separate Extended ASP Tableaux from ASP Tableaux by giving a polynomial-length proof for a family of normal logic programs P_n for which ASP Tableaux has exponential-length minimal proofs with respect to n. Additionally, Extended ASP Tableaux imply interesting insight into the effect of program simplification on the lengths of proofs in ASP. Closely related to Extended ASP Tableaux, we empirically investigate the effect of redundant rules on the efficiency of ASP solving. To appear in Theory and Practice of Logic Programming (TPLP).
Extended ASP tableaux and rule redundancy in normal logic programs
476
In this paper, a Gaifman-Shapiro-style module architecture is tailored to the case of Smodels programs under the stable model semantics. The composition of Smodels program modules is suitably limited by module conditions which ensure the compatibility of the module system with stable models. Hence the semantics of an entire Smodels program depends directly on stable models assigned to its modules. This result is formalized as a module theorem which truly strengthens Lifschitz and Turner's splitting-set theorem for the class of Smodels programs. To streamline generalizations in the future, the module theorem is first proved for normal programs and then extended to cover Smodels programs using a translation from the latter class of programs to the former class. Moreover, the respective notion of module-level equivalence, namely modular equivalence, is shown to be a proper congruence relation: it is preserved under substitutions of modules that are modularly equivalent. Principles for program decomposition are also addressed. The strongly connected components of the respective dependency graph can be exploited in order to extract a module structure when there is no explicit a priori knowledge about the modules of a program. The paper includes a practical demonstration of tools that have been developed for automated (de)composition of Smodels programs. To appear in Theory and Practice of Logic Programming.
Achieving compositionality of the stable model semantics for Smodels programs
477
Most research related to unithood has been conducted as part of a larger effort for the determination of termhood. Consequently, novelties are rare in this small sub-field of term extraction. In addition, existing work has been mostly empirically motivated and derived. We propose a new probabilistically-derived measure, independent of any influences of termhood, that provides dedicated measures to gather linguistic evidence from parsed text and statistical evidence from the Google search engine for the measurement of unithood. Our comparative study using 1,825 test cases against an existing empirically-derived function revealed an improvement in terms of precision, recall and accuracy.
Determining the Unithood of Word Sequences using a Probabilistic Approach
478
Most work related to unithood has been conducted as part of a larger effort for the determination of termhood. Consequently, the number of independent studies that examine the notion of unithood and produce dedicated techniques for measuring it is extremely small. We propose a new approach, independent of any influences of termhood, that provides dedicated measures to gather linguistic evidence from parsed text and statistical evidence from the Google search engine for the measurement of unithood. Our evaluations revealed a precision of 98.68% and a recall of 91.82%, with an accuracy of 95.42%, in measuring the unithood of 1005 test cases.
Determining the Unithood of Word Sequences using Mutual Information and Independence Measure
479
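On the statistical-evidence side, a common building block for unithood measures is pointwise mutual information computed from frequency counts such as search-engine hit counts. The sketch below illustrates PMI only; it is not the paper's unithood function, and the counts are invented.

```python
# Sketch of the statistical-evidence side of unithood measurement: pointwise
# mutual information (PMI) of a candidate word pair from frequency counts of
# the kind one could obtain from search-engine hit counts. Counts invented.
import math

def pmi(count_xy, count_x, count_y, total):
    """PMI(x, y) = log2( P(x, y) / (P(x) * P(y)) )."""
    p_xy = count_xy / total
    p_x, p_y = count_x / total, count_y / total
    return math.log2(p_xy / (p_x * p_y))

if __name__ == "__main__":
    total = 10_000_000                     # hypothetical corpus/page count
    # a strong collocation co-occurs far more often than chance (high PMI):
    print(round(pmi(count_xy=30_000, count_x=120_000, count_y=90_000, total=total), 2))
    # an unrelated pair co-occurs roughly at chance level (PMI near 0):
    print(round(pmi(count_xy=1_100, count_x=120_000, count_y=90_000, total=total), 2))
```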
An increasing number of approaches for ontology engineering from text are gearing towards the use of online sources such as company intranet and the World Wide Web. Despite such rise, not much work can be found in aspects of preprocessing and cleaning dirty texts from online sources. This paper presents an enhancement of an Integrated Scoring for Spelling error correction, Abbreviation expansion and Case restoration (ISSAC). ISSAC is implemented as part of a text preprocessing phase in an ontology engineering system. New evaluations performed on the enhanced ISSAC using 700 chat records reveal an improved accuracy of 98% as compared to 96.5% and 71% based on the use of only basic ISSAC and of Aspell, respectively.
Enhanced Integrated Scoring for Cleaning Dirty Texts
480
We present a domain-independent algorithm that computes macros in a novel way. Our algorithm computes macros "on-the-fly" for a given set of states and does not require previously learned or inferred information, nor prior domain knowledge. The algorithm is used to define new domain-independent tractable classes of classical planning that are proved to include Blocksworld-arm and Towers of Hanoi.
On-the-fly Macros
481
In this study, we present two new hybrid intelligent systems, called SONFIS and SORST, which combine three prominent intelligent computing and approximate reasoning methods: the Self-Organizing feature Map (SOM), a Neuro-Fuzzy Inference System, and Rough Set Theory (RST). We show how our algorithms can be construed as a model of government-society interactions, in which the government adopts various states of behavior, solid (absolute) or flexible, and the transition of society from order to disorder is inferred by changing the connectivity (noise) parameters.
Modeling of Social Transitions Using Intelligent Systems
482
The paper presents an investigation of the relationship between diversity and the classification accuracy of multiple-classifier systems. The study matters for building classifiers that are strong and generalize well. The parameters of the neural networks within the committee were varied to induce diversity; hence structural diversity is the focus of this study. The number of hidden nodes and the activation function are the parameters that were varied. Diversity measures adopted from ecology, such as the Shannon and Simpson indices, were used to quantify diversity. A genetic algorithm is used to find the optimal ensemble, using accuracy as the cost function. The results show that there is a relationship between structural diversity and accuracy: the classification accuracy of an ensemble increases as the diversity increases, with an observed gain of 3%-6% in classification accuracy.
Relationship between Diversity and Performance of Multiple Classifiers for Decision Support
483
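The ecology-style indices mentioned above can be computed directly over the structural configurations present in a committee, as in the sketch below; the example committee (hidden-node counts and activation functions) is invented.

```python
# Sketch: Shannon and Simpson diversity indices computed over the structural
# configurations (hidden nodes, activation function) present in a committee
# of networks. The example committee below is invented for illustration.
import math
from collections import Counter

def shannon(counts):
    """Shannon diversity H = -sum p_i * ln(p_i)."""
    n = sum(counts)
    return -sum((c / n) * math.log(c / n) for c in counts if c > 0)

def simpson(counts):
    """Simpson diversity 1 - sum p_i^2 (higher means more diverse)."""
    n = sum(counts)
    return 1.0 - sum((c / n) ** 2 for c in counts)

if __name__ == "__main__":
    # Each committee member described by (hidden nodes, activation function).
    committee = [(10, "tanh"), (10, "tanh"), (15, "relu"), (20, "tanh"),
                 (15, "relu"), (25, "logistic")]
    counts = list(Counter(committee).values())
    print("Shannon:", round(shannon(counts), 3))
    print("Simpson:", round(simpson(counts), 3))
```

Such an index can then serve alongside accuracy when a genetic algorithm searches for a good ensemble composition, which is the setup described in the abstract.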
The paper presents an exponential pheromone deposition rule to modify the basic ant system algorithm, which employs a constant deposition rule. A stability analysis using differential equations is carried out to find the parameter values that make the ant system dynamics stable for both kinds of deposition rule. A roadmap of connected cities is chosen as the problem environment, where the shortest route between two given cities is to be discovered. Simulations performed with both forms of deposition using the Elitist Ant System model reveal that the exponential deposition approach outperforms the classical one by a large margin. Exhaustive experiments are also carried out to find the optimum settings of the different controlling parameters for the exponential deposition approach, and to establish an empirical relationship between the major controlling parameters of the algorithm and some features of the problem environment.
Balancing Exploration and Exploitation by an Elitist Ant System with Exponential Pheromone Deposition Rule
484
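The contrast between a constant and an exponential deposition rule can be sketched as follows; the exponential form and its parameters are illustrative assumptions, not the exact rule analysed in the paper.

```python
# Sketch: constant versus exponential pheromone deposition on the edges of a
# tour. The exponential rule below is an assumed form for illustration only.
import math

def deposit_constant(pheromone, tour, tour_length, Q=100.0):
    """Classical rule: every edge of the tour receives Q / L."""
    for edge in zip(tour, tour[1:]):
        pheromone[edge] = pheromone.get(edge, 0.0) + Q / tour_length

def deposit_exponential(pheromone, tour, tour_length, Q=100.0, L_best=20.0, k=0.1):
    """Assumed exponential rule: the deposited amount decays exponentially
    with the gap to a reference best length, rewarding good tours much more
    strongly than mediocre ones."""
    amount = Q * math.exp(-k * (tour_length - L_best))
    for edge in zip(tour, tour[1:]):
        pheromone[edge] = pheromone.get(edge, 0.0) + amount

if __name__ == "__main__":
    tau_c, tau_e = {}, {}
    for tour, length in [(["A", "B", "C", "D"], 22.0), (["A", "C", "B", "D"], 35.0)]:
        deposit_constant(tau_c, tour, length)
        deposit_exponential(tau_e, tour, length)
    print({e: round(v, 2) for e, v in tau_c.items()})   # mild difference between tours
    print({e: round(v, 2) for e, v in tau_e.items()})   # strong preference for the short tour
```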
This article presents a unique design for a parser using the Ant Colony Optimization algorithm. The paper implements the intuitive thought process of the human mind through the activities of artificial ants. The scheme presented here uses a bottom-up approach, and the parsing program can directly use ambiguous or redundant grammars. We allocate a node to each production rule in the given grammar. Each node is connected to all other nodes (representing the other production rules), thereby establishing a completely connected graph over which the artificial ants move. Starting from the input string, each ant tries to modify the current sentential form using the production rule at its node and moves on until the sentential form reduces to the start symbol S. Successful ants deposit pheromone on the links that they have traversed. Eventually, the optimum path is identified by the links carrying the maximum pheromone concentration. The design is simple, versatile, robust and effective, and obviates the calculation of the sets and precedence relation tables required by conventional parser designs. Further advantages of our scheme are i) ascertaining whether a given string belongs to the language represented by the grammar, and ii) finding the shortest possible path from the given string to the start symbol S in case multiple routes exist.
A Novel Parser Design Algorithm Based on Artificial Ants
485
The paper presents an exponential pheromone deposition approach to improve the performance of the classical Ant System algorithm, which employs a uniform deposition rule. A simplified analysis using differential equations is carried out to study the stability of the basic ant system dynamics under both exponential and constant deposition rules. A roadmap of connected cities, in which the shortest path between two specified cities is to be found, is taken as a platform to compare the Max-Min Ant System model (an improved and popular variant of the Ant System algorithm) with exponential and constant deposition rules. Extensive simulations are performed to find the best parameter settings for the non-uniform deposition approach, and experiments with these parameter settings reveal that the above approach outperforms the traditional one by a large margin in terms of both solution quality and convergence time.
Extension of Max-Min Ant System with Exponential Pheromone Deposition Rule
486
We address here two major challenges presented by dynamic data mining: 1) the stability challenge: we have implemented a rigorous incremental density-based clustering algorithm, independent of any initial conditions and of the ordering of the data-vector stream; 2) the cognitive challenge: we have implemented a stringent selection process for association rules between clusters at time t-1 and time t, directly generating the main conclusions about the dynamics of a data stream. We illustrate these points with an application to a scientific information database covering two years and 2600 documents.
Document stream clustering: experimenting an incremental algorithm and AR-based tools for highlighting dynamic trends
487
Data-stream clustering is an ever-expanding subdomain of knowledge extraction. Most past and present research effort aims at efficient scaling up for huge data repositories. Our approach focuses on qualitative improvement, mainly for "weak signals" detection and precise tracking of topical evolutions in the framework of information watch - though scalability is intrinsically guaranteed in a possibly distributed implementation. Our GERMEN algorithm exhaustively picks up the whole set of density peaks of the data at time t by identifying the local perturbations induced by the current document vector, such as changing cluster borders or new/vanishing clusters. Optimality follows from the uniqueness 1) of the density landscape for any value of our zoom parameter, and 2) of the cluster allocation operated by our border propagation rule. This results in rigorous independence from the data presentation order and from any initialization parameter. As a first step, we present here only the assessment of a static view resulting from one year of the CNRS/INIST Pascal database in the field of geotechnics.
Dynamic classification of a document stream: a preliminary static evaluation of the GERMEN algorithm
488
This paper gives an introduction to this issue and presents the framework and the main steps of the Rosa project. Four teams of researchers (agronomists, computer scientists, psychologists and linguists) were involved for five years in this project, which aimed at the development of a knowledge-based system. The purpose of the Rosa system is the modelling and comparison of farm spatial organizations. It relies on a formalization of agronomical knowledge and thus induces a joint knowledge building process involving both the agronomists and the computer scientists. The paper describes the steps of the modelling process as well as the filming procedures set up by the psychologists and linguists in order to make explicit and analyze the underlying knowledge building process.
A longitudinal study of a knowledge modelling procedure for agricultural land management
489
Collaborative tagging systems, such as Delicious, CiteULike, and others, allow users to annotate resources, e.g., Web pages or scientific papers, with descriptive labels called tags. The social annotations contributed by thousands of users can potentially be used to infer categorical knowledge, classify documents or recommend new relevant information. Traditional text inference methods do not make the best use of social annotation, since they do not take into account variations in individual users' perspectives and vocabulary. In previous work, we introduced a simple probabilistic model that takes the interests of individual annotators into account in order to find hidden topics of annotated resources. Unfortunately, that approach had one major shortcoming: the number of topics and interests must be specified a priori. To address this drawback, we extend the model to a fully Bayesian framework, which offers a way to automatically estimate these numbers. In particular, the model allows the number of interests and topics to change as suggested by the structure of the data. We evaluate the proposed model in detail on synthetic and real-world data by comparing its performance to Latent Dirichlet Allocation on the topic extraction task. For the latter evaluation, we apply the model to infer topics of Web resources from social annotations obtained from Delicious in order to discover new resources similar to a specified one. Our empirical results demonstrate that the proposed model is a promising method for exploiting social knowledge contained in user-generated annotations.
Modeling Social Annotation: a Bayesian Approach
490
Airport gate assignment is of great importance in airport operations. In this paper, we study the Airport Gate Assignment Problem (AGAP), propose a new model and implement the model with the Optimization Programming Language (OPL). With the objective of minimizing the number of conflicts between any two adjacent aircraft assigned to the same gate, we build a mathematical model with logical and binary constraints, which provides an efficient evaluation criterion for airlines to assess their current gate assignment. To illustrate the feasibility of the model, we construct experiments with data obtained from Continental Airlines at Houston George Bush Intercontinental Airport (IAH), which indicate that our model is both efficient and effective. Moreover, we interpret the experimental results, which further demonstrate that our proposed model can provide a powerful tool for airline companies to estimate the efficiency of their current gate assignments.
Airport Gate Assignment: New Model and Implementation
491
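The conflict objective described above can be sketched as a simple counting function: two aircraft assigned to the same gate conflict when their occupancy intervals (padded by a turnaround buffer) overlap. The flight data, gate names and buffer below are invented, and the paper itself formulates the model in OPL rather than as imperative code.

```python
# Sketch of a gate-conflict count: consecutive aircraft at the same gate
# conflict when their occupancy intervals, padded by a buffer, overlap.
# Flights, gates and the buffer are invented for illustration.

def count_conflicts(flights, assignment, buffer_minutes=10):
    """flights: {flight_id: (arrival, departure)} in minutes since midnight.
    assignment: {flight_id: gate}."""
    conflicts = 0
    by_gate = {}
    for fid, gate in assignment.items():
        by_gate.setdefault(gate, []).append(fid)
    for gate, fids in by_gate.items():
        fids.sort(key=lambda f: flights[f][0])          # sort by arrival time
        for earlier, later in zip(fids, fids[1:]):
            if flights[later][0] < flights[earlier][1] + buffer_minutes:
                conflicts += 1
    return conflicts

if __name__ == "__main__":
    flights = {"CO101": (480, 540), "CO205": (555, 610), "CO311": (530, 600)}
    good = {"CO101": "G1", "CO205": "G1", "CO311": "G2"}
    bad  = {"CO101": "G1", "CO205": "G1", "CO311": "G1"}
    print(count_conflicts(flights, good))   # 0
    print(count_conflicts(flights, bad))    # 2
```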
This paper investigates the use of different Artificial Intelligence methods to predict the values of several continuous variables from a Steam Generator. The objective was to determine how the different artificial intelligence methods performed in making predictions on the given dataset. The artificial intelligence methods evaluated were Neural Networks, Support Vector Machines, and Adaptive Neuro-Fuzzy Inference Systems. The types of neural networks investigated were Multi-Layer Perceptrons and Radial Basis Function networks. Bayesian and committee techniques were applied to these neural networks. Each of the AI methods considered was simulated in Matlab. The results of the simulations showed that all the AI methods were capable of predicting the Steam Generator data reasonably accurately. However, the Adaptive Neuro-Fuzzy Inference System outperformed the other methods in terms of accuracy and ease of implementation, while still achieving a fast execution time as well as a reasonable training time.
Artificial Intelligence Techniques for Steam Generator Modelling
492
Theoretical analysis of machine intelligence (MI) is useful for defining a common platform in both theoretical and applied artificial intelligence (AI). The goal of this paper is to set canonical definitions that can assist pragmatic research in both strong and weak AI. Described epistemological features of machine intelligence include relationship between intelligent behavior, intelligent and unintelligent machine characteristics, observable and unobservable entities and classification of intelligence. The paper also establishes algebraic definitions of efficiency and accuracy of MI tests as their quality measure. The last part of the paper addresses the learning process with respect to the traditional epistemology and the epistemology of MI described here. The proposed views on MI positively correlate to the Hegelian monistic epistemology and contribute towards amalgamating idealistic deliberations with the AI theory, particularly in a local frame of reference.
Elementary epistemological features of machine intelligence
493
Answer set programming (ASP) is a logic programming paradigm that can be used to solve complex combinatorial search problems. Aggregates are an ASP construct that plays an important role in many applications. Defining a satisfactory semantics of aggregates turned out to be a difficult problem, and in this paper we propose a new approach, based on an analogy between aggregates and propositional connectives. First, we extend the definition of an answer set/stable model to cover arbitrary propositional theories; then we define aggregates on top of them both as primitive constructs and as abbreviations for formulas. Our definition of an aggregate combines expressiveness and simplicity, and it inherits many theorems about programs with nested expressions, such as theorems about strong equivalence and splitting.
Logic programs with propositional connectives and aggregates
494
Neural networks are powerful tools for classification and regression in static environments. This paper describes a technique for creating an ensemble of neural networks that adapts dynamically to changing conditions. The model separates the input space into four regions and each network is given a weight in each region based on its performance on samples from that region. The ensemble adapts dynamically by constantly adjusting these weights based on the current performance of the networks. The data set used is a collection of financial indicators with the goal of predicting the future platinum price. An ensemble with no weightings does not improve on the naive estimate of no weekly change; our weighting algorithm gives an average percentage error of 63% for twenty weeks of prediction.
Prediction of Platinum Prices Using Dynamically Weighted Mixture of Experts
495
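The region-wise weighting idea can be sketched as below: each expert keeps a per-region weight updated from its recent error, and predictions are combined with the weights of the region the input falls into. The stand-in experts, the region split and the update rule are illustrative assumptions, not the paper's exact algorithm.

```python
# Sketch of a dynamically weighted ensemble: per-region weights are updated
# from each expert's error and used to combine predictions. The "experts"
# below are trivial stand-ins and the update rule is an assumed form.

class DynamicEnsemble:
    def __init__(self, experts, n_regions=4, eta=0.5):
        self.experts = experts
        self.eta = eta
        self.n_regions = n_regions
        # one weight per (region, expert), initially uniform
        self.weights = [[1.0 / len(experts)] * len(experts) for _ in range(n_regions)]

    def region(self, x):
        # split the input space into n_regions by the first feature (assumption)
        return min(int(x[0] * self.n_regions), self.n_regions - 1)

    def predict(self, x):
        r = self.region(x)
        return sum(w * e(x) for w, e in zip(self.weights[r], self.experts))

    def update(self, x, target):
        """After the true value arrives, shift weight towards accurate experts."""
        r = self.region(x)
        errors = [abs(e(x) - target) for e in self.experts]
        scores = [1.0 / (1e-9 + err) ** self.eta for err in errors]
        total = sum(scores)
        self.weights[r] = [s / total for s in scores]

if __name__ == "__main__":
    experts = [lambda x: 2 * x[0], lambda x: x[0] + 0.4, lambda x: 0.9]
    ens = DynamicEnsemble(experts)
    stream = [([0.1], 0.25), ([0.2], 0.45), ([0.9], 0.95), ([0.8], 0.85)]
    for x, y in stream:
        print(round(ens.predict(x), 3), "target", y)
        ens.update(x, y)
```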
Health practice guidelines are supposed to unify practices and propose recommendations to physicians. This paper describes GemFrame, a system capable of semi-automatically filling an XML template from free texts in the clinical domain. The XML template includes semantic information not explicitly encoded in the text (pairs of conditions and actions/recommendations). Therefore, there is a need to compute the exact scope of conditions over the text sequences expressing the required actions. We present a system developed for this task. We show that it yields good performance when applied to the analysis of French practice guidelines. We conclude with a precise evaluation of the tool.
Automatic analysis and structuring of clinical practice guidelines: an evaluation attempt
496
The need for domain ontologies in mission critical applications such as risk management and hazard identification is becoming more and more pressing. Most research on ontology learning conducted in academia remains unrealistic for real-world applications. One of the main problems is the dependence on non-incremental, rare knowledge and textual resources, and on manually-crafted patterns and rules. This paper reports work in progress aiming to address such undesirable dependencies during ontology construction. Initial experiments using a working prototype of the system revealed promising potential in automatically constructing high-quality domain ontologies using real-world texts.
Automatic Construction of Lightweight Domain Ontologies for Chemical Engineering Risk Management
497
We introduce novel results for approximate inference on planar graphical models using the loop calculus framework. The loop calculus (Chertkov and Chernyak, 2006) allows one to express the exact partition function of a graphical model as a finite sum of terms that can be evaluated once the belief propagation (BP) solution is known. In general, full summation over all correction terms is intractable. We develop an algorithm for the approach presented in (Chertkov et al., 2008) which represents an efficient truncation scheme on planar graphs and a new representation of the series in terms of Pfaffians of matrices. We analyze the performance of the algorithm for the partition function approximation for models with binary variables and pairwise interactions on grids and other planar graphs. We study in detail both the loop series and the equivalent Pfaffian series and show that the first term of the Pfaffian series for the general, intractable planar model can provide very accurate approximations. The algorithm outperforms previous truncation schemes of the loop series and is competitive with other state-of-the-art methods for approximate inference.
Approximate inference on planar graphs using Loop Calculus and Belief Propagation
498
When a considerable number of mutations have no effect on fitness values, the fitness landscape is said to be neutral. In order to study the interplay between neutrality, which exists in many real-world applications, and the performance of metaheuristics, it is useful to design landscapes that make it possible to tune the neutral degree distribution precisely. Even though many neutral landscape models have already been designed, none of them is general enough to create landscapes with specific neutral degree distributions. We propose three steps to design such landscapes: first, using an algorithm, we construct a landscape whose distribution roughly fits the target one; then we use a simulated annealing heuristic to bring the two distributions closer; and finally we assign fitness values to each neutral network. Using this new family of fitness landscapes, we are able to highlight the interplay between deceptiveness and neutrality.
Deceptiveness and Neutrality - the ND family of fitness landscapes
499
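The quantity that such landscape design targets can be illustrated with a small sketch: the neutral degree of a bit string (the number of one-bit-flip neighbours with identical fitness) and its distribution over a toy landscape. The fitness function below is a toy with plateaus, not one of the ND landscapes of the paper.

```python
# Sketch: neutral degree of a bit string (number of one-bit-flip neighbours
# with the same fitness) and its distribution over a small toy landscape.
from collections import Counter
from itertools import product

def neighbours(s):
    return [s[:i] + ('1' if s[i] == '0' else '0') + s[i + 1:] for i in range(len(s))]

def neutral_degree(s, fitness):
    return sum(1 for n in neighbours(s) if fitness(n) == fitness(s))

def neutral_degree_distribution(length, fitness):
    degrees = [neutral_degree(''.join(bits), fitness)
               for bits in product('01', repeat=length)]
    return Counter(degrees)

if __name__ == "__main__":
    # Toy fitness with plateaus: the number of ones, capped at 3.
    def fitness(s):
        return min(s.count('1'), 3)
    print(neutral_degree_distribution(6, fitness))
```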