Human decisional processes result from the employment of selected quantities of relevant information, generally synthesized from environmental incoming data and stored memories. Their main goal is the production of an appropriate and adaptive response to a cognitive or behavioral task. Different strategies of response production can be adopted, among them haphazard trials, the formation of mental schemes, and heuristics. In this paper, we propose a Boolean neural network model that incorporates these strategies by resorting to global optimization strategies during the learning session. The model also characterizes the passage from an unstructured/chaotic attractor neural network, typical of data-driven processes, to a faster, forward-only one representative of schema-driven processes. Moreover, a simplified version of the Iowa Gambling Task (IGT) is introduced in order to test the model. Our results match experimental data and corroborate relevant findings from the psychological domain.
Decisional Processes with Boolean Neural Network: the Emergence of Mental Schemes
600
The Internet and expert systems have offered new ways of sharing and distributing knowledge, but there is a lack of research in the area of web-based expert systems. This paper introduces the development of a web-based expert system, named RCSES, for the regulations of the civil service in the Kingdom of Saudi Arabia. It is the first system of its kind, both as an application of civil service regulations and in being developed with a web-based approach. The proposed system covers 17 regulations of the civil service system. The different phases of developing RCSES are presented: knowledge acquisition and selection, and ontology and knowledge representation in XML format. The XML rule-based knowledge sources and the inference mechanisms were implemented using ASP.NET. An interactive tool was built for entering the ontology and knowledge base and for performing inference; it provides the ability to use, modify, update, and extend the existing knowledge base in an easy way. The knowledge was validated by experts in the domain of civil service regulations, and the proposed RCSES was tested, verified, and validated by different technical users and the development staff. The RCSES system was compared with other related web-based expert systems, and the comparison demonstrated the soundness, usability, and high performance of RCSES.
Web-Based Expert System for Civil Service Regulations: RCSES
601
Many engineering optimization problems can be considered as linear programming problems where all or some of the parameters involved are linguistic in nature. These can only be quantified using fuzzy sets. The aim of this paper is to solve a fuzzy linear programming problem in which the parameters involved are fuzzy quantities with logistic membership functions. To explore the applicability of the method, a numerical example is considered to determine the monthly production planning quotas and profit of a home textile group.
Application of a Fuzzy Programming Technique to Production Planning in the Textile Industry
602
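To make the approach in the fuzzy linear programming abstract above concrete, here is a minimal Python sketch (an illustration under stated assumptions, not the paper's model): two hypothetical fuzzy goals, profit and capacity, each scored by a logistic membership function and aggregated by the max-min (Bellman-Zadeh) rule; all coefficients and names (profit, capacity_sat, the quota bounds) are invented for the example.

import numpy as np
from scipy.optimize import minimize_scalar

def logistic(x, c, a):
    # Logistic membership: satisfaction rises (a > 0) or falls (a < 0) around centre c.
    return 1.0 / (1.0 + np.exp(-a * (x - c)))

# Toy production planning: choose a monthly quota q (units); numbers are hypothetical.
profit = lambda q: 40.0 * q - 0.02 * q ** 2                 # concave profit model
capacity_sat = lambda q: logistic(q, c=900.0, a=-0.02)      # prefer quotas below ~900
profit_sat = lambda q: logistic(profit(q), c=15000.0, a=0.001)

# Max-min aggregation: maximize the smaller of the two satisfaction degrees.
res = minimize_scalar(lambda q: -min(capacity_sat(q), profit_sat(q)),
                      bounds=(0.0, 1500.0), method="bounded")
print(f"quota ~ {res.x:.0f} units, overall satisfaction = {-res.fun:.3f}")

The max-min step is one common way to turn fuzzy goals into a crisp optimization; the paper may well use a different aggregation.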
The Mamdani Fuzzy Model is an important technique in the study of Computational Intelligence (CI). This paper presents an implementation of a supervised learning method based on membership function training in the context of Mamdani fuzzy models. Specifically, the auto zoom function of a digital camera is modelled using the Mamdani technique. The performance of the control method is verified through a series of simulations, and numerical results are provided as illustrations.
The Application of Mamdani Fuzzy Model for Auto Zoom Function of a Digital Camera
603
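As a companion to the Mamdani abstract above, a minimal Mamdani inference sketch in Python (hypothetical rules and universes, not the authors' camera model): triangular memberships, min for rule firing, max for aggregation, and centroid defuzzification.

import numpy as np

def tri(x, a, b, c):
    # Triangular membership with feet a, c and peak b.
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def mamdani_zoom(subject_distance):
    zoom = np.linspace(0.0, 10.0, 1001)               # output universe: zoom level
    near = tri(subject_distance, -1.0, 0.0, 5.0)      # input memberships (assumed)
    far = tri(subject_distance, 2.0, 10.0, 11.0)
    # Rules: IF subject near THEN zoom low; IF subject far THEN zoom high.
    low = np.minimum(near, tri(zoom, -1.0, 0.0, 5.0))     # min implication
    high = np.minimum(far, tri(zoom, 5.0, 10.0, 11.0))
    agg = np.maximum(low, high)                       # max aggregation
    return (zoom * agg).sum() / agg.sum()             # centroid defuzzification

print(mamdani_zoom(8.0))  # a far subject yields a high zoom level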
In this book we introduce a new procedure called the \alpha-Discounting Method for Multi-Criteria Decision Making (\alpha-D MCDM), which is an alternative to, and extension of, Saaty's Analytical Hierarchy Process (AHP). It works for any number of preferences that can be transformed into a system of homogeneous linear equations. A degree of consistency (and implicitly a degree of inconsistency) of a decision-making problem is defined. \alpha-D MCDM is afterwards generalized to a set of preferences that can be transformed into a system of linear and/or non-linear, homogeneous and/or non-homogeneous, equations and/or inequalities. The general idea of \alpha-D MCDM is to assign non-null positive parameters \alpha_1, \alpha_2, ..., \alpha_p to the coefficients in the right-hand side of each preference, diminishing or increasing them in order to transform the linear homogeneous system of equations, which has only the null solution, into a system having a particular non-null solution. After finding the general solution of this system, the principles used to assign particular values to all parameters \alpha form the second important part of \alpha-D, yet to be investigated more deeply in the future. In the current book we propose the Fairness Principle, i.e. each coefficient should be discounted by the same percentage (we think this is fair: not showing favoritism or unfairness to any coefficient), but the reader can propose other principles. For consistent decision-making problems with pairwise comparisons, the \alpha-Discounting Method together with the Fairness Principle gives the same result as AHP. But for weakly inconsistent decision-making problems, \alpha-Discounting together with the Fairness Principle gives a different result from AHP. Many consistent, weakly inconsistent, and strongly inconsistent examples are given in this book.
$α$-Discounting Multi-Criteria Decision Making ($α$-D MCDM)
604
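A worked illustration of the Fairness Principle from the \alpha-D MCDM book above, on a hypothetical three-preference instance (not one of the book's own examples). Take the preferences x = 2y, y = 3z, x = 5z, which are weakly inconsistent because 2 \cdot 3 \neq 5. Discounting parameterizes the system as
\[ x = 2\alpha_1 y, \qquad y = 3\alpha_2 z, \qquad x = 5\alpha_3 z, \]
and a non-null solution exists only if the determinant vanishes, i.e. $6\alpha_1\alpha_2 = 5\alpha_3$. The Fairness Principle sets $\alpha_1 = \alpha_2 = \alpha_3 = \alpha$, so $6\alpha^2 = 5\alpha$ and $\alpha = 5/6$, giving $x = \frac{5}{3}y$, $y = \frac{5}{2}z$ and the priority vector $(x, y, z) \propto (25, 15, 6)$.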
This paper proposes a design for a system to generate constraint solvers that are specialised for specific problem models. It describes the design in detail and gives preliminary experimental results showing the feasibility and effectiveness of the approach.
Dominion -- A constraint solver generator
605
Machine Consciousness is the study of consciousness from a biological, philosophical, mathematical, and physical perspective, with the goal of designing a model that can fit into a programmable system architecture. The prime objective of the study is to make the system architecture behave consciously, as a biological model does. The present work develops a feasible definition of consciousness that characterizes it with four parameters: parasitic, symbiotic, self-referral, and reproduction. It also develops a biologically inspired consciousness architecture with the following layers: a quantum layer, a cellular layer, an organ layer, and a behavioral layer, and traces the characteristics of consciousness at each layer. Finally, the work outlines a physical and algorithmic architecture for devising a system that can behave consciously.
Logical Evaluation of Consciousness: For Incorporating Consciousness into Machine Architecture
606
To study the communication between information systems, Wang et al. [C. Wang, C. Wu, D. Chen, Q. Hu, and C. Wu, Communicating between information systems, Information Sciences 178 (2008) 3228-3239] proposed two concepts of type-1 and type-2 consistent functions. Some properties of such functions and induced relation mappings have been investigated there. In this paper, we provide an improvement of the aforementioned work by disclosing the symmetric relationship between type-1 and type-2 consistent functions. We present more properties of consistent functions and induced relation mappings and improve upon several deficient assertions in the original work. In particular, we unify and extend type-1 and type-2 consistent functions into the so-called neighborhood-consistent functions. This provides a convenient means for studying the communication between information systems based on various neighborhoods.
Some improved results on communication between information systems
607
Recently, Wang et al. discussed the properties of fuzzy information systems under homomorphisms in the paper [C. Wang, D. Chen, L. Zhu, Homomorphisms between fuzzy information systems, Applied Mathematics Letters 22 (2009) 1045-1050], where homomorphisms are based upon the concepts of consistent functions and fuzzy relation mappings. In this paper, we classify consistent functions as predecessor-consistent and successor-consistent, and then proceed to present more properties of consistent functions. In addition, we improve some characterizations of fuzzy relation mappings provided by Wang et al.
Homomorphisms between fuzzy information systems revisited
608
In this paper, we present research on an AI-based modeling technique to optimize the development of new alloys, with the required improvements in properties and chemical composition over existing alloys according to the functional requirements of the product. The novelty of the current work lies in using AI predictions to establish the association between material characteristics and product requirements. Advanced computational simulation techniques such as CFD and FEA are used to validate product dynamics against experimental investigations. Accordingly, the current research focuses on establishing relationships between the material design and product design domains. The inputs to the feed-forward back-propagation prediction network consist of material design features, and parameters relevant to product design strategies are furnished as target outputs. The ANN outcomes show good correlation between the material and product design domains. The study opens a new path for accounting for material factors during new product development.
Establishment of Relationships between Material Design and Product Design Domains by Hybrid FEM-ANN Technique
609
Currently, a criminal profile (CP) is obtained from investigators' or forensic psychologists' interpretations, linking crime scene characteristics and an offender's behavior to his or her characteristics and psychological profile. This paper seeks an efficient and systematic discovery of non-obvious and valuable patterns between variables from a large database of solved cases via a probabilistic network (PN) modeling approach. The PN structure can be used to extract behavioral patterns and to gain insight into what factors influence these behaviors. Thus, when a new case is being investigated and the profile variables are unknown because the offender has yet to be identified, the observed crime scene variables are used to infer the unknown variables based on their connections in the structure and the corresponding numerical (probabilistic) weights. The objective is to produce a more systematic and empirical approach to profiling, and to use the resulting PN model as a decision tool.
Modeling of Human Criminal Behavior using Probabilistic Networks
610
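The inference step described in the criminal-profiling abstract above reduces to Bayesian updating over the network. A minimal Python sketch with hypothetical variables and probabilities (not the paper's learned network): a two-node model P(A) -> P(S|A), where A is an unobserved offender attribute and S an observed crime-scene variable.

p_a = {"organized": 0.4, "disorganized": 0.6}       # prior on the profile variable
p_s_given_a = {                                     # crime-scene likelihoods (assumed)
    ("weapon_brought", "organized"): 0.7,
    ("weapon_brought", "disorganized"): 0.2,
}

def posterior(s_obs):
    joint = {a: p_a[a] * p_s_given_a[(s_obs, a)] for a in p_a}
    z = sum(joint.values())                         # evidence P(S = s_obs)
    return {a: p / z for a, p in joint.items()}

print(posterior("weapon_brought"))  # -> organized ~0.70, disorganized ~0.30

A real PN learned from solved cases would have many such variables; exact or approximate inference then propagates the observed crime-scene values through the structure in the same spirit.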
Constraint programming can definitely be seen as a model-driven paradigm. The users write programs for modeling problems. These programs are mapped to executable models to calculate the solutions. This paper focuses on efficient model management (definition and transformation). From this point of view, we propose to revisit the design of constraint-programming systems. A model-driven architecture is introduced to map solving-independent constraint models to solving-dependent decision models. Several important questions are examined, such as the need for a visual high-level modeling language, and the quality of metamodeling techniques to implement the transformations. A main result is the s-COMMA platform that efficiently implements the chain from modeling to solving constraint problems.
Model-Driven Constraint Programming
611
An important challenge in constraint programming is to rewrite constraint models into executable programs calculating the solutions. This phase of constraint processing may require translations between constraint programming languages, transformations of constraint representations, model optimizations, and tuning of solving strategies. In this paper, we introduce a pivot metamodel describing the common features of constraint models including different kinds of constraints, statements like conditionals and loops, and other first-class elements like object classes and predicates. This metamodel is general enough to cope with the constructions of many languages, from object-oriented modeling languages to logic languages, but it is independent from them. The rewriting operations manipulate metamodel instances apart from languages. As a consequence, the rewriting operations apply whatever languages are selected and they are able to manage model semantic information. A bridge is created between the metamodel space and languages using parsing techniques. Tools from the software engineering world can be useful to implement this framework.
Rewriting Constraint Models with Metamodels
612
Transforming constraint models is an important task in recent constraint programming systems. User-understandable models are defined during the modeling phase but rewriting or tuning them is mandatory to get solving-efficient models. We propose a new architecture allowing to define bridges between any (modeling or solver) languages and to implement model optimizations. This architecture follows a model-driven approach where the constraint modeling process is seen as a set of model transformations. Among others, an interesting feature is the definition of transformations as concept-oriented rules, i.e. based on types of model elements where the types are organized into a hierarchy called a metamodel.
Using ATL to define advanced and flexible constraint model transformations
613
The methodology of Bayesian Model Averaging (BMA) is applied for assessment of newborn brain maturity from sleep EEG. In theory this methodology provides the most accurate assessments of uncertainty in decisions. However, the existing BMA techniques have been shown to provide biased assessments in the absence of prior information enabling the model parameter space to be explored in detail within a reasonable time. This lack of detail leads to disproportionate sampling from the posterior distribution. In the case of EEG assessment of brain maturity, BMA results can be biased because of the absence of information about EEG feature importance. In this paper we explore how posterior information about EEG features can be used to reduce the negative impact of disproportionate sampling on BMA performance. We use EEG data recorded from sleeping newborns to test the efficiency of the proposed BMA technique.
Feature Importance in Bayesian Assessment of Newborn Brain Maturity from EEG
614
In this paper we describe an original computational model for solving different types of Distributed Constraint Satisfaction Problems (DCSP). The proposed model, called Controller-Agents for Constraints Solving (CACS), is intended for a field that has emerged from the integration of two paradigms of different nature: Multi-Agent Systems (MAS) and the Constraint Satisfaction Problem (CSP) paradigm, where all constraints are treated centrally as a black box. The model allows grouping constraints to form a subset that is treated together as a local problem inside the controller. It also allows handling non-binary constraints easily and directly, so that no translation of constraints into binary ones is needed. This paper presents the implementation outline of a prototype DCSP solver, its usage methodology, and an overview of the application of CACS to timetabling problems.
A new model for solution of complex distributed constrained problems
615
Model transformations operate on models conforming to precisely defined metamodels. Consequently, it often seems relatively easy to chain them: the output of a transformation may be given as input to a second one if metamodels match. However, this simple rule has some obvious limitations. For instance, a transformation may only use a subset of a metamodel. Therefore, chaining transformations appropriately requires more information. We present here an approach that automatically discovers more detailed information about actual chaining constraints by statically analyzing transformations. The objective is to provide developers who decide to chain transformations with more data on which to base their choices. This approach has been successfully applied to the case of a library of endogenous transformations. They all have the same source and target metamodel but have some hidden chaining constraints. In such a case, the simple metamodel matching rule given above does not provide any useful information.
Automatically Discovering Hidden Transformation Chaining Constraints
616
This article presents the results of research carried out on the development of a medical diagnostic system applied to acute bacterial meningitis, using the Case-Based Reasoning methodology. The research was focused on the implementation of the adaptation stage, through the integration of Case-Based Reasoning and Rule-Based Expert Systems. In this adaptation stage we use a higher-level CBR component that stores and allows reusing change experiences, combined with a classic rule-based inference engine. In order to take into account the most evident clinical situation, a pre-diagnosis stage is implemented using a rule engine that, given an evident situation, emits the corresponding diagnosis and avoids the complete process.
Integration of Rule Based Expert Systems and Case Based Reasoning in an Acute Bacterial Meningitis Clinical Decision Support System
617
Recent advancements in web services play an important role in business-to-business and business-to-consumer interaction. The discovery mechanism is used not only to find a suitable service but also to provide collaboration between service providers and consumers through standard protocols. A static web service discovery mechanism is not only time consuming but requires continuous human interaction. This paper proposes an efficient dynamic web service discovery mechanism that can locate relevant and updated web services from service registries and repositories, using timestamps, indexing values, and categorization for faster and more efficient discovery. The proposed prototype focuses on quality-of-service issues and introduces the concepts of a local cache, categorization of services, an indexing mechanism, a CSP (Constraint Satisfaction Problem) solver, aging, and the use of a translator. The performance of the proposed framework is evaluated by implementing the algorithm, and the correctness of our method is shown. The results show greater performance and accuracy in dynamic discovery of web services, resolving the existing issues of flexibility and scalability based on quality of service, and discovering updated and highly relevant services with ease of use.
Indexer Based Dynamic Web Services Discovery
618
Fuzzy Description Logics (DLs) are a family of logics which allow the representation of (and the reasoning with) structured knowledge affected by vagueness. Although most of the not very expressive crisp DLs, such as ALC, enjoy the Finite Model Property (FMP), this is not the case once we move into the fuzzy setting. In this paper we show that if we allow arbitrary knowledge bases, then the fuzzy DL ALC under Lukasiewicz and Product fuzzy logics does not satisfy the FMP even if we restrict to witnessed models; in other words, finite satisfiability and witnessed satisfiability are different for arbitrary knowledge bases. The aim of this paper is to point out this failure of the FMP because it affects several algorithms published in the literature for reasoning under fuzzy ALC.
On the Failure of the Finite Model Property in some Fuzzy Description Logics
619
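For reference alongside the fuzzy-DL abstract above, the two semantics in question interpret conjunction and implication as follows (standard definitions, not specific to that paper):
\[ \text{Lukasiewicz:}\quad x \otimes y = \max(0,\, x + y - 1), \qquad x \Rightarrow y = \min(1,\, 1 - x + y); \]
\[ \text{Product:}\quad x \otimes y = x \cdot y, \qquad x \Rightarrow y = 1 \text{ if } x \le y, \text{ else } y/x. \]
A model is witnessed when the infima and suprema in the semantics of the quantifiers are attained, e.g. $(\forall R.C)^{\mathcal{I}}(x) = \min_y \{\, R^{\mathcal{I}}(x,y) \Rightarrow C^{\mathcal{I}}(y) \,\}$ with the minimum realized by some individual $y$.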
The basic aim of our study is to give a possible model for handling uncertain information. This model is worked out in the framework of DATALOG. First the concept of fuzzy Datalog is summarized, then its extensions to intuitionistic- and interval-valued fuzzy logic are given, and the concept of bipolar fuzzy Datalog is introduced. Based on these ideas, the concept of a multivalued knowledge-base is defined as a quadruple of background knowledge, a deduction mechanism, a connecting algorithm, and a function set of the program, which together help us determine the uncertainty levels of the results. Finally, a possible evaluation strategy is given.
A multivalued knowledge-base model
620
RefereeToolbox is a Java package implementing combination operators for fusing evidence. It is downloadable from: http://refereefunction.fredericdambreville.com/releases RefereeToolbox is based on an interpretation of the fusion rules by means of referee functions. This approach implies a dissociation between the definition of the combination and its actual implementation, which is common to all referee-based combinations. As a result, RefereeToolbox is designed with the aim of being generic and evolvable.
Release ZERO.0.1 of package RefereeToolbox
621
LEXSYS (Legume Expert System) was a project conceived at IITA (International Institute of Tropical Agriculture), Ibadan, Nigeria. It was initiated by COMBS (Collaborative Group on Maize-Based Systems Research) in the 1990s. It was meant to provide a general framework for characterizing on-farm testing for technology design in sustainable cereal-based cropping systems. LEXSYS is not a true expert system as the name would imply, but simply a user-friendly information system. This work is an attempt to give a formal representation of the existing system and then present areas where intelligent agents can be applied.
LEXSYS: Architecture and Implication for Intelligent Agent systems
622
Computing value of information (VOI) is a crucial task in various aspects of decision-making under uncertainty, such as in meta-reasoning for search; in selecting measurements to make, prior to choosing a course of action; and in managing the exploration vs. exploitation tradeoff. Since such applications typically require numerous VOI computations during a single run, it is essential that VOI be computed efficiently. We examine the issue of anytime estimation of VOI, as frequently it suffices to get a crude estimate of the VOI, thus saving considerable computational resources. As a case study, we examine VOI estimation in the measurement selection problem. Empirical evaluation of the proposed scheme in this domain shows that computational resources can indeed be significantly reduced, at little cost in expected rewards achieved in the overall decision problem.
Rational Value of Information Estimation for Measurement Selection
623
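The quantity estimated in the abstract above is the standard expected value of information of a measurement $M$ (textbook definition, stated in generic decision-theoretic notation):
\[ \mathrm{VOI}(M) \;=\; \mathbb{E}_{m \sim M}\Big[\max_a \mathbb{E}[U \mid a, m]\Big] \;-\; \max_a \mathbb{E}[U \mid a], \]
the expected utility gain from acting after observing the outcome $m$ rather than acting immediately. An anytime scheme replaces the outer expectation with a progressively refined sample estimate, stopping once the estimate is accurate enough for the surrounding decision.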
The formalism based on Geometric Algebra (GA) is an alternative to the distributed representation models developed so far: Smolensky's tensor product, Holographic Reduced Representations (HRR), and Binary Spatter Code (BSC). Convolutions are replaced by geometric products, interpretable in terms of geometry, which seems to be the most natural language for visualization of higher concepts. This paper recalls the main ideas behind the GA model and investigates recognition test results using both the inner product and a clipped version of the matrix representation. The influence of accidental blade equality on recognition is also studied. Finally, the efficiency of the GA model is compared to that of the previously developed models.
Geometric Algebra Model of Distributed Representations
624
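A minimal sketch of one of the baseline models named in the GA abstract above, the Binary Spatter Code (illustrating BSC only, not the GA model): binding is bitwise XOR, bundling is a bitwise majority vote, and recognition uses normalized Hamming similarity. The 10,000-bit dimension and the role/filler names are illustrative assumptions.

import numpy as np
rng = np.random.default_rng(0)
D = 10_000
vec = lambda: rng.integers(0, 2, D, dtype=np.uint8)

role_color, role_shape, role_size = vec(), vec(), vec()
red, circle, small = vec(), vec(), vec()

bind = lambda a, b: a ^ b                          # XOR binding (self-inverse)
def bundle(*vs):                                   # bitwise majority vote
    return (np.sum(vs, axis=0) * 2 > len(vs)).astype(np.uint8)
sim = lambda a, b: 1.0 - np.mean(a != b)           # 1 - normalized Hamming distance

record = bundle(bind(role_color, red),
                bind(role_shape, circle),
                bind(role_size, small))
probe = bind(record, role_color)                   # unbind the colour role
print(sim(probe, red), sim(probe, circle))         # ~0.75 vs ~0.50 (chance level)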
We present in this paper some examples of how to compute by hand the PCR5 fusion rule for three sources, so the reader will better understand its mechanism. We also take into consideration the importance of sources, which is different from the classical discounting of sources.
Importance of Sources using the Repeated Fusion Method and the Proportional Conflict Redistribution Rules #5 and #6
625
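For reference, the PCR5 rule whose hand computations the paper above walks through has, for two sources and $X \neq \emptyset$, the standard form (terms with a null denominator are discarded):
\[ m_{\mathrm{PCR5}}(X) \;=\; m_{12}(X) \;+\; \sum_{\substack{Y \in 2^\Theta \\ X \cap Y = \emptyset}} \left[ \frac{m_1(X)^2\, m_2(Y)}{m_1(X) + m_2(Y)} \;+\; \frac{m_2(X)^2\, m_1(Y)}{m_2(X) + m_1(Y)} \right], \]
where $m_{12}(X) = \sum_{Y \cap Z = X} m_1(Y)\, m_2(Z)$ is the conjunctive consensus; the paper's three-source examples and the importance weighting build on this two-source case.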
Terrorism has led to many problems in Thai society, causing not only property damage but also civilian casualties. Predicting terrorism activities in advance can help prepare for and manage the risk of sabotage by these activities. This paper proposes a framework focusing on event classification in the terrorism domain using fuzzy inference systems (FISs). Each FIS is a decision-making model combining fuzzy logic and approximate reasoning, built from five main parts: the input interface, the fuzzification interface, the knowledge base unit, the decision-making unit, and the output defuzzification interface. The adaptive neuro-fuzzy inference system (ANFIS) is a FIS model adapted by combining fuzzy logic and a neural network; it provides automatic identification of fuzzy logic rules and adjustment of membership functions (MFs), and the neural network can learn directly from a data set to construct the fuzzy logic rules and MFs, as implemented in various applications. FIS settings are evaluated through two comparisons. The first is a comparison between unstructured and structured events using the same FIS setting; the second compares the model settings of FIS and ANFIS for classifying structured events. The data set consists of news articles related to terrorism events in three southern provinces of Thailand. The experimental results show that the classification performance of the FIS on structured events achieves satisfactory accuracy and is better than on unstructured events. In addition, the classification of structured events using ANFIS gives higher performance than using the FIS alone in the prediction of terrorism events.
Terrorism Event Classification Using Fuzzy Inference Systems
626
Most web users' requirements concern search or navigation time and correctly matched results. These constraints can be satisfied with some additional modules attached to existing search engines and web servers. This paper proposes a powerful architecture for search engines, titled Probabilistic Semantic Web Mining after the methods used. With larger and larger collections of various data resources on the World Wide Web (WWW), Web mining has become one of the most important requirements for web users. Web servers store various formats of data, including text, image, audio, and video, but servers cannot identify the contents of the data. Search techniques can be improved by adding special techniques, including semantic web mining and probabilistic analysis, to get more accurate results. Semantic web mining can provide meaningful search of data resources by eliminating useless information during the mining process. In this technique, web servers maintain meta-information about each data resource available on that particular server, which helps the search engine retrieve information relevant to the user's input string. This paper proposes combining these two techniques, semantic web mining and probabilistic analysis, for efficient and accurate web mining search results. The SPF is calculated by considering both the semantic accuracy and the syntactic accuracy of the data with respect to the input string, and serves as the deciding factor for producing results.
Probabilistic Semantic Web Mining Using Artificial Neural Analysis
627
The Nystrom method is an efficient technique to speed up large-scale learning applications by generating low-rank approximations. Crucial to the performance of this technique is the assumption that a matrix can be well approximated by working exclusively with a subset of its columns. In this work we relate this assumption to the concept of matrix coherence and connect matrix coherence to the performance of the Nystrom method. Making use of related work in the compressed sensing and the matrix completion literature, we derive novel coherence-based bounds for the Nystrom method in the low-rank setting. We then present empirical results that corroborate these theoretical bounds. Finally, we present more general empirical results for the full-rank setting that convincingly demonstrate the ability of matrix coherence to measure the degree to which information can be extracted from a subset of columns.
Matrix Coherence and the Nystrom Method
628
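A minimal numpy sketch of the Nystrom approximation discussed above: approximate an n x n PSD kernel matrix from l sampled columns as K ~ C W^+ C^T. The RBF kernel, sizes, and uniform sampling are illustrative assumptions; the full K is built only to measure the error.

import numpy as np
rng = np.random.default_rng(0)

X = rng.standard_normal((500, 20))
gamma = 0.05
def rbf(A, B):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

K = rbf(X, X)                          # full kernel, for error measurement only
l = 50                                 # number of sampled columns
idx = rng.choice(len(X), size=l, replace=False)
C = rbf(X, X[idx])                     # n x l block of sampled columns
W = C[idx, :]                          # l x l intersection block
K_nys = C @ np.linalg.pinv(W) @ C.T    # rank-l Nystrom approximation

print(np.linalg.norm(K - K_nys) / np.linalg.norm(K))  # relative Frobenius error

Low coherence is what makes such a column subset informative; for highly coherent matrices the same l uniformly sampled columns can miss most of the spectrum, which is the connection the paper quantifies.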
We define the concept of an internal symmetry. This is a symmetry within a solution of a constraint satisfaction problem. We compare this to solution symmetry, which is a mapping between different solutions of the same problem. We argue that we may be able to exploit both types of symmetry when finding solutions. We illustrate the potential of exploiting internal symmetries on two benchmark domains: Van der Waerden numbers and graceful graphs. By identifying internal symmetries we are able to extend the state of the art in both cases.
Symmetry within Solutions
629
We study propagation algorithms for the conjunction of two AllDifferent constraints. Solutions of an AllDifferent constraint can be seen as perfect matchings on the variable/value bipartite graph. Therefore, we investigate the problem of finding simultaneous bipartite matchings. We present an extension of the famous Hall theorem which characterizes when simultaneous bipartite matchings exist. Unfortunately, finding such matchings is NP-hard in general. However, we prove the surprising result that finding a simultaneous matching on a convex bipartite graph takes just polynomial time. Based on this theoretical result, we provide the first polynomial-time bound consistency algorithm for the conjunction of two AllDifferent constraints. We identify a pathological problem on which this propagator is exponentially faster than existing propagators. Our experiments show that this new propagator can offer significant benefits over existing methods.
Propagating Conjunctions of AllDifferent Constraints
630
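The matching view in the abstract above is easy to make concrete for a single AllDifferent constraint. A minimal Python sketch (illustrative; not the paper's simultaneous-matching propagator): the constraint is satisfiable iff the variable/value bipartite graph has a matching covering all variables, per Hall's theorem, found here with Kuhn's augmenting-path algorithm.

def all_different_feasible(domains):
    match = {}                                   # value -> variable currently using it

    def try_assign(var, seen):
        for val in domains[var]:
            if val in seen:
                continue
            seen.add(val)
            # Value is free, or its current owner can be re-matched elsewhere.
            if val not in match or try_assign(match[val], seen):
                match[val] = var
                return True
        return False

    return all(try_assign(v, set()) for v in domains)

print(all_different_feasible({"x": {1, 2}, "y": {1, 2}, "z": {1, 2}}))   # False: Hall violation
print(all_different_feasible({"x": {1, 2}, "y": {2, 3}, "z": {1, 3}}))   # True

The paper's subject, two overlapping AllDifferent constraints, asks for one assignment that is simultaneously a matching in both graphs, which is where the NP-hardness (and the convex-graph exception) arises.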
The successful execution of a construction project is heavily impacted by making the right decisions during tendering processes. Managing tender procedures is complex and uncertain, involving the coordination of many tasks and individuals with different priorities and objectives. Biased and inconsistent decisions are inevitable if the decision-making process depends entirely on intuition, subjective judgement, or emotion. To make transparent decisions and ensure healthy competition in tendering, a flexible guidance tool for decision support is needed. The aim of this paper is to review current practices of Decision Support Systems (DSS) technology in construction tendering processes. Current practices of general tendering processes, as applied in most countries in different regions such as the United States, Europe, the Middle East, and Asia, are comprehensively discussed. Applications of web-based tendering processes are also summarised in terms of their properties. A summary of Decision Support System (DSS) components is included in the next section, and prior research on the implementation of DSS approaches in tendering processes is discussed in detail. Current issues arising from both paper-based and web-based tendering processes are outlined. Finally, a conclusion is provided at the end of this paper.
Decision Support Systems (DSS) in Construction Tendering Processes
631
The need for integration of ontologies with nonmonotonic rules has been gaining importance in a number of areas, such as the Semantic Web. A number of researchers addressed this problem by proposing a unified semantics for hybrid knowledge bases composed of both an ontology (expressed in a fragment of first-order logic) and nonmonotonic rules. These semantics have matured over the years, but only provide solutions for the static case when knowledge does not need to evolve. In this paper we take a first step towards addressing the dynamics of hybrid knowledge bases. We focus on knowledge updates and, considering the state of the art of belief update, ontology update and rule update, we show that current solutions are only partial and difficult to combine. Then we extend the existing work on ABox updates with rules, provide a semantics for such evolving hybrid knowledge bases and study its basic properties. To the best of our knowledge, this is the first time that an update operator is proposed for hybrid knowledge bases.
Towards Closed World Reasoning in Dynamic Open Worlds (Extended Version)
632
On the basis of an analysis of previous research, we present a generalized approach for measuring the difference of plans with an exemplary application to machine scheduling. Our work is motivated by the need for such measures, which are used in dynamic scheduling and planning situations. In this context, quantitative approaches are needed for the assessment of the robustness and stability of schedules. Obviously, any 'robustness' or 'stability' of plans has to be defined w.r.t. the particular situation and the requirements of the human decision maker. Besides the proposition of an instability measure, we therefore discuss possibilities of obtaining meaningful information from the decision maker for the implementation of the introduced approach.
On the comparison of plans: Proposition of an instability measure for dynamic machine scheduling
633
We define an inference system to capture explanations based on causal statements, using an ontology in the form of an IS-A hierarchy. We first introduce a simple logical language which makes it possible to express that a fact causes another fact and that a fact explains another fact. We present a set of formal inference patterns from causal statements to explanation statements. We introduce an elementary ontology which gives greater expressiveness to the system while staying close to propositional reasoning. We provide an inference system that captures the patterns discussed, firstly in a purely propositional framework, then in a datalog (limited predicate) framework.
Ontology-based inference for causal explanation
634
We report (to our knowledge) the first evaluation of Constraint Satisfaction as a computational framework for solving closest string problems. We show that careful consideration of symbol occurrences can provide search heuristics that provide several orders of magnitude speedup at and above the optimal distance. We also report (to our knowledge) the first analysis and evaluation -- using any technique -- of the computational difficulties involved in the identification of all closest strings for a given input set. We describe algorithms for web-scale distributed solution of closest string problems, both purely based on AI backtrack search and also hybrid numeric-AI methods.
The Exact Closest String Problem as a Constraint Satisfaction Problem
635
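To pin down the problem statement in the abstract above, a brute-force Python sketch (a toy baseline, not the paper's CSP formulation): find every string minimizing the maximum Hamming distance to the input set. The search is exponential in string length, so this only serves to define the objective.

from itertools import product

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def closest_strings(strings, alphabet="ACGT"):
    n = len(strings[0])
    best_d, best = n + 1, []
    for cand in product(alphabet, repeat=n):
        s = "".join(cand)
        d = max(hamming(s, t) for t in strings)
        if d < best_d:
            best_d, best = d, [s]
        elif d == best_d:
            best.append(s)            # collect *all* closest strings
    return best_d, best

d, sols = closest_strings(["ACGT", "ACGA", "TCGT"])
print(d, sols)                        # optimal distance and all optimal strings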
We consider the problem of jointly training structured models for extraction from sources whose instances enjoy partial overlap. This has important applications like user-driven ad-hoc information extraction on the web. Such applications present new challenges in terms of the number of sources and their arbitrary pattern of overlap not seen by earlier collective training schemes applied on two sources. We present an agreement-based learning framework and alternatives within it to trade-off tractability, robustness to noise, and extent of agreement. We provide a principled scheme to discover low-noise agreement sets in unlabeled data across the sources. Through extensive experiments over 58 real datasets, we establish that our method of additively rewarding agreement over maximal segments of text provides the best trade-offs, and also scores over alternatives such as collective inference, staged training, and multi-view learning.
Joint Structured Models for Extraction from Overlapping Sources
636
A technique to study the dynamics of solving a research task is suggested. The research task was based on specially developed software, the Right-Wrong Responder (RWR), with the participants having to reveal the response logic of the program. The participants interacted with the program in the form of a semi-binary dialogue, which implies feedback responses of only two kinds - "right" or "wrong". The technique has been applied to a small pilot group of volunteer participants. Some of them successfully solved the task (solvers) and some did not (non-solvers). In the beginning of the work, the solvers made more wrong moves than non-solvers, and they made fewer wrong moves closer to the finish of the work. A phase portrait of the work in both solvers and non-solvers showed definite cycles that may correspond to sequences of partially true hypotheses formulated by the participants during the solving of the task.
An approach to visualize the course of solving of a research task in humans
637
This paper constructively proves the existence of an effective procedure generating a computable (total) function that is not contained in any given effectively enumerable set of such functions. The proof implies the existence of machines that process informal concepts such as computable (total) functions beyond the limits of any given Turing machine or formal system, that is, these machines can, in a certain sense, "compute" function values beyond these limits. We call these machines creative. We argue that any "intelligent" machine should be capable of processing informal concepts such as computable (total) functions, that is, it should be creative. Finally, we introduce hypotheses on creative machines which were developed on the basis of theoretical investigations and experiments with computer programs. The hypotheses say that machine intelligence is the execution of a self-developing procedure starting from any universal programming language and any input.
Informal Concepts in Machines
638
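The core construction behind the abstract above is a diagonalization, stated here in its textbook form: given any effective enumeration $f_0, f_1, f_2, \ldots$ of computable total functions, define
\[ g(n) = f_n(n) + 1. \]
Then $g$ is computable and total (run the enumerator to obtain $f_n$, evaluate it on $n$, add one), yet $g \neq f_i$ for every $i$ because $g(i) \neq f_i(i)$; so $g$ lies outside the enumerated set, which is what licenses the talk of machines working beyond any fixed formal system.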
Mountain river torrents and snow avalanches cause human and material damage with dramatic consequences. Knowledge about natural phenomena is often lacking, and expertise is required for decision and risk management purposes, using multi-disciplinary quantitative or qualitative approaches. Expertise is considered as a decision process based on imperfect information coming from more or less reliable and conflicting sources. We describe a methodology mixing the Analytic Hierarchy Process (AHP), a multi-criteria decision-aid method, with information fusion using belief function theory. Fuzzy set and possibility theories allow quantitative and qualitative criteria to be transformed into a common frame of discernment for decision in the Dempster-Shafer Theory (DST) and Dezert-Smarandache Theory (DSmT) contexts. The main issues concern basic belief assignment elicitation, conflict identification and management, fusion rule choice, and result validation, but also the specific need to distinguish importance from reliability and uncertainty in the fusion process.
A two-step fusion process for multi-criteria decision applied to natural hazards in mountains
639
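For reference alongside the fusion methodology above, the baseline DST combination is Dempster's rule (standard form): for basic belief assignments $m_1, m_2$ and $A \neq \emptyset$,
\[ m(A) = \frac{1}{1 - K} \sum_{B \cap C = A} m_1(B)\, m_2(C), \qquad K = \sum_{B \cap C = \emptyset} m_1(B)\, m_2(C), \]
where $K$ is the mass of conflict between the sources. DSmT rules redistribute this conflicting mass differently rather than normalizing it away, which is one reason the choice of fusion rule is listed among the main issues.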
A lot of mathematical knowledge has been formalized and stored in repositories by now: different mathematical theorems and theories have been taken into consideration and included in mathematical repositories. Applications more distant from pure mathematics, however --- though based on these theories --- often need more detailed knowledge about the underlying theories. In this paper we present an example Mizar formalization from the area of electrical engineering focusing on stability theory which is based on complex analysis. We discuss what kind of special knowledge is necessary here and which amount of this knowledge is included in existing repositories.
On Building a Knowledge Base for Stability Theory
640
It is hypothesized that creativity arises from the self-mending capacity of an internal model of the world, or worldview. The uniquely honed worldview of a creative individual results in a distinctive style that is recognizable within and across domains. It is further hypothesized that creativity is domain-general in the sense that there exist multiple avenues by which the distinctiveness of one's worldview can be expressed. These hypotheses were tested using art students and creative writing students. Art students guessed significantly above chance both which painting was done by which of five famous artists, and which artwork was done by which of their peers. Similarly, creative writing students guessed significantly above chance both which passage was written by which of five famous writers, and which passage was written by which of their peers. These findings support the hypothesis that creative style is recognizable. Moreover, creative writing students guessed significantly above chance which of their peers produced particular works of art, supporting the hypothesis that creative style is recognizable not just within but across domains.
Recognizability of Individual Creative Style Within and Across Domains: Preliminary Studies
641
Approximate dynamic programming has been used successfully in a large variety of domains, but it relies on a small set of provided approximation features to calculate solutions reliably. Large and rich sets of features can cause existing algorithms to overfit because of a limited number of samples. We address this shortcoming using $L_1$ regularization in approximate linear programming. Because the proposed method can automatically select the appropriate richness of features, its performance does not degrade with an increasing number of features. These results rely on new and stronger sampling bounds for regularized approximate linear programs. We also propose a computationally efficient homotopy method. The empirical evaluation of the approach shows that the proposed method performs well on simple MDPs and standard benchmark problems.
Feature Selection Using Regularization in Approximate Linear Programs for Markov Decision Processes
642
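In the notation commonly used for approximate linear programming (a sketch of the standard formulation with the $L_1$ constraint the abstract describes added on; details may differ from the paper): with feature matrix $\Phi$, weights $w$, state-relevance weights $\rho$, and discount $\gamma$,
\[ \min_w \; \rho^{\top} \Phi w \quad \text{s.t.} \quad (\Phi w)(s) \ge r(s,a) + \gamma \sum_{s'} P(s' \mid s,a)\, (\Phi w)(s') \;\; \forall s, a, \qquad \|w\|_1 \le \psi. \]
The constraints enforce $\Phi w \ge T(\Phi w)$ for the Bellman operator $T$, and the regularization budget $\psi$ controls the effective richness of the feature set, which is what keeps a large feature dictionary from causing overfitting.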
We discuss how to use a Genetic Regulatory Network as an evolutionary representation to solve a typical GP reinforcement problem, the pole balancing. The network is a modified version of an Artificial Regulatory Network proposed a few years ago, and the task could be solved only by finding a proper way of connecting inputs and outputs to the network. We show that the representation is able to generalize well over the problem domain, and discuss the performance of different models of this kind.
Evolving Genes to Balance a Pole
643
Programs to solve so-called constraint problems are complex pieces of software which require many design decisions to be made more or less arbitrarily by the implementer. These decisions affect the performance of the finished solver significantly. Once a design decision has been made, it cannot easily be reversed, although a different decision may be more appropriate for a particular problem. We investigate using machine learning to make these decisions automatically depending on the problem to solve with the alldifferent constraint as an example. Our system is capable of making non-trivial, multi-level decisions that improve over always making a default choice.
Using machine learning to make constraint solver implementation decisions
644
In this paper the author presents a soft computing technique, mainly an application of Prof. Zadeh's fuzzy set theory [16], to a problem in medical expert systems. The chosen problem concerns the design of a physician's decision model which can take crisp as well as fuzzy data as input, unlike traditional models. The author presents a mathematical model based on fuzzy set theory for physician-aided evaluation of a complete representation of information emanating from the initial interview, including patient past history, present symptoms, signs observed upon physical examination, and results of clinical and diagnostic tests.
A Soft Computing Model for Physicians' Decision Process
645
An important problem in computational social choice theory is the complexity of undesirable behavior among agents, such as control, manipulation, and bribery in election systems. These kinds of voting strategies are often tempting at the individual level but disastrous for the agents as a whole. Creating election systems where the determination of such strategies is difficult is thus an important goal. An interesting set of elections is that of scoring protocols. Previous work in this area has demonstrated the complexity of misuse in cases involving a fixed number of candidates, and of specific election systems on unbounded number of candidates such as Borda. In contrast, we take the first step in generalizing the results of computational complexity of election misuse to cases of infinitely many scoring protocols on an unbounded number of candidates. Interesting families of systems include $k$-approval and $k$-veto elections, in which voters distinguish $k$ candidates from the candidate set. Our main result is to partition the problems of these families based on their complexity. We do so by showing they are polynomial-time computable, NP-hard, or polynomial-time equivalent to another problem of interest. We also demonstrate a surprising connection between manipulation in election systems and some graph theory problems.
The Complexity of Manipulating $k$-Approval Elections
646
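Concretely, for $m$ candidates the two families studied above use the scoring vectors (standard definitions)
\[ k\text{-approval:}\; (\underbrace{1, \ldots, 1}_{k}, 0, \ldots, 0), \qquad k\text{-veto:}\; (1, \ldots, 1, \underbrace{0, \ldots, 0}_{k}), \]
so each voter awards a point to the $k$ most preferred candidates (respectively, to all but the $k$ least preferred), and a candidate's score is the sum over all votes.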
In the last two decades, a number of methods have been proposed for forecasting based on fuzzy time series, and most of these fuzzy time series methods have been presented for forecasting car road accidents. However, the forecasting accuracy rates of the existing methods are not good enough. In this paper, we compare our proposed new method of fuzzy time series forecasting with existing methods. Our method is based on means-based partitioning of the historical data of car road accidents. The proposed method belongs to the class of kth-order, time-variant methods, and achieves a better forecasting accuracy rate for car road accidents than the existing methods.
Inaccuracy Minimization by Partitioning Fuzzy Data Sets - Validation of Analytical Methodology
647
In this paper, a new learning algorithm for adaptive network intrusion detection using a naive Bayesian classifier and decision tree is presented. It performs balanced detection and keeps false positives at an acceptable level for different types of network attacks, and eliminates redundant attributes as well as contradictory examples from the training data that make the detection model complex. The proposed algorithm also addresses some difficulties of data mining such as handling continuous attributes, dealing with missing attribute values, and reducing noise in training data. Owing to the large volumes of security audit data and the complex and dynamic properties of intrusion behaviours, several data mining-based intrusion detection techniques have been applied to network-based traffic data and host-based data in recent decades. However, various issues remain to be examined in current intrusion detection systems (IDS). We tested the performance of our proposed algorithm against existing learning algorithms on the KDD99 benchmark intrusion detection dataset. The experimental results show that the proposed algorithm achieves high detection rates (DR) and significantly reduces false positives (FP) for different types of network intrusions using limited computational resources.
Combining Naive Bayes and Decision Tree for Adaptive Intrusion Detection
648
This paper presents a combination of several automated reasoning and proof presentation tools with the Mizar system for formalization of mathematics. The combination forms an online service called MizAR, similar to the SystemOnTPTP service for first-order automated reasoning. The main differences from SystemOnTPTP are the use of the Mizar language, which is oriented towards human mathematicians (rather than the pure first-order logic used in SystemOnTPTP), and setting the service in the context of the large Mizar Mathematical Library of previous theorems, definitions, and proofs (rather than the isolated problems that are solved in SystemOnTPTP). These differences pose new challenges and new opportunities for automated reasoning and for proof presentation tools. This paper describes the overall structure of MizAR, and presents the automated reasoning systems and proof presentation tools that are combined to make MizAR a useful mathematical service.
Automated Reasoning and Presentation Support for Formalizing Mathematics in Mizar
649
Structured and semi-structured data describing entities, taxonomies, and ontologies appears in many domains. There is huge interest in integrating structured information from multiple sources; however, integrating structured data to infer complex common structures is a difficult task because the integration must aggregate similar structures while avoiding structural inconsistencies that may appear when the data is combined. In this work, we study the integration of structured social metadata: shallow personal hierarchies specified by many individual users on the Social Web, and focus on inferring a collection of integrated, consistent taxonomies. We frame this task as an optimization problem with structural constraints. We propose a new inference algorithm, which we refer to as Relational Affinity Propagation (RAP), that extends affinity propagation (Frey and Dueck 2007) by introducing structural constraints. We validate the approach on a real-world social media dataset, collected from the photo-sharing website Flickr. Our empirical results show that our proposed approach is able to construct deeper and denser structures compared to an approach using only the standard affinity propagation algorithm.
Integrating Structured Metadata with Relational Affinity Propagation
650
The paper offers a mathematical formalization of the Turing test. This formalization makes it possible to establish the conditions under which some Turing machine will pass the Turing test and the conditions under which every Turing machine (or every Turing machine of the special class) will fail the Turing test.
A Formalization of the Turing Test
651
Many social Web sites allow users to annotate the content with descriptive metadata, such as tags, and more recently to organize content hierarchically. These types of structured metadata provide valuable evidence for learning how a community organizes knowledge. For instance, we can aggregate many personal hierarchies into a common taxonomy, also known as a folksonomy, that will aid users in visualizing and browsing social content, and also to help them in organizing their own content. However, learning from social metadata presents several challenges, since it is sparse, shallow, ambiguous, noisy, and inconsistent. We describe an approach to folksonomy learning based on relational clustering, which exploits structured metadata contained in personal hierarchies. Our approach clusters similar hierarchies using their structure and tag statistics, then incrementally weaves them into a deeper, bushier tree. We study folksonomy learning using social metadata extracted from the photo-sharing site Flickr, and demonstrate that the proposed approach addresses the challenges. Moreover, comparing to previous work, the approach produces larger, more accurate folksonomies, and in addition, scales better.
Growing a Tree in the Forest: Constructing Folksonomies by Integrating Structured Metadata
652
Symmetry is an important feature of many constraint programs. We show that any problem symmetry acting on a set of symmetry breaking constraints can be used to break symmetry. Different symmetries pick out different solutions in each symmetry class. This simple but powerful idea can be used in a number of different ways. We describe one application within model restarts, a search technique designed to reduce the conflict between symmetry breaking and the branching heuristic. In model restarts, we restart search periodically with a random symmetry of the symmetry breaking constraints. Experimental results show that this symmetry breaking technique is effective in practice on some standard benchmark problems.
Symmetries of Symmetry Breaking Constraints
653
We propose automatically learning probabilistic Hierarchical Task Networks (pHTNs) in order to capture a user's preferences on plans, by observing only the user's behavior. HTNs are a common choice of representation for a variety of purposes in planning, including work on learning in planning. Our contributions are (a) learning structure and (b) representing preferences. In contrast, prior work employing HTNs considers learning method preconditions (instead of structure) and representing domain physics or search control knowledge (rather than preferences). Initially we will assume that the observed distribution of plans is an accurate representation of user preference, and then generalize to the situation where feasibility constraints frequently prevent the execution of preferred plans. In order to learn a distribution on plans we adapt an Expectation-Maximization (EM) technique from the discipline of (probabilistic) grammar induction, taking the perspective of task reductions as productions in a context-free grammar over primitive actions. To account for the difference between the distributions of possible and preferred plans we subsequently modify this core EM technique, in short, by rescaling its input.
Learning Probabilistic Hierarchical Task Networks to Capture User Preferences
654
Brain-Like Stochastic Search (BLiSS) refers to this task: given a family of utility functions U(u,A), where u is a vector of parameters or task descriptors, maximize or minimize U with respect to u, using networks (Option Nets) which input A and learn to generate good options u stochastically. This paper discusses why this is crucial to brain-like intelligence (an area funded by NSF) and to many applications, and discusses various possibilities for network design and training. The appendix discusses recent research, relations to work on stochastic optimization in operations research, and relations to engineering-based approaches to understanding neocortex.
Brain-Like Stochastic Search: A Research Challenge and Funding Opportunity
655
We introduce a framework for representing a variety of interesting problems as inference over the execution of probabilistic model programs. We represent a "solution" to such a problem as a guide program which runs alongside the model program and influences the model program's random choices, leading the model program to sample from a different distribution than from its priors. Ideally the guide program influences the model program to sample from the posteriors given the evidence. We show how the KL-divergence between the true posterior distribution and the distribution induced by the guided model program can be efficiently estimated (up to an additive constant) by sampling multiple executions of the guided model program. In addition, we show how to use the guide program as a proposal distribution in importance sampling to statistically prove lower bounds on the probability of the evidence and on the probability of a hypothesis and the evidence. We can use the quotient of these two bounds as an estimate of the conditional probability of the hypothesis given the evidence. We thus turn the inference problem into a heuristic search for better guide programs.
Variational Program Inference
656
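The two statistical bounds mentioned above rest on the standard importance-sampling identity and Jensen's inequality (general facts, restated in the paper's terminology): with the model program defining $p(x, e)$ over execution traces $x$ and evidence $e$, and the guide program inducing the proposal $q(x)$,
\[ p(e) = \mathbb{E}_{x \sim q}\!\left[\frac{p(x, e)}{q(x)}\right], \qquad \log p(e) \;\ge\; \mathbb{E}_{x \sim q}\!\left[\log \frac{p(x, e)}{q(x)}\right]. \]
Sampled executions of the guided program therefore give an unbiased estimate of the evidence and a stochastic lower bound on its log, and the gap in the inequality is exactly $\mathrm{KL}(q \,\|\, p(\cdot \mid e))$, which is why the KL-divergence is estimable up to the additive constant $\log p(e)$.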
The basic unit of meaning on the Semantic Web is the RDF statement, or triple, which combines a distinct subject, predicate and object to make a definite assertion about the world. A set of triples constitutes a graph, to which they give a collective meaning. It is upon this simple foundation that the rich, complex knowledge structures of the Semantic Web are built. Yet the very expressiveness of RDF, by inviting comparison with real-world knowledge, highlights a fundamental shortcoming, in that RDF is limited to statements of absolute fact, independent of the context in which a statement is asserted. This is in stark contrast with the thoroughly context-sensitive nature of human thought. The model presented here provides a particularly simple means of contextualizing an RDF triple by associating it with related statements in the same graph. This approach, in combination with a notion of graph similarity, is sufficient to select only those statements from an RDF graph which are subjectively most relevant to the context of the requesting process.
The Dilated Triple
657
In this information-system age, many organizations consider the information system their weapon for competing or gaining competitive advantage, or, for non-profit organizations, for giving the best services. A Game Information System, combining an information system with a game, is a breakthrough for achieving organizational performance. A Game Information System runs the information system with a game, showing how a game can be implemented to run the information system. Games are not only for fun and entertainment; the challenge is to combine fun and entertainment with the information system, running the information system with entertainment and delivering the entertainment with the information system all at once. A Game Information System can be implemented in as many sectors as the information system itself, but from a different point of view: a game view, in which people can be joyful and happy and carry out their transactions as fun activities.
Game Information System
658
In order to obtain a strategic position in business competition, an information system must be ahead in this information age, where information is one of the weapons for winning the competition and, in the right hands, becomes the right bullet. An information system supported by information technology is not enough if it is merely placed on the internet or implemented with internet technology. The growth of information technology as a tool that helps people and makes their tasks easier must be accompanied by fun and happiness when they interact with the technology itself. Humans fundamentally like to play: since childhood, humans have played, free and happy, and as they grow up they can no longer play as much as they did in childhood. We therefore have to develop information systems that do not merely perform their function but also help people explore their natural instinct for playing, having fun, and being happy when they interact with the system. A virtual information system is a way to bring an atmosphere of playing and having fun into the working area.
Virtual information system on working area
659
An earthquake DSS is an information technology environment that government can use to make earthquake mitigation decisions sharper, faster, and better. An earthquake DSS can be delivered as e-government, serving not only the government itself but also guaranteeing each citizen's right to education, training, and information about earthquakes and how to cope with them. By saving and maintaining all data and information about earthquakes and earthquake mitigation in Indonesia, knowledge can be managed for future use and becomes a resource for mining. Using Web technology enhances global access and ease of use. A data warehouse, as a denormalized database for multidimensional analysis, speeds up query processing and increases the variety of reports. Linking with other disaster DSSs into one national disaster DSS, and with other government and international information systems, will enhance the knowledge and sharpen the reports.
Indonesian Earthquake Decision Support System
660
Markov decision processes (MDPs) are widely used for modeling decision-making problems in robotics, automated control, and economics. Traditional MDPs assume that the decision maker (DM) knows all states and actions. However, this may not be true in many situations of interest. We define a new framework, MDPs with unawareness (MDPUs), to deal with the possibility that a DM may not be aware of all possible actions. We provide a complete characterization of when a DM can learn to play near-optimally in an MDPU, and give an algorithm that learns to play near-optimally when it is possible to do so, as efficiently as possible. In particular, we characterize when a near-optimal solution can be found in polynomial time.
MDPs with Unawareness
661
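The paper's learning algorithm and its guarantees are its own; purely as a loose, hedged illustration of the setting, here is a toy Q-learner whose action set starts incomplete and grows whenever a previously unknown action is revealed (for instance by some exploration move). Every name and constant below is a placeholder.

```python
import random
from collections import defaultdict

class UnawareQLearner:
    """Toy illustration (not the paper's algorithm): Q-learning over an
    action set that starts incomplete and grows as actions are discovered."""
    def __init__(self, known_actions, alpha=0.1, gamma=0.9, eps=0.2):
        self.known = set(known_actions)
        self.Q = defaultdict(float)
        self.alpha, self.gamma, self.eps = alpha, gamma, eps

    def act(self, s):
        if random.random() < self.eps:
            return random.choice(sorted(self.known))
        return max(sorted(self.known), key=lambda a: self.Q[(s, a)])

    def discover(self, action):
        self.known.add(action)  # e.g. revealed by a special exploration move

    def update(self, s, a, r, s2):
        target = r + self.gamma * max(self.Q[(s2, b)] for b in self.known)
        self.Q[(s, a)] += self.alpha * (target - self.Q[(s, a)])

agent = UnawareQLearner(known_actions=["wait"])
agent.discover("move")                     # a new action becomes available mid-run
agent.update("s0", "move", r=1.0, s2="s1")
```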
Existing value function approximation methods have been successfully used in many applications, but they often lack useful a priori error bounds. We propose a new approximate bilinear programming formulation of value function approximation, which employs global optimization. The formulation provides strong a priori guarantees on both robust and expected policy loss by minimizing specific norms of the Bellman residual. Solving a bilinear program optimally is NP-hard, but this is unavoidable because Bellman-residual minimization itself is NP-hard. We describe and analyze both optimal and approximate algorithms for solving bilinear programs. The analysis shows that the approximate algorithm offers a convergent generalization of approximate policy iteration. We also briefly analyze the behavior of bilinear programming algorithms under incomplete samples. Finally, we demonstrate that the proposed approach can consistently minimize the Bellman residual on simple benchmark problems.
Global Optimization for Value Function Approximation
662
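The bilinear program itself needs a solver, but the quantity whose norms it minimizes is easy to ground. The sketch below computes the Bellman residual T(V) - V for a linear value-function approximation V = Φw on a small tabular MDP; the chain MDP and starting weights are made up for illustration.

```python
import numpy as np

def bellman_residual(P, r, Phi, w, gamma=0.95):
    """Residual T(V) - V for a fixed policy with transition matrix P,
    reward vector r, feature matrix Phi, and weight vector w."""
    V = Phi @ w
    return r + gamma * P @ V - V

# Tiny 3-state chain with 2 features (illustrative).
P = np.array([[0.9, 0.1, 0.0],
              [0.0, 0.9, 0.1],
              [0.0, 0.0, 1.0]])
r = np.array([0.0, 0.0, 1.0])
Phi = np.array([[1.0, 0.0], [1.0, 0.5], [1.0, 1.0]])
w = np.linalg.lstsq(Phi, r, rcond=None)[0]        # arbitrary starting weights

res = bellman_residual(P, r, Phi, w)
print(np.abs(res).max())   # L-infinity norm: drives the robust policy-loss bound
print(np.abs(res).sum())   # L1-type norm: drives the expected policy-loss bound
```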
Different notions of equivalence, such as the prominent notions of strong and uniform equivalence, have been studied in Answer-Set Programming, mainly for the purpose of identifying programs that can serve as substitutes without altering the semantics, for instance in program optimization. Such semantic comparisons are usually characterized by various selections of models in the logic of Here-and-There (HT). For uniform equivalence, however, correct characterizations in terms of HT-models can only be obtained for finite theories and programs, respectively. In this article, we show that a selection of countermodels in HT captures uniform equivalence also for infinite theories. This result is turned into coherent characterizations of the different notions of equivalence by countermodels, as well as by a mixture of HT-models and countermodels (so-called equivalence interpretations). Moreover, we generalize the notion of relativized hyperequivalence for programs to propositional theories, and apply the same methodology in order to obtain a semantic characterization which is amenable to infinite settings. This allows for a lifting of the results to first-order theories under a very general semantics given in terms of a quantified version of HT. We thus obtain a general framework for the study of various notions of equivalence for theories under answer-set semantics. Moreover, we prove an expedient property that allows for a simplified treatment of extended signatures, and provide further results for non-ground logic programs. In particular, uniform equivalence coincides under open and ordinary answer-set semantics, and for finite non-ground programs under these semantics, the usual characterization of uniform equivalence in terms of maximal and total HT-models of the grounding is also correct, even for infinite domains, when the corresponding ground programs are infinite.
A General Framework for Equivalences in Answer-Set Programming by Countermodels in the Logic of Here-and-There
663
Human disease diagnosis is a complicated process and requires a high level of expertise. Any attempt at developing a web-based expert system dealing with human disease diagnosis has to overcome various difficulties. This paper describes a project aiming to develop a web-based fuzzy expert system for diagnosing human diseases. Nowadays, fuzzy systems are being used successfully in an increasing number of application areas; they use linguistic rules to describe systems. This project focuses on the research and development of a web-based clinical tool designed to improve the quality of the exchange of health information between health care professionals and patients. Practitioners can also use this web-based tool to corroborate diagnoses. The proposed system is tested on various scenarios in order to evaluate its performance; in all cases, it exhibits satisfactory results.
Human Disease Diagnosis Using a Fuzzy Expert System
664
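To make the linguistic-rule idea concrete, here is a minimal sketch of a fuzzy rule base with triangular membership functions. The diseases, symptoms, thresholds, and rules are entirely hypothetical and stand in for whatever knowledge base such a system would actually encode.

```python
def tri(x, a, b, c):
    """Triangular membership function with feet at a and c, peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def diagnose(temp_c, headache):  # headache intensity in [0, 1], self-reported
    # Hypothetical fuzzy sets over body temperature.
    fever_high = tri(temp_c, 37.5, 39.0, 42.0)
    fever_mild = tri(temp_c, 36.8, 37.5, 38.5)
    # Hypothetical rule base; min acts as fuzzy AND.
    rules = {
        "flu":  min(fever_high, headache),        # IF fever high AND headache THEN flu
        "cold": min(fever_mild, 1.0 - headache),  # IF fever mild AND no headache THEN cold
    }
    return max(rules, key=rules.get), rules

print(diagnose(38.8, 0.7))
```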
Artificial intelligence is the area of computer science that focuses on creating machines that can engage in behaviors humans consider intelligent. The ability to create intelligent machines has intrigued humans since ancient times, and today, with the advent of the computer and fifty years of research into various programming techniques, the dream of smart machines is becoming a reality. Researchers are creating systems that can mimic human thought, understand speech, beat the best human chess player, and perform countless other feats never before possible. The human ability to estimate information shows itself most clearly in the use of natural language: by using words of a natural language to evaluate qualitative attributes, a person embeds uncertainty, in the form of vagueness, into their estimations. Vague sets, vague judgments, and vague conclusions arise wherever and whenever a reasoning subject exists and is interested in something. The theory of vague sets arose as an answer to the imprecision of the language such a reasoning subject speaks: that language is generated by vague events, which are created by reason and operated on by the mind. Vague set theory represents an attempt to find an approximation of vague groupings that is more convenient than classical set theory in situations where natural language plays a significant role; such a theory was offered by Gau and Buehrer. In this paper we describe how the vagueness of linguistic variables can be handled using vague set theory. The paper is mainly oriented toward one direction of eventology (the theory of random vague events), which arose within probability theory and pursues the single purpose of describing, eventologically, a movement of reason.
Vagueness of Linguistic variable
665
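A small sketch of the core notion, in the sense of Gau and Buehrer: a vague set assigns each element a truth-membership t and a false-membership f with t + f <= 1, so the true grade lies somewhere in the interval [t, 1 - f]. The linguistic example and its numbers are illustrative.

```python
from dataclasses import dataclass

@dataclass
class VagueValue:
    """Vague-set membership: truth-membership t and false-membership f,
    with t + f <= 1; the actual grade lies in the interval [t, 1 - f]."""
    t: float
    f: float

    def __post_init__(self):
        assert 0.0 <= self.t and 0.0 <= self.f and self.t + self.f <= 1.0

    def interval(self):
        return (self.t, 1.0 - self.f)

# Linguistic term "tall" applied to a person of 178 cm (numbers illustrative):
tall_178 = VagueValue(t=0.6, f=0.2)   # at least 0.6 true, at most 0.2 false
print(tall_178.interval())            # (0.6, 0.8) -- the residual 0.2 is hesitation
```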
Ontologies usually suffer from semantic heterogeneity when simultaneously used in information sharing, merging, integration, and querying processes. Therefore, identifying the similarity between the ontologies in use becomes a mandatory task for all these processes, in order to handle the problem of semantic heterogeneity. In this paper, we propose an efficient technique for measuring the similarity between two ontologies. The proposed technique identifies all candidate pairs of similar concepts without omitting any similar pair, and can be used in different types of operations on ontologies, such as merging, mapping, and aligning. Analysis of its results shows a reasonable improvement in terms of completeness, correctness, and overall quality.
An Efficient Technique for Similarity Identification between Ontologies
666
Many formal languages have been proposed to express or represent ontologies, including RDF, RDFS, DAML+OIL, and OWL. Most of these languages are based on XML syntax but differ in terminology and expressiveness. Choosing a language for building an ontology is therefore a key step, and the choice depends mainly on what the ontology will represent or be used for. The chosen language should offer a range of quality-support features, such as ease of use, expressive power, compatibility, sharing and versioning support, and internationalisation, because different kinds of knowledge-based applications need different language features. The main objective of these languages is to add semantics to the existing information on the web. The aim of this paper is to provide a good knowledge and understanding of the existing languages and of how they can be used.
The State of the Art: Ontology Web-Based Languages: XML Based
667
The Semantic Web is an extension of the current Web in which information is represented more meaningfully for humans and computers alike. It enables the description of contents and services in machine-readable form, and enables annotating, discovering, publishing, advertising, and composing services to be automated. It was developed based on Ontology, which is considered the backbone of the Semantic Web; in other words, the current Web is transformed from being machine-readable to machine-understandable. Ontology is a key technique with which to annotate semantics and provide a common, comprehensible foundation for resources on the Semantic Web. Moreover, an Ontology can provide a common vocabulary and a grammar for publishing data, and can supply a semantic description of data which can be used to preserve the Ontologies and keep them ready for inference. This paper provides basic concepts of web services and the Semantic Web, defines the structure and the main applications of ontology, and explains many relevant terms in order to provide a basic understanding of ontologies.
Understanding Semantic Web and Ontologies: Theory and Applications
668
Finding the structure of a graphical model has received much attention in many fields. Recently, it has been reported that the non-Gaussianity of data enables us to identify the structure of a directed acyclic graph without any prior knowledge of the structure. In this paper, we propose a novel non-Gaussianity-based algorithm for a more general type of model: chain graphs. The algorithm finds an ordering of the disjoint subsets of variables by iteratively evaluating the independence between a variable subset and the residuals obtained when the remaining variables are regressed on it. However, its computational cost grows exponentially with the number of variables. We therefore also discuss an efficient approximate approach for applying the algorithm to large graphs. We illustrate the algorithm with artificial and real-world datasets.
GroupLiNGAM: Linear non-Gaussian acyclic models for sets of variables
669
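The paper orders subsets of variables; as a hedged sketch of the single-variable special case only, the snippet below picks the most plausibly exogenous variable as the one whose regression residuals look most independent of it, using a crude nonlinear-correlation score as the independence measure. The data, score, and thresholds are illustrative, not the paper's procedure.

```python
import numpy as np

def indep_score(x, r):
    """Crude non-Gaussian dependence score between x and residual r:
    near 0 when independent, larger otherwise."""
    x = (x - x.mean()) / x.std()
    r = (r - r.mean()) / r.std()
    return (abs(np.corrcoef(np.tanh(x), r)[0, 1])
            + abs(np.corrcoef(x, np.tanh(r))[0, 1]))

def most_exogenous(X):
    """Return the column whose regression residuals look most independent of it."""
    n, d = X.shape
    scores = []
    for j in range(d):
        x = X[:, j]
        total = 0.0
        for k in range(d):
            if k == j:
                continue
            b = np.dot(x, X[:, k]) / np.dot(x, x)        # least-squares slope
            total += indep_score(x, X[:, k] - b * x)     # residual vs. regressor
        scores.append(total)
    return int(np.argmin(scores))

rng = np.random.default_rng(1)
e1 = rng.uniform(-1, 1, 5000)          # non-Gaussian disturbances
e2 = rng.uniform(-1, 1, 5000)
x1 = e1
x2 = 0.8 * x1 + e2                     # true causal order: x1 -> x2
print(most_exogenous(np.column_stack([x1, x2])))   # expected: 0
```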
Notions of the core, support, and inversion of a soft set are defined and studied. Soft approximations are soft sets developed through the core and support, and are used for granulating the soft space. The membership structure of a soft set is investigated and many interesting properties are presented. The mathematical apparatus developed in this paper yields a detailed analysis of two works, viz. [N. Cagman, S. Enginoglu, Soft set theory and uni-int decision making, European Journal of Operational Research (article in press, available online 12 May 2010)] and [N. Cagman, S. Enginoglu, Soft matrix theory and its decision making, Computers and Mathematics with Applications 59 (2010) 3308-3314]. We prove (Theorem 8.1) that the uni-int method of Cagman is equivalent to a core-support expression which is computationally far less expensive than uni-int. This also highlights some shortcomings in Cagman's uni-int method and thus motivates us to improve it. We first suggest an improvement to the uni-int method and then present a new conjecture to solve the optimum-choice problem given by Cagman and Enginoglu. Our Example 8.6 presents a case where the optimum choice is intuitively clear, yet both uni-int methods (Cagman's and our improved one) give the wrong answer while the new conjecture solves the problem correctly.
Soft Approximations and uni-int Decision Making
670
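For readers unfamiliar with soft sets, here is a small sketch of the representation, together with one plausible reading of support and core (union and intersection of the parameter images); the paper's own definitions should be consulted, and the house-hunting universe below is made up.

```python
# A soft set (F, A) over universe U: each parameter a in A maps to a subset F(a) of U.
U = {"h1", "h2", "h3", "h4"}
F = {
    "cheap":   {"h1", "h2"},
    "wooden":  {"h2", "h3"},
    "in_city": {"h2", "h4"},
}

def support(F):
    """Objects appearing under at least one parameter (assumed definition)."""
    return set().union(*F.values())

def core(F):
    """Objects appearing under every parameter (assumed definition)."""
    return set.intersection(*F.values())

print(support(F))  # {'h1', 'h2', 'h3', 'h4'}
print(core(F))     # {'h2'} -- a natural optimum-choice candidate
```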
In recent years there has been growing interest in solutions for the delivery of clinical care for the elderly, due to the large increase in the aging population. Monitoring a patient in the home environment is necessary to ensure continuity of care in home settings, but, to be useful, this activity must not be too invasive for patients or a burden for caregivers. We prototyped a system called SINDI (Secure and INDependent lIving), focused on i) collecting a limited amount of data about the person and the environment through Wireless Sensor Networks (WSN), and ii) inferring from these data enough information to support caregivers in understanding patients' well-being and in predicting possible evolutions of their health. Our hierarchical logic-based model of health combines data from different sources (sensor data, test results, common-sense knowledge, and the patient's clinical profile) at the lower level, with correlation rules between health conditions across upper levels. The logical formalization and the reasoning process are based on Answer Set Programming. The expressive power of this logic programming paradigm makes it possible to reason about health evolution even when the available information is incomplete and potentially incoherent, while declarativity simplifies the specification of rules by caregivers and allows automatic encoding of knowledge. This paper describes how these issues have been targeted in the application scenario of the SINDI system.
Reasoning Support for Risk Prediction and Prevention in Independent Living
671
Iris recognition technology, which identifies individuals by photographing the iris of the eye, has become popular in security applications because of its ease of use, accuracy, and safety in controlling access to high-security areas. Fusion of multiple algorithms for improving biometric verification performance has received considerable attention. The proposed method combines zero-crossing 1D wavelet features, the Euler number, and a genetic-algorithm-based method for feature extraction. The outputs of these three algorithms are normalized and their scores are fused to decide whether the user is genuine or an impostor. This new strategy of computing a multimodal combined score is discussed in this paper.
Improving Iris Recognition Accuracy By Score Based Fusion Method
672
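As a hedged sketch of the generic normalize-then-fuse step (not the paper's specific normalization or weights), the snippet below min-max normalizes each matcher's raw score and combines them by a weighted sum against an accept threshold. All raw scores, ranges, weights, and the threshold are invented for illustration.

```python
import numpy as np

def minmax_norm(s, lo, hi):
    """Map a raw matcher score onto [0, 1] given that matcher's score range."""
    return (float(s) - lo) / (hi - lo)

def fuse(scores, weights=None):
    """Weighted-sum fusion of normalized scores from several matchers."""
    scores = np.asarray(scores, float)
    w = np.ones(len(scores)) / len(scores) if weights is None else np.asarray(weights)
    return float(w @ scores)

# Hypothetical scores from the three matchers for one probe/gallery comparison:
s_zero_crossing = minmax_norm(0.82, 0.0, 1.0)
s_euler         = minmax_norm(14.0, 0.0, 20.0)
s_genetic       = minmax_norm(0.61, 0.0, 1.0)

fused = fuse([s_zero_crossing, s_euler, s_genetic])
print("genuine" if fused >= 0.6 else "impostor", round(fused, 3))  # threshold illustrative
```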
We study decompositions of the global NVALUE constraint. Our main contribution is theoretical: we show that there are propagators for global constraints like NVALUE that decompositions can simulate with the same time complexity but with a much greater space complexity. This suggests that the benefit of a global propagator may often lie not in saving time but in saving space. Our other theoretical contribution is to show for the first time that range consistency can be enforced on NVALUE with the same worst-case time complexity as bound consistency. Finally, the decompositions we study are readily encoded as linear inequalities. We are therefore able to use them in integer linear programs.
Decomposition of the NVALUE constraint
673
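To ground the "readily encoded as linear inequalities" remark, here is one standard 0/1 decomposition of NVALUE, emitted as readable constraint strings: indicator variables x[i][v] say that variable i takes value v, usage variables b[v] say that value v is used somewhere, and N sums the b[v]. A real model would hand these to an ILP or CP solver; the tiny domains are illustrative.

```python
def nvalue_decomposition(vars_domains, values):
    """Emit a standard 0/1 linear-inequality decomposition of NVALUE over
    indicator variables x[i][v] and value-usage variables b[v]."""
    cons = []
    for i, dom in enumerate(vars_domains):
        terms = " + ".join(f"x[{i}][{v}]" for v in sorted(dom))
        cons.append(f"{terms} == 1")                 # each variable takes one value
    for v in values:
        users = [i for i, dom in enumerate(vars_domains) if v in dom]
        for i in users:
            cons.append(f"b[{v}] >= x[{i}][{v}]")    # value used if anyone takes it
        lhs = " + ".join(f"x[{i}][{v}]" for i in users)
        cons.append(f"b[{v}] <= {lhs}")              # unused if nobody takes it
    cons.append("N == " + " + ".join(f"b[{v}]" for v in values))
    return cons

for c in nvalue_decomposition([{1, 2}, {2, 3}], values=[1, 2, 3]):
    print(c)
```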
Symmetry can be used to help solve many problems. For instance, Einstein's famous 1905 paper ("On the Electrodynamics of Moving Bodies") uses symmetry to help derive the laws of special relativity. In artificial intelligence, symmetry has played an important role in both problem representation and reasoning. I describe recent work on using symmetry to help solve constraint satisfaction problems. Symmetries occur within individual solutions of problems as well as between different solutions of the same problem. Symmetry can also be applied to the constraints in a problem to give new symmetric constraints. Reasoning about symmetry can speed up problem solving, and has led to the discovery of new results in both graph and number theory.
Symmetry within and between solutions
674
We propose an online form of the cake cutting problem. This models situations where players arrive and depart during the process of dividing a resource. We show that well known fair division procedures like cut-and-choose and the Dubins-Spanier moving knife procedure can be adapted to apply to such online problems. We propose some desirable properties that online cake cutting procedures might possess like online forms of proportionality and envy-freeness, and identify which properties are in fact possessed by the different online cake procedures.
Online Cake Cutting
675
The stable marriage problem has a wide variety of practical applications, ranging from matching resident doctors to hospitals, to matching students to schools, or more generally to any two-sided market. We consider a useful variation of the stable marriage problem, where the men and women express their preferences using a preference list with ties over a subset of the members of the other sex. Matchings are permitted only with people who appear in these preference lists. In this setting, we study the problem of finding a stable matching that marries as many people as possible. Stability is an envy-free notion: no man and woman who are not married to each other would both prefer each other to their partners or to being single. This problem is NP-hard. We tackle this problem using local search, exploiting properties of the problem to reduce the size of the neighborhood and to make local moves efficiently. Experimental results show that this approach is able to solve large problems, quickly returning stable matchings of large and often optimal size.
Local search for stable marriage problems with ties and incomplete lists
676
The Frequent Episode Discovery framework is a popular framework in temporal data mining with many applications. Over the years, many different notions of episode frequency have been proposed, along with different algorithms for episode discovery. In this paper, we present a unified view of all such frequency-counting algorithms: we give a generic algorithm of which all current algorithms are special cases. This unified view allows one to gain insights into the different frequencies, and we present quantitative relationships among them. It also helps in obtaining correctness proofs for the various algorithms, as we show here, and we point out how it helps in generalizing the algorithms so that they can discover episodes with general partial orders.
A unified view of Automata-based algorithms for Frequent Episode Discovery
677
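One of the frequency notions such frameworks unify is the non-overlapped occurrence count of a serial episode; as a hedged, minimal illustration (not the paper's generic algorithm), the scan below counts occurrences that share no events by restarting after each completed match.

```python
def count_nonoverlapped(event_seq, episode):
    """Count non-overlapped occurrences of a serial episode: scan left to
    right, advancing through the episode and restarting after each
    complete occurrence, so occurrences never share events."""
    count, j = 0, 0
    for e in event_seq:
        if e == episode[j]:
            j += 1
            if j == len(episode):
                count += 1
                j = 0
    return count

seq = list("ABACABCAB")
print(count_nonoverlapped(seq, list("AB")))   # 3 non-overlapped A->B occurrences
```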
The stable marriage (SM) problem has a wide variety of practical applications, ranging from matching resident doctors to hospitals, to matching students to schools, or more generally to any two-sided market. In the classical formulation, n men and n women express their preferences (via a strict total order) over the members of the other sex. Solving an SM problem means finding a stable marriage, where stability is an envy-free notion: no man and woman who are not married to each other would both prefer each other to their partners or to being single. We consider both the classical stable marriage problem and one of its useful variations (denoted SMTI) where the men and women express their preferences in the form of an incomplete preference list with ties over a subset of the members of the other sex. Matchings are permitted only with people who appear in these lists, and we try to find a stable matching that marries as many people as possible. Whilst the SM problem is polynomial to solve, the SMTI problem is NP-hard. We propose to tackle both problems via a local search approach, which exploits properties of the problems to reduce the size of the neighborhood and to make local moves efficiently. We evaluate our algorithm for SM problems empirically by measuring its runtime behaviour and its ability to sample the lattice of all possible stable marriages. We evaluate our algorithm for SMTI problems in terms of both its runtime behaviour and its ability to find a maximum cardinality stable marriage. For SM problems, the number of steps of our algorithm grows only as O(n log n), and it samples the set of all stable marriages very well; it is thus a fair and efficient approach to generating stable marriages. Furthermore, our approach for SMTI problems is able to solve large problems, quickly returning stable matchings of large and often optimal size despite the NP-hardness of the problem.
Local search for stable marriage problems
678
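The basic move evaluation in any local search for these problems is finding blocking pairs. As a hedged sketch with ties and incomplete lists (not the paper's optimized neighborhood), preferences below are lists of groups (earlier group = more preferred, people in the same group tied, unlisted people unacceptable); the tiny instance is made up.

```python
def prefers(pref, a, b):
    """True if `a` strictly precedes `b` in a tied preference list."""
    def rank(x):
        for r, group in enumerate(pref):
            if x in group:
                return r
        return len(pref)            # absent = unacceptable (worst rank)
    return rank(a) < rank(b)

def blocking_pairs(men_pref, women_pref, match):
    """All (m, w) who would both rather be together than keep their current
    situation; `match` maps every person to a partner or None."""
    pairs = []
    for m, mp in men_pref.items():
        for group in mp:
            for w in group:
                m_better = match[m] is None or prefers(mp, w, match[m])
                w_better = match[w] is None or prefers(women_pref[w], m, match[w])
                if w != match[m] and m_better and w_better:
                    pairs.append((m, w))
    return pairs

men = {"m1": [["w1"], ["w2"]], "m2": [["w1", "w2"]]}     # m2 is indifferent
women = {"w1": [["m1", "m2"]], "w2": [["m1"]]}
match = {"m1": "w2", "w2": "m1", "m2": None, "w1": None}
print(blocking_pairs(men, women, match))   # [('m1', 'w1'), ('m2', 'w1')]
```

A local search step would then repair one such pair (or move an unmatched person) and re-evaluate, stopping when no blocking pairs remain.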
From the advent of the application of satellite imagery to land cover mapping, one of the growing areas of research interest has been image classification. Image classifiers are algorithms used to extract land cover information from satellite imagery. Most of the initial research has focused on the development and application of algorithms to improve existing and emerging classifiers. In this paper, a paradigm shift is proposed whereby a committee of classifiers is used to determine the final classification output. Two key components of an ensemble system are diversity among the classifiers and a mechanism through which their results are combined. In this paper, the members of the ensemble system are a linear SVM, a Gaussian (RBF) SVM, and a quadratic SVM, and the final output is determined through a simple majority vote of the individual classifiers. From the results obtained, it was observed that the final map generated by an ensemble system can potentially improve on the results derived from the individual classifiers making up the system: the ensemble's classification accuracy was, in this case, better than that of the linear and quadratic SVMs, though less than that of the RBF SVM. Further research could focus on improving the diversity of the ensemble system used in this research.
An svm multiclassifier approach to land cover mapping
679
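A minimal sketch of the ensemble design (synthetic features standing in for the paper's imagery, which is not reproduced here), using scikit-learn's SVC for the three kernels and a hard-voting combiner for the majority vote:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Stand-in for multispectral pixels: 6 "bands", 4 land-cover classes.
X, y = make_classification(n_samples=1500, n_features=6, n_informative=5,
                           n_redundant=1, n_classes=4, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

members = [
    ("linear",    SVC(kernel="linear")),
    ("gaussian",  SVC(kernel="rbf")),
    ("quadratic", SVC(kernel="poly", degree=2)),
]
ensemble = VotingClassifier(members, voting="hard")   # simple majority vote

for name, clf in members + [("ensemble", ensemble)]:
    clf.fit(Xtr, ytr)
    print(name, round(clf.score(Xte, yte), 3))
```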
A general method is given for revising degrees of belief and arriving at consistent decisions about a system of logically constrained issues. In contrast to other works about belief revision, here the constraints are assumed to be fixed. The method has two variants, dual to each other, whose revised degrees of belief are respectively above and below the original ones. The upper [resp. lower] revised degrees of belief are uniquely characterized as the lowest [resp. highest] ones that are invariant under a certain max-min [resp. min-max] operation determined by the logical constraints. In both variants, striking a balance between the revised degree of belief of a proposition and that of its negation leads to decisions that are ensured to be consistent with the logical constraints. These decisions are ensured to agree with the majority criterion as applied to the original degrees of belief whenever this gives a consistent result. They are also ensured to satisfy a property of respect for unanimity about any particular issue, as well as a property of monotonicity with respect to the original degrees of belief. The application of the method to certain special domains comes down to well-established or increasingly accepted methods, such as the single-link method of cluster analysis and the method of paths in preferential voting.
A general method for deciding about logically constrained issues
680
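The "method of paths" the abstract alludes to rests on a max-min path operation that is easy to sketch (the paper's revision operators are more general): the strength of a path is its weakest link, and each pair is assigned its strongest path, computable Floyd-Warshall style. The belief values below are illustrative.

```python
def maxmin_paths(w):
    """Max-min (widest-path) closure of a weighted relation: p[i][j] is the
    strength of the strongest path from i to j, a path being as strong as
    its weakest link. Floyd-Warshall-style dynamic programming."""
    n = len(w)
    p = [row[:] for row in w]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                p[i][j] = max(p[i][j], min(p[i][k], p[k][j]))
    return p

# Degrees of belief in pairwise preferences among 3 options (illustrative).
w = [[0.0, 0.7, 0.2],
     [0.3, 0.0, 0.8],
     [0.5, 0.4, 0.0]]
for row in maxmin_paths(w):
    print(row)   # e.g. p[0][2] rises from 0.2 to 0.7 via the path 0 -> 1 -> 2
```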
Strategic Environmental Assessment is a procedure aimed at introducing systematic assessment of the environmental effects of plans and programs. The procedure is based on so-called coaxial matrices that define dependencies between plan activities (infrastructures, plants, resource extractions, buildings, etc.) and positive and negative environmental impacts, and dependencies between these impacts and environmental receptors. Up to now, this procedure has been implemented manually by environmental experts to check the environmental effects of a given plan or program, but it has never been applied during plan/program construction. A decision support system based on a clear logic semantics would be an invaluable tool not only for assessing a single, already defined plan, but also during the planning process, in order to produce an optimized, environmentally assessed plan and to study possible alternative scenarios. We propose two logic-based approaches to the problem, one based on Constraint Logic Programming and one on Probabilistic Logic Programming, that could in the future be conveniently merged to exploit the advantages of both. We test the proposed approaches on a real energy plan and discuss their limitations and advantages.
Logic-Based Decision Support for Strategic Environmental Assessment
681
Hybrid MKNF knowledge bases are one of the most prominent tightly integrated combinations of open-world ontology languages with closed-world (non-monotonic) rule paradigms. The definition of Hybrid MKNF is parametric on the description logic (DL) underlying the ontology language, in the sense that non-monotonic rules can extend any decidable DL language. Two related semantics have been defined for Hybrid MKNF: one that is based on the Stable Model Semantics for logic programs and one on the Well-Founded Semantics (WFS). Under WFS, the definition of Hybrid MKNF relies on a bottom-up computation that has polynomial data complexity whenever the DL language is tractable. Here we define a general query-driven procedure for Hybrid MKNF that is sound with respect to the stable model-based semantics, and sound and complete with respect to its WFS variant. This procedure is able to answer a slightly restricted form of conjunctive queries, and is based on tabled rule evaluation extended with an external oracle that captures reasoning within the ontology. Such an (abstract) oracle receives as input a query along with knowledge already derived, and replies with a (possibly empty) set of atoms, defined in the rules, whose truth would suffice to prove the initial query. With appropriate assumptions on the complexity of the abstract oracle, the general procedure maintains the data complexity of the WFS for Hybrid MKNF knowledge bases. To illustrate this approach, we provide a concrete oracle for EL+, a fragment of the light-weight DL EL++. Such an oracle has practical use, as EL++ is the language underlying OWL 2 EL, which is part of the W3C recommendations for the Semantic Web, and is tractable for reasoning tasks such as subsumption. We show that query-driven Hybrid MKNF preserves polynomial data complexity when using the EL+ oracle and WFS.
Query-driven Procedures for Hybrid MKNF Knowledge Bases
682
Answer set programming - the most popular problem solving paradigm based on logic programs - has recently been extended to support uninterpreted function symbols. All of these approaches have some limitations. In this paper we propose a class of programs called FP2 that enjoys a different trade-off between expressiveness and complexity. FP2 programs enjoy the following unique combination of properties: (i) the ability to express predicates with infinite extensions; (ii) full support for predicates with arbitrary arity; (iii) decidability of FP2 membership checking; (iv) decidability of skeptical and credulous stable model reasoning for call-safe queries. Odd cycles are supported by composing FP2 programs with argument-restricted programs.
A decidable subclass of finitary programs
683
This paper models a decision support system to predict the occurrence of suicide attacks in a given collection of cities. The system comprises two parts. The first part analyzes and identifies the factors which affect the prediction. Recognizing incomplete information and experts' use of linguistic terms as two characteristic features of this peculiar prediction problem, we exploit the theory of fuzzy soft sets. The second part of the model is an algorithm, viz. FSP, which takes the assessment of factors given in the first part as its input and produces a possibility profile of the cities most likely to suffer an attack. The algorithm is of O(2^n) complexity. It is illustrated by an example solved in detail. Simulation results for the algorithm are presented, giving insight into the strengths and weaknesses of FSP. Three different decision-making measures are simulated and compared in our discussion.
Predicting Suicide Attacks: A Fuzzy Soft Set Approach
684
An approach to the revision of logic programs under the answer set semantics is presented. For programs P and Q, the goal is to determine the answer sets that correspond to the revision of P by Q, denoted P * Q. A fundamental principle of classical (AGM) revision, and the one that guides the approach here, is the success postulate. In AGM revision, this stipulates that A is in K * A. By analogy with the success postulate, for programs P and Q, this means that the answer sets of Q will in some sense be contained in those of P * Q. The essential idea is that for P * Q, a three-valued answer set for Q, consisting of positive and negative literals, is first determined. The positive literals constitute a regular answer set, while the negated literals make up a minimal set of naf literals required to produce the answer set from Q. These literals are propagated to the program P, along with those rules of Q that are not decided by these literals. The approach differs from work in update logic programs in two main respects. First, we ensure that the revising logic program has higher priority, and so we satisfy the success postulate; second, for the preference implicit in a revision P * Q, the program Q as a whole takes precedence over P, unlike update logic programs, since answer sets of Q are propagated to P. We show that a core group of the AGM postulates are satisfied, as are the postulates that have been proposed for update logic programs.
A Program-Level Approach to Revising Logic Programs under the Answer Set Semantics
685
We study the problem of coalitional manipulation in elections using the unweighted Borda rule. We provide empirical evidence of the manipulability of Borda elections in the form of two new greedy manipulation algorithms based on intuitions from the bin-packing and multiprocessor scheduling domains. Although we have not been able to show that these algorithms beat existing methods in the worst-case, our empirical evaluation shows that they significantly outperform the existing method and are able to find optimal manipulations in the vast majority of the randomly generated elections that we tested. These empirical results provide further evidence that the Borda rule provides little defense against coalitional manipulation.
An Empirical Study of Borda Manipulation
686
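One simple greedy baseline for this setting (a natural reference point, not necessarily the paper's bin-packing- or scheduling-inspired algorithms) has each manipulator rank the preferred candidate first and order the rivals so that currently stronger rivals receive fewer Borda points. The non-manipulator totals below are invented.

```python
def greedy_borda_manipulation(scores, p, k):
    """Cast k manipulator ballots for candidate p under Borda: each ballot
    puts p first (m-1 points) and gives currently stronger rivals fewer
    points. Returns final scores and whether p (co-)wins."""
    m = len(scores)
    scores = dict(scores)
    for _ in range(k):
        rivals = sorted((c for c in scores if c != p),
                        key=lambda c: scores[c], reverse=True)
        scores[p] += m - 1
        for pts, c in enumerate(rivals):   # strongest rival gets 0 points
            scores[c] += pts
    wins = all(scores[p] >= scores[c] for c in scores if c != p)
    return scores, wins

# Illustrative non-manipulator Borda totals, 2 manipulators backing 'p':
print(greedy_borda_manipulation({"p": 10, "a": 14, "b": 13, "c": 9}, "p", 2))
```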
Autonomous planetary vehicles, also known as rovers, are small autonomous vehicles equipped with a variety of sensors used to perform exploration and experiments on a planet's surface. Rovers work in a partially unknown environment, with narrow energy/time/movement constraints and, typically, small computational resources that limit the complexity of on-line planning and scheduling, thus they represent a great challenge in the field of autonomous vehicles. Indeed, formal models for such vehicles usually involve hybrid systems with nonlinear dynamics, which are difficult to handle by most of the current planning algorithms and tools. Therefore, when offline planning of the vehicle activities is required, for example for rovers that operate without a continuous Earth supervision, such planning is often performed on simplified models that are not completely realistic. In this paper we show how the UPMurphi model checking based planning tool can be used to generate resource-optimal plans to control the engine of an autonomous planetary vehicle, working directly on its hybrid model and taking into account several safety constraints, thus achieving very accurate results.
Resource-Optimal Planning For An Autonomous Planetary Vehicle
687
This paper presents a solution to the threat assessment of a VBIED (Vehicle-Born Improvised Explosive Device) obtained with DSmT (Dezert-Smarandache Theory). This problem was recently proposed to the authors by Simon Maskell and John Lavery as a typical illustrative example for comparing different approaches to dealing with uncertainty in decision-making support. The purpose of this paper is to show in detail how a solid, justified solution can be obtained from the DSmT approach and its fusion rules, thanks to proper modeling of the belief functions involved in this problem.
Threat assessment of a possible Vehicle-Born Improvised Explosive Device using DSmT
688
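To ground the vocabulary only: belief-function fusion combines basic belief assignments over focal elements. The sketch below applies the classic unnormalized conjunctive rule, in which mass flows to intersections and conflict lands on the empty set; DSmT's fusion rules (e.g. proportional conflict redistribution) handle that conflict differently, so this is explicitly not the paper's rule. The masses are invented.

```python
from itertools import product

def conjunctive_combination(m1, m2):
    """Unnormalized conjunctive combination of two basic belief assignments
    whose focal elements are frozensets; conflicting mass accumulates on
    the empty frozenset. (Classic rule, not DSmT's PCR rules.)"""
    out = {}
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        out[a & b] = out.get(a & b, 0.0) + wa * wb
    return out

T, H = frozenset({"threat"}), frozenset({"harmless"})
m_sensor = {T: 0.6, H: 0.1, T | H: 0.3}   # illustrative masses
m_intel  = {T: 0.5, T | H: 0.5}
print(conjunctive_combination(m_sensor, m_intel))
```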
A key factor that can dramatically reduce the search space during constraint solving is the criterion under which the variable to be instantiated next is selected. For this purpose numerous heuristics have been proposed. Some of the best of such heuristics exploit information about failures gathered throughout search and recorded in the form of constraint weights, while others measure the importance of variable assignments in reducing the search space. In this work we experimentally evaluate the most recent and powerful variable ordering heuristics, and new variants of them, over a wide range of benchmarks. Results demonstrate that heuristics based on failures are in general more efficient. Based on this, we then derive new revision ordering heuristics that exploit recorded failures to efficiently order the propagation list when arc consistency is maintained during search. Interestingly, in addition to reducing the number of constraint checks and list operations, these heuristics are also able to cut down the size of the explored search tree.
Evaluating and Improving Modern Variable and Revision Ordering Strategies in CSPs
689
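The best-known failure-based heuristic the abstract refers to is dom/wdeg; as a hedged sketch of its selection rule (bookkeeping of the weights during propagation is omitted), the snippet picks the unassigned variable minimizing domain size divided by the summed failure weights of its still-active constraints. The instance is made up.

```python
def dom_wdeg(unassigned, domains, constraints, weight):
    """Pick the variable minimizing |dom(x)| / wdeg(x), where wdeg(x) sums
    the failure weights of constraints involving x and at least one other
    unassigned variable; weight[c] is incremented whenever constraint c
    causes a domain wipeout during propagation."""
    def wdeg(x):
        w = 0
        for c, scope in constraints.items():
            if x in scope and sum(v in unassigned for v in scope) >= 2:
                w += weight[c]
        return max(w, 1)                    # avoid division by zero
    return min(unassigned, key=lambda x: len(domains[x]) / wdeg(x))

domains = {"x": {1, 2, 3}, "y": {1, 2}, "z": {1, 2, 3, 4}}
constraints = {"c1": ("x", "y"), "c2": ("y", "z"), "c3": ("x", "z")}
weight = {"c1": 5, "c2": 1, "c3": 2}        # c1 has caused many failures
print(dom_wdeg({"x", "y", "z"}, domains, constraints, weight))   # 'y'
```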
The two standard branching schemes for CSPs are d-way and 2-way branching. Although it has been shown that in theory the latter can be exponentially more effective than the former, there is a lack of empirical evidence showing such differences. To investigate this, we initially make an experimental comparison of the two branching schemes over a wide range of benchmarks. Experimental results verify the theoretical gap between d-way and 2-way branching as we move from a simple variable ordering heuristic like dom to more sophisticated ones like dom/ddeg. However, perhaps surprisingly, experiments also show that when state-of-the-art variable ordering heuristics like dom/wdeg are used then d-way can be clearly more efficient than 2-way branching in many cases. Motivated by this observation, we develop two generic heuristics that can be applied at certain points during search to decide whether 2-way branching or a restricted version of 2-way branching, which is close to d-way branching, will be followed. The application of these heuristics results in an adaptive branching scheme. Experiments with instantiations of the two generic heuristics confirm that search with adaptive branching outperforms search with a fixed branching scheme on a wide range of problems.
Adaptive Branching for Constraint Satisfaction Problems
690
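The structural difference between the two schemes is easy to show as the children each one generates at a choice point; the value-ordering heuristic and tuple encoding below are placeholders, and the adaptive scheme of the paper would choose between these two generators during search.

```python
def d_way(var, domain):
    """d-way branching: one child per value of the chosen variable."""
    return [("assign", var, v) for v in sorted(domain)]

def two_way(var, domain):
    """2-way branching: a binary choice, x = v or x != v; after the refutation
    branch the solver may pick a *different* variable to branch on next."""
    v = min(domain)                  # value-ordering heuristic placeholder
    return [("assign", var, v), ("remove", var, v)]

print(d_way("x", {1, 2, 3}))    # 3 children: x=1, x=2, x=3
print(two_way("x", {1, 2, 3}))  # 2 children: x=1, x!=1
```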
Event-driven automation of reactive functionalities for complex event processing is an urgent need in today's distributed service-oriented architectures and Web-based event-driven environments. An important problem to be addressed is how to correctly and efficiently capture and process the event-based behavioral, reactive logic embodied in reaction rules, and combining this with other conditional decision logic embodied, e.g., in derivation rules. This paper elaborates a homogeneous integration approach that combines derivation rules, reaction rules and other rule types such as integrity constraints into the general framework of logic programming, the industrial-strength version of declarative programming. We describe syntax and semantics of the language, implement a distributed web-based middleware using enterprise service technologies and illustrate its adequacy in terms of expressiveness, efficiency and scalability through examples extracted from industrial use cases. The developed reaction rule language provides expressive features such as modular ID-based updates with support for external imports and self-updates of the intensional and extensional knowledge bases, transactions including integrity testing and roll-backs of update transition paths. It also supports distributed complex event processing, event messaging and event querying via efficient and scalable enterprise middleware technologies and event/action reasoning based on an event/action algebra implemented by an interval-based event calculus variant as a logic inference formalism.
A Homogeneous Reaction Rule Language for Complex Event Processing
691
This paper discusses four pressing problems raised by the software industry: user-system communication (the human-machine interface), metadata extraction, information processing and management, and data representation. To contribute to the field, we propose and describe an intelligent, semantic-oriented, agent-based search engine incorporating an intelligent graphical user interface, natural-language-based information processing, data management, and data reconstruction for the final presentation of information to the end user.
Semantic Oriented Agent based Approach towards Engineering Data Management, Web Information Retrieval and User System Communication Problems
692
Web development is a challenging research area for its creativity and complexity. A key challenge raised in current web technology development is the presentation of data in a format that machines can read and process, so as to take advantage of knowledge-based information extraction and maintenance. Currently it is not possible to search and extract optimized results using full-text queries, because no mechanism exists that can fully extract the semantics of full-text queries and then look for particular knowledge-based information.
An Agent based Approach towards Metadata Extraction, Modelling and Information Retrieval over the Web
693
In order to study the communication between information systems, Gong and Xiao [Z. Gong and Z. Xiao, Communicating between information systems based on including degrees, International Journal of General Systems 39 (2010) 189--206] proposed the concept of general relation mappings based on including degrees. Some properties and the extension for fuzzy information systems of the general relation mappings have been investigated there. In this paper, we point out by counterexamples that several assertions (Lemma 3.1, Lemma 3.2, Theorem 4.1, and Theorem 4.3) in the aforementioned work are not true in general.
A note on communicating between information systems based on including degrees
694
The World Wide Web (WWW) is the most popular global information sharing and communication system, consisting of three standards: the Uniform Resource Identifier (URI), the Hypertext Transfer Protocol (HTTP), and the Hypertext Mark-up Language (HTML). Information is provided in text, image, audio, and video formats over the web by using HTML, which is considered unconventional in defining and formalizing the meaning of the context...
Role of Ontology in Semantic Web Development
695
Recent research has highlighted the practical benefits of subjective interestingness measures, which quantify the novelty or unexpectedness of a pattern when contrasted with any prior information of the data miner (Silberschatz and Tuzhilin, 1995; Geng and Hamilton, 2006). A key challenge here is the formalization of this prior information in a way that lends itself to the definition of a subjective interestingness measure that is both meaningful and practical. In this paper, we outline a general strategy for how this could be achieved, before working out the details for a use case that is important in its own right. Our general strategy is based on considering prior information as constraints on a probabilistic model representing the uncertainty about the data. More specifically, we represent the prior information by the maximum entropy (MaxEnt) distribution subject to these constraints. We briefly outline various measures that could subsequently be used to contrast patterns with this MaxEnt model, thus quantifying their subjective interestingness.
Maximum entropy models and subjective interestingness: an application to tiles in binary databases
696
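The tile use case has a concrete MaxEnt instance worth sketching: if the prior information consists of the row and column sums of a binary matrix, the MaxEnt distribution factorizes into independent Bernoulli cells with p_ij = sigmoid(r_i + c_j). The fitting loop below is plain gradient ascent on the margins, a hedged stand-in rather than the authors' own procedure, and the matrix is made up.

```python
import numpy as np

def maxent_margins(D, iters=2000, lr=0.5):
    """Fit the MaxEnt model over binary matrices constrained by the row and
    column sums of D: independent Bernoulli cells, p = sigmoid(r_i + c_j),
    fitted by gradient ascent until expected margins match observed ones."""
    n, m = D.shape
    r, c = np.zeros(n), np.zeros(m)
    row_t, col_t = D.sum(1), D.sum(0)
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-(r[:, None] + c[None, :])))
        r += lr / m * (row_t - p.sum(1))   # match expected row sums
        c += lr / n * (col_t - p.sum(0))   # match expected column sums
    return p

D = np.array([[1, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 1, 1]])
p = maxent_margins(D)
print(np.round(p.sum(1), 2), np.round(p.sum(0), 2))  # approx. row/col sums of D
# A tile (all-ones submatrix) is then subjectively interesting when the
# product of its cells' p_ij is small, i.e. it is improbable under the model.
```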
We examine the practicality, for a user, of using Answer Set Programming (ASP) for representing logical formalisms. Our example is a formalism aiming at capturing causal explanations from causal information. We show the naturalness and relative efficiency of this translation. We are interested in the ease of writing an ASP program: limitations of earlier systems meant that, in practice, the "declarative aspect" was more theoretical than practical. We show how recent improvements in working ASP systems facilitate the translation.
A formalism for causal explanations with an Answer Set Programming translation
697
Knowledge Management is a global process in companies. It includes all the processes that allow the capitalization, sharing, and evolution of the Knowledge Capital of the firm, generally recognized as a critical resource of the organization. Several approaches have been defined to capitalize knowledge, but few of them study how to learn from this knowledge. We present in this paper an approach that helps to enhance learning from profession knowledge in an organisation, and we apply it to the knitting industry.
Learning from Profession Knowledge: Application on Knitting
698
Constraint solvers are complex pieces of software which require many design decisions to be made by the implementer based on limited information. These decisions affect the performance of the finished solver significantly. Once a design decision has been made, it cannot easily be reversed, although a different decision may be more appropriate for a particular problem. We investigate using machine learning to make these decisions automatically depending on the problem to solve. We use the alldifferent constraint as a case study. Our system is capable of making non-trivial, multi-level decisions that improve over always making a default choice and can be implemented as part of a general-purpose constraint solver.
Machine learning for constraint solver design -- A case study for the alldifferent constraint
699
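A hedged sketch of the overall idea (the paper's features, labels, and learner are its own): train a classifier on per-instance features to choose a design option, then let it make the decision for new instances. Here a decision tree picks a hypothetical propagation level for alldifferent; all features, labels, and training rows are invented.

```python
from sklearn.tree import DecisionTreeClassifier

# Hypothetical training data: per-instance features -> best design choice.
# Features: [n_vars, avg_domain_size, constraint_tightness]; labels name a
# propagation level for alldifferent (illustrative, not the paper's setup).
X = [[10, 3, 0.9], [50, 40, 0.2], [30, 5, 0.7], [80, 60, 0.1], [12, 4, 0.8]]
y = ["gac", "bounds", "gac", "bounds", "gac"]

model = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

def choose_propagator(n_vars, avg_dom, tightness):
    """Let the learned model make the design decision per problem instance."""
    return model.predict([[n_vars, avg_dom, tightness]])[0]

print(choose_propagator(20, 4, 0.85))   # likely 'gac' on tight, small domains
```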