| text | source | __index_level_0__ |
|---|---|---|
In this paper, we study a natural extension of Multi-Layer Perceptrons (MLP) to functional inputs. We show that fundamental results for classical MLP can be extended to functional MLP. We obtain universal approximation results that show the expressive power of functional MLP is comparable to that of numerical MLP. We obtain consistency results which imply that the estimation of optimal parameters for functional MLP is statistically well defined. We finally show on simulated and real world data that the proposed model performs in a very satisfactory way. | Functional Multi-Layer Perceptron: a Nonlinear Tool for Functional Data
Analysis | 5,900 |
The field of algorithmic self-assembly is concerned with the design and analysis of self-assembly systems from a computational perspective, that is, from the perspective of mathematical problems whose study may give insight into the natural processes through which elementary objects self-assemble into more complex ones. One of the main problems of algorithmic self-assembly is the minimum tile set problem (MTSP), which asks for a collection of types of elementary objects (called tiles) to be found for the self-assembly of an object having a pre-established shape. Such a collection is to be as concise as possible, thus minimizing supply diversity, while satisfying a set of stringent constraints having to do with the termination and other properties of the self-assembly process from its tile types. We present a study of what we think is the first practical approach to MTSP. Our study starts with the introduction of an evolutionary heuristic to tackle MTSP and includes results from extensive experimentation with the heuristic on the self-assembly of simple objects in two and three dimensions. The heuristic we introduce combines classic elements from the field of evolutionary computation with a problem-specific variant of Pareto dominance into a multi-objective approach to MTSP. | Optimization of supply diversity for the self-assembly of simple objects
in two and three dimensions | 5,901 |
Cellular Simultaneous Recurrent Neural Networks (SRN) have been shown to be a more powerful function approximator than the MLP. This means that the complexity of an MLP would be prohibitively large for some problems, while an SRN could realize the desired mapping within acceptable computational constraints. The speed of training of complex recurrent networks is crucial to their successful application. The present work improves on previous results by training the network with an extended Kalman filter (EKF). We implemented a generic Cellular SRN and applied it to two challenging problems: 2D maze navigation and a subset of the connectedness problem. The speed of convergence has been improved by several orders of magnitude compared with earlier results in the case of maze navigation, and superior generalization has been demonstrated in the case of connectedness. The implications of these improvements are discussed. | Beyond Feedforward Models Trained by Backpropagation: a Practical
Training Tool for a More Efficient Universal Approximator | 5,902 |
This paper discusses the suitability of the fault-trajectory approach for fault diagnosis on analog networks. Recent works have shown promising results for an ATPG method based on this concept for diagnosing faults on analog networks. The method relies on evolutionary techniques, where a genetic algorithm (GA) is coded to generate a set of optimum frequencies capable of disclosing faults. | Fault-Trajectory Approach for Fault Diagnosis on Analog Circuits | 5,903 |
State estimation is necessary in diagnosing anomalies in Water Distribution Systems (WDS). In this paper we present a neural network performing such a task. State estimation is performed using optimization, which tries to reconcile all the available information. Quantification of the uncertainty of the input data (telemetry measures and demand predictions) can be achieved by means of robust state estimation. Using a mathematical model of the network, fuzzy estimated states for anomalous states of the network can be obtained. They are used to train a neural network capable of assessing WDS anomalies associated with particular sets of measurements. | Estimation of fuzzy anomalies in Water Distribution Systems | 5,904 |
Many kinds of Evolutionary Algorithms (EAs) have been described in the literature over the last 30 years. However, although most of them share a common structure, no existing software package allows the user to shift from one model to another by simply changing a few parameters, e.g. in a single window of a Graphical User Interface. This paper presents GUIDE, a Graphical User Interface for DREAM Experiments that, among other user-friendly features, unifies all kinds of EAs into a single panel, as far as evolution parameters are concerned. Such a window can be used either to ask for one of the well-known ready-to-use algorithms, or to very easily explore new combinations that have not yet been studied. Another advantage of grouping all the elements necessary to describe virtually all kinds of EAs is that it creates a fantastic pedagogic tool for teaching EAs to students and newcomers to the field. | GUIDE: Unifying Evolutionary Engines through a Graphical User Interface | 5,905 |
Can intelligence optimise Digital Ecosystems? How could a distributed intelligence interact with the ecosystem dynamics? Can the software components that are subject to genetic selection be intelligent in themselves, as in an adaptive technology? We consider the effect of a distributed intelligence mechanism on the evolutionary and ecological dynamics of our Digital Ecosystem, which is the digital counterpart of a biological ecosystem for evolving software services in a distributed network. We investigate Neural Networks and Support Vector Machines for the learning-based pattern recognition functionality of our distributed intelligence. Simulation results imply that the Digital Ecosystem performs better with the application of a distributed intelligence, marginally more effectively when powered by Support Vector Machines than by Neural Networks, and suggest that it can contribute to optimising the operation of our Digital Ecosystem. | Digital Ecosystems: Optimisation by a Distributed Intelligence | 5,906 |
Stability is perhaps one of the most desirable features of any engineered system, given the importance of being able to predict its response to various environmental conditions prior to actual deployment. Engineered systems are becoming ever more complex, approaching the same levels as biological ecosystems, and so their stability becomes ever more important, but taking on more and more differential dynamics can make stability an ever more elusive property. The Chli-DeWilde definition of stability views a Multi-Agent System as a discrete-time Markov chain with potentially unknown transition probabilities, and considers the system stable when its state, a stochastic process, has converged to an equilibrium distribution, since the stability of a system can be understood intuitively as the exhibition of bounded behaviour. We investigate an extension to include Multi-Agent Systems with evolutionary dynamics, focusing on the evolving agent populations of our Digital Ecosystem. We then built upon this to construct an entropy-based definition for the degree of instability (the entropy of the limit probabilities), which was later used to perform a stability analysis. The Digital Ecosystem is considered to investigate the stability of an evolving agent population through simulations, for which the results were consistent with the original Chli-DeWilde definition. | Digital Ecosystems: Stability of Evolving Agent Populations | 5,907 |
We view Digital Ecosystems to be the digital counterparts of biological ecosystems, exploiting the self-organising properties of biological ecosystems, which are considered to be robust, self-organising and scalable architectures that can automatically solve complex, dynamic problems. Digital Ecosystems are a novel optimisation technique where the optimisation works at two levels: a first optimisation, migration of agents (representing services) which are distributed in a decentralised peer-to-peer network, operating continuously in time; this process feeds a second optimisation based on evolutionary computing that operates locally on single peers and is aimed at finding solutions to satisfy locally relevant constraints. We created an Ecosystem-Oriented Architecture of Digital Ecosystems by extending Service-Oriented Architectures with distributed evolutionary computing, allowing services to recombine and evolve over time, constantly seeking to improve their effectiveness for the user base. Individuals within our Digital Ecosystem will be applications (groups of services), created in response to user requests by using evolutionary optimisation to aggregate the services. These individuals will migrate through the Digital Ecosystem and adapt to find niches where they are useful in fulfilling other user requests for applications. Simulation results imply that the Digital Ecosystem performs better at large scales than a comparable Service-Oriented Architecture, suggesting that incorporating ideas from theoretical ecology can contribute to useful self-organising properties in digital ecosystems. | Digital Ecosystems: Evolving Service-Oriented Architectures | 5,908 |
We start with a discussion of the relevant literature, including Nature Inspired Computing as a framework in which to understand this work, and the process of biomimicry to be used in mimicking the necessary biological processes to create Digital Ecosystems. We then consider the relevant theoretical ecology in creating the digital counterpart of a biological ecosystem, including the topological structure of ecosystems, and evolutionary processes within distributed environments. This leads to a discussion of the relevant fields from computer science for the creation of Digital Ecosystems, including evolutionary computing, Multi-Agent Systems, and Service-Oriented Architectures. We then define Ecosystem-Oriented Architectures for the creation of Digital Ecosystems, imbued with the properties of self-organisation and scalability from biological ecosystems, including a novel form of distributed evolutionary computing. | Creating a Digital Ecosystem: Service-Oriented Architectures with
Distributed Evolutionary Computing | 5,909 |
In some real-world situations, linear models are not sufficient to accurately represent complex relations between the input and output variables of a studied system. Multilayer Perceptrons are among the most successful non-linear regression tools, but they are unfortunately restricted to inputs and outputs that belong to a normed vector space. In this chapter, we propose a general recoding method that allows symbolic data to be used both as inputs and outputs to Multilayer Perceptrons. The recoding is quite simple to implement and yet provides a flexible framework that can deal with almost all practical cases. The proposed method is illustrated on a real-world data set. | Multi-Layer Perceptrons and Symbolic Data | 5,910 |
In this paper, a new implementation of the adaptation of Kohonen self-organising maps (SOM) to dissimilarity matrices is proposed. This implementation relies on the branch and bound principle to reduce the algorithm running time. An important property of this new approach is that the obtained algorithm produces exactly the same results as the standard algorithm. | Accélération des cartes auto-organisatrices sur tableau de
dissimilarités par séparation et évaluation | 5,911 |
Prediction problems from spectra are frequently encountered in chemometrics. In addition to accurate predictions, it is often necessary to extract information about which wavelengths in the spectra contribute effectively to the quality of the prediction. This requires selecting wavelengths (or wavelength intervals), a problem related to variable selection. In this paper, it is shown how this problem may be tackled in the specific case of smooth (for example infrared) spectra. The functional character of the spectra (their smoothness) is taken into account through a functional variable projection procedure. Contrary to standard approaches, the projection is performed on a basis that is driven by the spectra themselves, in order to best fit their characteristics. The methodology is illustrated by two examples of functional projection, using Independent Component Analysis and functional variable clustering, respectively. Performance is illustrated on two standard infrared spectra benchmarks. | A data-driven functional projection approach for the selection of
feature ranges in spectra with ICA or cluster analysis | 5,912 |
Self organizing maps (SOMs) are widely-used for unsupervised classification. For this application, they must be combined with some partitioning scheme that can identify boundaries between distinct regions in the maps they produce. We discuss a novel partitioning scheme for SOMs based on the Bayesian Blocks segmentation algorithm of Scargle [1998]. This algorithm minimizes a cost function to identify contiguous regions over which the values of the attributes can be represented as approximately constant. Because this cost function is well-defined and largely independent of assumptions regarding the number and structure of clusters in the original sample space, this partitioning scheme offers significant advantages over many conventional methods. Sample code is available. | Using Bayesian Blocks to Partition Self-Organizing Maps | 5,913 |
Evolutionary complexity is here measured by the number of trials/evaluations needed for evolving a logical gate in a non-linear medium. Behavioural complexity of the gates evolved is characterised in terms of cellular automata behaviour. We speculate that hierarchies of behavioural and evolutionary complexities are isomorphic up to some degree, subject to substrate specificity of evolution and the spectrum of evolution parameters. | Are complex systems hard to evolve? | 5,914 |
We apply Agent-Based Modeling and Simulation (ABMS) to investigate a set of problems in a retail context. Specifically, we are working to understand the relationship between human resource management practices and retail productivity. Despite the fact we are working within a relatively novel and complex domain, it is clear that intelligent agents do offer potential for developing organizational capabilities in the future. Our multi-disciplinary research team has worked with a UK department store to collect data and capture perceptions about operations from actors within departments. Based on this case study work, we have built a simulator that we present in this paper. We then use the simulator to gather empirical evidence regarding two specific management practices: empowerment and employee development. | A Multi-Agent Simulation of Retail Management Practices | 5,915 |
Intelligent agents offer a new and exciting way of understanding the world of work. In this paper we apply agent-based modeling and simulation to investigate a set of problems in a retail context. Specifically, we are working to understand the relationship between human resource management practices and retail productivity. Despite the fact we are working within a relatively novel and complex domain, it is clear that intelligent agents could offer potential for fostering sustainable organizational capabilities in the future. Our research so far has led us to conduct case study work with a top ten UK retailer, collecting data in four departments in two stores. Based on our case study data we have built and tested a first version of a department store simulator. In this paper we will report on the current development of our simulator which includes new features concerning more realistic data on the pattern of footfall during the day and the week, a more differentiated view of customers, and the evolution of customers over time. This allows us to investigate more complex scenarios and to analyze the impact of various management practices. | Understanding Retail Productivity by Simulating Management Practise | 5,916 |
The observation and modeling of natural Complex Systems (CSs), such as the human nervous system, evolution, or the weather, allows the definition of special abilities and models that can be reused to solve other problems. For instance, Genetic Algorithms and Ant Colony Optimization are inspired by natural CSs to solve optimization problems. This paper proposes the use of ant-based systems to solve various problems with a non-assessing approach. This means that solutions to a problem are not explicitly evaluated; they appear as resultant structures of the activity of the system. Problems are modeled with graphs, and such structures are observed directly on these graphs. Problems of Multiple Sequence Alignment and Natural Language Processing are addressed with this approach. | Problem Solving and Complex Systems | 5,917 |
Crucial to an Evolutionary Algorithm's performance is its selection scheme. We mathematically investigate the relation between polynomial rank and probabilistic tournament methods, which are (respectively) generalisations of the popular linear ranking and tournament selection schemes. We show that every probabilistic tournament is equivalent to a unique polynomial rank scheme. In fact, we derive explicit operators for translating between these two types of selection. Of particular importance is that most linear and most practical quadratic rank schemes are probabilistic tournaments. | Equivalence of Probabilistic Tournament and Polynomial Ranking Selection | 5,918 |
We study in detail the fitness landscape of a difficult cellular automata computational task: the majority problem. Our results show why this problem landscape is so hard to search, and we quantify the large degree of neutrality found in various ways. We show that a particular subspace of the solution space, called the "Olympus", is where good solutions concentrate, and give measures to quantitatively characterize this subspace. | Neutral Fitness Landscape in the Cellular Automata Majority Problem | 5,919 |
The application of genetic algorithms (GAs) to many optimization problems in organizations often results in good performance and high-quality solutions. For successful and efficient use of GAs, it is not enough to simply apply simple GAs (SGAs). In addition, it is necessary to find a proper representation for the problem and to develop appropriate search operators that fit well the properties of the genotype encoding. The representation must at least be able to encode all possible solutions of an optimization problem, and genetic operators such as crossover and mutation should be applicable to it. In this paper, serial alternation strategies between two codings are formulated in the framework of dynamic change of genotype encoding in GAs for function optimization. Furthermore, a new variant of GAs for difficult optimization problems, denoted Split-and-Merge GA (SM-GA), is developed using a parallel implementation of an SGA and evolving a dynamic exchange of individual representations in the context of the Dual Coding concept. Numerical experiments show that the evolved SM-GA significantly outperforms an SGA with a static single coding. | Evolving Dynamic Change and Exchange of Genotype Encoding in Genetic
Algorithms for Difficult Optimization Problems | 5,920 |
This paper presents the Anisotropic selection scheme for cellular Genetic Algorithms (cGA). This new scheme makes it possible to enhance diversity and to control selective pressure, two important issues in Genetic Algorithms, especially when trying to solve difficult optimization problems. Varying the anisotropic degree of selection allows swapping from a cellular to an island model of parallel genetic algorithm. Measurements of performance and diversity have been carried out on one well-known problem: the Quadratic Assignment Problem, which is known to be difficult to optimize. Experiments show that, by tuning the anisotropic degree, we can find an appropriate trade-off between cGA and island models to optimize the performance of parallel evolutionary algorithms. This trade-off can be interpreted as the suitable degree of migration among subpopulations in a parallel Genetic Algorithm. | From Cells to Islands: An unified Model of Cellular Parallel Genetic
Algorithms | 5,921 |
This effort examines the intersection of the emerging field of quantum computing and the more established field of evolutionary computation. The goal is to understand what benefits quantum computing might offer to computational intelligence and how computational intelligence paradigms might be implemented as quantum programs to be run on a future quantum computer. We critically examine proposed algorithms and methods for implementing computational intelligence paradigms, primarily focused on heuristic optimization methods including and related to evolutionary computation, with particular regard for their potential for eventual implementation on quantum computing hardware. | Prospective Algorithms for Quantum Evolutionary Computation | 5,922 |
Recently, a new method for encoding data sets in the form of "Density Codes" was proposed in the literature (Courrieu, 2006). This method makes it possible to compare sets of points belonging to any multidimensional space, and to build shape spaces invariant to a wide variety of affine and non-affine transformations. However, this general method does not take advantage of the special properties of image data, resulting in a quite slow encoding process that makes the tool practically unusable for processing large image databases on conventional computers. This paper proposes a very simple variant of the density code method that works directly on the image function, and is thousands of times faster than the original Parzen-window-based method, without loss of its useful properties. | Fast Density Codes for Image Data | 5,923 |
The solving of least square systems is a useful operation in neurocomputational modeling of learning, pattern matching, and pattern recognition. In these last two cases, the solution must be obtained on-line, thus the time required to solve a system in a plausible neural architecture is critical. This paper presents a recurrent network of Sigma-Pi neurons, whose solving time increases at most like the logarithm of the system size, and of its condition number, which provides plausible computation times for biological systems. | Solving Time of Least Square Systems in Sigma-Pi Unit Networks | 5,924 |
Many neural learning algorithms require solving large least squares systems in order to obtain synaptic weights. Moore-Penrose inverse matrices allow such systems to be solved, even with rank deficiency, and they provide minimum-norm vectors of synaptic weights, which contribute to the regularization of the input-output mapping. It is thus of interest to develop fast and accurate algorithms for computing Moore-Penrose inverse matrices. In this paper, an algorithm based on a full-rank Cholesky factorization is proposed. The resulting pseudoinverse matrices are similar to those provided by other algorithms. However, the computation time is substantially shorter, particularly for large systems. | Fast Computation of Moore-Penrose Inverse Matrices | 5,925 |
We combine a refined version of two-point step-size adaptation with the covariance matrix adaptation evolution strategy (CMA-ES). Additionally, we suggest polished formulae for the learning rate of the covariance matrix and the recombination weights. In contrast to cumulative step-size adaptation or to the 1/5-th success rule, the refined two-point adaptation (TPA) does not rely on any internal model of optimality. In contrast to conventional self-adaptation, the TPA will achieve a better target step-size, in particular with large populations. The disadvantage of TPA is that it relies on two additional objective function evaluations. | CMA-ES with Two-Point Step-Size Adaptation | 5,926 |
In this paper the Sudoku problem is solved using stochastic search techniques, namely: Cultural Genetic Algorithm (CGA), Repulsive Particle Swarm Optimization (RPSO), Quantum Simulated Annealing (QSA) and a Hybrid method that combines a Genetic Algorithm with Simulated Annealing (HGASA). The results obtained show that CGA, QSA and HGASA are able to solve the Sudoku puzzle, with CGA finding a solution in 28 seconds, QSA in 65 seconds, and HGASA in 1.447 seconds. This is mainly because HGASA combines the parallel searching of GA with the flexibility of SA. The RPSO was found to be unable to solve the puzzle. | Stochastic Optimization Approaches for Solving Sudoku | 5,927 |
Although there are many neural network (NN) algorithms for prediction and for control, and although methods for optimal estimation (including filtering and prediction) and for optimal control in linear systems were provided by Kalman in 1960 (with nonlinear extensions since then), there has been, to my knowledge, no NN algorithm that learns either Kalman prediction or Kalman control (apart from the special case of stationary control). Here we show how optimal Kalman prediction and control (KPC), as well as system identification, can be learned and executed by a recurrent neural network composed of linear-response nodes, using as input only a stream of noisy measurement data. The requirements of KPC appear to impose significant constraints on the allowed NN circuitry and signal flows. The NN architecture implied by these constraints bears certain resemblances to the local-circuit architecture of mammalian cerebral cortex. We discuss these resemblances, as well as caveats that limit our current ability to draw inferences for biological function. It has been suggested that the local cortical circuit (LCC) architecture may perform core functions (as yet unknown) that underlie sensory, motor, and other cortical processing. It is reasonable to conjecture that such functions may include prediction, the estimation or inference of missing or noisy sensory data, and the goal-driven generation of control signals. The resemblances found between the KPC NN architecture and that of the LCC are consistent with this conjecture. | Neural network learning of optimal Kalman prediction and control | 5,928 |
A computational method for damage detection problems in structures was developed using neural networks. The problem considered in this work consists of estimating the existence, location and extent of stiffness reduction in a structure, as indicated by changes in structural static parameters such as deflection and strain. The neural network was trained to recognize the behaviour of the static parameters of the undamaged structure, as well as of the structure with various possible damage extents and locations, which were modelled as random states. The proposed technique was applied to detect damage in a simply supported beam. The structure was analyzed using the finite element method (FEM), and damage identification was conducted by a back-propagation neural network using the changes in structural strain and displacement. The results showed that with the proposed method the strain is more efficient for damage identification than the displacement. | Structural Damage Detection Using Randomized Trained Neural Networks | 5,929 |
Popular computational models of visual attention tend to neglect the influence of saccadic eye movements, whereas it has been shown that primates perform on average three of them per second, and that the neural substrates for the deployment of attention and the execution of an eye movement may considerably overlap. Here we propose a computational model in which the deployment of attention, with or without a subsequent eye movement, emerges from local, distributed and numerical computations. | A computational approach to the covert and overt deployment of spatial
attention | 5,930 |
Approaches to machine intelligence based on brain models have stressed the use of neural networks for generalization. Here we propose the use of a hybrid neural network architecture that uses two kinds of neural networks simultaneously: (i) a surface learning agent that quickly adapts to new modes of operation; and (ii) a deep learning agent that is very accurate within a specific regime of operation. The two networks of the hybrid architecture perform complementary functions that improve the overall performance. The performance of the hybrid architecture has been compared with that of back-propagation perceptrons and the CC and FC networks for chaotic time-series prediction, the CATS benchmark test, and smooth function approximation. It has been shown that the hybrid architecture provides superior performance based on the RMS error criterion. | Hybrid Neural Network Architecture for On-Line Learning | 5,931 |
Skepticism of the building block hypothesis (BBH) has previously been expressed on account of the weak theoretical foundations of this hypothesis and the anomalies in the empirical record of the simple genetic algorithm. In this paper we home in on a more fundamental cause for skepticism--the extraordinary strength of some of the assumptions that undergird the BBH. Specifically, we focus on assumptions made about the distribution of fitness over the genome set, and argue that these assumptions are unacceptably strong. As most of these assumptions have been embraced by the designers of so-called "competent" genetic algorithms, our critique is relevant to an appraisal of such algorithms as well. | The Fundamental Problem with the Building Block Hypothesis | 5,932 |
Since the inception of genetic algorithmics the identification of computational efficiencies of the simple genetic algorithm (SGA) has been an important goal. In this paper we distinguish between a computational competency of the SGA--an efficient, but narrow computational ability--and a computational proficiency of the SGA--a computational ability that is both efficient and broad. To date, attempts to deduce a computational proficiency of the SGA have been unsuccessful. It may, however, be possible to inductively infer a computational proficiency of the SGA from a set of related computational competencies that have been deduced. With this in mind we deduce two computational competencies of the SGA. These competencies, when considered together, point toward a remarkable computational proficiency of the SGA. This proficiency is pertinent to a general problem that is closely related to a well-known statistical problem at the cutting edge of computational genetics. | Two Remarkable Computational Competencies of the Simple Genetic
Algorithm | 5,933 |
We propose a network characterization of combinatorial fitness landscapes by adapting the notion of inherent networks proposed for energy surfaces (Doye, 2002). We use the well-known family of $NK$ landscapes as an example. In our case the inherent network is the graph where the vertices are all the local maxima and edges represent basin adjacency between two maxima. We exhaustively extract such networks on representative small NK landscape instances, and show that they are 'small-worlds'. However, the maxima graphs are not random, since their clustering coefficients are much larger than those of corresponding random graphs. Furthermore, the degree distributions are close to exponential instead of Poissonian. We also describe the nature of the basins of attraction and their relationship with the local maxima network. | A Study of NK Landscapes' Basins and Local Optima Networks | 5,934 |
We propose a network characterization of combinatorial fitness landscapes by adapting the notion of inherent networks proposed for energy surfaces. We use the well-known family of NK landscapes as an example. In our case the inherent network is the graph where the vertices represent the local maxima in the landscape, and the edges account for the transition probabilities between their corresponding basins of attraction. We exhaustively extracted such networks on representative small NK landscape instances, and performed a statistical characterization of their properties. We found that most of these network properties can be related to the search difficulty on the underlying NK landscapes with varying values of K. | The Connectivity of NK Landscapes' Basins: A Network Analysis | 5,935 |
The structure determination of a neural network for the modeling of a system remains the core of the problem. Within this framework, we propose a pruning algorithm for the network based on the analysis of the variance of the sensitivity of all the network's parameters. This algorithm is tested on two simulation examples and its performance is compared with that of three other pruning algorithms from the literature. | Elagage d'un perceptron multicouches : utilisation de l'analyse de la
variance de la sensibilité des paramètres | 5,936 |
Simulation is often used to evaluate the relevance of a Master Production Schedule (PDP) or to evaluate its impact on detailed scheduling scenarios. Within this framework, we propose to reduce the complexity of a simulation model by exploiting a multilayer perceptron. A key phase in modeling a system with a multilayer perceptron is the determination of the network structure. We propose to compare and use various pruning algorithms in order to determine the optimal structure of the network used to reduce the complexity of the simulation model in our application case: a sawmill. | Sélection de la structure d'un perceptron multicouches pour la
réduction d'un modèle de simulation d'une scierie | 5,937
This paper introduces a method to generate hierarchically modular networks with prescribed node degree list and proposes a metric to measure network modularity based on the notion of edge distance. The generated networks are used as test problems to explore the effect of modularity and degree distribution on evolutionary algorithm performance. Results from the experiments (i) confirm a previous finding that modularity increases the performance advantage of genetic algorithms over hill climbers, and (ii) support a new conjecture that test problems with modularized constraint networks having heavy-tailed right-skewed degree distributions are more easily solved than test problems with modularized constraint networks having bell-shaped normal degree distributions. | Effect of Degree Distribution on Evolutionary Search | 5,938 |
Traditional Genetic Algorithms (GAs) mating schemes select individuals for crossover independently of their genotypic or phenotypic similarities. In Nature, this behaviour is known as random mating. However, non-random schemes - in which individuals mate according to their kinship or likeness - are more common in natural systems. Previous studies indicate that, when applied to GAs, negative assortative mating (a specific type of non-random mating, also known as dissortative mating) may improve their performance (in both speed and reliability) on a wide range of problems. Dissortative mating maintains genetic diversity at a higher level during the run, a fact frequently offered as an explanation for dissortative GAs' ability to escape local optima traps. Dynamic problems, due to their specificities, demand special care when tuning a GA, because diversity plays an even more crucial role than it does when tackling static ones. This paper investigates the behaviour of dissortative mating GAs, namely the recently proposed Adaptive Dissortative Mating GA (ADMGA), on dynamic trap functions. ADMGA selects parents according to their Hamming distance, via a self-adjustable threshold value. The method, by keeping population diversity during the run, provides an effective means to deal with dynamic problems. Tests conducted with deceptive and nearly deceptive trap functions indicate that ADMGA is able to outperform other GAs, some specifically designed for tracking moving extrema, on a wide range of tests, being particularly effective when the speed of change is not very high. When comparing the algorithm to a previously proposed dissortative GA, results show that performance is equivalent on the majority of the experiments, but ADMGA performs better when solving the hardest instances of the test set. | Using Dissortative Mating Genetic Algorithms to Track the Extrema of
Dynamic Deceptive Functions | 5,939 |
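The dissortative mate-selection mechanism described in this abstract can be illustrated with a short sketch. This is not ADMGA itself (whose threshold is self-adjusting); it is a simplified, hypothetical version with a fixed Hamming-distance threshold and a small candidate pool, for illustration only.

```python
import random

random.seed(1)

def hamming(a, b):
    """Number of positions at which two equal-length genomes differ."""
    return sum(x != y for x, y in zip(a, b))

def dissortative_pair(population, threshold, pool_size=10):
    """Pick a first parent at random, then the first sampled candidate whose
    Hamming distance from it meets the threshold; fall back to the most
    distant candidate sampled if none qualifies."""
    first = random.choice(population)
    candidates = random.sample(population, min(pool_size, len(population)))
    for c in candidates:
        if hamming(first, c) >= threshold:
            return first, c
    return first, max(candidates, key=lambda c: hamming(first, c))

# toy binary population: 30 genomes of length 20
pop = [[random.randint(0, 1) for _ in range(20)] for _ in range(30)]
p1, p2 = dissortative_pair(pop, threshold=8)
print(hamming(p1, p2))
```

Biasing crossover toward distant parents is what keeps diversity high during the run, which is the property the abstract credits for ADMGA's behaviour on dynamic problems.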
The goal of this paper is to present the implementation of a Radial Basis Function neural network with built-in knowledge to recognize hand-written characters. The neural network includes in its architecture gates controlled by an attraction/repulsion system of coefficients. These coefficients are derived from a preprocessing stage which groups the characters according to their ascendant, central, or descendent components. The neural network is trained using data from invariant moment functions. Results are compared with those obtained using a K nearest neighbor method on the same moment data. | The use of invariant moments in hand-written character recognition | 5,940 |
An immune system inspired Artificial Immune System (AIS) algorithm is presented, and is used for the purposes of automated program verification. Relevant immunological concepts are discussed and the field of AIS is briefly reviewed. It is proposed to use this AIS algorithm for a specific automated program verification task: that of predicting the shape of program invariants. It is shown that the algorithm correctly predicts program invariant shape for a variety of benchmarked programs. | An Immune System Inspired Approach to Automated Program Verification | 5,941
This paper extends the treatment of single-neuron memories obtained by the B-matrix approach. The spreading of the activity within the network is determined by the network's proximity matrix which represents the separations amongst the neurons through the neural pathways. | Single Neuron Memories and the Network's Proximity Matrix | 5,942 |
Simulation is useful for the evaluation of a Master Production/Distribution Schedule (MPS). The goal of this paper is to study the design of a simulation model by reducing its complexity. Following the theory of constraints, we want to build reduced models composed exclusively of bottlenecks and a neural network; in particular, a multilayer perceptron is used. The structure of the network is determined using a pruning procedure. This work focuses on the impact of discrete data on the results and compares different approaches to deal with such data. The approach is applied to the internal supply chain of a sawmill. | How deals with discrete data for the reduction of simulation models
using neural network | 5,943 |
The design and the implementation of a genetic algorithm are described. The applicability domain is structure-activity relationships expressed as multiple linear regressions, with predictor variables drawn from families of structure-based molecular descriptors. An experiment to compare different selection and survival strategies was designed and realized. The genetic algorithm was run using the designed experiment on a set of 206 polychlorinated biphenyls, searching for structure-activity relationships given the measured octanol-water partition coefficients and a family of molecular descriptors. The experiment shows that different selection and survival strategies create different partitions on the entire population of all possible genotypes. | A genetic algorithm for structure-activity relationships: software
implementation | 5,944 |
Artificial Neural Networks (ANN) form one of the most common AI application fields, with direct and indirect uses in most sciences. The main goal of an ANN is to imitate biological neural networks for solving scientific problems, but the limited level of parallelism is the main problem of ANN systems in comparison with biological systems. To solve this problem, we offer an XML-based framework for implementing ANNs on the Globus Toolkit platform. Globus Toolkit is well-known management software for multipurpose Grids. Using the Grid to simulate the neural network leads to a high degree of parallelism in the implementation of the ANN. We use XML to improve the flexibility and scalability of our framework. | XDANNG: XML based Distributed Artificial Neural Network with Globus
Toolkit | 5,945 |
Abbreviated Abstract: The objective of Evolutionary Computation is to solve practical problems (e.g. optimization, data mining) by simulating the mechanisms of natural evolution. This thesis addresses several topics related to adaptation and self-organization in evolving systems with the overall aims of improving the performance of Evolutionary Algorithms (EA), understanding its relation to natural evolution, and incorporating new mechanisms for mimicking complex biological systems. | Adaptation and Self-Organization in Evolutionary Algorithms | 5,946 |
Aluminum extrusion die manufacturing is a critical task for productivity improvement and for increasing competitiveness in the aluminum extrusion industry. Efficiency here means not only consistent quality but also reductions in time and production cost. Die manufacturing begins with die design and process planning in order to make a die for extruding the products the customer requires. The efficiency of die design and process planning is based on the knowledge and experience of die design and die manufacturing experts. This knowledge has been formulated into a computer system called a knowledge-based system, and it can be reused to support a new die design and process planning. Such knowledge can be extracted directly from the die geometry, which is composed of die features. These features are stored in a die feature library to be reused when producing a new die. Die geometry is defined according to the characteristics of the profile, so we can reuse die features from previous similar profile design cases. This paper presents the CaseXpert Process Planning System for die manufacturing based on a feature-based neural network technique. Die manufacturing cases in the case library are retrieved by a neural network search-and-learning method for reuse or revision when building a die design and process plan for a new case that is similar to previous die manufacturing cases. The outputs of the system are the die design and the machining process. The system has been successfully tested, and it has been shown that the system can reduce planning time and produce highly consistent plans. | A process planning system with feature based neural network search
strategy for aluminum extrusion die manufacturing | 5,947 |
Digital logic forms the functional basics of most modern electronic equipment and as such the creation of novel digital logic circuits is an active area of computer engineering research. This study demonstrates that genetic algorithms can be used to evolve functionally useful sets of logic gate interconnections to create useful digital logic circuits. The efficacy of this approach is illustrated via the evolution of AND, OR, XOR, NOR, and XNOR functionality from sets of NAND gates, thereby illustrating that evolutionary methods have the potential to be applied to the design of digital electronics. | Evolution of Digital Logic Functionality via a Genetic Algorithm | 5,948
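Evolving a target function from NAND gates, as this abstract describes, can be sketched compactly. The following is a minimal illustrative sketch under stated assumptions (a linear feed-forward genome of 6 NAND gates, a simple (1+4) mutation-only loop, XOR as the target); the representation and parameters are hypothetical, not taken from the study.

```python
import random

random.seed(2)
N_GATES = 6
target = lambda a, b: a ^ b          # try to evolve XOR from NAND gates
nand = lambda x, y: 1 - (x & y)

def evaluate(genome):
    """genome: per gate, a pair of source indices (0,1 = circuit inputs,
    2+ = outputs of earlier gates); circuit output = last gate.
    Returns the number of truth-table rows matched (0..4)."""
    score = 0
    for a in (0, 1):
        for b in (0, 1):
            signals = [a, b]
            for s1, s2 in genome:
                signals.append(nand(signals[s1], signals[s2]))
            score += (signals[-1] == target(a, b))
    return score

def random_genome():
    # gate g may read only from the 2 inputs and the g earlier gates
    return [(random.randrange(2 + g), random.randrange(2 + g))
            for g in range(N_GATES)]

def mutate(genome):
    g = random.randrange(N_GATES)
    child = list(genome)
    child[g] = (random.randrange(2 + g), random.randrange(2 + g))
    return child

# simple (1+4) evolutionary loop: keep the best of parent and 4 mutants
best = random_genome()
for _ in range(2000):
    kids = [mutate(best) for _ in range(4)]
    best = max(kids + [best], key=evaluate)
    if evaluate(best) == 4:
        break
print("truth-table matches:", evaluate(best))
```

The feed-forward constraint (each gate reads only from earlier signals) guarantees every genome is a valid combinational circuit, which keeps the fitness function trivially evaluable.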
Algorithm::Evolutionary (A::E from now on) was introduced in 2002, after a talk at YAPC::EU in Munich. 7 years later, A::E is in its 0.67 version (past its "number of the beast" 0.666), and has been used extensively, to the point of being the foundation of much of the (computer) science being done by our research group (and, admittedly, not many others). All is not done, however; now A::E is being integrated with POE so that evolutionary algorithms (EAs) can be combined with all kinds of servers and used in clients, servers, and anything in between. In this companion to the talk I will explain what evolutionary algorithms are, what they are being used for, how to do them with Perl (using these or other fine modules found in CPAN) and what evolutionary algorithms can do for Perl at large. | Still doing evolutionary algorithms with Perl | 5,949
Learning is the key property of a Back Propagation Network (BPN): finding suitable weights and thresholds during training so as to improve training time as well as achieve high accuracy. Data pre-processing techniques such as dimension reduction of input values, together with pre-training, are contributing factors in developing efficient techniques for reducing training time while keeping accuracy high. Initialization of the weights is an important issue: random initialization creates a paradox and leads to low accuracy with high training time. One good data preprocessing technique for accelerating BPN classification is dimension reduction, but it suffers from the problem of missing data. In this paper, we study current pre-training techniques and a new preprocessing technique called Potential Weight Linear Analysis (PWLA), which combines normalization, dimension reduction of input values, and pre-training. In PWLA, data preprocessing is first performed to generate normalized input values, which are then used by the pre-training technique to obtain the potential weights. After these phases, the dimension of the input value matrix is reduced using the real potential weights. For the experimental results, the XOR problem and three datasets (SPECT Heart, SPECTF Heart and Liver Disorders (BUPA)) are evaluated. Our results show that the new PWLA technique turns the BPN into a new Supervised Multi Layer Feed Forward Neural Network (SMFFNN) model with high accuracy in one epoch, without a training cycle. PWLA also has the power of nonlinear supervised and unsupervised dimension reduction, a property that other supervised multilayer feed-forward neural network models may apply in future work. | Training Process Reduction Based On Potential Weights Linear Analysis To
Accelarate Back Propagation Network | 5,950 |
Wong's diffusion network is a stochastic, zero-input Hopfield network with a Gibbs stationary distribution over a bounded, connected continuum. Previously, logarithmic thermal annealing was demonstrated for the diffusion network and digital versions of it were studied and applied to imaging. Recently, "quantum" annealed Markov chains have garnered significant attention because of their improved performance over "pure" thermal annealing. In this note, a joint quantum and thermal version of Wong's diffusion network is described and its convergence properties are studied. Different choices for "auxiliary" functions are discussed, including those of the kinetic type previously associated with quantum annealing. | A quantum diffusion network | 5,951 |
The research area of evolutionary multiobjective optimization (EMO) is reaching better understandings of the properties and capabilities of EMO algorithms, and accumulating much evidence of their worth in practical scenarios. An urgent emerging issue is that the favoured EMO algorithms scale poorly when problems have many (e.g. five or more) objectives. One of the chief reasons for this is believed to be that, in many-objective EMO search, populations are likely to be largely composed of nondominated solutions. In turn, this means that the commonly-used algorithms cannot distinguish between these for selective purposes. However, there are methods that can be used validly to rank points in a nondominated set, and may therefore usefully underpin selection in EMO search. Here we discuss and compare several such methods. Our main finding is that simple variants of the often-overlooked Average Ranking strategy usually outperform other methods tested, covering problems with 5-20 objectives and differing amounts of inter-objective correlation. | Techniques for Highly Multiobjective Optimisation: Some Nondominated
Points are Better than Others | 5,952 |
This paper reports the results of an experiment on the use of Kak's B-Matrix approach to spreading activity in a Hebbian neural network. Specifically, it concentrates on the memory retrieval from single neurons and compares the performance of the B-Matrix approach to that of the traditional approach. | Location of Single Neuron Memories in a Hebbian Network | 5,953 |
EVITA, standing for Evolutionary Inventory and Transportation Algorithm, is a two-level methodology designed to address the Inventory and Transportation Problem (ITP) in retail chains. The top level uses an evolutionary algorithm to obtain delivery patterns for each shop on a weekly basis so as to minimise the inventory costs, while the bottom level solves the Vehicle Routing Problem (VRP) for every day in order to obtain the minimum transport costs associated with a particular set of patterns. The aim of this paper is to investigate whether a multiobjective approach to this problem can yield any advantage over the previously used single objective approach. The analysis performed allows us to conclude that this is not the case and that the single objective approach is in general preferable for the ITP in the case studied. A further conclusion is that it is useful to employ a classical algorithm such as Clarke & Wright's as the seed for other metaheuristics like local search or tabu search in order to provide good results for the Vehicle Routing Problem. | Comparing Single and Multiobjective Evolutionary Approaches to the
Inventory and Transportation Problem | 5,954 |
A primary motivation for research in Digital Ecosystems is the desire to exploit the self-organising properties of natural ecosystems. Ecosystems are thought to be robust, scalable architectures that can automatically solve complex, dynamic problems. However, the biological processes that contribute to these properties have not been made explicit in Digital Ecosystem research. Here, we introduce how biological properties contribute to the self-organising features of natural ecosystems. These properties include populations of evolving agents, a complex dynamic environment, and spatial distributions which generate local interactions. The potential for exploiting these properties in artificial systems is then considered. | Digital Business Ecosystems: Natural Science Paradigms | 5,955
The emission rate of minority atmospheric gases is inferred by a new approach based on neural networks. The neural network applied is the multi-layer perceptron with the backpropagation algorithm for learning. The identification of these surface fluxes is an inverse problem. A comparison between the new neural inversion and a regularized inverse solution is performed. The results obtained from the neural networks are significantly better. In addition, inversion with the neural networks is faster than regularized approaches, after training. | Neural-estimator for the surface emission rate of atmospheric gases | 5,956
This paper proposes a novel neural-network-based adaptive hybrid-reflectance three-dimensional (3-D) surface reconstruction model. The neural network combines the diffuse and specular components into a hybrid model. The proposed model considers the characteristics of each point and the variant albedo to prevent the reconstructed surface from being distorted. The neural network inputs are the pixel values of the two-dimensional images to be reconstructed. The normal vectors of the surface can then be obtained from the output of the neural network after supervised learning, where the illuminant direction does not have to be known in advance. Finally, the obtained normal vectors can be applied to an integration method when reconstructing 3-D objects. Facial images were used for training in the proposed approach. | NeuralNetwork Based 3D Surface Reconstruction | 5,957
In an interconnected power system network, instability problems are caused mainly by low-frequency oscillations of 0.2 to 2.5 Hz. A supplementary control signal, in addition to the AVR and high-gain excitation systems, is provided by means of a Power System Stabilizer (PSS). Conventional power system stabilizers provide effective damping only at a particular operating point, but a fuzzy-based PSS provides good damping for a wide range of operating points. The bottlenecks faced in designing a fuzzy logic controller can be minimized by using appropriate optimization techniques such as Genetic Algorithms, Particle Swarm Optimization, and Ant Colony Optimization. In this paper the membership functions of the FLC are optimized by the Genetic Algorithm. This design methodology is implemented on a Single Machine Infinite Bus (SMIB) system. Simulation results on the SMIB system show the effectiveness and robustness of the proposed PSS over a wide range of operating conditions and system configurations. | Optimal Design of Fuzzy Based Power System Stabilizer Self Tuned by
Robust Search Algorithm | 5,958 |
The official meteorological network on the island of Corsica is sparse: only three sites, about 50 km apart, are equipped with pyranometers enabling measurements at hourly and daily steps. These sites are Ajaccio (41°55'N, 8°48'E, seaside), Bastia (42°33'N, 9°29'E, seaside) and Corte (42°30'N, 9°15'E, average altitude of 486 meters). This lack of weather stations makes the predictability of PV power grid performance difficult. This work studies a methodology which can predict global solar irradiation using data available from another location, for daily and hourly horizons. To achieve this prediction, we have used an Artificial Neural Network, a popular artificial intelligence technique in the forecasting domain. A simulator has been obtained using data available for the station of Ajaccio, the only station for which we have a large amount of data: 16 years, from 1972 to 1987. We then tested the efficiency of this simulator in two places with different geographical features: Corte, a mountainous region, and Bastia, a coastal region. On the daily horizon, the relocation implied fewer errors than a "naïve" prediction method based on persistence (RMSE = 1468 vs 1383 Wh/m^2 at Bastia and 1325 vs 1213 Wh/m^2 at Corte). In the hourly case, the results were still satisfactory, and widely better than persistence (RMSE = 138.8 vs 109.3 Wh/m^2 at Bastia and 135.1 vs 114.7 Wh/m^2 at Corte). The last experiment evaluated the accuracy of our simulator on a PV power grid located 10 km from the station of Ajaccio. We obtained very suitable errors (nRMSE = 27.9%, RMSE = 99.0 Wh) compared to those obtained with persistence (nRMSE = 42.2%, RMSE = 149.7 Wh). | Predictability of PV power grid performance on insular sites without
weather stations: use of artificial neural networks | 5,959 |
Reactive power plays an important role in supporting real power transfer by maintaining voltage stability and system reliability. It is a critical element for a transmission operator to ensure the reliability of an electric system while minimizing the cost associated with it. The traditional objectives of reactive power dispatch are focused on the technical side of reactive support, such as minimization of transmission losses. Reactive power cost compensation to a generator is based on the incurred cost of its reactive power contribution less the cost of its obligation to support the active power delivery. In this paper an efficient Particle Swarm Optimization (PSO) based reactive power optimization approach is presented. The optimal reactive power dispatch problem is a nonlinear optimization problem with several constraints. The objective of the proposed PSO is to minimize the total support cost from generators and reactive compensators. This is achieved by keeping the whole-system power loss at a minimum, thereby reducing cost allocation. The purpose of reactive power dispatch is to determine the proper amount and location of reactive support. A Reactive Optimal Power Flow (ROPF) formulation is developed as an analysis tool and the validity of the proposed method is examined using an IEEE 14-bus system. | Particle Swarm Optimization Based Reactive Power Optimization | 5,960
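The PSO machinery this abstract relies on can be shown in a generic form. The sketch below is a plain global-best PSO minimising a stand-in quadratic "support cost"; the objective, bounds, and parameter values are illustrative assumptions, not the paper's actual ROPF formulation or constraints.

```python
import random

random.seed(3)

def pso(f, dim, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Plain global-best PSO minimising f over the box [-5, 5]^dim."""
    pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # personal best positions
    pbest_f = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]    # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # inertia + cognitive pull + social pull
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            fi = f(pos[i])
            if fi < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], fi
                if fi < gbest_f:
                    gbest, gbest_f = pos[i][:], fi
    return gbest, gbest_f

# hypothetical stand-in for the reactive support cost: a simple quadratic loss
loss = lambda x: sum(v * v for v in x)
best, best_f = pso(loss, dim=4)
print(round(best_f, 6))
```

In a real reactive dispatch application each particle would encode control variables (generator voltages, transformer taps, compensator settings) and `f` would run a constrained power-flow evaluation rather than a closed-form loss.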
This paper reviews the application of Artificial Neural Networks in Aircraft Maintenance, Repair and Overhaul (MRO). MRO solutions are designed to facilitate the authoring and delivery of maintenance and repair information to the line maintenance technicians who need to improve aircraft repair turnaround time, optimize the efficiency and consistency of fleet maintenance, and ensure regulatory compliance. The technical complexity of aircraft systems, especially in avionics, has increased to the point at which it poses a significant troubleshooting and repair challenge for MRO personnel. In the existing scenario, the MRO systems in place are inefficient. In this paper, we propose the centralization and integration of the MRO database to increase its efficiency. Moreover, the implementation of Artificial Neural Networks in this system can rid the system of many of its deficiencies. To make the system more efficient we propose to integrate all the modules so as to improve the efficacy of repair. | Application of Artificial Neural Networks in Aircraft Maintenance,
Repair and Overhaul Solutions | 5,961 |
The stability and convergence of neural networks are fundamental characteristics of Hopfield-type networks. Since time delay is ubiquitous in most physical and biological systems, more attention is being paid to delayed neural networks. The inclusion of time delay in a neural model is natural due to the finite transmission time of the interactions. The stability analysis of a neural network depends on a Lyapunov function, which must be constructed for the given system. In this paper we attempt to establish the logarithmic stability of impulsive delayed neural networks by constructing a suitable Lyapunov function. | Existence and Global Logarithmic Stability of Impulsive Neural Networks
with Time Delay | 5,962 |
This paper describes a new method for the synthesis of planar antenna arrays using fuzzy genetic algorithms (FGAs) by optimizing phase excitation coefficients to best meet a desired radiation pattern. We present the application of a rigorous optimization technique based on FGAs, in which the optimizing algorithm is obtained by adjusting the control parameters of a standard genetic algorithm (SGA) using a fuzzy logic controller (FLC) depending on the best individual fitness and population diversity measurements (PDM). The presented optimization algorithms were previously checked on specific mathematical test functions and showed capabilities superior to those of the standard version (SGA). A planar array with rectangular cells using a probe feed is considered. The included example using the FGA demonstrates better agreement between the desired and calculated radiation patterns than that obtained by an SGA. | Phase-Only Planar Antenna Array Synthesis with Fuzzy Genetic Algorithms | 5,963
The application of bio-inspired algorithms to complicated power system stability problems has recently attracted researchers in the field of Artificial Intelligence. Low-frequency oscillations after a disturbance in a power system, if not sufficiently damped, can drive the system unstable. This paper provides a systematic procedure to damp low-frequency oscillations based on the bio-inspired Genetic Algorithm (GA) and Particle Swarm Optimization (PSO) algorithms. The proposed controller design is based on formulating a system-damping-ratio enhancement optimization criterion to compute the optimal controller parameters for better stability. The novel and contrasting feature of this work is the mathematical modeling and simulation of the synchronous generator model including the steam Governor Turbine (GT) dynamics. To show the robustness of the proposed controller, nonlinear time-domain simulations have been carried out under various system operating conditions. A detailed comparative study has also been done to show the superiority of the bio-inspired algorithm based controllers over the conventional lead-lag controller. | Implementation of an Innovative Bio Inspired GA and PSO Algorithm for
Controller design considering Steam GT Dynamics | 5,964 |
Since their conception in 1975, Genetic Algorithms have been an extremely popular approach to find exact or approximate solutions to optimization and search problems. Over the last years there has been an enhanced interest in the field, with related techniques, such as grammatical evolution, being developed. Unfortunately, work on developing genetic optimizations for low-end embedded architectures hasn't attracted the same enthusiasm. This short paper tackles that situation by demonstrating how genetic algorithms can be implemented on the Arduino Duemilanove, a 16 MHz open-source micro-controller with limited computational power and storage resources. As part of this short paper, the libraries used in this implementation are released into the public domain under a GPL license. | Implementing Genetic Algorithms on Arduino Micro-Controllers | 5,965
Inventory management is considered to be an important field in Supply Chain Management because the cost of inventories in a supply chain accounts for about 30 percent of the value of the product. The service provided to the customer is eventually enhanced once efficient and effective management of inventory is carried out all through the supply chain. The precise estimation of optimal inventory is essential, since a shortage of inventory leads to lost sales, while excess inventory may result in pointless storage costs. Thus the determination of the inventory to be held at various levels in a supply chain becomes inevitable so as to ensure minimal cost for the supply chain. The minimization of the total supply chain cost can only be achieved when optimization of the base stock level is carried out at each member of the supply chain. This paper deals with the problem of determining base stock levels in a ten-member serial supply chain with multiple products produced by factories, using Uniform Crossover Genetic Algorithms. The complexity of the problem increases when more distribution centers and agents and multiple products are involved. These considerations lead to a very complex inventory management process, which has been resolved in this work. | Multi Product Inventory Optimization using Uniform Crossover Genetic
Algorithm | 5,966 |
With the information revolution and increased globalization and competition, supply chains have become longer and more complicated than ever before. These developments bring supply chain management to the forefront of management's attention. Inventories are very important in a supply chain. The total investment in inventories is enormous, and the management of inventory is crucial to avoid shortages or delivery delays for the customers and a serious drain on a company's financial resources. The supply chain cost increases because of the influence of lead times for supplying the stocks as well as the raw materials. In practice, lead times will not be the same throughout all the periods. Maintaining abundant stocks in order to avoid the impact of a high lead time increases the holding cost; similarly, maintaining fewer stocks because of a ballpark lead time may lead to a shortage of stocks. The same applies to the lead time involved in supplying raw materials. A better optimization methodology that utilizes the Particle Swarm Optimization algorithm, one of the best optimization algorithms, is proposed to overcome the impasse in maintaining optimal stock levels at each member of the supply chain. Taking into account the stock levels thus obtained from the proposed methodology, appropriate stock levels to be maintained in the approaching periods, minimizing the supply chain inventory cost, can be arrived at. | Efficient Inventory Optimization of Multi Product, Multiple Suppliers
with Lead Time using PSO | 5,967 |
Because of the stochastic nature of the traffic requirement matrix, it is very difficult to obtain the optimal traffic distribution that minimizes delay, even with an adaptive routing protocol, in a fixed-connection network where the capacity of each link is already defined. Hence there is a need for a method that can generate the optimal solution quickly and efficiently. This paper presents a new concept to provide adaptive optimal traffic distribution for dynamic traffic matrix conditions using nature-based intelligence methods. With the defined load and fixed link capacities, the average packet delay is minimized using various variants of evolutionary programming and particle swarm optimization. A comparative study of their performance in terms of convergence speed is given. The universal approximation capability, the key feature of feed-forward neural networks, is applied to predict the flow distribution on each link that minimizes the average delay for the total load currently on the network. For any variation in the total load, the new flow distribution that yields minimum delay in the network can be generated by the neural network immediately. With the inclusion of this information, the performance of the routing protocol is improved considerably. | Nature inspired artificial intelligence based adaptive traffic flow
distribution in computer network | 5,968 |
This piece of research belongs to the field of educational assessment based upon cognitive multimedia theory. According to that theory, visual and auditory material should be presented simultaneously to reinforce the retention of a learned mathematical topic, so a computer-assisted learning (CAL) module is carefully designed to develop a multimedia tutorial for our suggested mathematical topic. The designed CAL module is a multimedia tutorial computer package with visual and/or auditory material. Thus, via the suggested computer package, multi-sensory associative memory and classical conditioning theories become practically applicable in an educational setting (a children's classroom). The comparative practical results obtained are interesting for field application of the CAL package with and without the teacher's associated voice. Finally, the presented study highly recommends the application of a novel teaching trend aiming to improve the quality of children's mathematical learning performance. | On Analysis and Evaluation of Multi-Sensory Cognitive Learning of a
Mathematical Topic Using Artificial Neural Networks | 5,969 |
In this paper, the researchers estimate the stock prices of companies traded on the Tehran (Iran) Stock Exchange. Linear regression and artificial neural network methods are used and compared. For the artificial neural network, the General Regression Neural Network (GRNN) architecture is used. The researchers first considered 10 macroeconomic variables and 30 financial variables and then, using Independent Component Analysis (ICA), obtained seven final variables, comprising 3 macroeconomic variables and 4 financial variables, with which to estimate the stock price. An equation is presented for each of the two methods, and a comparison of their results shows that the artificial neural network method is more efficient than the linear regression method. | The Comparison of Methods Artificial Neural Network with Linear
Regression Using Specific Variables for Prediction Stock Price in Tehran
Stock Exchange | 5,970 |
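The GRNN named in the abstract above is a kernel regression model and fits in a few lines. A minimal sketch (the data, the single-indicator setup, and the smoothing parameter `sigma` are illustrative assumptions, not the paper's variables):

```python
import math

def grnn_predict(X_train, y_train, x, sigma=1.0):
    """General Regression Neural Network: the prediction is a Gaussian-
    kernel-weighted average of the training targets, with one pattern
    unit per training example."""
    weights = [math.exp(-sum((a - b) ** 2 for a, b in zip(xi, x)) / (2 * sigma ** 2))
               for xi in X_train]
    return sum(w * y for w, y in zip(weights, y_train)) / sum(weights)

# Toy data: "stock price" as a function of a single indicator.
X = [[0.0], [1.0], [2.0], [3.0]]
y = [10.0, 12.0, 14.0, 16.0]
estimate = grnn_predict(X, y, [1.5], sigma=0.5)  # midway query -> about 13
```

The only free parameter is `sigma`; the network itself needs no iterative training, which is one reason GRNN is attractive for this kind of estimation task.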
This paper extends the analogies employed in the development of quantum-inspired evolutionary algorithms by proposing quantum-inspired Hadamard walks, called QHW. A novel quantum-inspired evolutionary algorithm, called HQEA, for solving combinatorial optimization problems is also proposed. The novelty of HQEA lies in its incorporation of QHW Remote Search and QHW Local Search - the quantum equivalents of classical mutation and local search - which this paper defines. The intuitive reasoning behind this approach, and the resulting exploration-exploitation balance, is explained. The results of experiments carried out on the 0/1-knapsack problem show that HQEA performs significantly better than a conventional genetic algorithm, CGA, and two quantum-inspired evolutionary algorithms - QEA and NQEA - in terms of convergence speed and accuracy. | Superior Exploration-Exploitation Balance with Quantum-Inspired Hadamard
Walks | 5,971 |
This paper presents an application of evolutionary search procedures to artificial neural networks. We can distinguish among three kinds of evolution in artificial neural networks, i.e. the evolution of connection weights, of architectures, and of learning rules. We review each kind of evolution in detail and analyse critical issues related to the different evolutions. This article concentrates on finding a suitable way of using evolutionary algorithms to optimize artificial neural network parameters. | Neuroevolutionary optimization | 5,972
A general procedure of average-case performance evaluation for population dynamics such as genetic algorithms (GAs) is proposed and its validity is numerically examined. We introduce a learning algorithm of Gibbs distributions from training sets which are gene configurations (strings) generated by GA in order to figure out the statistical properties of GA from the view point of thermodynamics. The learning algorithm is constructed by means of minimization of the Kullback-Leibler information between a parametric Gibbs distribution and the empirical distribution of gene configurations. The formulation is applied to the solvable probabilistic models having multi-valley energy landscapes, namely, the spin glass chain and the Sherrington-Kirkpatrick model. By using computer simulations, we discuss the asymptotic behaviour of the effective temperature scheduling and the residual energy induced by the GA dynamics. | A Gibbs distribution that learns from GA dynamics | 5,973 |
Mobility prediction allows estimating the stability of paths in mobile wireless ad hoc networks. Identifying stable paths helps to improve routing by reducing the overhead and the number of connection interruptions. In this paper, we introduce a neural network based method for mobility prediction in ad hoc networks. This method consists of a multi-layer, recurrent neural network trained with the backpropagation through time algorithm. | Mobility Prediction in Wireless Ad Hoc Networks using Neural Networks | 5,974
In this paper an attempt has been made to identify the most important human resource factors and to propose a diagnostic model based on the back-propagation and connectionist approaches to artificial neural networks (ANN). The focus of the study is on the mobile-communication industry of India. The ANN based approach is particularly important because conventional (e.g. algorithmic) approaches to problem solving have inherent disadvantages: the algorithmic approach is well suited only to problems that are well understood and have known solutions. ANNs, on the other hand, learn by example and have processing capabilities similar to those of a human brain, including training and human-like intuitive decision making. This ANN based approach is therefore likely to help researchers and organizations reach a better solution to the problem of managing human resources. The study is particularly important as many such studies have been carried out in developed countries, but there is a shortage of them in developing nations like India. Here, a model has been derived using the connectionist ANN approach and then refined and verified via the back-propagation algorithm. The suggested ANN based model can be used for testing the human factors behind success and failure in any communication industry. Results obtained with the connectionist model, further refined by a back-propagation neural network, reach an accuracy of 99.99%. Any company can deploy this model directly to predict failure due to HR factors. | Artificial Neural Network based Diagnostic Model For Causes of Success
and Failures | 5,975 |
Global Positioning System (GPS) and Inertial Navigation System (INS) technology have attracted considerable attention recently because of the large number of solutions they provide for both military and civilian applications. This paper aims to develop a more efficient and, especially, faster method for processing the GPS signal in case of INS signal loss, without losing data accuracy. The conventional method consists of processing data through a neural network to obtain accurate positioning output. The improved method adds selective filtering in the low, mid and high frequency bands before processing the GPS data through the neural network, so that the processing time is decreased significantly while the accuracy remains the same. | Improving GPS/INS Integration through Neural Networks | 5,976
We introduce a new neural architecture and an unsupervised algorithm for learning invariant representations from temporal sequence of images. The system uses two groups of complex cells whose outputs are combined multiplicatively: one that represents the content of the image, constrained to be constant over several consecutive frames, and one that represents the precise location of features, which is allowed to vary over time but constrained to be sparse. The architecture uses an encoder to extract features, and a decoder to reconstruct the input from the features. The method was applied to patches extracted from consecutive movie frames and produces orientation and frequency selective units analogous to the complex cells in V1. An extension of the method is proposed to train a network composed of units with local receptive field spread over a large image of arbitrary size. A layer of complex cells, subject to sparsity constraints, pool feature units over overlapping local neighborhoods, which causes the feature units to organize themselves into pinwheel patterns of orientation-selective receptive fields, similar to those observed in the mammalian visual cortex. A feed-forward encoder efficiently computes the feature representation of full images. | Emergence of Complex-Like Cells in a Temporal Product Network with Local
Receptive Fields | 5,977 |
We address the problem of finding patterns from multi-neuronal spike trains that give us insights into the multi-neuronal codes used in the brain and help us design better brain-computer interfaces. We focus on the synchronous firings of groups of neurons, as these have been shown to play a major role in coding and communication. With large electrode arrays, it is now possible to simultaneously record the spiking activity of hundreds of neurons over long periods of time. Recently, techniques have been developed to efficiently count the frequency of synchronous firing patterns; however, as the number of neurons being observed grows, they suffer from the combinatorial explosion in the number of possible patterns and do not scale well. In this paper, we present a temporal data mining scheme that overcomes many of these problems. It generates a set of candidate patterns from frequent patterns of smaller size, so that not all possible patterns are counted. We also count only a certain well defined subset of occurrences, which makes the process more efficient. We highlight the computational advantage that this approach offers over existing methods through simulations, and propose methods for assessing the statistical significance of the discovered patterns. We detect only those patterns that repeat often enough to be significant, and are thus able to fix the threshold for the data-mining application automatically. Finally, we discuss the usefulness of these methods for brain-computer interfaces. | Efficient Discovery of Large Synchronous Events in Neural Spike Streams | 5,978
This paper continues the work on the B-Matrix approach to Hebbian learning proposed by Dr. Kak. It reports results on methods of improving the memory retrieval capacity of the Hebbian neural network which implements the B-Matrix approach. Previously, the approach to retrieving memories from the network was to clamp all the individual neurons separately and verify the integrity of these memories. Here we present a network with the capability to identify the "active sites" in the network during the training phase and use these "active sites" to generate the memories retrieved from those neurons. Three methods are proposed for obtaining the update order of the network from the proximity matrix when multiple neurons are to be clamped. We then present a comparison of the new methods with the classical case, and also among the methods themselves. | Active Sites model for the B-Matrix Approach | 5,979
This paper reports results on methods of comparing the memory retrieval capacity of the Hebbian neural network which implements the B-Matrix approach, using the Widrow-Hoff rule of learning. We then extend the recently proposed Active Sites model by developing a delta rule to increase memory capacity. This paper also extends the binary neural network to a multi-level (non-binary) neural network. | Delta Learning Rule for the Active Sites Model | 5,980
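The Widrow-Hoff (delta/LMS) rule named in the abstract above adjusts weights against the gradient of the squared error of a linear output. A minimal sketch on a toy linear target (the learning rate, data, and standalone linear-unit setting are illustrative assumptions; the paper applies the rule inside the B-Matrix framework):

```python
def widrow_hoff_train(samples, n, eta=0.1, epochs=50):
    """Widrow-Hoff / LMS rule: w <- w + eta * (target - w.x) * x,
    applied sample by sample for a fixed number of epochs."""
    w = [0.0] * n
    for _ in range(epochs):
        for x, t in samples:
            y = sum(wi * xi for wi, xi in zip(w, x))   # linear output
            err = t - y
            for i in range(n):
                w[i] += eta * err * x[i]
    return w

# Learn the target t = x0 - x1 from four examples.
data = [([1.0, 0.0], 1.0), ([0.0, 1.0], -1.0),
        ([1.0, 1.0], 0.0), ([0.0, 0.0], 0.0)]
w = widrow_hoff_train(data, 2)  # converges toward [1, -1]
```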
This paper presents an analysis of building blocks propagation in Quantum-Inspired Genetic Algorithm, which belongs to a new class of metaheuristics drawing their inspiration from both biological evolution and unitary evolution of quantum systems. The expected number of quantum chromosomes matching a schema has been analyzed and a random variable corresponding to this issue has been introduced. The results have been compared with Simple Genetic Algorithm. Also, it has been presented how selected binary quantum chromosomes cover a domain of one-dimensional fitness function. | Building Blocks Propagation in Quantum-Inspired Genetic Algorithm | 5,981 |
With this paper, we contribute to the understanding of ant colony optimization (ACO) algorithms by formally analyzing their runtime behavior. We study simple MAX-MIN ant systems on the class of linear pseudo-Boolean functions defined on binary strings of length n. Our investigations point out how progress according to function values is stored in the pheromone. We provide a general upper bound of O((n^3 \log n)/\rho) for two ACO variants on all linear functions, where \rho determines the pheromone update strength. Furthermore, we show improved bounds for two well-known linear pseudo-Boolean functions called OneMax and BinVal, and give additional insights using an experimental study. | Simple Max-Min Ant Systems and the Optimization of Linear Pseudo-Boolean
Functions | 5,982 |
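A simple MAX-MIN ant system of the kind analyzed in the abstract above can be sketched on OneMax. The parameter values (n, rho, iteration budget) and the single-ant, best-so-far update scheme are illustrative assumptions, not the exact variants from the paper:

```python
import random

def mmas_onemax(n=20, rho=0.2, iters=500, seed=1):
    """Toy MAX-MIN ant system on OneMax: each iteration one ant constructs
    a bit string from per-bit pheromones (bounded in [1/n, 1 - 1/n]), and
    pheromones are reinforced toward the best-so-far solution with
    update strength rho."""
    random.seed(seed)
    tau = [0.5] * n                       # pheromone = Pr[bit i is set to 1]
    best, best_val = None, -1
    for _ in range(iters):
        x = [1 if random.random() < t else 0 for t in tau]
        if sum(x) > best_val:
            best, best_val = x, sum(x)
        for i in range(n):                # evaporate, reinforce, then clamp
            tau[i] = (1 - rho) * tau[i] + rho * best[i]
            tau[i] = min(1 - 1.0 / n, max(1.0 / n, tau[i]))
    return best_val

result = mmas_onemax()  # typically reaches or nearly reaches the optimum n
```

The clamping step is what makes it a MAX-MIN system: pheromones never fully saturate, so every bit retains a nonzero flip probability.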
In this paper, an evolutionary approach to test generation for functional BIST is considered. The aim of the proposed scheme is to minimize the test data volume by allowing the device's microprogram to test its logic, providing an observation structure to the system, and generating appropriate test data for the given architecture. Two methods of deriving a deterministic test set at the functional level are suggested. The first method is based on a classical genetic algorithm with binary and arithmetic crossover and mutation operators. The second uses genetic programming, where a test is represented as a sequence of microoperations. In the latter case, we apply two-point crossover based on exchanging test subsequences, and mutation implemented as random replacement of microoperations or operands. Experimental data from the program's realization, showing the efficiency of the proposed methods, are presented. | Evolutionary Approach to Test Generation for Functional BIST | 5,983
It is now widely accepted that memristive devices are perfect candidates for the emulation of biological synapses in neuromorphic systems. This is mainly because, like the strength of a synapse, the memristance of a memristive device can be tuned actively (e.g., by the application of voltage or current). In addition, it is possible to fabricate memristive devices at very high density (comparable to the number of synapses in a real biological system) through nano-crossbar structures. However, in this paper we show that there are problems associated with memristive synapses (memristive devices playing the role of biological synapses). For example, we show that the variation rate of the memristance depends completely on the current memristance of the device, and therefore can change significantly with time during the learning phase. This phenomenon can degrade the performance of learning methods like Spike Timing-Dependent Plasticity (STDP) and cause the corresponding neuromorphic systems to become unstable. Finally, at the end of this paper, we illustrate that using two serially connected memristive devices with different polarities as a synapse can largely fix the aforementioned problem. | Bottleneck of using single memristor as a synapse and its solution | 5,984
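The state-dependent update rate discussed in the abstract above can be illustrated with a standard memristor model that uses a window function; with `f(w) = 4w(1-w)` the same current pulse moves the state much less near the ends of its range than in the middle. All device parameters below are illustrative assumptions, not values from the paper:

```python
def memristor_step(w, current, dt, mu=1e-14, D=1e-8, r_on=100.0, r_off=16e3):
    """One Euler step of a memristive device with a Joglekar-style window
    f(w) = 4*w*(1-w), which makes dw/dt depend on the present state w.
    w is the normalized internal state in [0, 1]."""
    dw = (mu * r_on / D**2) * current * 4.0 * w * (1.0 - w) * dt
    w = min(1.0, max(0.0, w + dw))
    memristance = r_on * w + r_off * (1.0 - w)   # interpolates R_on .. R_off
    return w, memristance

# Identical current pulse, very different state change depending on where
# the device currently sits in its range:
w_mid, _ = memristor_step(0.5, 1e-4, 1e-3)
w_edge, _ = memristor_step(0.05, 1e-4, 1e-3)
```

This is exactly the kind of state-dependent learning-rate drift that the paper argues destabilizes STDP with a single memristive synapse.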
The recommendation to change breathing patterns from the mouth to the nose can have a significantly positive impact upon the general well-being of the individual. We classify nasal and mouth breathing by using an acoustic sensor and intelligent signal processing techniques. The overall purpose is to investigate the possibility of identifying the differences in patterns between nasal and mouth breathing, in order to integrate this information into a decision support system which will form the basis of a patient monitoring and motivational feedback system to recommend the change from mouth to nasal breathing. Our findings show that the breath pattern can be discriminated at certain places on the body, both by visual spectrum analysis and with a back-propagation neural network classifier. The sound file recorded from the sensor placed on the hollow of the neck shows the most promising accuracy, which is as high as 90%. | Discriminating between Nasal and Mouth Breathing | 5,985
Neuroevolution is an active and growing research field, especially in times of increasingly parallel computing architectures. Learning methods for Artificial Neural Networks (ANN) can be divided into two groups. Neuroevolution is mainly based on Monte-Carlo techniques and belongs to the group of global search methods, whereas other methods such as backpropagation belong to the group of local search methods. ANNs possess important symmetry properties which can influence Monte-Carlo methods; local search methods, on the other hand, are generally unaffected by these symmetries. In the literature, dealing with these symmetries is generally reported as being ineffective or even yielding inferior results. In this paper, we introduce the so-called Minimum Global Optimum Proximity principle, derived from theoretical considerations, for effective symmetry breaking, applied to offline supervised learning. Using Differential Evolution (DE), which is a popular and robust evolutionary global optimization method, we experimentally show significant global search efficiency improvements by symmetry breaking. | Artificial Neural Networks, Symmetries and Differential Evolution | 5,986
In this paper, an Estimation of Distribution Algorithm (EDA) is used for the Zone Routing Protocol (ZRP) in Mobile Ad-hoc Networks (MANET) instead of a Genetic Algorithm (GA). It is an evolutionary approach, used when the network size grows and the search space increases. When the destination is outside the zone, EDA is applied to find the route with minimum cost and time. The implementation of the proposed method is compared with the genetic ZRP, i.e., GZRP, and the results demonstrate better performance for the proposed method. Since the method provides a set of paths to the destination, it results in load balance for the network. As both EDA and GA use random search to reach the optimal point, the searching cost is reduced significantly, especially when the amount of data is large. | Performance Analysis of Estimation of Distribution Algorithm and Genetic
Algorithm in Zone Routing Protocol | 5,987 |
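The core EDA loop contrasted with a GA in the abstract above can be sketched with the simplest univariate EDA (UMDA): instead of crossover and mutation, it estimates per-bit probabilities from the selected individuals and resamples. The bit-string fitness, probability clamping, and all parameter values are illustrative assumptions, not the routing-specific encoding of the paper:

```python
import random

def umda_bits(fitness, n, pop_size=50, sel=20, gens=60, seed=3):
    """Univariate Estimation of Distribution Algorithm: sample a population
    from a product distribution, select the best individuals, and re-estimate
    the per-bit probabilities from them (clamped away from 0 and 1)."""
    random.seed(seed)
    p = [0.5] * n
    for _ in range(gens):
        pop = [[1 if random.random() < pi else 0 for pi in p]
               for _ in range(pop_size)]
        pop.sort(key=fitness, reverse=True)
        elite = pop[:sel]
        p = [min(0.95, max(0.05, sum(ind[i] for ind in elite) / sel))
             for i in range(n)]
    return max(pop, key=fitness)

best = umda_bits(sum, 15)   # maximize the number of ones
```

For routing, `fitness` would instead score a route encoding by cost and delay; the probabilistic model is what replaces the GA's variation operators.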
This research searches for alternatives to the resolution of complex medical diagnoses in which human knowledge should be apprehended in a general fashion. Successful application examples show that human diagnostic capabilities are significantly worse than those of the neural diagnostic system. Our research describes a constructive neural network algorithm with backpropagation, offering an approach for the incremental construction of near-minimal neural network architectures for pattern classification. The algorithm starts with a minimal number of hidden units in the single hidden layer; additional units are added to the hidden layer one at a time to improve the accuracy of the network and to obtain an optimal size of the neural network. Our algorithm was tested on several benchmark classification problems, including Cancer1, Heart, and Diabetes, with good generalization ability. | A Constructive Algorithm for Feedforward Neural Networks for Medical
Diagnostic Reasoning | 5,988 |
Artificial neural networks (ANNs) have been successfully applied to solve a variety of classification and function approximation problems. Although ANNs can generally predict better than decision trees for pattern classification problems, ANNs are often regarded as black boxes since their predictions cannot be explained clearly like those of decision trees. This paper presents a new algorithm, called rule extraction from ANNs (REANN), to extract rules from trained ANNs for medical diagnosis problems. A standard three-layer feedforward ANN with four-phase training is the basis of the proposed algorithm. In the first phase, the number of hidden nodes in ANNs is determined automatically by a constructive algorithm. In the second phase, irrelevant connections and input nodes are removed from trained ANNs without sacrificing the predictive accuracy of ANNs. The continuous activation values of the hidden nodes are discretized by using an efficient heuristic clustering algorithm in the third phase. Finally, rules are extracted from compact ANNs by examining the discretized activation values of the hidden nodes. Extensive experimental studies on three benchmark classification problems, i.e. breast cancer, diabetes and lenses, demonstrate that REANN can generate high quality rules from ANNs, which are comparable with other methods in terms of number of rules, average number of conditions for a rule, and predictive accuracy. | An Algorithm to Extract Rules from Artificial Neural Networks for
Medical Diagnosis Problems | 5,989 |
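The third phase of the REANN pipeline described above (discretizing continuous hidden-unit activations with a heuristic clustering step) can be sketched with a simple greedy 1-D clusterer. This is a generic stand-in, not the paper's exact heuristic; the gap threshold `epsilon` and the sample activations are illustrative assumptions:

```python
def cluster_activations(values, epsilon=0.2):
    """Greedy 1-D clustering: scan the sorted activation values and start a
    new cluster whenever a value is farther than epsilon from the running
    mean of the current cluster. Returns one discrete level per cluster."""
    clusters = []
    for v in sorted(values):
        if clusters and abs(v - sum(clusters[-1]) / len(clusters[-1])) <= epsilon:
            clusters[-1].append(v)
        else:
            clusters.append([v])
    return [sum(c) / len(c) for c in clusters]

# Activations of one hidden unit over the training set collapse to 3 levels.
acts = [0.02, 0.05, 0.48, 0.52, 0.95, 0.97]
levels = cluster_activations(acts)
```

Once each hidden unit takes only a handful of discrete values, rule extraction reduces to enumerating combinations of those levels.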
Although backpropagation ANNs generally predict better than decision trees do for pattern classification problems, they are often regarded as black boxes, i.e., their predictions cannot be explained in the way those of decision trees can. In many applications, it is desirable to extract knowledge from trained ANNs so that users can gain a better understanding of how the networks solve the problems. A new rule extraction algorithm, called rule extraction from artificial neural networks (REANN), is proposed and implemented to extract symbolic rules from ANNs. A standard three-layer feedforward ANN is the basis of the algorithm, and a four-phase training algorithm is proposed for backpropagation learning. The explicitness of the extracted rules is supported by comparing them to the symbolic rules generated by other methods. The extracted rules are comparable with those of other methods in terms of number of rules, average number of conditions per rule, and predictive accuracy. Extensive experimental studies on several benchmark classification problems, such as breast cancer, iris, diabetes, and season classification, demonstrate the effectiveness of the proposed approach, with good generalization ability. | Extraction of Symbolic Rules from Artificial Neural Networks | 5,990
This research searches for alternatives to the resolution of complex medical diagnoses in which human knowledge should be apprehended in a general fashion. Successful application examples show that human diagnostic capabilities are significantly worse than those of the neural diagnostic system. This paper describes a modified feedforward neural network constructive algorithm (MFNNCA), a new algorithm for medical diagnosis. The new constructive algorithm with backpropagation offers an approach for the incremental construction of near-minimal neural network architectures for pattern classification. The algorithm starts with a minimal number of hidden units in the single hidden layer; additional units are added to the hidden layer one at a time to improve the accuracy of the network and to obtain an optimal size of the neural network. The MFNNCA was tested on several benchmark classification problems, including cancer, heart disease and diabetes. Experimental results show that the MFNNCA can produce an optimal neural network architecture with good generalization ability. | Medical diagnosis using neural network | 5,991
This paper describes an efficient rule generation algorithm, called rule generation from artificial neural networks (RGANN), for generating symbolic rules from ANNs. Classification rules are sought in many areas, from automatic knowledge acquisition to data mining and ANN rule extraction, because classification rules possess some attractive features: they are explicit, understandable and verifiable by domain experts, and can be modified, extended and passed on as modular knowledge. A standard three-layer feedforward ANN is the basis of the algorithm, and a four-phase training algorithm is proposed for backpropagation learning. The explicitness of the generated rules is supported by comparing them to the symbolic rules generated by other methods. The generated rules are comparable with those of other methods in terms of number of rules, average number of conditions per rule, and predictive accuracy. Extensive experimental studies on several benchmark classification problems, including breast cancer, wine, season, golf-playing, and lenses classification, demonstrate the effectiveness of the proposed approach, with good generalization ability. | RGANN: An Efficient Algorithm to Extract Rules from ANNs | 5,992
Neural networks (NNs) have been successfully applied to solve a variety of application problems involving classification and function approximation. Although backpropagation NNs generally predict better than decision trees do for pattern classification problems, they are often regarded as black boxes, i.e., their predictions cannot be explained in the way those of decision trees can. In many applications, it is desirable to extract knowledge from trained NNs so that users can gain a better understanding of how the networks solve the problems. An algorithm is proposed and implemented to extract symbolic rules for medical diagnosis problems. An empirical study on three benchmark classification problems, namely breast cancer, diabetes, and lenses, demonstrates that the proposed algorithm generates high quality rules from NNs that are comparable with other methods in terms of number of rules, average number of conditions per rule, and predictive accuracy. | Extracting Symbolic Rules for Medical Diagnosis Problem | 5,993
In recent years, many neural network models have been proposed for pattern classification, function approximation and regression problems. This paper presents an approach for classifying patterns using simplified neural networks. Although the predictive accuracy of ANNs is often higher than that of other methods or human experts, it is often said that ANNs are practically "black boxes", due to the complexity of the networks. In this paper, we attempt to open up these black boxes by reducing the complexity of the network. The factor that makes this possible is the pruning algorithm: by eliminating redundant weights, redundant input and hidden units are identified and removed from the network. Using the pruning algorithm, we have been able to prune networks such that only a few input units, hidden units and connections are left, yielding a simplified network. Experimental results on several benchmark problems in neural networks show the effectiveness of the proposed approach, with good generalization ability. | Pattern Classification using Simplified Neural Networks | 5,994
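The weight-elimination step at the heart of the abstract above can be sketched with simple magnitude-based pruning. This is a generic stand-in for the pruning algorithm the abstract refers to (which prunes by contribution to network error); the threshold value and the toy weight matrix are illustrative assumptions:

```python
def prune_weights(weights, threshold=0.1):
    """Magnitude-based pruning: zero out every connection whose absolute
    weight falls below the threshold, and report how many were removed.
    A unit whose entire row/column becomes zero can then be deleted."""
    pruned = [[0.0 if abs(w) < threshold else w for w in row] for row in weights]
    removed = sum(1 for p_row, o_row in zip(pruned, weights)
                  for p, o in zip(p_row, o_row) if p == 0.0 and o != 0.0)
    return pruned, removed

# A 2x3 weight matrix: the three near-zero connections get removed.
W = [[0.80, -0.02, 0.31],
     [0.05, -0.64, 0.001]]
pruned, removed = prune_weights(W)
```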
Artificial neural networks have been successfully applied to a variety of business application problems involving classification and regression. Although backpropagation neural networks generally predict better than decision trees do for pattern classification problems, they are often regarded as black boxes, i.e., their predictions are not as interpretable as those of decision trees. In many applications, it is desirable to extract knowledge from trained neural networks so that users can gain a better understanding of the solution. This paper presents an efficient algorithm to extract rules from artificial neural networks. We use a two-phase training algorithm for backpropagation learning. In the first phase, the number of hidden nodes of the network is determined automatically in a constructive fashion, by adding nodes one after another based on the performance of the network on the training data. In the second phase, the number of relevant input units of the network is determined using a pruning algorithm, which attempts to eliminate as many connections as possible from the network. Relevant and irrelevant attributes of the data are distinguished during the training process: those that are relevant are kept and the others are automatically discarded. From the simplified networks, having a small number of connections and nodes, we can easily extract symbolic rules using the proposed algorithm. Extensive experimental results on several benchmark problems in neural networks demonstrate the effectiveness of the proposed approach, with good generalization ability. | Rule Extraction using Artificial Neural Networks | 5,995
This paper describes an efficient algorithm, REx, for generating symbolic rules from artificial neural networks (ANN). Classification rules are sought in many areas, from automatic knowledge acquisition to data mining and ANN rule extraction, because classification rules possess some attractive features: they are explicit, understandable and verifiable by domain experts, and can be modified, extended and passed on as modular knowledge. REx exploits the first order information in the data and finds the shortest sufficient conditions for a rule of a class that can differentiate it from patterns of other classes. It can generate concise and perfect rules in the sense that the error rate of the rules is not worse than the inconsistency rate found in the original data. An important feature of the rule extraction algorithm REx is its recursive nature. The extracted rules are concise, comprehensible, order insensitive and do not involve any weight values. Extensive experimental studies on several benchmark classification problems, such as breast cancer, iris, season, and golf-playing, demonstrate the effectiveness of the proposed approach, with good generalization ability. | REx: An Efficient Rule Generator | 5,996
In this paper we present a genetic algorithm for the multi-pickup and delivery problem with time windows (m-PDPTW). The m-PDPTW is a vehicle routing optimization problem which must meet requests for transport between suppliers and customers while satisfying precedence, capacity and time constraints. This paper gives a brief literature review of the PDPTW and presents our approach, based on genetic algorithms, to minimizing the total travel distance and thereafter the total travel cost, showing an encoding that represents the parameters of each individual. | A Genetic Algorithm for the Multi-Pickup and Delivery Problem with time
windows | 5,997 |
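The genetic-algorithm machinery behind the approach above can be sketched on a bare routing problem with permutation-encoded individuals. This is a toy: it only minimizes travel cost, with tournament selection, order crossover (OX) and swap mutation, while the paper's encoding additionally handles precedence, capacity and time-window constraints. All parameter values and the toy distance matrix are illustrative assumptions:

```python
import random

def route_cost(route, dist):
    """Cost of an open route: sum of distances between consecutive stops."""
    return sum(dist[a][b] for a, b in zip(route, route[1:]))

def ga_route(dist, pop_size=30, gens=200, p_mut=0.3, seed=0):
    """Toy GA over permutations: binary tournaments, OX crossover, swap mutation."""
    random.seed(seed)
    n = len(dist)
    pop = [random.sample(range(n), n) for _ in range(pop_size)]
    best = min(pop, key=lambda r: route_cost(r, dist))
    for _ in range(gens):
        new_pop = []
        for _ in range(pop_size):
            p1 = min(random.sample(pop, 2), key=lambda r: route_cost(r, dist))
            p2 = min(random.sample(pop, 2), key=lambda r: route_cost(r, dist))
            i, j = sorted(random.sample(range(n + 1), 2))
            segment = p1[i:j]                            # OX: keep a slice of p1,
            rest = [g for g in p2 if g not in segment]   # fill the rest in p2's order
            child = rest[:i] + segment + rest[i:]
            if random.random() < p_mut:
                a, b = random.sample(range(n), 2)
                child[a], child[b] = child[b], child[a]
            new_pop.append(child)
        pop = new_pop
        gen_best = min(pop, key=lambda r: route_cost(r, dist))
        if route_cost(gen_best, dist) < route_cost(best, dist):
            best = gen_best
    return best

# Five stops on a line: the cheapest open route visits them in order (cost 4).
dist = [[abs(i - j) for j in range(5)] for i in range(5)]
best = ga_route(dist)
```

Extending this toward the m-PDPTW amounts to changing the decoder and the fitness: the permutation would be decoded into vehicle routes, with penalty terms for violated precedence, capacity and time-window constraints.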
In this paper we present a genetic algorithm for multicriteria optimization of a multi-pickup and delivery problem with time windows (m-PDPTW). The m-PDPTW is a vehicle routing optimization problem which must meet requests for transport between suppliers and customers while satisfying precedence, capacity and time constraints. This paper gives a brief literature review of the PDPTW and presents an approach based on genetic algorithms and the Pareto dominance method to give a set of satisfying solutions to the m-PDPTW, minimizing total travel cost, total tardiness time and the number of vehicles. | Genetic Algorithm for Mulicriteria Optimization of a Multi-Pickup and
Delivery Problem with Time Windows | 5,998 |
The PDPTW is a vehicle routing optimization problem which must meet requests for transport between suppliers and customers while satisfying precedence, capacity and time constraints. We present, in this paper, a genetic algorithm for optimization of a multi-pickup and delivery problem with time windows (m-PDPTW). We give a brief literature review of the PDPTW and present an approach based on genetic algorithms to give a satisfying solution to the m-PDPTW, minimizing the total travel cost. | Un Algorithme génétique pour le problème de ramassage et de
livraison avec fenêtres de temps à plusieurs véhicules | 5,999 |