| query (string, length 25–206) | positive (sequence, length 1–5) | negative (sequence, length 93–98) | cluster (int64, 0–47) |
|---|---|---|---|
A Direct Search Method to solve Economic Dispatch Problem with Valve-Point Effect
|
[
"Identification and control of dynamic systems using recurrent fuzzy neural networks\nThis paper proposes a recurrent fuzzy neural network (RFNN) structure for identifying and controlling nonlinear dynamic systems. The RFNN is inherently a recurrent multilayered connectionist network for realizing fuzzy inference using dynamic fuzzy rules. Temporal relations are embedded in the network by adding feedback connections in the second layer of the fuzzy neural network (FNN). The RFNN expands the basic ability of the FNN to cope with temporal problems. In addition, results for the FNN (fuzzy inference engine, universal approximation, and convergence analysis) are extended to the RFNN. For the control problem, we present the direct and indirect adaptive control approaches using the RFNN. Based on the Lyapunov stability approach, rigorous proofs are presented to guarantee the convergence of the RFNN by choosing appropriate learning rates. Finally, the RFNN is applied in several simulations (time series prediction, identification, and control of nonlinear systems). The results confirm the effectiveness of the RFNN.",
"Genetic Fuzzy Systems - Evolutionary Tuning and Learning of Fuzzy Knowledge Bases\nIt's not surprising when entering this site to get the book. One of the popular books now is the genetic fuzzy systems evolutionary tuning and learning of fuzzy knowledge bases. You may be confused because you can't find the book in the book store around your city. Commonly, a popular book will be sold quickly. And when you have found the store to buy the book, it will hurt when you run out of it. This is why searching for this popular book in this website will give you benefit. You will not run out of this book.",
"A modified particle swarm optimizer\nIn this paper, we introduce a new parameter, called inertia weight, into the original particle swarm optimizer. Simulations have been done to illustrate the significant and effective impact of this new parameter on the particle swarm optimizer.",
"A Hybrid EP and SQP for Dynamic Economic Dispatch with Nonsmooth Fuel Cost Function\nDynamic economic dispatch (DED) is one of the main functions of power generation operation and control. It determines the optimal settings of generator units with predicted load demand over a certain period of time. The objective is to operate an electric power system most economically while the system is operating within its security limits. This paper proposes a new hybrid methodology for solving DED. The proposed method is developed in such a way that a simple evolutionary programming (EP) is applied as a base-level search, which can give a good direction to the optimal global region, and a local search sequential quadratic programming (SQP) is used for fine tuning to determine the optimal solution at the final stage. A ten-unit test system with a nonsmooth fuel cost function is used to illustrate the effectiveness of the proposed method compared with those obtained from EP and SQP alone.",
"A hybrid of genetic algorithm and particle swarm optimization for recurrent network design\nAn evolutionary recurrent network which automates the design of recurrent neural/fuzzy networks using a new evolutionary learning algorithm is proposed in this paper. This new evolutionary learning algorithm is based on a hybrid of genetic algorithm (GA) and particle swarm optimization (PSO), and is thus called HGAPSO. In HGAPSO, individuals in a new generation are created, not only by crossover and mutation operation as in GA, but also by PSO. The concept of elite strategy is adopted in HGAPSO, where the upper-half of the best-performing individuals in a population are regarded as elites. However, instead of being reproduced directly to the next generation, these elites are first enhanced. The group constituted by the elites is regarded as a swarm, and each elite corresponds to a particle within it. In this regard, the elites are enhanced by PSO, an operation which mimics the maturing phenomenon in nature. These enhanced elites constitute half of the population in the new generation, whereas the other half is generated by performing crossover and mutation operation on these enhanced elites. HGAPSO is applied to recurrent neural/fuzzy network design as follows. For recurrent neural network, a fully connected recurrent neural network is designed and applied to a temporal sequence production problem. For recurrent fuzzy network design, a Takagi-Sugeno-Kang-type recurrent fuzzy network is designed and applied to dynamic plant control. The performance of HGAPSO is compared to both GA and PSO in these recurrent networks design problems, demonstrating its superiority."
] |
[
"Non-linear Feedback Neural Network for Solution of Quadratic Programming Problems This paper presents a recurrent neural circuit for solving quadratic programming problems. The objective is to minimize a quadratic cost function subject to linear constraints. The proposed circuit employs non-linear feedback, in the form of unipolar comparators, to introduce transcendental terms in the energy function ensuring fast convergence to the solution. The proof of validity of the energy function is also provided. The hardware complexity of the proposed circuit compares favorably with other proposed circuits for the same task. PSPICE simulation results are presented for a chosen optimization problem and are found to agree with the algebraic solution.",
"Distribution expansion planning considering reliability and security of energy using modified PSO ( Particle Swarm Optimization ) algorithm Distribution feeders and substations need to provide additional capacity to serve the growing electrical demand of customers without compromising the reliability of the electrical networks. Also, more control devices, such as DG (Distributed Generation) units are being integrated into distribution feeders. Distribution networks were not planned to host these intermittent generation units before construction of the systems. Therefore, additional distribution facilities are needed to be planned and prepared for the future growth of the electrical demand as well as the increase of network hosting capacity by DG units. This paper presents a multiobjective optimization algorithm for the MDEP (Multi-Stage Distribution Expansion Planning) in the presence of DGs using nonlinear formulations. The objective functions of the MDEP consist of minimization of costs, END (Energy-Not-Distributed), active power losses and voltage stability index based on SCC (Short Circuit Capacity). A MPSO (modified Particle Swarm Optimization) algorithm is developed and used for this multiobjective MDEP optimization. In the proposed MPSO algorithm, a new mutation method is implemented to improve the global searching ability and restrain the premature convergence to local minima. The effectiveness of the proposed method is tested on a typical 33-bus test system and results are presented.",
"An improved swarm optimized functional link artificial neural network (ISO-FLANN) for classification Multilayer perceptron (MLP) (trained with back propagation learning algorithm) takes large computational time. The complexity of the network increases as the number of layers and number of nodes in layers increases. Further, it is also very difficult to decide the number of nodes in a layer and the number of layers in the network required for solving a problem a priori. In this paper an improved particle swarm optimization (IPSO) is used to train the functional link artificial neural network (FLANN) for classification and we name it ISO-FLANN. In contrast to MLP, FLANN has less architectural complexity, is easier to train, and more insight may be gained into the classification problem. Further, we rely on the global classification capabilities of IPSO to explore the entire weight space, which is plagued by a host of local optima. Using the functionally expanded features, FLANN overcomes the non-linear nature of problems. We believe that the combined efforts of FLANN and IPSO (IPSO + FLANN = ISO − FLANN) by harnessing their best attributes can give rise to a robust classifier. An extensive simulation study is presented to show the effectiveness of the proposed classifier. Results are compared with MLP, support vector machine (SVM) with radial basis function (RBF) kernel, FLANN with gradient descent learning and fuzzy swarm net (FSN).",
"An Empirical Study of Algorithms for Point-Feature Label Placement A major factor affecting the clarity of graphical displays that include text labels is the degree to which labels obscure display features (including other labels) as a result of spatial overlap. Point-feature label placement (PFLP) is the problem of placing text labels adjacent to point features on a map or diagram so as to maximize legibility. This problem occurs frequently in the production of many types of informational graphics, though it arises most often in automated cartography. In this paper we present a comprehensive treatment of the PFLP problem, viewed as a type of combinatorial optimization problem. Complexity analysis reveals that the basic PFLP problem and most interesting variants of it are NP-hard. These negative results help inform a survey of previously reported algorithms for PFLP; not surprisingly, all such algorithms either have exponential time complexity or are incomplete. To solve the PFLP problem in practice, then, we must rely on good heuristic methods. We propose two new methods, one based on a discrete form of gradient descent, the other on simulated annealing, and report on a series of empirical tests comparing these and the other known algorithms for the problem. Based on this study, the first to be conducted, we identify the best approaches as a function of available computation time.",
"Learning to trade via direct reinforcement We present methods for optimizing portfolios, asset allocations, and trading systems based on direct reinforcement (DR). In this approach, investment decision-making is viewed as a stochastic control problem, and strategies are discovered directly. We present an adaptive algorithm called recurrent reinforcement learning (RRL) for discovering investment policies. The need to build forecasting models is eliminated, and better trading performance is obtained. The direct reinforcement approach differs from dynamic programming and reinforcement algorithms such as TD-learning and Q-learning, which attempt to estimate a value function for the control problem. We find that the RRL direct reinforcement framework enables a simpler problem representation, avoids Bellman's curse of dimensionality and offers compelling advantages in efficiency. We demonstrate how direct reinforcement can be used to optimize risk-adjusted investment returns (including the differential Sharpe ratio), while accounting for the effects of transaction costs. In extensive simulation work using real financial data, we find that our approach based on RRL produces better trading strategies than systems utilizing Q-learning (a value function method). Real-world applications include an intra-daily currency trader and a monthly asset allocation system for the S&P 500 Stock Index and T-Bills.",
"Electricity Price Forecasting With Extreme Learning Machine and Bootstrapping Artificial neural networks (ANNs) have been widely applied in electricity price forecasts due to their nonlinear modeling capabilities. However, it is well known that in general, traditional training methods for ANNs such as back-propagation (BP) approach are normally slow and it could be trapped into local optima. In this paper, a fast electricity market price forecast method is proposed based on a recently emerged learning method for single hidden layer feed-forward neural networks, the extreme learning machine (ELM), to overcome these drawbacks. The new approach also has improved price intervals forecast accuracy by incorporating bootstrapping method for uncertainty estimations. Case studies based on chaos time series and Australian National Electricity Market price series show that the proposed method can effectively capture the nonlinearity from the highly volatile price data series with much less computation time compared with other methods. The results show the great potential of this proposed approach for online accurate price forecasting for the spot market prices analysis.",
"Differential Evolution-A simple and efficient adaptive scheme for global optimization over continuous spaces A new heuristic approach for minimizing possibly nonlinear and non-differentiable continuous space functions is presented. By means of an extensive testbed, which includes the De Jong functions, it will be demonstrated that the new method converges faster and with more certainty than Adaptive Simulated Annealing as well as the Annealed Nelder&Mead approach, both of which have a reputation for being very powerful. The new method requires few control variables, is robust, easy to use and lends itself very well to parallel computation.",
"Budgeted Optimization with Constrained Experiments Motivated by a real-world problem, we study a novel budgeted optimization problem where the goal is to optimize an unknown function f(·) given a budget by requesting a sequence of samples from the function. In our setting, however, evaluating the function at precisely specified points is not practically possible due to prohibitive costs. Instead, we can only request constrained experiments. A constrained experiment, denoted by Q, specifies a subset of the input space for the experimenter to sample the function from. The outcome of Q includes a sampled experiment x, and its function output f(x). Importantly, as the constraints of Q become looser, the cost of fulfilling the request decreases, but the uncertainty about the location x increases. Our goal is to manage this trade-off by selecting a set of constrained experiments that best optimize f(·) within the budget. We study this problem in two different settings, the non-sequential (or batch) setting where a set of constrained experiments is selected at once, and the sequential setting where experiments are selected one at a time. We evaluate our proposed methods for both settings using synthetic and real functions. The experimental results demonstrate the efficacy of the proposed methods.",
"Aggressive driving with model predictive path integral control In this paper we present a model predictive control algorithm designed for optimizing non-linear systems subject to complex cost criteria. The algorithm is based on a stochastic optimal control framework using a fundamental relationship between the information theoretic notions of free energy and relative entropy. The optimal controls in this setting take the form of a path integral, which we approximate using an efficient importance sampling scheme. We experimentally verify the algorithm by implementing it on a Graphics Processing Unit (GPU) and apply it to the problem of controlling a fifth-scale Auto-Rally vehicle in an aggressive driving task.",
"Energy Minimization Using Multiple Supply Voltages We present a dynamic programming technique for solving the multiple supply voltage scheduling problem in both non-pipelined and functionally pipelined data-paths. The scheduling problem refers to the assignment of a supply voltage level to each operation in a data ow graph so as to minimize the average energy consumption for given computation time or throughput constraints or both. The energy model is accurate and accounts for the input pattern dependencies, re-convergent fanout induced dependencies, and the energy cost of level shifters. Experimental results show that using four supply voltage levels on a number of standard benchmarks, an average energy saving of 53% (with a computation time constraint of 1.5 times the critical path delay) can be obtained compared to using one fixed supply voltage level.",
"A Differential Covariance Matrix Adaptation Evolutionary Algorithm for real parameter optimization Hybridization in context to Evolutionary Computation (EC) aims at combining the operators and methodologies from different EC paradigms to form a single algorithm that may enjoy a statistically superior performance on a wide variety of optimization problems. In this article we propose an efficient hybrid evolutionary algorithm that embeds the difference vector-based mutation scheme, the crossover and the selection strategy of Differential Evolution (DE) into another recently developed global optimization algorithm known as Covariance Matrix Adaptation Evolutionary Strategy (CMA-ES). CMA-ES is a stochastic method for real parameter (continuous domain) optimization of non-linear, non-convex functions. The algorithm includes adaptation of covariance matrix which is basically an alternative method of traditional Quasi-Newton method for optimization based on gradient method. The hybrid algorithm, referred by us as Differential Covariance Matrix Adaptation Evolutionary Algorithm (DCMA-EA), turns out to possess a better blending of the explorative and exploitative behaviors as compared to the original DE and original CMAES, through empirical simulations. Though CMA-ES has emerged itself as a very efficient global optimizer, its performance deteriorates when it comes to dealing with complicated fitness landscapes, especially landscapes associated with noisy, hybrid composition functions and many real world optimization problems. In order to improve the overall performance of CMA-ES, the mutation, crossover and selection operators of DE have been incorporated into CMA-ES to synthesize the hybrid algorithm DCMA-EA. 
We compare DCMA-EA with the original DE and CMA-ES, two of the best-known DE variants: SaDE and JADE, and two state-of-the-art real optimizers: IPOP-CMA-ES (Restart Covariance Matrix Adaptation Evolution Strategy with increasing population size) and DMS-PSO (Dynamic Multi-Swarm Particle Swarm Optimization) over a test-suite of 20 shifted, rotated, and compositional benchmark functions and also two engineering optimization problems. Our comparative study indicates that although the hybridization scheme does not impose any serious burden on DCMA-EA in terms of number of Function Evaluations (FEs), DCMA-EA still enjoys a statistically superior performance over most of the tested benchmarks and especially over the multi-modal, rotated, and compositional ones in comparison to the other algorithms considered here.",
"A Particle Swarm Optimization-Based Maximum Power Point Tracking Algorithm for PV Systems Operating Under Partially Shaded Conditions A photovoltaic (PV) generation system (PGS) is becoming increasingly important as renewable energy sources due to its advantages such as absence of fuel cost, low maintenance requirement, and environmental friendliness. For large PGS, the probability for partially shaded condition (PSC) to occur is also high. Under PSC, the P-V curve of PGS exhibits multiple peaks, which reduces the effectiveness of conventional maximum power point tracking (MPPT) methods. In this paper, a particle swarm optimization (PSO)-based MPPT algorithm for PGS operating under PSC is proposed. The standard version of PSO is modified to meet the practical consideration of PGS operating under PSC. The problem formulation, design procedure, and parameter setting method which takes the hardware limitation into account are described and explained in detail. The proposed method boasts the advantages such as easy to implement, system-independent, and high tracking efficiency. To validate the correctness of the proposed method, simulation, and experimental results of a 500-W PGS will also be provided to demonstrate the effectiveness of the proposed technique.",
"On Data Integrity Attacks Against Real-Time Pricing in Energy-Based Cyber-Physical Systems In this paper, we investigate a novel real-time pricing scheme, which considers both renewable energy resources and traditional power resources and could effectively guide the participants to achieve individual welfare maximization in the system. To be specific, we develop a Lagrangian-based approach to transform the global optimization conducted by the power company into distributed optimization problems to obtain explicit energy consumption, supply, and price decisions for individual participants. Also, we show that these distributed problems derived from the global optimization by the power company are consistent with individual welfare maximization problems for end-users and traditional power plants. We also investigate and formalize the vulnerabilities of the real-time pricing scheme by considering two types of data integrity attacks: Ex-ante attacks and Ex-post attacks, which are launched by the adversary before or after the decision-making process. We systematically analyze the welfare impacts of these attacks on the real-time pricing scheme. Through a combination of theoretical analysis and performance evaluation, our data shows that the real-time pricing scheme could effectively guide the participants to achieve welfare maximization, while cyber-attacks could significantly disrupt the results of real-time pricing decisions, imposing welfare reduction on the participants.",
"Fast local search and guided local search and their application to British Telecom's workforce scheduling problem This paper reports a Fast Local Search (FLS) algorithm which helps to improve the efficiency of hill climbing and a Guided Local Search (GLS) Algorithm which is developed to help local search to escape local optima and distribute search effort. To illustrate how these algorithms work, this paper describes their application to British Telecom’s workforce scheduling problem, which is a hard real life problem. The effectiveness of FLS and GLS are demonstrated by the fact that they both out-perform all the methods applied to this problem so far, which include simulated annealing, genetic algorithms and constraint logic programming.",
"A New Discrete Particle Swarm Optimization Algorithm Particle Swarm Optimization (PSO) has been shown to perform very well on a wide range of optimization problems. One of the drawbacks to PSO is that the base algorithm assumes continuous variables. In this paper, we present a version of PSO that is able to optimize over discrete variables. This new PSO algorithm, which we call Integer and Categorical PSO (ICPSO), incorporates ideas from Estimation of Distribution Algorithms (EDAs) in that particles represent probability distributions rather than solution values, and the PSO update modifies the probability distributions. In this paper, we describe our new algorithm and compare its performance against other discrete PSO algorithms. In our experiments, we demonstrate that our algorithm outperforms comparable methods on both discrete benchmark functions and NK landscapes, a mathematical framework that generates tunable fitness landscapes for evaluating EAs.",
"Unbalanced Three-Phase Optimal Power Flow for Smart Grids Advanced distribution management system (DMS), an evolution of supervisory control and data acquisition obtained by extending its working principles from transmission to distribution, is the brain of a smart grid. Advanced DMS assesses smart functions in the distribution system and is also responsible for assessing control functions such as reactive dispatch, voltage regulation, contingency analysis, capability maximization, or line switching. Optimal power flow (OPF)-based tools can be suitably adapted to the requirements of smart distribution network and be employed in an advanced DMS framework. In this paper, the authors present a methodology for unbalanced three-phase OPF (TOPF) for DMS in a smart grid. In the formulation of the TOPF, control variables of the optimization problem are actual active load demand and reactive power outputs of microgenerators. The TOPF is based on a quasi-Newton method and makes use of an open-source three-phase unbalanced distribution load flow. Test results are presented on the IEEE 123-bus Radial Distribution Feeder test case.",
"Particle Swarm Optimization (PSO) for the constrained portfolio optimization problem One of the most studied problems in the financial investment expert system is the intractability of portfolios. The non-linear constrained portfolio optimization problem with multi-objective functions cannot be efficiently solved using traditional approaches. This paper presents a meta-heuristic approach to the portfolio optimization problem using the Particle Swarm Optimization (PSO) technique. The model is tested on various restricted and unrestricted risky investment portfolios and a comparative study with Genetic Algorithms is implemented. The PSO model demonstrates high computational efficiency in constructing optimal risky portfolios. Preliminary results show that the approach is very promising and achieves results comparable or superior to the state-of-the-art solvers.",
"Mining Efficient Taxi Operation Strategies From Large Scale Geo-Location Data Taxi drivers always look for strategies to locate passengers quickly and therefore increase their profit margin. In reality, the passenger seeking strategies are mostly empirical and vary substantially among taxi drivers. From historical taxi data, the top performing taxi drivers can earn 25% more than the ones with a mediocre seeking strategy in the same period of time. A better strategy not only helps taxi drivers earn more with less effort, but also reduces fuel consumption and carbon emissions. It is interesting to examine the influential factors in passenger seeking strategies and find algorithms to guide taxi drivers to passenger hotspots with the right timing. With the abundant availability of historical taxicab traces, the existing methods of doing taxi business have been radically changed. This paper focuses on the problem of mining efficient operation strategies from large-scale historical taxi traces collected over one year. Our approach presents generic insights into the dynamics of taxicab services with the objective of maximizing the profit margins for the concerned parties. We propose important metrics, such as trip frequency, hot spots, and taxi mileage, and provide valuable insights toward more efficient operation strategies. We analyze these metrics using techniques such as Newton’s polynomial interpolation and Gamma distribution to understand their dynamics. Our strategies use real taxicab traces from the city of Changsha (P.R. China), may predict the taxi rides at different times by 90.68% per day, and increase taxi drivers' income levels by up to 19.38% by controlling appropriate mileage per trip and following routes across more urban hot spots.",
"A computationally efficient limited memory CMA-ES for large scale optimization We propose a computationally efficient limited memory Covariance Matrix Adaptation Evolution Strategy for large scale optimization, which we call the LM-CMA-ES. The LM-CMA-ES is a stochastic, derivative-free algorithm for numerical optimization of non-linear, non-convex problems in the continuous domain. Inspired by the limited memory BFGS method of Liu and Nocedal (1989), the LM-CMA-ES samples candidate solutions according to a covariance matrix reproduced from m direction vectors selected during the optimization process. The decomposition of the covariance matrix into Cholesky factors allows us to reduce the time and memory complexity of the sampling to O(mn), where $n$ is the number of decision variables. When $n$ is large (e.g., n > 1000), even relatively small values of $m$ (e.g., m=20,30) are sufficient to efficiently solve fully non-separable problems and to reduce the overall run-time.",
"A unified approach to statistical tomography using coordinate descent optimization Over the past years there has been considerable interest in statistically optimal reconstruction of cross-sectional images from tomographic data. In particular, a variety of such algorithms have been proposed for maximum a posteriori (MAP) reconstruction from emission tomographic data. While MAP estimation requires the solution of an optimization problem, most existing reconstruction algorithms take an indirect approach based on the expectation maximization (EM) algorithm. We propose a new approach to statistically optimal image reconstruction based on direct optimization of the MAP criterion. The key to this direct optimization approach is greedy pixel-wise computations known as iterative coordinate descent (ICD). We propose a novel method for computing the ICD updates, which we call ICD/Newton-Raphson. We show that ICD/Newton-Raphson requires approximately the same amount of computation per iteration as EM-based approaches, but the new method converges much more rapidly (in our experiments, typically five to ten iterations). Other advantages of the ICD/Newton-Raphson method are that it is easily applied to MAP estimation of transmission tomograms, and typical convex constraints, such as positivity, are easily incorporated.",
"Lazy Modeling of Variants of Token Swapping Problem and Multi-agent Path Finding through Combination of Satisfiability Modulo Theories and Conflict-based Search We address item relocation problems in graphs in this paper. We assume items placed in vertices of an undirected graph with at most one item per vertex. Items can be moved across edges while various constraints depending on the type of relocation problem must be satisfied. We introduce a general problem formulation that encompasses known types of item relocation problems such as multi-agent path finding (MAPF) and token swapping (TSWAP). In this formulation we express two new types of relocation problems derived from token swapping that we call token rotation (TROT) and token permutation (TPERM). Our solving approach for item relocation combines satisfiability modulo theory (SMT) with conflict-based search (CBS). We interpret CBS in the SMT framework where we start with the basic model and refine the model with a collision resolution constraint whenever a collision between items occurs in the current solution. The key difference between the standard CBS and our SMT-based modification of CBS (SMT-CBS) is that the standard CBS branches the search to resolve the collision while in SMT-CBS we iteratively add a single disjunctive collision resolution constraint. Experimental evaluation on several benchmarks shows that the SMT-CBS algorithm significantly outperforms the standard CBS. We also compared SMT-CBS with a modification of the SAT-based MDD-SAT solver that uses an eager modeling of item relocation in which all potential collisions are eliminated by constraints in advance. Experiments show that the lazy approach in SMT-CBS produces fewer constraints than MDD-SAT and also achieves faster solving run-times.",
"Applying the Weak Learning Framework to Understand and Improve C4.5 There has long been a chasm between theoretical models of machine learning and practical machine learning algorithms. For instance, empirically successful algorithms such as C4.5 and backpropagation have not met the criteria of the PAC model and its variants. Conversely, the algorithms suggested by computational learning theory are usually too limited in various ways to find wide application. The theoretical status of decision tree learning algorithms is a case in point: while it has been proven that C4.5 (and all reasonable variants of it) fails to meet the PAC model criteria, other recently proposed decision tree algorithms that do have non-trivial performance guarantees unfortunately require membership queries. Two recent developments have narrowed this gap between theory and practice, not for the PAC model, but for the related model known as weak learning or boosting. First, an algorithm called AdaBoost was proposed that meets the formal criteria of the boosting model and is also competitive in practice. Second, the basic algorithms underlying the popular C4.5 and CART programs have also very recently been shown to meet the formal criteria of the boosting model. Thus it seems plausible that the weak learning framework may provide a setting for interaction between formal analysis and machine learning practice that is lacking in other theoretical models. Our aim in this paper is to push this interaction further in light of these recent developments. In particular, we perform experiments suggested by the formal results for AdaBoost and C4.5 within the weak learning framework. We concentrate on two particularly intriguing issues. First, the theoretical boosting results for top-down decision tree algorithms such as C4.5 suggest that a new splitting criterion may result in trees that are smaller and more accurate than those obtained using the usual information gain. We confirm this suggestion experimentally. Second, a superficial interpretation of the theoretical results suggests that AdaBoost should vastly outperform C4.5. This is not the case in practice, and we argue through experimental results that the theory must be understood in terms of a measure of a boosting algorithm's behavior called its advantage sequence. We compare the advantage sequences for C4.5 and AdaBoost in a number of experiments. We find that these sequences have qualitatively different behavior that explains in large part the discrepancies between empirical performance and the theoretical results. Briefly, we find that although C4.5 and AdaBoost are both boosting algorithms, AdaBoost creates successively harder filtered distributions, while C4.5 creates successively easier ones, in a sense that will be made precise.",
"Tuning of PID controller for an automatic regulator voltage system using chaotic optimization approach Despite their popularity, the tuning of proportional–integral–derivative (PID) controllers remains a challenge for researchers and plant operators. Various controller tuning methodologies have been proposed in the literature, such as auto-tuning, self-tuning, pattern recognition, artificial intelligence, and optimization methods. Chaotic optimization algorithms, an emergent class of global optimization methods, have attracted much attention in engineering applications. With the features of easy implementation, short execution time and robust mechanisms of escaping from local optima, they are a promising tool for engineering applications. In this paper, a tuning method for determining the parameters of PID control for an automatic regulator voltage (AVR) system using a chaotic optimization approach based on the Lozi map is proposed. Since chaotic mapping enjoys certainty, ergodicity and the stochastic property, the proposed chaotic optimization introduces chaos mapping using Lozi map chaotic sequences, which increases its convergence rate and resulting precision. Simulation results are promising and show the effectiveness of the proposed approach. Numerical simulations based on the proposed PID control of an AVR system for nominal system parameters and step reference voltage input demonstrate the good performance of chaotic optimization. 2007 Elsevier Ltd. All rights reserved.",
"Tuning of PID controller based on Fruit Fly Optimization Algorithm Proportional-Integral-Derivative (PID) controllers are among the most popular controllers used in industry because of their remarkable effectiveness, simplicity of implementation and broad applicability. PID tuning is the key issue in the design of PID controllers, and most tuning processes are carried out manually, which is difficult and time-consuming. To enhance the capabilities of traditional PID parameter tuning techniques, modern heuristic approaches, such as Genetic Algorithm (GA) and Particle Swarm Optimization (PSO), have been employed in recent years. In this paper, a novel tuning method based on the Fruit Fly Optimization Algorithm (FOA) is proposed to optimize PID controller parameters. Each fruit fly's position represents a candidate solution for the PID parameters. When the fruit fly swarm flies towards one location, it is treated as the evolution of each iterative swarm. After hundreds of iterations, the tuning results - the best PID controller parameters - can be obtained. The main advantages of the proposed method include ease of implementation, stable convergence characteristics, a large search range, ease of transformation of the concept into program code and ease of understanding. Simulation results demonstrate that the FOA-based optimized PID (FOA-PID) controller is capable of providing satisfactory closed-loop performance.",
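The evaluate-gains-by-simulation loop that FOA (or PSO/GA) performs can be illustrated with a deliberately simple stand-in: plain random search minimizing the integral squared error (ISE) of a PID loop around a hypothetical first-order plant dy/dt = -y + u. The plant, the gain ranges, and the use of random search are illustrative assumptions, not the paper's method:

```python
import math
import random

def simulate_ise(kp, ki, kd, dt=0.01, steps=500):
    """Integral squared error of a unit-step response: PID around the toy plant dy/dt = -y + u."""
    y, integ, prev_err, ise = 0.0, 0.0, 1.0, 0.0
    for _ in range(steps):
        err = 1.0 - y                       # setpoint is 1.0
        integ += err * dt
        deriv = (err - prev_err) / dt
        prev_err = err
        u = kp * err + ki * integ + kd * deriv
        y += (-y + u) * dt                  # forward-Euler step of the plant
        if not math.isfinite(y):
            return float("inf")             # guard against unstable gain combinations
        ise += err * err * dt
    return ise

def random_search_pid(trials=300, seed=0):
    """Stand-in for the fruit fly swarm: pure random search over modest gain ranges."""
    rng = random.Random(seed)
    candidates = ((rng.uniform(0.0, 5.0), rng.uniform(0.0, 2.0), rng.uniform(0.0, 0.1))
                  for _ in range(trials))
    return min(candidates, key=lambda g: simulate_ise(*g))

gains = random_search_pid()
```

Any population-based tuner slots into the same skeleton: only the rule that proposes the next `(kp, ki, kd)` triples changes, while `simulate_ise` remains the fitness function.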
"No Free Lunch Theorems: Limitations and Perspectives of Metaheuristics The No Free Lunch (NFL) theorems for search and optimization are reviewed and their implications for the design of metaheuristics are discussed. The theorems state that any two search or optimization algorithms are equivalent when their performance is averaged across all possible problems, and even over subsets of problems fulfilling certain constraints. The NFL results show that if there is no assumption regarding the relation between visited and unseen search points, efficient search and optimization is impossible. There is no well performing universal metaheuristic, but the heuristics must be tailored to the problem class at hand using prior knowledge. In practice, it is not likely that the preconditions of the NFL theorems are fulfilled for a problem class and thus differences between algorithms exist. Therefore, tailored algorithms can exploit structure underlying the optimization problem. Given full knowledge about the problem class, it is in theory possible to construct an optimal algorithm.",
"Lower Bounds for Finding Stationary Points of Non-Convex, Smooth High-Dimensional Functions We establish lower bounds on the complexity of finding ε-stationary points of smooth, non-convex, high-dimensional functions. For functions with Lipschitz continuous pth derivative, we show that all algorithms—even randomized algorithms observing arbitrarily high-order derivatives—have worst-case iteration count Ω(ε^{−(p+1)/p}). Our results imply that the O(ε^{−2}) convergence rate of gradient descent is unimprovable without additional assumptions (e.g. Lipschitz Hessian), and that cubic regularization of Newton's method and pth-order regularization in general are similarly optimal. Additionally, we prove that deterministic first-order methods, even applied to arbitrarily smooth functions, cannot achieve convergence rates better than O(ε^{−8/5}), which is within ε^{−1/15} of the recently established Õ(ε^{−5/3}) rate for accelerated gradient descent.",
"Matroids and the greedy algorithm (0) Many discrete programming algorithms are being proposed. One thing most of them have in common is that they do not work very well. Solving problems that are a priori finite, astronomically finite, is preferably a matter of finding algorithms that are somehow better than finite. These considerations prompt looking for good algorithms and trying to understand how and why at least a few combinatorial problems have them. (1) Let H be a finite (for convenience) set of real-valued vectors, x = [x_j], j ∈ E. Often all the members of H will be integer-valued; often they will all be {0, 1}-valued. The index-set, E, is any finite set of elements. Let c = [c_j], j ∈ E, be any real vector on E, called the objective or weighting of E. The problem of finding a member of H which maximizes (or minimizes) cx = Σ_{j∈E} c_j x_j we call a loco problem or loco programming. \"Loco\" stands for \"linear-objective combinatorial\". (2) In order for a loco problem to be a completely defined problem, the way H is given must of course be specified. There are various ways to describe implicitly very large sets H so that it is relatively easy to determine whether or not any particular vector is a member of H. One well-known way is in linear programming, where H is the set of extreme points of the solution-set of a given finite system, L, of linear equations and ≤-type linear inequalities in the variables x_j (briefly, a",
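Edmonds' point can be illustrated on the graphic matroid, where the greedy algorithm is exactly Kruskal's: scan elements in order of decreasing weight and keep each one that preserves independence (here, acyclicity). A minimal sketch using a union-find structure, with an illustrative triangle graph:

```python
def greedy_max_weight_forest(n, edges):
    """Greedy over the graphic matroid (= Kruskal's algorithm): take elements in
    decreasing weight order, keeping each edge that preserves independence (no cycle)."""
    parent = list(range(n))

    def find(x):  # union-find root lookup with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    total, chosen = 0.0, []
    for w, u, v in sorted(edges, reverse=True):
        if w <= 0:
            break  # non-positive weights never help a maximum-weight independent set
        ru, rv = find(u), find(v)
        if ru != rv:  # independence test: adding (u, v) creates no cycle
            parent[ru] = rv
            total += w
            chosen.append((u, v, w))
    return total, chosen

# Triangle graph: the two heaviest edges form the maximum-weight forest.
total, chosen = greedy_max_weight_forest(3, [(5.0, 0, 1), (4.0, 1, 2), (3.0, 0, 2)])
```

The matroid exchange property is what guarantees this myopic rule is globally optimal; on a non-matroid independence system the same greedy scan can fail.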
"PSO-based optimization for isolated intersections signal timings and simulation In this paper, based on an analysis of the signal control problem at single intersections, a real-time optimization method that uses PSO (particle swarm optimization) to solve signal timings for a single intersection is put forward. From the detected flow of the current cycle and the cycle before, the flow of the next cycle is estimated. Setting least delay as the performance index, we obtain each phase time for the next cycle. A simulation experiment for the traffic model at a four-phase intersection is also performed. The result shows that the method is efficient.",
"Dynamic Programming for Linear-Time Incremental Parsing Incremental parsing techniques such as shift-reduce have gained popularity thanks to their efficiency, but there remains a major problem: the search is greedy and only explores a tiny fraction of the whole space (even with beam search) as opposed to dynamic programming. We show that, surprisingly, dynamic programming is in fact possible for many shift-reduce parsers, by merging “equivalent” stacks based on feature values. Empirically, our algorithm yields up to a five-fold speedup over a state-of-the-art shift-reduce dependency parser with no loss in accuracy. Better search also leads to better learning, and our final parser outperforms all previously reported dependency parsers for English and Chinese, yet is much faster.",
"Automatic design of low power CMOS buffer-chain circuit using differential evolutionary algorithm and particle swarm optimization PSO and DE algorithms and their variants are used for the optimization of a buffer-chain circuit, and the results of all the algorithms are compared in this work. By testing these algorithms on different mathematical benchmark functions, the best parameter values of the buffer-chain circuit are obtained in such a way that the error between simulated output and optimized output is reduced, hence giving the best circuit performance. Evolutionary algorithms are better in performance and speed than the classical methods. 130nm CMOS technology has been used in this work. With the help of these parameter values the circuit simulator gives the values of power consumption, symmetry, rise time and fall time, which are close to the desired specification of the buffer-chain circuit.",
"Golden section search over hyper-rectangle: a direct search method Abstract: This paper generalises the golden section optimal search method to higher-dimensional optimisation problems. The method is applicable to a strict quasi-convex function of N variables over an N-dimensional hyper-rectangle. An algorithm is proposed in N dimensions. The algorithm is illustrated graphically in two dimensions and verified through several test functions in higher dimensions using MATLAB.",
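The one-dimensional building block being generalised is standard golden section search: the bracket shrinks by the inverse golden ratio at each step, and one interior evaluation is reused. A sketch for a strictly quasi-convex function (the test function and bracket below are illustrative):

```python
import math

def golden_section_search(f, a, b, tol=1e-8):
    """Minimise a strictly quasi-convex f on [a, b]; the bracket shrinks by 1/phi per step."""
    inv_phi = (math.sqrt(5.0) - 1.0) / 2.0        # 1/phi ≈ 0.618
    c, d = b - inv_phi * (b - a), a + inv_phi * (b - a)
    fc, fd = f(c), f(d)
    while b - a > tol:
        if fc < fd:                               # minimum lies in [a, d]
            b, d, fd = d, c, fc                   # reuse f(c) as the new f(d)
            c = b - inv_phi * (b - a)
            fc = f(c)
        else:                                     # minimum lies in [c, b]
            a, c, fc = c, d, fd                   # reuse f(d) as the new f(c)
            d = a + inv_phi * (b - a)
            fd = f(d)
    return 0.5 * (a + b)

x_min = golden_section_search(lambda x: (x - 2.0) ** 2, 0.0, 5.0)
```

Only one new function evaluation is needed per iteration, which is the property the paper's hyper-rectangle version seeks to retain in N dimensions.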
"A Novel Feature Selection Approach Based on FODPSO and SVM A novel feature selection approach is proposed to address the curse of dimensionality and reduce the redundancy of hyperspectral data. The proposed approach is based on a new binary optimization method inspired by fractional-order Darwinian particle swarm optimization (FODPSO). The overall accuracy (OA) of a support vector machine (SVM) classifier on validation samples is used as fitness values in order to evaluate the informativity of different groups of bands. In order to show the capability of the proposed method, two different applications are considered. In the first application, the proposed feature selection approach is directly carried out on the input hyperspectral data. The most informative bands selected from this step are classified by the SVM. In the second application, the main shortcoming of using attribute profiles (APs) for spectral-spatial classification is addressed. In this case, a stacked vector of the input data and an AP with all widely used attributes are created. Then, the proposed feature selection approach automatically chooses the most informative features from the stacked vector. Experimental results successfully confirm that the proposed feature selection technique works better in terms of classification accuracies and CPU processing time than other studied methods without requiring the number of desired features to be set a priori by users.",
"Positive Feedback as a Search Strategy A combination of distributed computation, positive feedback and constructive greedy heuristic is proposed as a new approach to stochastic optimization and problem solving. Positive feedback accounts for rapid discovery of very good solutions, distributed computation avoids premature convergence, and greedy heuristic helps the procedure to find acceptable solutions in the early stages of the search process. An application of the proposed methodology to the classical travelling salesman problem shows that the system can rapidly provide very good, if not optimal, solutions. We report on many simulation results and discuss the working of the algorithm. Some hints about how this approach can be applied to a variety of optimization problems are also given.",
"A descent modified Polak–Ribière–Polyak conjugate gradient method In this paper, we propose a modified Polak–Ribière–Polyak (PRP) conjugate gradient method. An attractive property of the proposed method is that the direction generated by the method is always a descent direction for the objective function. This property is independent of the line search used. Moreover, if exact line search is used, the method reduces to the ordinary PRP method. Under appropriate conditions, we show that the modified PRP method with Armijo-type line search is globally convergent. We also present extensive preliminary numerical experiments to show the efficiency of the proposed method.",
"An Analysis of the Elastic Net Approach to the Traveling Salesman Problem This paper analyzes the elastic net approach (Durbin and Willshaw 1987) to the traveling salesman problem of finding the shortest path through a set of cities. The elastic net approach jointly minimizes the length of an arbitrary path in the plane and the distance between the path points and the cities. The tradeoff between these two requirements is controlled by a scale parameter K. A global minimum is found for large K, and is then tracked to a small value. In this paper, we show that (1) in the small K limit the elastic path passes arbitrarily close to all the cities, but that only one path point is attracted to each city, (2) in the large K limit the net lies at the center of the set of cities, and (3) at a critical value of K the energy function bifurcates. We also show that this method can be interpreted in terms of extremizing a probability distribution controlled by K. The minimum at a given K corresponds to the maximum a posteriori (MAP) Bayesian estimate of the tour under a natural statistical interpretation. The analysis presented in this paper gives us a better understanding of the behavior of the elastic net, allows us to better choose the parameters for the optimization, and suggests how to extend the underlying ideas to other domains.",
"Computationally efficient low-pass FIR filter design using Cuckoo Search with adaptive Levy step size This paper looks into efficient implementation of one dimensional low pass Finite Impulse Response filters using certain commonly used and state-of-the-art optimization techniques. Methods like Parks-McClellan (PM) equiripple design, Quantum-behaved Particle Swarm Optimization (QPSO) and Cuckoo Search Algorithm (CSA) with Levy Flight are employed and overall performance is further improved by hybridization and adaptive step size update. Various performance metrics are analyzed with a focus on increasing convergence speed to reach global optima faster. It is seen that the improved search methods used in this work, i.e., Simulated Annealing based Weighted Mean Best QPSO (SAWQPSO) and Adaptive CSA (ACSA) effect significant reductions in convergence time with ACSA proving to be the faster one. The designed filter is used in the receiver stage of a Frequency Modulated Radio Transmission model using a Quadrature-Phase Shift Keyed (QPSK) Modulator and Demodulator. Its efficiency is validated by obtaining near perfect correlation between the message and recovered signals.",
"DG Placement and Sizing in Radial Distribution Network Using PSO & HBMO Algorithms Optimal placement and sizing of DG in distribution network is an optimization problem with continuous and discrete variables. Many researchers have used evolutionary methods for finding the optimal DG placement and sizing. This paper proposes a hybrid algorithm PSO&HBMO for optimal placement and sizing of distributed generation (DG) in radial distribution system to minimize the total power loss and improve the voltage profile. The proposed method is tested on a standard 13 bus radial distribution system and simulation results carried out using MATLAB software. The simulation results indicate that PSO&HBMO method can obtain better results than the simple heuristic search method and PSO algorithm. The method has a potential to be a tool for identifying the best location and rating of a DG to be installed for improving voltage profile and line losses reduction in an electrical power system. Moreover, current reduction is obtained in distribution system.",
"Fixed-point algorithms for learning determinantal point processes Determinantal point processes (DPPs) offer an elegant tool for encoding probabilities over subsets of a ground set. Discrete DPPs are parametrized by a positive semidefinite matrix (called the DPP kernel), and estimating this kernel is key to learning DPPs from observed data. We consider the task of learning the DPP kernel, and develop for it a surprisingly simple yet effective new algorithm. Our algorithm offers the following benefits over previous approaches: (a) it is much simpler; (b) it yields equally good and sometimes even better local maxima; and (c) it runs an order of magnitude faster on large problems. We present experimental results on both real and simulated data to illustrate the numerical performance of our technique.",
"Solving dynamic vehicle routing problem via evolutionary search with learning capability To date, the dynamic vehicle routing problem (DVRP) has attracted great research attention due to its wide range of real-world applications. In contrast to the traditional static vehicle routing problem, the routing information in DVRP is usually unknown in advance and is obtained dynamically during the routing execution process. To solve DVRP, many heuristic and metaheuristic methods have been proposed in the literature. In this paper, we present a novel evolutionary search paradigm with learning capability for solving DVRP. In particular, we propose to capture structured knowledge from the optimized routing solution in an early time slot, which can be reused to bias the customer-vehicle assignment when dynamic requests occur. Extending our previous research work, the learning of useful knowledge and the scheduling of dynamic customer requests are detailed here. Further, to evaluate the efficacy of the proposed search paradigm, comprehensive empirical studies on 21 commonly used DVRP instances with diverse properties are also reported.",
"Grey Wolf Optimizer This work proposes a new meta-heuristic called Grey Wolf Optimizer (GWO) inspired by grey wolves (Canis lupus). The GWO algorithm mimics the leadership hierarchy and hunting mechanism of grey wolves in nature. Four types of grey wolves, namely alpha, beta, delta, and omega, are employed for simulating the leadership hierarchy. In addition, the three main steps of hunting (searching for prey, encircling prey, and attacking prey) are implemented. The algorithm is then benchmarked on 29 well-known test functions, and the results are verified by a comparative study with Particle Swarm Optimization (PSO), Gravitational Search Algorithm (GSA), Differential Evolution (DE), Evolutionary Programming (EP), and Evolution Strategy (ES). The results show that the GWO algorithm is able to provide very competitive results compared to these well-known meta-heuristics. The paper also considers solving three classical engineering design problems (tension/compression spring, welded beam, and pressure vessel designs) and presents a real application of the proposed method in the field of optical engineering. The results of the classical engineering design problems and the real application prove that the proposed algorithm is applicable to challenging problems with unknown search spaces.",
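The encircling/hunting mechanism described above condenses to a few lines: every wolf moves toward an average of three positions dictated by the alpha, beta and delta wolves, with the exploration coefficient a decaying linearly from 2 to 0. A minimal sketch; the sphere test function, bounds and population settings are illustrative choices, not from the paper:

```python
import random

def gwo(f, dim, lo, hi, n_wolves=20, iters=200, seed=0):
    """Minimal Grey Wolf Optimizer: wolves are pulled toward alpha, beta and delta."""
    rng = random.Random(seed)
    wolves = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_wolves)]
    for t in range(iters):
        wolves.sort(key=f)
        leaders = [w[:] for w in wolves[:3]]      # alpha, beta, delta (copied before updates)
        a = 2.0 * (1.0 - t / iters)               # decays linearly from 2 to 0
        for w in wolves:
            for j in range(dim):
                pull = 0.0
                for leader in leaders:
                    A = 2.0 * a * rng.random() - a   # |A| > 1 explores, |A| < 1 exploits
                    C = 2.0 * rng.random()
                    D = abs(C * leader[j] - w[j])    # perceived distance to the leader
                    pull += leader[j] - A * D
                w[j] = min(hi, max(lo, pull / 3.0))  # average of the three pulls, clipped
    return min(wolves, key=f)

best = gwo(lambda v: sum(x * x for x in v), dim=3, lo=-5.0, hi=5.0)
```

Early iterations (large a) allow wolves to overshoot the leaders and explore; as a shrinks the pack collapses onto the three best positions.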
"Coordinated Scheduling of Residential Distributed Energy Resources to Optimize Smart Home Energy Services We describe algorithmic enhancements to a decision-support tool that residential consumers can utilize to optimize their acquisition of electrical energy services. The decision-support tool optimizes energy services provision by enabling end users to first assign values to desired energy services, and then scheduling their available distributed energy resources (DER) to maximize net benefits. We chose particle swarm optimization (PSO) to solve the corresponding optimization problem because of its straightforward implementation and demonstrated ability to generate near-optimal schedules within manageable computation times. We improve the basic formulation of cooperative PSO by introducing stochastic repulsion among the particles. The improved DER schedules are then used to investigate the potential consumer value added by coordinated DER scheduling. This is computed by comparing the end-user costs obtained with the enhanced algorithm simultaneously scheduling all DER, against the costs when each DER schedule is solved separately. This comparison enables the end users to determine whether their mix of energy service needs, available DER and electricity tariff arrangements might warrant solving the more complex coordinated scheduling problem, or instead, decomposing the problem into multiple simpler optimizations.",
"Scheduling a Major College Basketball Conference - Revisited Nemhauser and Trick presented the problem of finding a timetable for the 1997/98 Atlantic Coast Conference (ACC) in basketball. Their solution, found with a combination of integer programming and exhaustive enumeration, was accepted by the ACC. Finite-domain constraint programming is another programming technique that can be used for solving combinatorial search problems such as sports tournament scheduling. This paper presents a solution of round robin tournament planning based on finite-domain constraint programming. The approach yields a dramatic performance improvement, which makes an integrated interactive software solution feasible.",
"An Efficient Power Scheduling Scheme for Residential Load Management in Smart Homes In this paper, we propose mathematical optimization models of household energy units to optimally control the major residential energy loads while preserving the user preferences. User comfort is modelled in a simple way, which considers appliance class, user preferences and weather conditions. The wind-driven optimization (WDO) algorithm with the objective function of comfort maximization along with minimum electricity cost is defined and implemented. On the other hand, for maximum electricity-bill and peak reduction, a min-max regret-based knapsack problem (K-WDO) algorithm is used. To validate the effectiveness of the proposed algorithms, extensive simulations are conducted for several scenarios. The simulations show that the proposed algorithms provide the best results with a fast convergence rate, as compared to the existing techniques. Appl. Sci. 2015, 5 1135",
"Cuckoo Search via Lévy flights In this paper, we formulate a new meta-heuristic algorithm, called Cuckoo Search (CS), for solving optimization problems. This algorithm is based on the obligate brood parasitic behaviour of some cuckoo species in combination with the Lévy flight behaviour of some birds and fruit flies. We validate the proposed algorithm against test functions and then compare its performance with those of genetic algorithms and particle swarm optimization. Finally, we discuss the implications of the results and suggestions for further research.",
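The two ingredients this abstract combines, Lévy-flight steps and abandonment of a fraction pa of the worst nests, can be sketched as below. The Lévy step uses Mantegna's algorithm; the 0.01 step scale and the sphere test function are conventional illustrative choices rather than specifics from the paper:

```python
import math
import random

def levy_step(rng, beta=1.5):
    """Heavy-tailed step via Mantegna's algorithm: u / |v|^(1/beta), u ~ N(0, sigma^2), v ~ N(0, 1)."""
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    return rng.gauss(0.0, sigma) / abs(rng.gauss(0.0, 1.0)) ** (1.0 / beta)

def cuckoo_search(f, dim, lo, hi, n_nests=15, iters=300, pa=0.25, seed=1):
    rng = random.Random(seed)
    clip = lambda z: min(hi, max(lo, z))
    nests = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_nests)]
    for _ in range(iters):
        best = min(nests, key=f)
        for i, nest in enumerate(nests):
            # Levy flight biased toward the current best nest; keep the new egg only if better
            trial = [clip(x + 0.01 * levy_step(rng) * (x - b)) for x, b in zip(nest, best)]
            if f(trial) < f(nest):
                nests[i] = trial
        # a fraction pa of the worst nests is abandoned and rebuilt at random positions
        nests.sort(key=f)
        for i in range(n_nests - int(pa * n_nests), n_nests):
            nests[i] = [rng.uniform(lo, hi) for _ in range(dim)]
    return min(nests, key=f)

best = cuckoo_search(lambda v: sum(x * x for x in v), dim=2, lo=-5.0, hi=5.0)
```

The heavy tail of the Lévy distribution mixes many small local steps with occasional long jumps, which is what gives CS its balance of exploitation and exploration.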
"A geometric view of optimal transportation and generative model In this work, we show the intrinsic relations between optimal transportation and convex geometry, especially the variational approach to solving the Alexandrov problem: constructing a convex polytope with prescribed face normals and volumes. This yields a geometric interpretation of generative models and leads to a novel framework for them. Using the optimal transportation view of the GAN model, we show that the discriminator computes the Kantorovich potential and the generator calculates the transportation map. For a large class of transportation costs, the Kantorovich potential gives the optimal transportation map by a closed-form formula. Therefore, it is sufficient to optimize the discriminator alone. This shows that the adversarial competition can be avoided and the computational architecture simplified. Preliminary experimental results show the geometric method outperforms WGAN for approximating probability measures with multiple clusters in low-dimensional space.",
"Rapid MPPT for Uniformly and Partial Shaded PV System by Using JayaDE Algorithm in Highly Fluctuating Atmospheric Conditions In a photovoltaic (PV) array, the output power and the power–voltage (P–V) characteristic of the array depend entirely on the temperature and solar insolation. Therefore, if these atmospheric parameters fluctuate rapidly, the maximum power point (MPP) of the P–V curve of the PV array also fluctuates very rapidly. This rapid fluctuation of the MPP may correspond to uniform shading of the PV panel, or to partial shading due to clouds, tall buildings, trees, and raindrops. In both cases, however, MPP tracking (MPPT) is not merely a nonlinear problem but a highly nonlinear one whose solution is time-bounded, because the highly fluctuating atmospheric conditions change the P–V characteristic after every small time duration. This paper introduces a hybrid of “Jaya” and “differential evolution (DE)” (JayaDE) technique for MPPT in highly fluctuating atmospheric conditions. The JayaDE algorithm is tested in the MATLAB simulator and verified on developed hardware of the solar PV system, covering both a single peak and many multiple peaks in the voltage–power curve. Moreover, the tracking ability is compared with recent state-of-the-art methods. The satisfactory steady-state and dynamic performances of this new hybrid technique under variable irradiance and temperature levels show its superiority over the state-of-the-art control methods.",
"Adaptive Optimal Control of Highly Dissipative Nonlinear Spatially Distributed Processes With Neuro-Dynamic Programming Highly dissipative nonlinear partial differential equations (PDEs) are widely employed to describe the system dynamics of industrial spatially distributed processes (SDPs). In this paper, we consider the optimal control problem of the general highly dissipative SDPs, and propose an adaptive optimal control approach based on neuro-dynamic programming (NDP). Initially, Karhunen-Loève decomposition is employed to compute empirical eigenfunctions (EEFs) of the SDP based on the method of snapshots. These EEFs together with singular perturbation technique are then used to obtain a finite-dimensional slow subsystem of ordinary differential equations that accurately describes the dominant dynamics of the PDE system. Subsequently, the optimal control problem is reformulated on the basis of the slow subsystem, which is further converted to solve a Hamilton-Jacobi-Bellman (HJB) equation. HJB equation is a nonlinear PDE that has proven to be impossible to solve analytically. Thus, an adaptive optimal control method is developed via NDP that solves the HJB equation online using neural network (NN) for approximating the value function; and an online NN weight tuning law is proposed without requiring an initial stabilizing control policy. Moreover, by involving the NN estimation error, we prove that the original closed-loop PDE system with the adaptive optimal control policy is semiglobally uniformly ultimately bounded. Finally, the developed method is tested on a nonlinear diffusion-convection-reaction process and applied to a temperature cooling fin of high-speed aerospace vehicle, and the achieved results show its effectiveness.",
"Solving optimization problems using black hole algorithm Various meta-heuristic optimization approaches have recently been created and applied in different areas. Many of these approaches are inspired by swarm behaviors in nature. This paper studies solving optimization problems using the Black Hole Algorithm (BHA), which is a population-based algorithm. Since the performance of this algorithm had not been tested on mathematical functions, we study this issue using some standard functions. The results of the BHA are compared with the results of the GA and PSO algorithms, and indicate that the performance of BHA is better than that of the other two algorithms.",
"An improved Opposition-Based Sine Cosine Algorithm for global optimization Real-life optimization problems require techniques that properly explore the search spaces to obtain the best solutions. In this sense, it is common for traditional optimization algorithms to get trapped in local optima. The Sine Cosine Algorithm (SCA) has recently been proposed; it is a global optimization approach based on two trigonometric functions. SCA uses the sine and cosine functions to modify a set of candidate solutions; such operators create a balance between exploration and exploitation of the search space. However, like other similar approaches, SCA tends to get stuck in sub-optimal regions, which is reflected in the computational effort required to find the best values. This situation occurs because the operators used for exploration do not work well to analyze the search space. This paper presents an improved version of SCA that considers opposition-based learning (OBL) as a mechanism for a better exploration of the search space, generating more accurate solutions. OBL is a machine learning strategy commonly used to increase the performance of metaheuristic algorithms. OBL considers the opposite position of a solution in the search space. Based on the objective function value, OBL selects the best element between the original solution and its opposite position; this task increases the accuracy of the optimization process. The hybridization of concepts from different fields is crucial in intelligent and expert systems; it helps to combine the advantages of algorithms to generate more efficient approaches. The proposed method is an example of this combination; it has been tested over several benchmark functions and engineering problems. Such results support the efficacy of the proposed approach to find the optimal solutions in complex search spaces. © 2017 Elsevier Ltd. All rights reserved.",
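The OBL mechanism itself is tiny: given bounds [lo, hi], the opposite of a candidate x is lo + hi − x componentwise, and the better of the pair by objective value is kept. A sketch of opposition-based initialisation (the SCA machinery is omitted; the shifted-sphere objective is an illustrative choice):

```python
import random

def opposition_based_init(f, pop_size, dim, lo, hi, seed=0):
    """For each random candidate, also evaluate its opposite lo + hi - x and keep the better one."""
    rng = random.Random(seed)
    population = []
    for _ in range(pop_size):
        x = [rng.uniform(lo, hi) for _ in range(dim)]
        x_opp = [lo + hi - xi for xi in x]     # opposite point inside the same search box
        population.append(min((x, x_opp), key=f))
    return population

# Shifted sphere: minimum at (2, 2, 2, 2), deliberately off-centre so opposition matters.
objective = lambda v: sum((xi - 2.0) ** 2 for xi in v)
pop = opposition_based_init(objective, pop_size=10, dim=4, lo=-5.0, hi=5.0)
```

The same keep-the-better-of-the-pair step can also be applied per iteration inside SCA (or any metaheuristic), which is the variant the abstract describes.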
"Should one compute the Temporal Difference fix point or minimize the Bellman Residual? The unified oblique projection view We investigate projection methods, for evaluating a linear approximation of the value function of a policy in a Markov Decision Process context. We consider two popular approaches, the one-step Temporal Difference fix-point computation (TD(0)) and the Bellman Residual (BR) minimization. We describe examples, where each method outperforms the other. We highlight a simple relation between the objective function they minimize, and show that while BR enjoys a performance guarantee, TD(0) does not in general. We then propose a unified view in terms of oblique projections of the Bellman equation, which substantially simplifies and extends the characterization of Schoknecht (2002) and the recent analysis of Yu & Bertsekas (2008). Eventually, we describe some simulations that suggest that if the TD(0) solution is usually slightly better than the BR solution, its inherent numerical instability makes it very bad in some cases, and thus worse on average.",
"Solving traveling salesman problems via artificial intelligent search techniques The traveling salesman problem (TSP) is one of the most intensively studied problems in computational mathematics and combinatorial optimization. It also belongs to the class of NP-complete combinatorial optimization problems. In the literature, many algorithms and approaches have been proposed to solve the TSP. However, no currently available algorithm can efficiently provide the exact optimal solution of the TSP. This paper proposes the application of AI search techniques to solve TSP problems. Three AI search methods, i.e. genetic algorithms (GA), tabu search (TS), and adaptive tabu search (ATS), are conducted. They are tested against ten benchmark real-world TSP problems. Compared with the exact optimal solutions, the AI search techniques provide very satisfactory solutions for all TSP problems. Key-Words: Traveling Salesman Problem, Genetic Algorithm, Tabu Search, Adaptive Tabu Search",
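The neighborhood moves that GA/TS/ATS variants rely on for the TSP are typically built on 2-opt segment reversal. A self-contained 2-opt local search is sketched below as a generic illustration of such a move, not as the paper's method:

```python
import itertools
import math
import random

def tour_length(tour, pts):
    # Total length of the closed tour visiting pts in the given order.
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def two_opt(pts, seed=0):
    """2-opt local search: repeatedly reverse the tour segment between two
    cities whenever the reversal shortens the tour, until no improving
    reversal remains (a local optimum of the 2-opt neighborhood)."""
    rng = random.Random(seed)
    tour = list(range(len(pts)))
    rng.shuffle(tour)
    improved = True
    while improved:
        improved = False
        for i, k in itertools.combinations(range(len(tour)), 2):
            cand = tour[:i] + tour[i:k + 1][::-1] + tour[k + 1:]
            if tour_length(cand, pts) < tour_length(tour, pts) - 1e-12:
                tour, improved = cand, True
    return tour
```

For points in convex position (e.g. on a circle), every 2-opt local optimum is crossing-free and therefore the optimal hull-order tour, which makes a convenient sanity check.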
"Development and investigation of efficient artificial bee colony algorithm for numerical function optimization Artificial bee colony algorithm (ABC), which is inspired by the foraging behavior of honey bee swarms, is a biologically inspired optimization method. It has been shown to be more effective than genetic algorithm (GA), particle swarm optimization (PSO) and ant colony optimization (ACO). However, ABC is good at exploration but poor at exploitation, and its convergence speed is also an issue in some cases. To address these insufficiencies, we propose an improved ABC algorithm called I-ABC. In I-ABC, the best-so-far solution, inertia weight and acceleration coefficients are introduced to modify the search process. Inertia weight and acceleration coefficients are defined as functions of the fitness. In addition, to further balance the search processes, the modification forms of the employed bees and the onlooker bees differ in the second acceleration coefficient. Experiments show that, for most functions, I-ABC has a faster convergence speed and better performance than both ABC and the gbest-guided ABC (GABC). However, I-ABC still could not achieve the best solution for all optimization problems; in a few cases, it could not find better results than ABC or GABC. In order to inherit the strengths of ABC, GABC and I-ABC, a high-efficiency hybrid ABC algorithm, called PS-ABC, is proposed. PS-ABC owns the abilities of prediction and selection. Results show that PS-ABC has a faster convergence speed like I-ABC and better search ability than other relevant methods.",
"Enhanced Simulated Annealing for Globally Minimizing Functions of Many-Continuous Variables A new global optimization algorithm for functions of many continuous variables is presented, derived from the basic simulated annealing method. Our main contribution lies in dealing with high-dimensionality minimization problems, which are often difficult to solve by all known minimization methods with or without gradient. In this article we take a special interest in the variables discretization issue. We also develop and implement several complementary stopping criteria. The original Metropolis iterative random search, which takes place in a Euclidean space ℝⁿ, is replaced by another similar exploration, performed within a succession of Euclidean spaces ℝᵖ, with p << n. This Enhanced Simulated Annealing (ESA) algorithm was validated first on classical highly multimodal functions of 2 to 100 variables. We obtained significant reductions in the number of function evaluations compared to six other global optimization algorithms, selected according to previously published computational results for the same set of test functions. In most cases, ESA was able to closely approximate known global optima. The reduced ESA computational cost helped us to refine further the obtained global results, through the use of some local search. We have used this new minimizing procedure to solve complex circuit design problems, for which the objective function evaluation can be exceedingly costly.",
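The subspace idea in this abstract — running Metropolis moves in ℝᵖ with p << n — can be illustrated by a simulated annealing variant that perturbs only a few randomly chosen coordinates per move. The cooling schedule, step size and parameter values below are arbitrary choices for the sketch, not the ESA authors' settings:

```python
import math
import random

def subset_annealing(f, dim=10, p=3, iters=5000, lo=-5.0, hi=5.0,
                     t0=10.0, seed=7):
    """Simulated annealing where each Metropolis move perturbs only p << dim
    randomly chosen coordinates, keeping moves effective in high dimension."""
    rng = random.Random(seed)
    x = [rng.uniform(lo, hi) for _ in range(dim)]
    fx = f(x)
    best, f_best = x[:], fx
    for k in range(iters):
        temp = t0 * (0.999 ** k)                 # geometric cooling
        cand = x[:]
        for j in rng.sample(range(dim), p):      # perturb a small subspace
            cand[j] = min(hi, max(lo, cand[j] + rng.gauss(0.0, 0.5)))
        fc = f(cand)
        # Metropolis rule: always accept improvements, sometimes accept worse.
        if fc < fx or rng.random() < math.exp(-(fc - fx) / max(temp, 1e-12)):
            x, fx = cand, fc
            if fx < f_best:
                best, f_best = x[:], fx
    return best

best = subset_annealing(lambda v: sum(t * t for t in v))
```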
"Model-Free Dual Heuristic Dynamic Programming Model-based dual heuristic dynamic programming (MB-DHP) is a popular approach in approximating optimal solutions in control problems. Yet, it usually requires offline training for the model network, and thus resulting in extra computational cost. In this brief, we propose a model-free DHP (MF-DHP) design based on finite-difference technique. In particular, we adopt multilayer perceptron with one hidden layer for both the action and the critic networks design, and use delayed objective functions to train both the action and the critic networks online over time. We test both the MF-DHP and MB-DHP approaches with a discrete time example and a continuous time example under the same parameter settings. Our simulation results demonstrate that the MF-DHP approach can obtain a control performance competitive with that of the traditional MB-DHP approach while requiring less computational resources.",
"The Airport Gate Assignment Problem: Mathematical Model and a Tabu Search Algorithm In this paper, we consider an Airport Gate Assignment Problem that dynamically assigns airport gates to scheduled flights based on passengers' daily origin and destination flow data. The objective of the problem is to minimize the overall connection times that passengers walk to catch their connection flights. We formulate this problem as a mixed 0-1 quadratic integer programming problem and then reformulate it as a mixed 0-1 integer problem with a linear objective function and constraints. We design a simple tabu search meta-heuristic to solve the problem. The algorithm exploits the special properties of different types of neighborhood moves, and creates highly effective candidate list strategies. We also address issues of tabu short term memory, dynamic tabu tenure, aspiration rule, and various intensification and diversification strategies. Preliminary computational experiments are conducted and the results are presented and analyzed.",
"A Comparison of Evolutionary Algorithms and Gradient-based Methods for the Optimal Control Problem An experimental comparison of evolutionary algorithms and gradient-based methods for the optimal control problem is carried out. The problem is solved separately by Particle swarm optimization, Grey wolf optimizer, Fast gradient descent method, Marquardt method and Adam method. The simulation is performed on a jet aircraft model. The results of each algorithm performance are compared according to the best found value of the fitness function, the mean value and the standard deviation.",
"Short-Interval Detailed Production Scheduling in 300mm Semiconductor Manufacturing using Mixed Integer and Constraint Programming Fully automated 300mm manufacturing requires the adoption of a real-time lot dispatching paradigm. Automated dispatching has provided significant improvements over manual dispatching by removing variability from the thousands of dispatching decisions made every day in a fab. Real-time resolution of tool queues, with consideration of changing equipment states, process restrictions, physical and logical location of WIP, supply chain objectives and a myriad of other parameters, is required to ensure successful dispatching in the dynamic fab environment. However, the real-time dispatching decision in semiconductor manufacturing generally remains a reactive, heuristic response in existing applications, limited to the current queue of each tool. The shortcomings of this method of assigning WIP to tools, aptly named \"opportunistic scavenging\" as stated in G. Sullivan (1987), have become more apparent in lean manufacturing environments where lower WIP levels present fewer obvious opportunities for beneficial lot sequencing or batching. Recent advancements in mixed integer programming (MIP) and constraint programming (CP) have raised the possibility of integrating optimization software, commonly used outside of the fab environment to compute optimal solutions for scheduling scenarios ranging from order fulfillment systems to crew-shift-equipment assignments, with a real-time dispatcher to create a short-interval scheduler. The goal of such a scheduler is to optimize WIP flow through various sectors of the fab by expanding the analysis beyond the current WIP queue to consider upstream and downstream flow across the entire tool group or sector. 
This article describes the production implementation of a short-interval local area scheduler in IBM's leading-edge 300mm fab located in East Fishkill, New York, including motivation, approach, and initial results",
"Evolutionary Computation Meets Machine Learning: A Survey Evolutionary computation (EC) is a kind of optimization methodology inspired by the mechanisms of biological evolution and behaviors of living organisms. In the literature, the terminology evolutionary algorithms is frequently treated the same as EC. This article focuses on surveying research that uses machine learning (ML) techniques to enhance EC algorithms. In the framework of an ML-technique enhanced-EC algorithm (MLEC), the main idea is that the EC algorithm has stored ample data about the search space, problem features, and population information during the iterative search process, thus ML techniques are helpful in analyzing these data to enhance the search performance. The paper presents a survey of five categories: ML for population initialization, ML for fitness evaluation and selection, ML for population reproduction and variation, ML for algorithm adaptation, and ML for local search.",
"Cuckoo search algorithm: a metaheuristic approach to solve structural optimization problems In this study, a new metaheuristic optimization algorithm, called cuckoo search (CS), is introduced for solving structural optimization tasks. The new CS algorithm in combination with Lévy flights is first verified using a benchmark nonlinear constrained optimization problem. For the validation against structural engineering optimization problems, CS is subsequently applied to 13 design problems reported in the specialized literature. The performance of the CS algorithm is further compared with various algorithms representative of the state of the art in the area. The optimal solutions obtained by CS are mostly far better than the best solutions obtained by the existing methods. The unique search features used in CS and the implications for future research are finally discussed in detail.",
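The Lévy flights that drive cuckoo search are commonly generated with Mantegna's algorithm, which mixes many short hops with occasional long jumps. A small step generator is sketched below; the stability exponent β = 1.5 is a conventional choice in the cuckoo search literature, assumed here rather than quoted from the paper:

```python
import math
import random

def levy_step(rng, beta=1.5):
    """Mantegna's algorithm for an (approximately) Lévy-stable step of
    index beta: a ratio of scaled Gaussians whose distribution has the
    heavy power-law tail that gives cuckoo search its long jumps."""
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma = (num / den) ** (1 / beta)
    u = rng.gauss(0.0, sigma)
    v = rng.gauss(0.0, 1.0)
    return u / abs(v) ** (1 / beta)

rng = random.Random(11)
steps = [levy_step(rng) for _ in range(10000)]
```

Most steps are small, but the heavy tail guarantees occasional jumps an order of magnitude larger — the exploration/exploitation mix the abstract attributes to Lévy flights.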
"Convolutional Neural Networks for Automatic State-Time Feature Extraction in Reinforcement Learning Applied to Residential Load Control Direct load control of a heterogeneous cluster of residential demand flexibility sources is a high-dimensional control problem with partial observability. This paper proposes a novel approach that uses a convolutional neural network (CNN) to extract hidden state-time features to mitigate the curse of partial observability. More specifically, a CNN is used as a function approximator to estimate the state-action value function, or Q-function, in the supervised learning step of fitted Q-iteration. The approach is evaluated in a qualitative simulation, comprising a cluster of thermostatically controlled loads that only share their air temperature, while their envelope temperature remains hidden. The simulation results show that the presented approach is able to capture the underlying hidden features and successfully reduce the electricity cost of the cluster.",
"SHORTEST PATHS FOR THE REEDS-SHEPP CAR : A WORKED OUT EXAMPLE OF THE USE OF GEOMETRIC TECHNIQUES IN NONLINEAR OPTIMAL CONTROL We illustrate the use of the techniques of modern geometric optimal control theory by studying the shortest paths for a model of a car that can move forwards and backwards. This problem was discussed in recent work by Reeds and Shepp who showed, by special methods, (a) that shortest path motion could always be achieved by means of trajectories of a special kind, namely, concatenations of at most five pieces, each of which is either a straight line or a circle, and (b) that these concatenations can be classified into 48 three-parameter families. We show how these results fit in a much more general framework, and can be discovered and proved by applying in a systematic way the techniques of Optimal Control Theory. It turns out that the “classical” optimal control tools developed in the 1960’s, such as the Pontryagin Maximum Principle and theorems on the existence of optimal trajectories, are helpful to go part of the way and get some information on the shortest paths, but do not suffice to get the full result. On the other hand, when these classical techniques are combined with the use of a more recently developed body of theory, namely, geometric methods based on the Lie algebraic analysis of trajectories, then one can recover the full power of the Reeds-Shepp results, and in fact slightly improve upon them by lowering their 48 to a 46.",
"Hybrid BFOA-PSO algorithm for automatic generation control of linear and nonlinear interconnected power systems In the Bacterial Foraging Optimization Algorithm (BFOA), the chemotactic process is randomly set, imposing that the bacteria swarm together and keep a safe distance from each other. In the hybrid bacterial foraging optimization algorithm and particle swarm optimization (hBFOA-PSO) algorithm, the principle of swarming is introduced into the framework of BFOA. The hBFOA-PSO algorithm is based on the adjustment of each bacterium's position according to the neighborhood environment. In this paper, the effectiveness of the hBFOA-PSO algorithm has been tested for Automatic Generation Control (AGC) of an interconnected power system. A widely used linear model of a two-area non-reheat thermal system equipped with a Proportional-Integral (PI) controller is considered initially for design and analysis. At first, a conventional Integral of Time multiplied Absolute Error (ITAE) based objective function is considered and the performance of the hBFOA-PSO algorithm is compared with PSO, BFOA and GA. Further, a modified objective function using ITAE, the damping ratio of dominant eigenvalues and settling time with appropriate weight coefficients is proposed to increase the performance of the controller. Robustness analysis is then carried out by varying the operating load condition and the time constants of the speed governor, turbine and tie-line power in the range of +50% to -50%, as well as the size and position of the step load perturbation, to demonstrate the robustness of the proposed hBFOA-PSO optimized PI controller. The proposed approach is also extended to a nonlinear power system model by considering the effect of governor dead-band nonlinearity, and the superiority of the proposed approach is shown by comparing the results of the Craziness based Particle Swarm Optimization (CRAZYPSO) approach for the identical interconnected power system. 
Finally, the study is extended to a three-area system considering both thermal and hydro units with different PI coefficients, and a comparison between ANFIS and the proposed approach has been provided.",
"Optimization algorithm using scrum process Scrum is a process methodology for software development. Members of a scrum team form a self-organizing team through planning and knowledge sharing. This paper introduces an optimization algorithm in which the population acts as a scrum team carrying out the scrum process to find an optimum solution. The proposed algorithm maintains the balance between exploration and exploitation through the characteristics of the scrum team. The experiment compares the proposed approach with GA and PSO on finding the optimal solution of five numerical functions. The experimental results indicate that the proposed algorithm provides the best solution and finds the result quickly.",
"Decoding by linear programming This paper considers a natural error correcting problem with real valued input/output. We wish to recover an input vector f ∈ ℝⁿ from corrupted measurements y = Af + e. Here, A is an m by n (coding) matrix and e is an arbitrary and unknown vector of errors. Is it possible to recover f exactly from the data y? We prove that under suitable conditions on the coding matrix A, the input f is the unique solution to the ℓ₁-minimization problem min over g ∈ ℝⁿ of ‖y − Ag‖₁ (where ‖x‖₁ := Σᵢ|xᵢ|), provided that the support of the vector of errors is not too large: ‖e‖₀ := |{i : eᵢ ≠ 0}| ≤ ρ·m for some ρ > 0. In short, f can be recovered exactly by solving a simple convex optimization problem (which one can recast as a linear program). In addition, numerical experiments suggest that this recovery procedure works unreasonably well; f is recovered exactly even in situations where a significant fraction of the output is corrupted. This work is related to the problem of finding sparse solutions to vastly underdetermined systems of linear equations. There are also significant connections with the problem of recovering signals from highly incomplete measurements. In fact, the results introduced in this paper improve on our earlier work. Finally, underlying the success of ℓ₁ is a crucial property we call the uniform uncertainty principle that we shall describe in detail.",
"Solution of the Generalized Noah's Ark Problem. The phylogenetic diversity (PD) of a set of species is a measure of the evolutionary distance among the species in the collection, based on a phylogenetic tree. Such a tree is composed of a root, internal nodes, and leaves that correspond to the set of taxa under study. With each edge of the tree is associated a non-negative branch length (evolutionary distance). If a particular survival probability is associated with each taxon, the PD measure becomes the expected PD measure. In the Noah's Ark Problem (NAP) introduced by Weitzman (1998), these survival probabilities can be increased at some cost. The problem is to determine how best to allocate a limited amount of resources to maximize the expected PD of the considered species. It is easy to formulate the NAP as a (difficult) nonlinear 0-1 programming problem. The aim of this article is to show that a general version of the NAP (GNAP) can be solved simply and efficiently with any set of edge weights and any set of survival probabilities by using standard mixed-integer linear programming software. The crucial point to move from a nonlinear program in binary variables to a mixed-integer linear program, is to approximate the logarithmic function by the lower envelope of a set of tangents to the curve. Solving the obtained mixed-integer linear program provides not only a near-optimal solution but also an upper bound on the value of the optimal solution. We also applied this approach to a generalization of the nature reserve problem (GNRP) that consists of selecting a set of regions to be conserved so that the expected PD of the set of species present in these regions is maximized. In this case, the survival probabilities of different taxa are not independent of each other. Computational results are presented to illustrate potentialities of the approach. 
Near-optimal solutions with hypothetical phylogenetic trees comprising about 4000 taxa are obtained in a few seconds or minutes of computing time for the GNAP, and in about 30 min for the GNRP. In all the cases the average guarantee varies from 0% to 1.20%.",
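The key linearization trick in the GNAP abstract — approximating the logarithm by the lower envelope of tangents, so the nonlinear 0-1 program becomes a mixed-integer linear one and the solution value upper-bounds the optimum — can be checked numerically. The tangent points below are arbitrary illustration choices:

```python
import math

def tangent_envelope(tangent_points):
    """Piecewise-linear over-approximation of ln(x): the pointwise minimum
    ('lower envelope') of the tangents t_a(x) = ln(a) + (x - a)/a taken at
    the points in tangent_points. Because ln is concave, every tangent lies
    above the curve, so the envelope upper-bounds ln(x) while being linear
    in x for fixed a -- which is what permits an MILP reformulation."""
    def env(x):
        return min(math.log(a) + (x - a) / a for a in tangent_points)
    return env

# Five tangents spaced geometrically over the range of interest.
env = tangent_envelope([0.25, 0.5, 1.0, 2.0, 4.0])
```

With tangent points a factor of 2 apart, the envelope stays within a few hundredths of ln(x) over that range while never dropping below it.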
"On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators This paper shows, by means of an operator called a splitting operator, that the Douglas-Rachford splitting method for finding a zero of the sum of two monotone operators is a special case of the proximal point algorithm. Therefore, applications of Douglas-Rachford splitting, such as the alternating direction method of multipliers for convex programming decomposition, are also special cases of the proximal point algorithm. This observation allows the unification and generalization of a variety of convex programming algorithms. By introducing a modified version of the proximal point algorithm, we derive a new, generalized alternating direction method of multipliers for convex programming. Advances of this sort illustrate the power and generality gained by adopting monotone operator theory as a conceptual framework.",
"No Free Lunch Theorems for Search We show that all algorithms that search for an extremum of a cost function perform exactly the same when averaged over all possible cost functions. In particular, if algorithm A outperforms algorithm B on some cost functions, then, loosely speaking, there must exist exactly as many other functions where B outperforms A. Starting from this, we analyze a number of the other a priori characteristics of the search problem, like its geometry and its information-theoretic aspects. This analysis allows us to derive mathematical benchmarks for assessing a particular search algorithm's performance. We also investigate minimax aspects of the search problem, the validity of using characteristics of a partial search over a cost function to predict future behavior of the search algorithm on that cost function, and time-varying cost functions. We conclude with some discussion of the justifiability of biologically inspired search methods.",
"Particle Swarm Optimization of the Multioscillatory LQR for a Three-Phase Four-Wire Voltage-Source Inverter With an $LC$ Output Filter This paper presents evolutionary optimization of the linear quadratic regulator (LQR) for a voltage-source inverter with an LC output filter. The procedure involves particle-swarm-based search for the best weighting factors in the quadratic cost function. It is common practice that the weights in the cost function are set using the guess-and-check method. However, it becomes quite challenging, and usually very time-consuming, if there are many auxiliary states added to the system. In order to immunize the system against unbalanced and nonlinear loads, oscillatory terms are incorporated into the control scheme, and this significantly increases the number of weights to be guessed. All controller gains are determined altogether in one LQR procedure call, and the originality reported here refers to evolutionary tuning of the weighting matrix. There is only one penalty factor to be set by the designer during the controller synthesis procedure. This coefficient enables shaping the dynamics of the closed-loop system by penalizing the dynamics of control signals instead of selecting individual weighting factors for augmented state vector components. Simulational tuning and experimental verification (the physical converter at the level of 21 kVA) are included.",
"Large-Scale Multiclass Support Vector Machine Training via Euclidean Projection onto the Simplex Dual decomposition methods are the current state-of-the-art for training multiclass formulations of Support Vector Machines (SVMs). At every iteration, dual decomposition methods update a small subset of dual variables by solving a restricted optimization problem. In this paper, we propose an exact and efficient method for solving the restricted problem. In our method, the restricted problem is reduced to the well-known problem of Euclidean projection onto the positive simplex, which we can solve exactly in expected O(k) time, where k is the number of classes. We demonstrate that our method empirically achieves state-of-the-art convergence on several large-scale high-dimensional datasets.",
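The restricted subproblem in the abstract above reduces to Euclidean projection onto the probability simplex. The standard O(k log k) sort-based projection is sketched below; the paper's expected-O(k) method replaces the sort with a pivot-based search, which this sketch does not attempt:

```python
def project_to_simplex(v):
    """Euclidean projection of v onto {w : w_i >= 0, sum_i w_i = 1}.
    Sort v descending, find the largest rho with
    u_rho - (sum_{j<=rho} u_j - 1)/rho > 0, then shift and clip."""
    u = sorted(v, reverse=True)
    css = 0.0
    theta = 0.0
    for j, uj in enumerate(u, start=1):
        css += uj
        t = (css - 1.0) / j
        if uj - t > 0:
            theta = t          # theta from the largest feasible rho
    return [max(x - theta, 0.0) for x in v]

w = project_to_simplex([1.2, 0.2, -0.5])  # → [1.0, 0.0, 0.0]
```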
"Simulation-based optimization of Markov reward processes We propose a simulation based algorithm for optimizing the average reward in a Markov Reward Process that depends on a set of parameters As a special case the method applies to Markov Decision Processes where optimization takes place within a parametrized set of policies The algorithm involves the simulation of a single sample path and can be implemented on line A convergence result with probability is provided This research was supported by contracts with Siemens AG Munich Germany and Alcatel Bell Belgium and by contract DMI with the National Science Foundation Introduction Markov Decision Processes and the associated dynamic programming DP methodology Ber a Put provide a general framework for posing and analyzing problems of se quential decision making under uncertainty DP methods rely on a suitably de ned value function that has to be computed for every state in the state space However many inter esting problems involve very large state spaces curse of dimensionality In addition DP assumes the availability of an exact model in the form of transition probabilities In many practical situations such a model is not available and one must resort to simulation or experimentation with an actual system For all of these reasons dynamic programming in its pure form may be inapplicable The e orts to overcome the aforementioned di culties involve two main ideas The use of simulation to estimate quantities of interest thus avoiding model based computations The use of parametric representations to overcome the curse of dimensionality Parametric representations and the associated algorithms can be broadly classi ed into three main categories a Parametrized value functions Instead of associating a value V i with each state i one uses a parametric form V i r where r is a vector of tunable parameters weights and V is a so called approximation architecture For example V i r could be the output of a multilayer perceptron 
with weights r when the input is i Other representations are possible e g involving polynomials linear combina tions of feature vectors state aggregation etc When the main ideas from DP are combined with such parametric representations one obtains methods that go un der the names of reinforcement learning or neuro dynamic programming see BT SB for textbook expositions as well as the references therein A key char acteristic is that policy optimization is carried out in an indirect fashion one tries to obtain a good approximation of the optimal value function of dynamic programming and uses it to construct policies that are close to optimal Such methods are reason ably well though not fully understood and there have been some notable practical successes see BT SB for an overview including the world class backgammon player by Tesauro Tes b Parametrized policies In an alternative approach which is the one considered in this paper the tuning of a parametrized value function is bypassed Instead one considers a class of policies described in terms of a parameter vector Simulation is employed to estimate the gradient of the performance metric with respect to and the policy is improved by updating in a gradient direction In some cases the re quired gradient can be estimated using IPA in nitesimal perturbation analysis see e g HC Gla CR and the references therein For general Markov processes and in the absence of special structure IPA is inapplicable but gradient estimation is still possible using likelihood ratio methods Gly Gly GG LEc GI c Actor critic methods A third approach which is a combination of the rst two includes parametrizations of the policy actor and of the value function critic BSA While such methods seem particularly promising theoretical understand ing has been limited to the impractical case of lookup representations one parameter per state KB This paper concentrates on methods based on policy parametrization and approx imate gradient improvement in the 
spirit of item (b) above. While we are primarily interested in the case of Markov Decision Processes, almost everything applies to the more general case of Markov Reward Processes that depend on a parameter vector, and we proceed within this broader context. We start with a formula for the gradient of the performance metric that has been presented, in different forms and for various contexts, in [Gly], [CC], [FH], [JSJ], [TH], [CW]. We then suggest a method for estimating the terms that appear in that formula. This leads to a simulation-based method that updates the parameter vector θ at every regeneration time, in an approximate gradient direction. Furthermore, we show how to construct an on-line method that updates the parameter vector at each time step. The resulting method has some conceptual similarities with those described in [CR] (that reference assumes, however, the availability of an IPA estimator with certain guaranteed properties that are absent in our context) and in [JSJ] (which, however, does not contain convergence results).
The method that we propose only keeps in memory, and updates, K numbers, where K is the dimension of θ. Other than θ itself, this includes a vector similar to the eligibility trace in Sutton's temporal difference methods and, as in [JSJ], an estimate λ of the average reward under the current value of θ. If that estimate were accurate, our method would be a standard stochastic gradient algorithm. However, as θ keeps changing, λ is generally a biased estimate of the true average reward, and the mathematical structure of our method is more complex than that of stochastic gradient algorithms. For reasons that will become clearer later, standard approaches (e.g., martingale arguments or the ODE approach) do not seem to suffice for establishing convergence, and a more elaborate proof is necessary.
Our gradient estimator can also be derived or interpreted in terms of likelihood ratios [Gly], [GG]. It takes the same form as the one presented in [Gly], but it is used differently. The development in [Gly] leads to a consistent estimator of the gradient, assuming that a very large number of regenerative cycles are simulated while keeping the policy parameter θ at a fixed value. Presumably, θ would then be updated after such a long simulation. In contrast, our method updates θ much more frequently, and retains the desired convergence properties despite the fact that any single cycle results in a biased gradient estimate. An alternative simulation-based stochastic gradient method, again based on a likelihood ratio formula, has been provided in [Gly]; it uses the simulation of two regenerative cycles to construct an unbiased estimate of the gradient. We note some of the differences with the latter work. First, the methods in [Gly] involve a larger number of auxiliary quantities that are propagated in the course of a regenerative cycle. Second, our method admits a modification (see Sections …) that can make it applicable even if the time until the next regeneration is excessive, in which case likelihood-ratio-based methods suffer from excessive variance. Third, our estimate λ of the average reward is obtained as a weighted average of all past rewards, not just those over the last regenerative cycle. In contrast, an approach such as the one in [Gly] would construct an independent estimate of λ during each regenerative cycle, which should result in higher variance. Finally, our method brings forth and makes crucial use of the value (differential reward) function of dynamic programming. This is important because it paves the way for actor-critic methods, in which the variance associated with the estimates of the differential rewards is potentially reduced by means of learning (value function approximation). Indeed, subsequent to the first writing of this paper, this latter approach has been pursued in [KT], [SMS].
In summary, the main contributions of this paper are as follows. We introduce a new algorithm for updating the parameters of a Markov Reward Process on the basis of a single sample path. The parameter updates can take place either during visits to a certain recurrent state, or at every time step. We also specialize the method to Markov Decision Processes with parametrically represented policies; in this case, the method does not require the transition probabilities to be known. We establish that the gradient (with respect to the parameter vector) of the performance metric converges to zero with probability 1, which is the strongest possible result for gradient-related stochastic approximation algorithms. The method admits approximate variants with reduced variance, such as the one described in Section …, as well as various types of actor-critic methods.
The remainder of this paper is organized as follows. In Section …, we introduce our framework and assumptions, and state some background results, including a formula for the gradient of the performance metric. In Section …, we present an algorithm that performs updates during visits to a certain recurrent state, present our main convergence result, and provide a heuristic argument. Sections … and … deal with variants of the algorithm that perform updates at every time step. In Section …, we specialize our methods to the case of Markov Decision Processes that are optimized within a (possibly restricted) set of parametrically represented randomized policies. We present some numerical results in Section …, and conclude in Section …. The lengthy proof of our main results is developed in the appendices. Markov Reward Processes Depending on a Parameter. In this section, we present our general framework, make a few assumptions, and state some basic results that will be needed later. We consider a discrete-time, finite-state Markov chain {i_n} with state space S = {1, . . . , N}, whose transition probabilities depend on a parameter vector θ ∈ R^K and are denoted by p_ij(θ) = P(i_{n+1} = j | i_n = i). Whenever the state is equal to i, we receive a one-stage reward that also depends on θ and is denoted by g_i(θ). For every θ ∈ R^K, let P(θ) be the stochastic matrix with entries p_ij(θ). Let P = {P(θ) | θ ∈ R^K} be the set of all such matrices, and let P̄ be its closure. Note that every element of P̄ is also a stochastic matrix and therefore defines a Markov chain on the same state space.",
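The update described in the excerpt above (a single sample path, an eligibility trace, and a running estimate λ of the average reward, with the trace reset at regeneration times) can be illustrated with a minimal sketch. The two-state chain, the logistic parameterization of the transition probability, and the step sizes below are illustrative assumptions, not the construction from the paper.

```python
import math
import random

def run(theta=0.0, steps=20000, alpha=1e-3, gamma=1e-2, seed=0):
    """Single-sample-path gradient sketch: eligibility trace z,
    running average-reward estimate lam, scalar parameter theta."""
    rng = random.Random(seed)
    state, z, lam = 0, 0.0, 0.0
    for _ in range(steps):
        # Illustrative chain: from either state, move to state 1 with
        # probability p(theta) = logistic(theta); visiting state 1 pays 1.
        p = 1.0 / (1.0 + math.exp(-theta))
        nxt = 1 if rng.random() < p else 0
        # d/dtheta log P(nxt | state): +(1-p) if we moved to 1, else -p.
        z += (1.0 - p) if nxt == 1 else -p
        g = 1.0 if nxt == 1 else 0.0
        lam += gamma * (g - lam)        # biased while theta keeps moving
        theta += alpha * (g - lam) * z  # approximate gradient step
        if nxt == 0:                    # regeneration: reset the trace
            z = 0.0
        state = nxt
    return theta, lam

theta, lam = run()
```

Since the average reward here is just p(theta), the gradient steps should push theta upward while lam tracks the growing average reward.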
"Load Disaggregation Based on Aided Linear Integer Programming Load disaggregation based on aided linear integer programming (ALIP) is proposed. We start with a conventional linear integer programming (IP)-based disaggregation and enhance it in several ways. The enhancements include additional constraints, correction based on a state diagram, median filtering, and linear-programming-based refinement. With the aid of these enhancements, the performance of IP-based disaggregation is significantly improved. The proposed ALIP system relies only on the instantaneous load samples instead of waveform signatures and, hence, works well on low-frequency data. Experimental results show that the proposed ALIP system performs better than conventional IP-based load disaggregation.",
"Bat-Inspired Optimization Approach for the Brushless DC Wheel Motor Problem This paper presents a metaheuristic algorithm inspired by evolutionary computation, swarm intelligence concepts, and the fundamentals of echolocation of micro bats. The aim is to solve the mono- and multiobjective optimization problems related to the brushless DC wheel motor, which have 5 design parameters and 6 constraints for the mono-objective problem and 2 objectives, 5 design parameters, and 5 constraints for the multiobjective version. Furthermore, results are compared with other optimization approaches proposed in the recent literature, showing the feasibility of this newly introduced technique for highly nonlinear problems in electromagnetics.",
"Backtracking search algorithm in CVRP models for efficient solid waste collection and route optimization. Waste collection is an important part of waste management that involves different issues, including environmental, economic, and social, among others. Waste collection optimization can reduce the waste collection budget and environmental emissions by reducing the collection route distance. This paper presents a modified Backtracking Search Algorithm (BSA) in capacitated vehicle routing problem (CVRP) models with the smart bin concept to find the best optimized waste collection route solutions. The objective function minimizes the sum of the waste collection route distances. The study introduces the concept of the threshold waste level (TWL) of waste bins to reduce the number of bins to be emptied by finding an optimal range, thus minimizing the distance. A scheduling model is also introduced to compare the feasibility of the proposed model with that of the conventional collection system in terms of travel distance, collected waste, fuel consumption, fuel cost, efficiency and CO2 emission. The optimal TWL was found to be between 70% and 75% of the fill level of waste collection nodes and had the maximum tightness value for different problem cases. The obtained results for four days show a 36.80% distance reduction for 91.40% of the total waste collection, which eventually increases the average waste collection efficiency by 36.78% and reduces the fuel consumption, fuel cost and CO2 emission by 50%, 47.77% and 44.68%, respectively. Thus, the proposed optimization model can be considered a viable tool for optimizing waste collection routes to reduce economic costs and environmental impacts.",
"DESIGNING A LAYOUT USING THE MODIFIED TRIANGLE METHOD, AND GENETIC ALGORITHMS This paper describes the use of genetic algorithms (GA) for solving the facility layout problem (FLP) within manufacturing systems’ design. The paper considers a specific heuristic layout planning method, known as the modified triangle method. This method seeks an optimal layout solution based on the degrees of flows between workstations. The search for an optimal solution is extremely time-consuming and not suitable for larger systems; therefore we have developed a system based on evolutionary computation. Our paper presents a system based on GA, and the results obtained regarding several numerical cases from the literature. We propose the usage of this system by presenting its numerous advantages over other methods of solving the FLP, for problem representation and evolutionary computation for solution search. (Received in September 2012, accepted in June 2013. This paper was with the authors 2 months for 2 revisions.)",
"Analyzing the Held-Karp TSP Bound: A Monotonicity Property with Application In their 1971 paper on the Traveling Salesman Problem and Minimum Spanning Trees, Held and Karp showed that finding an optimally weighted 1-tree is equivalent to solving a linear program for the Traveling Salesman Problem (TSP) with only node-degree constraints and subtour elimination constraints. In this paper we show that the Held-Karp 1-trees have a certain monotonicity property: given a particular instance of the symmetric TSP with triangle inequality, the cost of the minimum weighted 1-tree is monotonic with respect to the set of nodes included. As a consequence, we obtain an alternate proof of a result of Wolsey and show that linear programs with node-degree and subtour elimination constraints must have a cost of at least (2/3)OPT, where OPT is the cost of the optimum solution to the TSP instance. The traveling salesman problem is one of the most notorious in the field of combinatorial optimization, and one of the most well-studied [7]. Currently, the most successful approach to finding optimal solutions to large-scale problems is based on formulating the problem as a linear program and finding explicit partial descriptions of this linear polytope [5], [8]. The most natural constraints are derived from an integer linear programming formulation that uses node-degree constraints and subtour elimination constraints. We focus our attention on symmetric instances of the TSP that obey the triangle inequality. Let V = {1, 2, . . . , n} denote the set of nodes. For any distinct i and j, assign a cost c_ij such that c_ij = c_ji, and for any k distinct from i and j, c_ij ≤ c_ik + c_kj. Then the Subtour LP on this instance is B = min Σ_{1≤i<j≤n} c_ij x_ij subject to: Σ_{j>i} x_ij + Σ_{j<i} x_ji = 2 for i = 1, 2, . . . , n; Σ_{i∈S, j∈S, i<j} x_ij ≤ |S| − 1 for any proper subset S ⊂ V; and 0 ≤ x_ij ≤ 1 for 1 ≤ i < j ≤ n. 
(1)",
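The 1-tree bound discussed in the entry above is easy to check on a toy instance: a minimum 1-tree is a minimum spanning tree over all nodes except a special node, plus the two cheapest edges incident to that node, and its cost lower-bounds every tour. The sketch below uses a made-up 4-city metric instance; the function names and the brute-force check are illustrative, not from the paper.

```python
from itertools import permutations

def min_one_tree_cost(c):
    """Cost of a minimum-weight 1-tree for cost matrix c: an MST over
    nodes 1..n-1 plus the two cheapest edges incident to node 0."""
    n = len(c)
    nodes = list(range(1, n))
    # Prim's algorithm restricted to nodes 1..n-1.
    in_tree, cost = {nodes[0]}, 0.0
    while len(in_tree) < len(nodes):
        w, j = min((c[i][j], j) for i in in_tree for j in nodes
                   if j not in in_tree)
        cost += w
        in_tree.add(j)
    # Add the two cheapest edges from the special node 0.
    cost += sum(sorted(c[0][j] for j in nodes)[:2])
    return cost

def optimal_tour_cost(c):
    """Brute-force optimal tour cost; fine for tiny n."""
    n = len(c)
    best = float("inf")
    for perm in permutations(range(1, n)):
        tour = (0,) + perm + (0,)
        best = min(best, sum(c[tour[k]][tour[k + 1]] for k in range(n)))
    return best
```

On any metric instance, `min_one_tree_cost(c) <= optimal_tour_cost(c)`, since every tour is itself a 1-tree.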
"Market power and efficiency in a computational electricity market with discriminatory double-auction pricing This study reports experimental market power and efficiency outcomes for a computational wholesale electricity market operating in the short run under systematically varied concentration and capacity conditions. The pricing of electricity is determined by means of a clearinghouse double auction with discriminatory midpoint pricing. Buyers and sellers use a modified Roth Erev individual reinforcement learning algorithm to determine their price and quantity offers in each auction round. It is shown that high market efficiency is generally attained, and that market microstructure is strongly predictive for the relative market power of buyers and sellers independently of the values set for the reinforcement learning parameters. Results are briefly compared against results from an earlier study in which buyers and sellers instead engage in social mimicry learning via genetic algorithms.",
"On-Line Economic Optimization of Energy Systems Using Weather Forecast Information We establish an on-line optimization framework to exploit weather forecast information in the operation of energy systems. We argue that anticipating the weather conditions can lead to more proactive and cost-effective operations. The framework is based on the solution of a stochastic dynamic real-time optimization (D-RTO) problem incorporating forecasts generated from a state-of-the-art weather prediction model. The necessary uncertainty information is extracted from the weather model using an ensemble approach. The accuracy of the forecast trends and uncertainty bounds are validated using real meteorological data. We present a numerical simulation study in a building system to demonstrate the developments.",
"SCUC With Hourly Demand Response Considering Intertemporal Load Characteristics In this paper, the hourly demand response (DR) is incorporated into security-constrained unit commitment (SCUC) for economic and security purposes. SCUC considers fixed and responsive loads. Unlike fixed hourly loads, responsive loads are modeled with their intertemporal characteristics. The responsive loads linked to hourly market prices can be curtailed or shifted to other operating hours. The study results show that DR could shave the peak load, reduce the system operating cost, reduce fuel consumptions and carbon footprints, and reduce the transmission congestion by reshaping the hourly load profile. Numerical simulations in this paper exhibit the effectiveness of the proposed approach.",
"Graphical Models and Belief Propagation-hierarchy for Optimal Physics-Constrained Network Flows In this manuscript we review new ideas and first results on the application of the Graphical Models approach, originating from Statistical Physics, Information Theory, Computer Science and Machine Learning, to optimization problems of network flow type with additional constraints related to the physics of the flow. We illustrate the general concepts on a number of enabling examples from power system and natural gas transmission (continental scale) and distribution (district scale) systems. Michael Chertkov, Theoretical Division, T-4 & CNLS, Los Alamos National Laboratory, Los Alamos, NM 87545, USA and Energy System Center, Skoltech, Moscow, 143026, Russia, e-mail: chertkov@lanl.gov; Sidhant Misra, Theoretical Division, T-5, Los Alamos National Laboratory, Los Alamos, NM 87545, USA, e-mail: sidhant@lanl.gov; Marc Vuffray, Theoretical Division, T-4, Los Alamos National Laboratory, Los Alamos, NM 87545, USA, e-mail: sidhant@lanl.gov; Krishnamurthy Dvijotham, Pacific Northwest National Laboratory, PO Box 999, Richland, WA 99352, USA, e-mail: krishnamurthy.dvijotham@pnnl.gov; Pascal Van Hentenryck, University of Michigan, Department of Industrial & Operations Engineering, Ann Arbor, MI 48109, USA, e-mail: pvanhent@umich.edu. arXiv:1702.01890v1 [cs.SY] 7 Feb 2017. 1.1 Introductory remarks In this chapter we discuss optimization problems which appear naturally in the classical settings describing flows over networks constrained by the physical nature of the flows, as they appear in the context of electric power systems, see e.g. [27, 44], and natural gas applications, see e.g. [13] and references therein. Other examples of physical flows where similar optimization problems arise include pipe-flow systems, such as district heating [75, 1] and water [54], as well as traffic systems [40]. 
We aim to show that the network flow optimization problem can be stated naturally in terms of so-called Graphical Models (GM). In general, GMs for optimization and inference are widespread in statistical disciplines such as Applied Probability, Machine Learning and Artificial Intelligence [53, 29, 16, 12, 32, 50], Information Theory [55] and Statistical Physics [47]. The main benefit of adopting the GM methodology for physics-constrained network flows is the modularity and flexibility of the approach – any new constraints, any set of new variables, and any modification of the optimization objective can be incorporated into the GM formulation with ease. Besides, if all (or at least the majority of) constraints and modifications are factorized, i.e. can be stated in terms of a small subset of variables, the underlying GM optimization or GM statistical inference problems can be solved exactly or approximately with the help of an emerging set of techniques, algorithms and computational approaches coined collectively Belief Propagation (BP); see e.g. an important original paper [74] and recent reviews [47, 55, 66]. It is also important to emphasize that an additional benefit of the GM formulation is its principal readiness for generalization. Even though we limit our discussion to the application of the GM and BP framework to deterministic optimizations, many probabilistic and/or mixed generalizations (largely not discussed in this paper) fit very naturally into this universal framework as well. We will focus on optimization problems associated with Physics-Constrained Network Flow (PCNF) problems. The structure of the networks will obviously be inherited in the GM formulation, however indirectly, through graph and variable transformations and modifications. Specifically, the next Section 1.2 is devoted solely to stating a number of exemplary energy system formulations in GM terms. 
Thus, in Section 1.2.1 and Section 1.2.2 we consider dissipation-optimal and, respectively, general physics-constrained network flow problems. In particular, Section 1.2.2 includes a discussion of power flow problems in both power-voltage, Section 1.2.2.1, and current-voltage, Section 1.2.2.2, formats, as well as a discussion of the gas flow formulation in Section 1.2.2.3 and the general k-component physics-constrained network flow problem in Section 1.2.2.4. Section 1.2.3 describes problems of the next level of complexity – those including optimization over resources. In particular, the general optimal physics-controlled network flow problem is discussed in Section 1.2.3.1, and more specific cases of optimal flows, involving optimal power flow (in both power-flow and current-voltage formulations) and gas flows, are discussed in Sections 1.2.3.2, 1.2.3.3 and 1.2.2.3, respectively. Section 1.2.4 introduces a number of feasibility problems, all stated as special kinds of optimizations. Here we discuss the so-called instanton, Section 1.2.4.1, containment, Section 1.2.4.2, and state estimation, Section 1.2.4.3, formulations. The long introductory section concludes with a discussion, in Section 1.2.5, of an exemplary (and even more) complex optimization involving the split of resources between participants/aggregators. In Section 1.3 we describe how any of the aforementioned PCNF and optimal PCNF problems can be re-stated in the universal Graphical Model format. Then, in Section 1.4, we take advantage of the factorized form of the PCNF GM and illustrate how the BP methodology can be used to solve the optimization problems exactly and/or approximately. Specifically, in Section 1.4.1 we restate the optimization (Maximum Likelihood) GM problem as a Linear Program (LP) in the space of beliefs (proxies for probabilities). The resulting LP is generally difficult, as it works with all the variables in combination. 
We take advantage of the GM factorization and introduce in Section 1.4.2 the so-called Linear Programming Belief Propagation (LP-BP) relaxation, providing a provable lower bound for the optimum. Finally, in Section 1.4.3 we construct a tractable relaxation of LP-BP based on an interval partitioning of the underlying space. Section 1.5 discusses hierarchies which allow one to generalize, and thus improve, LP-BP. The so-called LP-BP hierarchies, related to earlier papers on the subject [65, 30, 63], are discussed in Section 1.5.1. Then, the relation between the LP-BP hierarchies and the classic LP-based Sherali-Adams [59] and Semi-Definite-Programming-based Lasserre hierarchies [41, 36, 52, 37] is discussed in Section 1.5.2. Section 1.6 discusses the special case of a GM defined over a tree (a graph without loops). In this case LP-BP is exact, equivalent to the so-called Dynamic Programming approach, and as such it provides a distributed alternative to the global optimization through a sequence of graph-element-local optimizations. However, even in the tree case the exact LP-BP and/or DP are not tractable for GMs stated in terms of physical variables, such as flows, voltages and/or pressures, drawn from a continuous set. Following [18], we discuss here how the problem can be resolved with a proper interval-partitioning (discretization). We conclude the manuscript by presenting a summary and discussing the path forward in Section 1.7. 1.2 Problems of Interest: Formulations In this Section we formulate a number of physics-constrained network flow problems which we will then attempt to analyze and solve with the help of Graphical Model (GM) / Belief Propagation (BP) approaches/techniques in the following Sections. 1.2.1 Dissipation-Optimal Network Flow We start by introducing/discussing Network Flows constrained by a minimum dissipation principle, i.e. one which can be expressed as an unconstrained optimization/minimization of an energy function (potential). Consider a static flow of a commodity over an undirected graph, G = (V, E), described through the following network flow equations: i ∈ V : q_i = Σ_{j:(i,j)∈E} φ_ij, (1.1) where q_i stands for injection, q_i > 0, or consumption, q_i < 0, of the flow at node i, and φ_ij = −φ_ji stands for the value of the flow through the directed edge (i, j) – in the direction from i to j. We consider a balanced network, Σ_{i∈V} q_i = 0. We constrain the flow by requiring that the minimum dissipation principle is obeyed: min_φ Σ_{{i,j}∈E} E_ij(φ_ij) subject to Eq. (1.1), (1.2) where φ = (φ_ij = −φ_ji | {i,j} ∈ E), and the E_ij(x) are local (energy) functions of their arguments for all {i,j} ∈ E. The local energy functions E_ij(x) are required to be convex, at least on a restricted domain. We call the sum of the local energy functions, E(φ) = Σ_{{i,j}∈E} E_ij(φ_ij), the global energy function or simply the energy function. Versions of this problem appear in the context of the feasibility analysis of dissipative network flows, that is, flows whose redistribution over the network is constrained by potentials, e.g. voltages or pressures in the context of resistive electric networks and gas flow networks, respectively [22, 48, 64]. Note that the formulation (1.2) can also be supplemented by additional flow or potential constraints. Requiring the Karush-Kuhn-Tucker (KKT) stationary point conditions on the optimization problem stated in Eq. (1.2) leads to the following set of equations: ∀{i,j} ∈ E : E′_ij(φ_ij) = λ_i − λ_j, (1.3) where λ_i is the Lagrangian multiplier corresponding to the i-th equation (1.1). The problem becomes fully defined by the pair of Eqs. (1.1), (1.3), which can also be restated solely in terms of the λ-variables: i ∈ V : q_i = Σ_{j:{i,j}∈E} (E′_ij)^(−1) (λ_i − λ_j). (1.4) 1.2.2 General Physics-Constrained Network Flows We call “unconstrained” a network flow for which only conservation of flow(s), described by Eq. (1.1), is enforced. 
(In the following we will use the notation {i, j} for the undirected graph and (i, j) for the respective directed graph. When the meaning is clear, we slightly abuse notation, denoting by E both the set of undirected and directed edges.) Contrariwise, we call “Physics-constrained” a network flow t",
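For the special case of quadratic local energies E_ij(φ) = r_ij φ²/2, the KKT system (1.3)-(1.4) in the entry above becomes linear in the potentials λ, exactly as in a resistive electric circuit. The sketch below is an illustrative assumption of that special case (the 3-node network, the r values, and the function name are invented), solving the λ-system with a grounded node and recovering the flows.

```python
def solve_dissipation_flow(edges, q, ground=None):
    """Quadratic energies E_ij = r_ij*phi^2/2 give phi_ij = (lam_i - lam_j)/r_ij,
    so flow conservation (1.1) becomes a weighted-Laplacian system L*lam = q."""
    n = len(q)
    ground = n - 1 if ground is None else ground
    # Build the weighted Laplacian with conductances w = 1/r.
    L = [[0.0] * n for _ in range(n)]
    for i, j, r in edges:
        w = 1.0 / r
        L[i][i] += w; L[j][j] += w
        L[i][j] -= w; L[j][i] -= w
    # Ground one node (lam = 0) and solve the reduced system by
    # Gauss-Jordan elimination (pure Python, no external solver).
    idx = [k for k in range(n) if k != ground]
    A = [[L[i][j] for j in idx] + [q[i]] for i in idx]
    m = len(A)
    for col in range(m):
        piv = max(range(col, m), key=lambda rr: abs(A[rr][col]))
        A[col], A[piv] = A[piv], A[col]
        for rr in range(m):
            if rr != col and A[rr][col]:
                f = A[rr][col] / A[col][col]
                A[rr] = [a - f * b for a, b in zip(A[rr], A[col])]
    lam = [0.0] * n
    for k, i in enumerate(idx):
        lam[i] = A[k][m] / A[k][k]
    flows = {(i, j): (lam[i] - lam[j]) / r for i, j, r in edges}
    return lam, flows
```

For a 3-node triangle with r = (1, 2, 2) and unit injection at node 0 / withdrawal at node 2, the recovered flows split 0.4/0.6 across the two paths and satisfy conservation at every node.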
"Conflict-Aware Event-Participant Arrangement and Its Variant for Online Setting With the rapid development of Web 2.0 and Online To Offline (O2O) marketing model, various online event-based social networks (EBSNs) are getting popular. An important task of EBSNs is to facilitate the most satisfactory event-participant arrangement for both sides, i.e., events enroll more participants and participants are arranged with personally interesting events. Existing approaches usually focus on the arrangement of each single event to a set of potential users, or ignore the conflicts between different events, which leads to infeasible or redundant arrangements. In this paper, to address the shortcomings of existing approaches, we first identify a more general and useful event-participant arrangement problem, called Global Event-participant Arrangement with Conflict and Capacity (GEACC) problem, focusing on the conflicts of different events and making event-participant arrangements in a global view. We find that the GEACC problem is NP-hard due to the conflicts among events. Thus, we design two approximation algorithms with provable approximation ratios and an exact algorithm with pruning technique to address this problem. In addition, we propose an online setting of GEACC, called OnlineGEACC, which is also practical in real-world scenarios. We further design an online algorithm with provable performance guarantee. 
Finally, we verify the effectiveness and efficiency of the proposed methods through extensive experiments on real and synthetic datasets.",
"A Novel Artificial Bee Colony Algorithm Based on Modified Search Equation and Orthogonal Learning The artificial bee colony (ABC) algorithm is a relatively new optimization technique which has been shown to be competitive to other population-based algorithms. However, ABC has an insufficiency regarding its solution search equation, which is good at exploration but poor at exploitation. To address this concerning issue, we first propose an improved ABC method called as CABC where a modified search equation is applied to generate a candidate solution to improve the search ability of ABC. Furthermore, we use the orthogonal experimental design (OED) to form an orthogonal learning (OL) strategy for variant ABCs to discover more useful information from the search experiences. Owing to OED's good character of sampling a small number of well representative combinations for testing, the OL strategy can construct a more promising and efficient candidate solution. In this paper, the OL strategy is applied to three versions of ABC, i.e., the standard ABC, global-best-guided ABC (GABC), and CABC, which yields OABC, OGABC, and OCABC, respectively. The experimental results on a set of 22 benchmark functions demonstrate the effectiveness and efficiency of the modified search equation and the OL strategy. The comparisons with some other ABCs and several state-of-the-art algorithms show that the proposed algorithms significantly improve the performance of ABC. Moreover, OCABC offers the highest solution quality, fastest global convergence, and strongest robustness among all the contenders on almost all the test functions.",
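The abstract above does not spell out the modified search equation, so the sketch below shows the standard ABC solution search equation plus a global-best-guided term in the spirit of GABC; the coefficient ranges are the commonly used ones, an assumption rather than the paper's exact CABC/OL construction.

```python
import random

def abc_candidate(x, x_k, gbest=None, rng=random):
    """Standard ABC search equation on one randomly chosen dimension j:
         v_j = x_j + phi * (x_j - x_k_j),   phi ~ U(-1, 1),
    where x_k is a randomly chosen neighbor food source. When gbest is
    given, add a GABC-style global-best term:
         + psi * (gbest_j - x_j),           psi ~ U(0, 1.5)."""
    v = list(x)
    j = rng.randrange(len(x))
    phi = rng.uniform(-1.0, 1.0)
    v[j] = x[j] + phi * (x[j] - x_k[j])
    if gbest is not None:
        v[j] += rng.uniform(0.0, 1.5) * (gbest[j] - x[j])
    return v
```

Note that only one coordinate of the candidate differs from the parent, which is the exploitation-limiting behavior the paper's modified equation is designed to improve.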
"Locational marginal pricing basics for restructured wholesale power markets Although Locational Marginal Pricing (LMP) plays an important role in many restructured wholesale power markets, the detailed derivation of LMPs as actually used in industry practice is not readily available. This lack of transparency greatly hinders the efforts of researchers to evaluate the performance of these markets. In this paper, different AC and DC optimal power flow (OPF) models are presented to help understand the derivation of LMPs. As a byproduct of this analysis, the paper provides a rigorous explanation of the basic LMP and LMP-decomposition formulas (neglecting real power losses) presented without derivation in the business practice manuals of the U.S. Midwest Independent System Operator (MISO).",
"On a Feasible-Infeasible Two-Population (FI-2Pop) genetic algorithm for constrained optimization: Distance tracing and no free lunch We explore data-driven methods for gaining insight into the dynamics of a two population genetic algorithm (GA), which has been effective in tests on constrained optimization problems. We track and compare one population of feasible solutions and another population of infeasible solutions. Feasible solutions are selected and bred to improve their objective function values. Infeasible solutions are selected and bred to reduce their constraint violations. Interbreeding between populations is completely indirect, that is, only through their offspring that happen to migrate to the other population. We introduce an empirical measure of distance, and apply it between individuals and between population centroids to monitor the progress of evolution. We find that the centroids of the two populations approach each other and stabilize. This is a valuable characterization of convergence. We find the infeasible population influences, and sometimes dominates, the genetic material of the optimum solution. Since the infeasible population is not evaluated by the objective function, it is free to explore boundary regions, where the optimum is likely to be found. Roughly speaking, the No Free Lunch theorems for optimization show that all blackbox algorithms (such as Genetic Algorithms) have the same average performance over the set of all problems. As such, our algorithm would, on average, be no better than random search or any other blackbox search method. However, we provide two general theorems that give conditions that render null the No Free Lunch results for the constrained optimization problem class we study. The approach taken here thereby escapes the No Free Lunch implications, per se.",
"Enhanced parallel cat swarm optimization based on the Taguchi method In this paper, we present an enhanced parallel cat swarm optimization (EPCSO) method for solving numerical optimization problems. The parallel cat swarm optimization (PCSO) method is an optimization algorithm designed to solve numerical optimization problems under the conditions of a small population size and a few iteration numbers. The Taguchi method is widely used in the industry for optimizing the product and the process conditions. By adopting the Taguchi method into the tracing mode process of the PCSO method, we propose the EPCSO method with better accuracy and less computational time. In this paper, five test functions are used to evaluate the accuracy of the proposed EPCSO method. The experimental results show that the proposed EPCSO method gets higher accuracies than the existing PSO-based methods and requires less computational time than the PCSO method. We also apply the proposed method to solve the aircraft schedule recovery problem. The experimental results show that the proposed EPCSO method can provide the optimum recovered aircraft schedule in a very short time. The proposed EPCSO method gets the same recovery schedule having the same total delay time, the same delayed flight numbers and the same number of long delay flights as the Liu, Chen, and Chou method (2009). The optimal solutions can be found by the proposed EPCSO method in a very short time. 2011 Elsevier Ltd. All rights reserved.",
"Efficient Evolutionary Algorithm for Single-Objective Bilevel Optimization Bilevel optimization problems are a class of challenging optimization problems, which contain two levels of optimization tasks. In these problems, the optimal solutions to the lower level problem become possible feasible candidates to the upper level problem. Such a requirement makes the optimization problem difficult to solve, and has kept the researchers busy towards devising methodologies, which can efficiently handle the problem. Despite the efforts, there hardly exists any effective methodology, which is capable of handling a complex bilevel problem. In this paper, we introduce bilevel evolutionary algorithm based on quadratic approximations (BLEAQ) of optimal lower level variables with respect to the upper level variables. The approach is capable of handling bilevel problems with different kinds of complexities in relatively smaller number of function evaluations. Ideas from classical optimization have been hybridized with evolutionary methods to generate an efficient optimization algorithm for generic bilevel problems. The efficacy of the algorithm has been shown on two sets of test problems. The first set is a recently proposed SMD test set, which contains problems with controllable complexities, and the second set contains standard test problems collected from the literature. The proposed method has been evaluated against two benchmarks, and the performance gain is observed to be significant.",
"A hybridization of cuckoo search and particle swarm optimization for solving optimization problems A new hybrid optimization algorithm, a hybridization of cuckoo search and particle swarm optimization (CSPSO), is proposed in this paper for the optimization of continuous functions and engineering design problems. This algorithm can be regarded as some modifications of the recently developed cuckoo search (CS). These modifications involve the construction of the initial population, the dynamic adjustment of the parameter of the cuckoo search, and the incorporation of particle swarm optimization (PSO). To cover the search space with balanced dispersion and neat comparability, the initial positions of cuckoo nests are constructed by using the principle of orthogonal Latin squares. To reduce the influence of the fixed step size of CS, the step size is dynamically adjusted according to the evolutionary generations. To increase the diversity of the solutions, PSO is incorporated into CS using a hybrid strategy. The proposed algorithm is tested on 20 standard benchmarking functions and 2 engineering optimization problems. The performance of CSPSO is compared with that of several meta-heuristic algorithms based on the best solution, worst solution, average solution, standard deviation, and convergence rate. Results show that in most cases, the proposed hybrid optimization algorithm performs better than, or as well as, CS, PSO, and some other existing meta-heuristic algorithms. That means that the proposed hybrid optimization algorithm is competitive with other optimization algorithms.",
"Robust Unit Commitment Problem with Demand Response and Wind Energy To improve the efficiency in power generation and to reduce the greenhouse gas emission, both Demand Response (DR) strategy and intermittent renewable energy have been proposed or applied in electric power systems. However, the uncertainty and the generation pattern in wind farms and the complexity of demand side management pose huge challenges in power system operations. In this paper, we analytically investigate how to integrate DR and wind energy with fossil fuel generators to (i) minimize power generation cost; (2) fully take advantage wind energy with managed demand to reduce greenhouse emission. We first build a two-stage robust unit commitment model to obtain day-ahead generator schedules where wind uncertainty is captured by a polyhedron. Then, we extend our model to include DR strategy such that both price levels and generator schedule will be derived for the next day. For these two NP-hard problems, we derive their mathematical properties and develop a novel and analytical solution method. Our computational study on a IEEE 118 system with 36 units shows that (i) the robust unit commitment model can significantly reduce total cost and fully make use of wind energy; (ii) the cutting plane method is computationally superior to known algorithms.",
"Artificial cooperative search algorithm for numerical optimization problems In this paper, a new two-population based global search algorithm, the Artificial Cooperative Search Algorithm (ACS), is introduced. ACS algorithm has been developed to be used in solving real-valued numerical optimization problems. For purposes of examining the success of ACS algorithm in solving numerical optimization problems, 91 benchmark problems that have different specifications were used in the detailed tests. The success of ACS algorithm in solving the related benchmark problems was compared to the successes obtained by PSO, SADE, CLPSO, BBO, CMA-ES, CK and DSA algorithms in solving the related benchmark problems by using Wilcoxon Signed-Rank Statistical Test with Bonferroni-Holm correction. The results obtained in the statistical analysis demonstrate that the success achieved by ACS algorithm in solving numerical optimization problems is better in comparison to the other computational intelligence algorithms used in this paper. 2012 Elsevier Inc. All rights reserved.",
"Credit scoring using support vector machines with direct search for parameters selection Support vector machines (SVM) is an effective tool for building good credit scoring models. However, the performance of the model depends on its parameters’ setting. In this study, we use direct search method to optimize the SVM-based credit scoring model and compare it with other three parameters optimization methods, such as grid search, method based on design of experiment (DOE) and genetic algorithm (GA). Two real-world credit datasets are selected to demonstrate the effectiveness and feasibility of the method. The results show that the direct search method can find the effective model with high classification accuracy and good robustness and keep less dependency on the initial search space or point setting.",
"Solving the 0-1 Knapsack Problem with Genetic Algorithms This paper describes a research project on using Genetic Algorithms (GAs) to solve the 0-1 Knapsack Problem (KP). The Knapsack Problem is an example of a combinatorial optimization problem, which seeks to maximize the benefit of objects in a knapsack without exceeding its capacity. The paper contains three sections: brief description of the basic idea and elements of the GAs, definition of the Knapsack Problem, and implementation of the 0-1 Knapsack Problem using GAs. The main focus of the paper is on the implementation of the algorithm for solving the problem. In the program, we implemented two selection functions, roulette-wheel and group selection. The results from both of them differed depending on whether we used elitism or not. Elitism significantly improved the performance of the roulette-wheel function. Moreover, we tested the program with different crossover ratios and single and double crossover points but the results given were not that different.",
"Direct Multisearch for Multiobjective Optimization In practical applications of optimization it is common to have several conflicting objective functions to optimize. Frequently, these functions are subject to noise or can be of black-box type, preventing the use of derivative-based techniques. We propose a novel multiobjective derivative-free methodology, calling it direct multisearch (DMS), which does not aggregate any of the objective functions. Our framework is inspired by the search/poll paradigm of direct-search methods of directional type and uses the concept of Pareto dominance to maintain a list of nondominated points (from which the new iterates or poll centers are chosen). The aim of our method is to generate as many points in the Pareto front as possible from the polling procedure itself, while keeping the whole framework general enough to accommodate other disseminating strategies, in particular when using the (here also) optional search step. DMS generalizes to multiobjective optimization (MOO) all direct-search methods of directional type. We prove under the common assumptions used in direct search for single optimization that at least one limit point of the sequence of iterates generated by DMS lies in (a stationary form of) the Pareto front. However, extensive computational experience has shown that our methodology has an impressive capability of generating the whole Pareto front, even without using a search step. Two by-products of this paper are (i) the development of a collection of test problems for MOO and (ii) the extension of performance and data profiles to MOO, allowing a comparison of several solvers on a large set of test problems, in terms of their efficiency and robustness to determine Pareto fronts.",
"A new vector evaluated PBIL algorithm for reinsurance analytics The purpose of this paper is to evaluate the performance of a new multiobjective algorithm called Vector Evaluated Population Based Incremental Learning (VEPBIL). The new algorithm was applied in solving a real world application named Reinsurance Contract Optimization (RCO), which is a multiobjective problem consisting of maximizing two conflicting functions: expected return and risk. The VEPBIL was tested on two instances of the problem composed by 7 and 15 layers of real anonymized data. In order to evaluate the algorithm, metrics such as hyper volume, number of solutions and coverage were used. A comparisons against Vector Evaluated Differential evolution (VEDE) is also carried out. The comparison has shown that VEPBIL can dominate about 70% and 50% of solutions from VEDE using 7 and 15 layers respectively, whereas VEDE dominates about 10% and 30% of solutions in the way around.",
"Optimization by Direct Search: New Perspectives on Some Classical and Modern Methods Direct search methods are best known as unconstrained optimization techniques that do not explicitly use derivatives. Direct search methods were formally proposed and widely applied in the 1960s but fell out of favor with the mathematical optimization community by the early 1970s because they lacked coherent mathematical analysis. Nonetheless, users remained loyal to these methods, most of which were easy to program, some of which were reliable. In the past fifteen years, these methods have seen a revival due, in part, to the appearance of mathematical analysis, as well as to interest in parallel and distributed computing. This review begins by briefly summarizing the history of direct search methods and considering the special properties of problems for which they are well suited. Our focus then turns to a broad class of methods for which we provide a unifying framework that lends itself to a variety of convergence results. The underlying principles allow generalization to handle bound constraints and linear constraints. We also discuss extensions to problems with nonlinear constraints.",
"Distributed Event-Triggered Scheme for Economic Dispatch in Smart Grids To reduce information exchange requirements in smart grids, an event-triggered communication-based distributed optimization is proposed for economic dispatch. In this work, the θ-logarithmic barrier-based method is employed to reformulate the economic dispatch problem, and the consensus-based approach is considered for developing fully distributed technology-enabled algorithms. Specifically, a novel distributed algorithm utilizes the minimum connected dominating set (CDS), which efficiently allocates the task of balancing supply and demand for the entire power network at the beginning of economic dispatch. Further, an event-triggered communication-based method for the incremental cost of each generator is able to reach a consensus, coinciding with the global optimality of the objective function. In addition, a fast gradient-based distributed optimization method is also designed to accelerate the convergence rate of the event-triggered distributed optimization. Simulations based on the IEEE 57-bus test system demonstrate the effectiveness and good performance of proposed algorithms.",
"Economic dispatch for a microgrid considering renewable energy cost functions Microgrids are operated by a customer or a group of customers for having a reliable, clean and economic mode of power supply to meet their demand. Understanding the economics of system is a prime factor which really depends on the cost/kWh of electricity supplied. This paper presents an easy and simple method for analyzing the dispatch rate of power. An isolated microgrid with solar and wind is considered in this paper. Generation cost functions are modeled with the inclusion of investment cost and maintenance cost of resources. Economic dispatch problem is solved using the reduced gradient method. The effects on total generation cost, with the inclusion of wind energy and solar energy into a microgrid is studied and found the most profitable solution by considering different practical scenarios. The paper gives a detailed correlation between the cost function, investment cost, lifetime and the fluctuant energy forecasting of wind and solar resources. It also discusses the advantages of including the renewable energy credits for the solar panel.",
"Optimal Power Flow by Black Hole Optimization Algorithm In this paper, a black hole optimization algorithm (BH) is utilized to solve the optimal power flow problem considering the generation fuel cost, reduction of voltage deviation and improvement of voltage stability as an objective functions. The black hole algorithm simulate the black hole phenomenon which relay on tow operations, the star absorption and star sucking. The IEEE 30-Bus and IEEE 57-Bus systems are used to illustrate performance of the proposed algorithm and results are compared with those in literatures.",
"An Optimal Power Scheduling Method for Demand Response in Home Energy Management System With the development of smart grid, residents have the opportunity to schedule their power usage in the home by themselves for the purpose of reducing electricity expense and alleviating the power peak-to-average ratio (PAR). In this paper, we first introduce a general architecture of energy management system (EMS) in a home area network (HAN) based on the smart grid and then propose an efficient scheduling method for home power usage. The home gateway (HG) receives the demand response (DR) information indicating the real-time electricity price that is transferred to an energy management controller (EMC). With the DR, the EMC achieves an optimal power scheduling scheme that can be delivered to each electric appliance by the HG. Accordingly, all appliances in the home operate automatically in the most cost-effective way. When only the real-time pricing (RTP) model is adopted, there is the possibility that most appliances would operate during the time with the lowest electricity price, and this may damage the entire electricity system due to the high PAR. In our research, we combine RTP with the inclining block rate (IBR) model. By adopting this combined pricing model, our proposed power scheduling method would effectively reduce both the electricity cost and PAR, thereby, strengthening the stability of the entire electricity system. Because these kinds of optimization problems are usually nonlinear, we use a genetic algorithm to solve this problem."
] | 31
|
Bearish-Bullish Sentiment Analysis on Financial Microblogs
| ["Thumbs Up or Thumbs Down? Semantic Orientation Applied to Unsupervised Classification of Reviews\n(...TRUNCATED)
| ["Understanding microblog continuance usage intention: an integrated model Purpose – The purpose o(...TRUNCATED)
| 19
|
Predicting defects in SAP Java code: An experience report
| ["Hipikat: a project memory for software development\nSociological and technical difficulties, such (...TRUNCATED)
| ["Semantic component retrieval in software engineering In the early days of programming the concept (...TRUNCATED)
| 7
|
Active-Metric Learning for Classification of Remotely Sensed Hyperspectral Images
| ["Nonlinear Component Analysis as a Kernel Eigenvalue Problem\nA new method for performing a nonline(...TRUNCATED)
| ["Active Object Categorization on a Humanoid Robot We present a Bag of Words-based active object cat(...TRUNCATED)
| 40
|
Ad Hoc Retrieval Experiments Using WordNet and Automatically Constructed Thesauri
| ["Term-Weighting Approaches in Automatic Text Retrieval\nThe experimental evidence accumulated over (...TRUNCATED)
| ["Exploring simultaneous keyword and key sentence extraction: improve graph-based ranking using wiki(...TRUNCATED)
| 8
|
Underwater Acoustic Target Tracking: A Review
| ["The challenges of building mobile underwater wireless networks for aquatic applications\nThe large(...TRUNCATED)
| ["Comparative Analysis of ADS-B Verification Techniques ADS-B is one of many Federal Aviation Admini(...TRUNCATED)
| 42
|
Unsupervised Diverse Colorization via Generative Adversarial Networks
| ["Colorful Image Colorization\nGiven a grayscale photograph as input, this paper attacks the problem(...TRUNCATED)
| ["Conditional Generative Adversarial Nets Generative Adversarial Nets [8] were recently introduced a(...TRUNCATED)
| 40
|
Lane Detection ( Part I ) : Mono-Vision Based Method
| ["Realtime lane tracking of curved local road\nA lane detection system is an important component of (...TRUNCATED)
| ["Robust Abandoned Object Detection Using Dual Foregrounds As an alternative to the tracking-based a(...TRUNCATED)
| 11
|
"Detection of distributed denial of service attacks using machine learning algorithms in software de(...TRUNCATED)
| ["Combining OpenFlow and sFlow for an effective and scalable anomaly detection and mitigation mechan(...TRUNCATED)
| ["Bot Classification for Real-Life Highly Class-Imbalanced Dataset Botnets are networks formed with (...TRUNCATED)
| 7
|
Distributed Privacy-Preserving Collaborative Intrusion Detection Systems for VANETs
| ["Practical privacy: the SuLQ framework\nWe consider a statistical database in which a trusted admin(...TRUNCATED)
| ["Building Better Detection with Privileged Information Modern detection systems use sensor outputs (...TRUNCATED)
| 1
|