{"id": "51884181", "revid": "10289486", "url": "https://en.wikipedia.org/wiki?curid=51884181", "title": "Multi-stage game", "text": "In game theory, a multi-stage game is a sequence of several simultaneous games played one after the other. This is a generalization of a repeated game: a repeated game is a special case of a multi-stage game, in which the stage games are identical.\nMulti-Stage Game with Different Information Sets.\nAs an example, consider a two-stage game in which the stage game in \"Figure 1\" is played in each of two periods:\nThe payoff to each player is the simple sum of the payoffs of both games. \nPlayers cannot observe the action of the other player within a round; however, at the beginning of Round 2, Player 2 finds out about Player 1's action in Round 1, while Player 1 does not find out about Player 2's action in Round 1.\nFor Player 1, there are formula_1 strategies.\nFor Player 2, there are formula_2 strategies.\nThe extensive form of this multi-stage game is shown in \"Figure 2\":\nIn this game, the only Nash Equilibrium in each stage is (B, b).\n(BB, bb) is therefore the Nash Equilibrium for the entire game.\nMulti-Stage Game with Changing Payoffs.\nIn this example, consider a two-stage game in which the stage game in \"Figure 3\" is played in the first period and the game in \"Figure 4\" is played in the second:\nThe payoff to each player is the simple sum of the payoffs of both games. \nPlayers cannot observe the action of the other player within a round; however, at the beginning of Round 2, both players find out about the other's action in Round 1. \nFor Player 1, there are formula_2 strategies.\nFor Player 2, there are formula_2 strategies.\nThe extensive form of this multi-stage game is shown in \"Figure 5\":\nEach of the two stages has two Nash Equilibria: (A, a) and (B, b) in the first stage, and (X, x) and (Y, y) in the second.\nIf the complete contingent strategy of Player 1 matches that of Player 2 (e.g. AXXXX paired with axxxx), the resulting strategy profile is a Nash Equilibrium. 
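Pure-strategy Nash equilibria of a stage game like the ones above can be found by checking every action profile for profitable unilateral deviations. A minimal sketch follows; since the article's figures are not reproduced here, the payoff matrix is an illustrative assumption (a prisoner's-dilemma-style game in which (B, b) is the unique stage-game equilibrium):

```python
# Hypothetical stage-game payoffs (the article's Figure 1 is not reproduced
# here): payoffs[(row, col)] = (payoff to Player 1, payoff to Player 2).
payoffs = {
    ("A", "a"): (2, 2), ("A", "b"): (0, 3),
    ("B", "a"): (3, 0), ("B", "b"): (1, 1),
}

def pure_nash_equilibria(payoffs):
    """Return action profiles where neither player gains by deviating alone."""
    rows = sorted({r for r, _ in payoffs})
    cols = sorted({c for _, c in payoffs})
    equilibria = []
    for r in rows:
        for c in cols:
            u1, u2 = payoffs[(r, c)]
            # Player 1 cannot improve by switching rows, given column c ...
            best_row = all(payoffs[(r2, c)][0] <= u1 for r2 in rows)
            # ... and Player 2 cannot improve by switching columns, given row r.
            best_col = all(payoffs[(r, c2)][1] <= u2 for c2 in cols)
            if best_row and best_col:
                equilibria.append((r, c))
    return equilibria

print(pure_nash_equilibria(payoffs))  # (B, b) is the unique equilibrium here
```

For the two-stage games discussed above, the same check applies stage by stage, since playing a stage-game equilibrium in every round yields an equilibrium of the whole game.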
There are 32 such combinations in this multi-stage game. Additionally, all of these equilibria are subgame-perfect.", "Automation-Control": 0.6346845627, "Qwen2": "Yes"} {"id": "1180641", "revid": "16809467", "url": "https://en.wikipedia.org/wiki?curid=1180641", "title": "Stochastic gradient descent", "text": "Stochastic gradient descent (often abbreviated SGD) is an iterative method for optimizing an objective function with suitable smoothness properties (e.g. differentiable or subdifferentiable). It can be regarded as a stochastic approximation of gradient descent optimization, since it replaces the actual gradient (calculated from the entire data set) by an estimate thereof (calculated from a randomly selected subset of the data). Especially in high-dimensional optimization problems this reduces the very high computational burden, achieving faster iterations in exchange for a lower convergence rate.\nWhile the basic idea behind stochastic approximation can be traced back to the Robbins–Monro algorithm of the 1950s, stochastic gradient descent has become an important optimization method in machine learning.\nBackground.\nBoth statistical estimation and machine learning consider the problem of minimizing an objective function that has the form of a sum:\nwhere the parameter formula_2 that minimizes formula_3 is to be estimated. Each summand function formula_4 is typically associated with the formula_5-th observation in the data set (used for training).\nIn classical statistics, sum-minimization problems arise in least squares and in maximum-likelihood estimation (for independent observations). The general class of estimators that arise as minimizers of sums are called M-estimators. However, in statistics, it has been long recognized that requiring even local minimization is too restrictive for some problems of maximum-likelihood estimation. 
Therefore, contemporary statistical theorists often consider stationary points of the likelihood function (or zeros of its derivative, the score function, and other estimating equations).\nThe sum-minimization problem also arises for empirical risk minimization. In this case, formula_6 is the value of the loss function at the formula_5-th example, and formula_3 is the empirical risk.\nWhen used to minimize the above function, a standard (or \"batch\") gradient descent method would perform the following iterations:\nwhere formula_10 is a step size (sometimes called the \"learning rate\" in machine learning).\nIn many cases, the summand functions have a simple form that enables inexpensive evaluations of the sum-function and the sum gradient. For example, in statistics, one-parameter exponential families allow economical function-evaluations and gradient-evaluations.\nHowever, in other cases, evaluating the sum-gradient may require expensive evaluations of the gradients from all summand functions. When the training set is enormous and no simple formulas exist, evaluating the sums of gradients becomes very expensive, because evaluating the gradient requires evaluating all the summand functions' gradients. To economize on the computational cost at every iteration, stochastic gradient descent samples a subset of summand functions at every step. This is very effective in the case of large-scale machine learning problems.\nIterative method.\nIn stochastic (or \"on-line\") gradient descent, the true gradient of formula_3 is approximated by a gradient at a single sample:\nAs the algorithm sweeps through the training set, it performs the above update for each training sample. Several passes can be made over the training set until the algorithm converges. If this is done, the data can be shuffled for each pass to prevent cycles. 
Typical implementations may use an adaptive learning rate so that the algorithm converges.\nIn pseudocode, stochastic gradient descent can be presented as:\nA compromise between computing the true gradient and the gradient at a single sample is to compute the gradient against more than one training sample (called a \"mini-batch\") at each step. This can perform significantly better than the \"true\" stochastic gradient descent described above, because the code can make use of vectorization libraries rather than computing each step separately, an approach first described under the name \"the bunch-mode back-propagation algorithm\". It may also result in smoother convergence, as the gradient computed at each step is averaged over more training samples.\nThe convergence of stochastic gradient descent has been analyzed using the theories of convex minimization and of stochastic approximation. Briefly, when the learning rates formula_10 decrease at an appropriate rate,\nand subject to relatively mild assumptions, stochastic gradient descent converges almost surely to a global minimum \nwhen the objective function is convex or pseudoconvex, \nand otherwise converges almost surely to a local minimum.\nThis is in fact a consequence of the Robbins–Siegmund theorem.\nExample.\nSuppose we want to fit a straight line formula_18 to a training set with observations formula_19 and corresponding estimated responses formula_20 using least squares. 
The objective function to be minimized is:\nThe last line in the above pseudocode for this specific problem will become:\nNote that in each iteration (also called update), the gradient is only evaluated at a single point formula_23 instead of at the set of all samples.\nThe key difference compared to standard (batch) gradient descent is that only one piece of data from the dataset is used to calculate the step, and the piece of data is picked randomly at each step.\nNotable applications.\nStochastic gradient descent is a popular algorithm for training a wide range of models in machine learning, including (linear) support vector machines, logistic regression (see, e.g., Vowpal Wabbit) and graphical models. When combined with the backpropagation algorithm, it is the \"de facto\" standard algorithm for training artificial neural networks. Its use has also been reported in the Geophysics community, specifically in applications of Full Waveform Inversion (FWI).\nStochastic gradient descent competes with the L-BFGS algorithm, which is also widely used. Stochastic gradient descent has been used since at least 1960 for training linear regression models, originally under the name ADALINE.\nAnother stochastic gradient descent algorithm is the least mean squares (LMS) adaptive filter.\nExtensions and variants.\nMany improvements on the basic stochastic gradient descent algorithm have been proposed and used. In particular, in machine learning, the need to set a learning rate (step size) has been recognized as problematic. Setting this parameter too high can cause the algorithm to diverge; setting it too low makes it slow to converge. A conceptually simple extension of stochastic gradient descent makes the learning rate a decreasing function of the iteration number, giving a \"learning rate schedule\", so that the first iterations cause large changes in the parameters, while the later ones perform only fine-tuning. 
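The least-squares line fit described above, combined with a decreasing learning-rate schedule, can be sketched as follows. The data, the schedule constants, and the true coefficients are illustrative assumptions:

```python
import random

# Minimal sketch of SGD for a least-squares line fit y = w1 + w2 * x.
# Noiseless data from a hypothetical line y = 2 + 3x, for illustration.
data = [(x, 2.0 + 3.0 * x) for x in [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0]]

w1, w2 = 0.0, 0.0                          # parameters of the fitted line
random.seed(0)
for epoch in range(2000):
    gamma = 0.05 / (1 + 0.01 * epoch)      # decreasing learning-rate schedule
    random.shuffle(data)                   # reshuffle each pass to prevent cycles
    for x, y in data:
        err = (w1 + w2 * x) - y            # residual at this single sample
        w1 -= gamma * 2 * err              # gradient step for the intercept
        w2 -= gamma * 2 * err * x          # gradient step for the slope

print(round(w1, 3), round(w2, 3))  # approaches the true coefficients (2, 3)
```

Each inner-loop update touches exactly one observation, which is the defining feature of SGD noted above.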
Such schedules have been known since the work of MacQueen on \"k\"-means clustering. Practical guidance on choosing the step size in several variants of SGD is given by Spall.\nImplicit updates (ISGD).\nAs mentioned earlier, classical stochastic gradient descent is generally sensitive to the learning rate. Fast convergence requires large learning rates, but this may induce numerical instability. The problem can be largely solved by considering \"implicit updates\" whereby the stochastic gradient is evaluated at the next iterate rather than the current one:\nThis equation is implicit since formula_25 appears on both sides of the equation. It is a stochastic form of the proximal gradient method since the update\ncan also be written as:\nAs an example, \nconsider least squares with features formula_27 and observations\nformula_28. We wish to solve:\nwhere formula_30 indicates the inner product.\nNote that formula_31 could have \"1\" as the first element to include an intercept. Classical stochastic gradient descent proceeds as follows:\nwhere formula_5 is uniformly sampled between 1 and formula_34. Although theoretical convergence of this procedure happens under relatively mild assumptions, in practice the procedure can be quite unstable. In particular, when formula_10 is misspecified so that formula_36 has large absolute eigenvalues with high probability, the procedure may diverge numerically within a few iterations. In contrast, \"implicit stochastic gradient descent\" (shortened as ISGD) can be solved in closed-form as:\nThis procedure will remain numerically stable for virtually all formula_10, as the learning rate is now normalized. 
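The closed-form ISGD update for least squares can be sketched as below: the step on each sample is divided by 1 + γ‖x_i‖², so the iteration stays stable even for a deliberately large learning rate. The data and constants are illustrative assumptions:

```python
import random

# Sketch of the closed-form implicit SGD (ISGD) update for least squares.
# The per-sample step is normalized by 1 + gamma * ||x_i||^2, which keeps
# the iteration stable even for a very large learning rate.
random.seed(1)
xs = [(1.0, x) for x in [0.0, 1.0, 2.0, 3.0]]   # first feature = intercept term
ys = [2.0 + 3.0 * x for _, x in xs]             # hypothetical targets y = 2 + 3x

w = [0.0, 0.0]
gamma = 10.0                                    # far too large for classical SGD
for _ in range(5000):
    i = random.randrange(len(xs))
    xi, yi = xs[i], ys[i]
    err = sum(wj * xj for wj, xj in zip(w, xi)) - yi
    scale = gamma / (1.0 + gamma * sum(xj * xj for xj in xi))
    w = [wj - scale * err * xj for wj, xj in zip(w, xi)]

print([round(wj, 3) for wj in w])  # approaches the true coefficients [2, 3]
```

With the same γ, the classical update would multiply the residual by factors larger than 1 in magnitude and diverge; the normalization above caps the effective step below a projection onto the sample's solution set.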
This comparison between classical and implicit stochastic gradient descent in the least squares problem is very similar to the comparison between least mean squares (LMS) and \nnormalized least mean squares filter (NLMS).\nEven though a closed-form solution for ISGD is only possible in least squares, the procedure can be efficiently implemented in a wide range of models. Specifically, suppose that formula_6 depends on formula_2 only through a linear combination with features formula_41, so that we can write formula_42, where \nformula_43 may depend on formula_44 as well but not on formula_2 except through formula_46. Least squares obeys this rule, as do logistic regression and most generalized linear models. For instance, in least squares, formula_47, and in logistic regression formula_48, where formula_49 is the logistic function. In Poisson regression, formula_50, and so on.\nIn such settings, ISGD is simply implemented as follows. Let formula_51, where formula_52 is scalar.\nThen, ISGD is equivalent to:\nThe scaling factor formula_54 can be found through the bisection method since \nin most regular models, such as the aforementioned generalized linear models, the function formula_55 is decreasing, \nand thus the search bounds for formula_56 are \nformula_57.\nMomentum.\nFurther proposals include the \"momentum method\" or the \"heavy ball method\", which in the ML context appeared in Rumelhart, Hinton and Williams' paper on backpropagation learning and borrowed the idea from Soviet mathematician Boris Polyak's 1964 article on solving functional equations. 
Stochastic gradient descent with momentum remembers the update at each iteration, and determines the next update as a linear combination of the gradient and the previous update:\nwhich leads to:\nwhere the parameter formula_2 which minimizes formula_3 is to be estimated, formula_10 is a step size (sometimes called the \"learning rate\" in machine learning) and formula_64 is an exponential decay factor between 0 and 1 that determines the relative contribution of the current gradient and earlier gradients to the weight change.\nThe name momentum stems from an analogy to momentum in physics: the weight vector formula_2, thought of as a particle traveling through parameter space, incurs acceleration from the gradient of the loss (\"force\"). Unlike in classical stochastic gradient descent, it tends to keep traveling in the same direction, preventing oscillations. Momentum has been used successfully by computer scientists in the training of artificial neural networks for several decades.\nThe \"momentum method\" is closely related to underdamped Langevin dynamics, and may be combined with Simulated Annealing. \nIn the mid-1980s the method was modified by Yurii Nesterov to use the gradient predicted at the next point, and the resulting so-called \"Nesterov Accelerated Gradient\" was sometimes used in ML in the 2010s.\nAveraging.\n\"Averaged stochastic gradient descent\", invented independently by Ruppert and Polyak in the late 1980s, is ordinary stochastic gradient descent that records an average of its parameter vector over time. That is, the update is the same as for ordinary stochastic gradient descent, but the algorithm also keeps track of the running average of the iterates.\nWhen optimization is done, this averaged parameter vector takes the place of the final iterate.\nAdaGrad.\n\"AdaGrad\" (for adaptive gradient algorithm) is a modified stochastic gradient descent algorithm with per-parameter learning rate, first published in 2011. 
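The momentum recursion described above can be sketched on a simple deterministic quadratic; the objective and the coefficients are illustrative assumptions:

```python
# Sketch of the momentum (heavy ball) update on an illustrative objective
# f(w) = w^2. The velocity term accumulates an exponentially decaying sum
# of past gradient steps, which carries the iterate through flat regions.
def grad(w):
    return 2.0 * w                  # derivative of f(w) = w^2

w, velocity = 5.0, 0.0
gamma, beta = 0.05, 0.9             # step size and exponential decay factor
for _ in range(300):
    velocity = beta * velocity - gamma * grad(w)   # blend old update with new gradient
    w = w + velocity

print(round(w, 6))  # settles near the minimum at 0
```

Setting `beta = 0` recovers plain gradient descent, which makes the role of the decay factor easy to see experimentally.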
Informally, this increases the learning rate for sparser parameters and decreases the learning rate for ones that are less sparse. This strategy often improves convergence performance over standard stochastic gradient descent in settings where data is sparse and sparse parameters are more informative. Examples of such applications include natural language processing and image recognition.\nIt still has a base learning rate, but this is multiplied with the elements of a vector which is the diagonal of the outer product matrix\nwhere formula_68 is the gradient at a given iteration. The diagonal is given by\nThis vector essentially stores a historical sum of gradient squares by dimension and is updated after every iteration. The formula for an update is now\nor, written as per-parameter updates,\nEach entry gives rise to a scaling factor for the learning rate that applies to a single parameter. Since the denominator in this factor, formula_72, is the \"ℓ\"2 norm of previous derivatives, extreme parameter updates get dampened, while parameters that get few or small updates receive higher learning rates.\nWhile designed for convex problems, AdaGrad has been successfully applied to non-convex optimization.\nRMSProp.\n\"RMSProp\" (for Root Mean Square Propagation) is a method invented by Geoffrey Hinton in 2012 in which the learning rate is, like in Adagrad, adapted for each of the parameters. The idea is to divide the learning rate for a weight by a running average of the magnitudes of recent gradients for that weight. Unusually, it was not published in an article but merely described in a Coursera lecture.\nSo, first the running average is calculated in terms of the mean square,\nwhere formula_74 is the forgetting factor. 
The concept of storing the historical gradient as a sum of squares is borrowed from Adagrad, but \"forgetting\" is introduced to address Adagrad's diminishing learning rates in non-convex problems by gradually decreasing the influence of old data.\nAnd the parameters are updated as,\nRMSProp has shown good adaptation of learning rate in different applications. RMSProp can be seen as a generalization of Rprop and is capable of working with mini-batches as well, as opposed to only full batches.\nAdam.\n\"Adam\" (short for Adaptive Moment Estimation) is a 2014 update to the \"RMSProp\" optimizer combining it with the main feature of the \"Momentum method\". In this optimization algorithm, running averages with exponential forgetting of both the gradients and the second moments of the gradients are used. Given parameters formula_76 and a loss function formula_77, where formula_78 indexes the current training iteration (indexed at formula_79), Adam's parameter update is given by:\nwhere formula_85 is a small scalar (e.g. formula_86) used to prevent division by 0, and formula_87 (e.g. 0.9) and formula_88 (e.g. 0.999) are the forgetting factors for gradients and second moments of gradients, respectively. Squaring and square-rooting are done element-wise. The profound influence of this algorithm inspired multiple newer, less well-known momentum-based optimization schemes using Nesterov-enhanced gradients (e.g. \"NAdam\" and \"FASFA\") and varying interpretations of second-order information (e.g. \"Powerpropagation\" and \"AdaSqrt\"). 
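The Adam update described above, with the commonly cited default hyperparameters, can be sketched in one dimension as follows; the quadratic objective and the step count are illustrative assumptions:

```python
import math

# Sketch of the Adam update with the usual default hyperparameters on an
# illustrative objective f(w) = (w - 3)^2.
def grad(w):
    return 2.0 * (w - 3.0)                       # derivative of f

w, m, v = 0.0, 0.0, 0.0
alpha, beta1, beta2, eps = 0.01, 0.9, 0.999, 1e-8
for t in range(1, 5001):
    g = grad(w)
    m = beta1 * m + (1 - beta1) * g              # running average of gradients
    v = beta2 * v + (1 - beta2) * g * g          # running average of squared gradients
    m_hat = m / (1 - beta1 ** t)                 # bias-corrected first moment
    v_hat = v / (1 - beta2 ** t)                 # bias-corrected second moment
    w -= alpha * m_hat / (math.sqrt(v_hat) + eps)

print(round(w, 2))  # close to the minimiser at 3
```

The bias-correction terms matter mainly in the first iterations, when the running averages are still dominated by their zero initialization.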
However, the most commonly used variants are \"AdaMax\", which generalizes \"Adam\" using the infinity norm, and \"AMSGrad\", which addresses convergence problems of \"Adam\" by using the maximum of past squared gradients instead of the exponential average.\n\"AdamW\" is a later update which mitigates a suboptimal choice of the weight decay algorithm in \"Adam\".\nSign-based stochastic gradient descent.\nEven though sign-based optimization goes back to the aforementioned Rprop, only in 2018 did researchers try to simplify Adam by disregarding the magnitude of the stochastic gradient and considering only its sign.\nBacktracking line search.\nBacktracking line search is another variant of gradient descent. It is based on a condition known as the Armijo–Goldstein condition. Both methods allow learning rates to change at each iteration; however, the manner of the change is different. Backtracking line search uses function evaluations to check Armijo's condition, and in principle the loop in the algorithm for determining the learning rates can be long and unknown in advance. Adaptive SGD does not need a loop in determining learning rates. On the other hand, adaptive SGD does not guarantee the \"descent property\" – which backtracking line search enjoys – which is that formula_89 for all \"n\". If the gradient of the cost function is globally Lipschitz continuous, with Lipschitz constant L, and the learning rate is chosen on the order of 1/L, then the standard version of SGD is a special case of backtracking line search.\nSecond-order methods.\nA stochastic analogue of the standard (deterministic) Newton–Raphson algorithm (a \"second-order\" method) provides an asymptotically optimal or near-optimal form of iterative optimization in the setting of stochastic approximation. 
A method that uses direct measurements of the Hessian matrices of the summands in the empirical risk function was developed by Byrd, Hansen, Nocedal, and Singer. However, directly determining the required Hessian matrices for optimization may not be possible in practice. Practical and theoretically sound methods for second-order versions of SGD that do not require direct Hessian information are given by Spall and others. (A less efficient method based on finite differences, instead of simultaneous perturbations, is given by Ruppert.) Another approach to approximating the Hessian matrix is to replace it with the Fisher information matrix, which transforms the usual gradient into the natural gradient. These methods not requiring direct Hessian information are based on either values of the summands in the above empirical risk function or values of the gradients of the summands (i.e., the SGD inputs). In particular, second-order optimality is asymptotically achievable without direct calculation of the Hessian matrices of the summands in the empirical risk function.\nApproximations in continuous time.\nFor a small learning rate formula_90, stochastic gradient descent formula_91 can be viewed as a discretization of the gradient flow ODE\nformula_92\nsubject to additional stochastic noise. This approximation is only valid on a finite time-horizon in the following sense: assume that all the coefficients formula_93 are sufficiently smooth. Let formula_94 and formula_95 be a sufficiently smooth test function. Then, there exists a constant formula_96 such that for all formula_97\nformula_98\nwhere formula_99 denotes taking the expectation with respect to the random choice of indices in the stochastic gradient descent scheme. \nSince this approximation does not capture the random fluctuations around the mean behavior of stochastic gradient descent, solutions to stochastic differential equations (SDEs) have been proposed as limiting objects. 
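The gradient-flow approximation can be checked numerically in one dimension, where the flow has a closed-form solution; the objective and constants below are illustrative assumptions:

```python
import math

# Numerical sketch of the gradient-flow approximation: for f(w) = w^2 / 2,
# the flow dW/dt = -f'(W) has the exact solution W(t) = W(0) * exp(-t), and
# full-batch gradient descent with step gamma is its Euler discretization.
w0, gamma, steps = 1.0, 0.001, 1000    # so that t = gamma * steps = 1

w = w0
for _ in range(steps):
    w -= gamma * w                     # gradient step; f'(w) = w

flow = w0 * math.exp(-gamma * steps)   # exact gradient-flow value at t = 1
print(round(abs(w - flow), 6))         # discrepancy is O(gamma), i.e. small
```

Halving `gamma` (while doubling `steps` to keep the horizon fixed) roughly halves the discrepancy, which is the finite-time-horizon statement made above in miniature.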
More precisely, the solution to the SDE\nformula_100\nfor formula_101, where formula_102 denotes the Itô integral with respect to a Brownian motion, is a more precise approximation in the sense that there exists a constant formula_96 such that \nformula_104 \nHowever, this SDE only approximates the one-point motion of stochastic gradient descent. For an approximation of the stochastic flow, one has to consider SDEs with infinite-dimensional noise.\nHistory.\nSGD was gradually developed by several research communities during the 1950s.\nThe scaling behavior of SGD (i.e. how the performance evaluation metric of interest (e.g. test loss) varies as the number of training steps varies) was found to follow a broken neural scaling law functional form in 2022. ", "Automation-Control": 0.9316545725, "Qwen2": "Yes"} {"id": "2116830", "revid": "6908984", "url": "https://en.wikipedia.org/wiki?curid=2116830", "title": "Trajectory optimization", "text": "Trajectory optimization is the process of designing a trajectory that minimizes (or maximizes) some measure of performance while satisfying a set of constraints. Generally speaking, trajectory optimization is a technique for computing an open-loop solution to an optimal control problem. It is often used for systems where computing the full closed-loop solution is not required, impractical, or impossible. If a trajectory optimization problem can be solved at a rate given by the inverse of the Lipschitz constant, then it can be used iteratively to generate a closed-loop solution in the sense of Caratheodory. If only the first step of the trajectory is executed for an infinite-horizon problem, then this is known as Model Predictive Control (MPC).\nAlthough the idea of trajectory optimization has been around for hundreds of years (calculus of variations, brachystochrone problem), it only became practical for real-world problems with the advent of the computer. 
Many of the original applications of trajectory optimization were in the aerospace industry, computing rocket and missile launch trajectories. More recently, trajectory optimization has also been used in a wide variety of industrial process and robotics applications.\nHistory.\nTrajectory optimization first showed up in 1697, with the introduction of the Brachystochrone problem: find the shape of a wire such that a bead sliding along it will move between two points in the minimum time. The interesting thing about this problem is that it is optimizing over a curve (the shape of the wire), rather than a single number. The most famous of the solutions was computed using calculus of variations.\nIn the 1950s, the digital computer started to make trajectory optimization practical for solving real-world problems. The first optimal control approaches grew out of the calculus of variations, based on the research of Gilbert Ames Bliss and Bryson in America, and Pontryagin in Russia. Pontryagin's maximum principle is of particular note. These early researchers created the foundation of what we now call indirect methods for trajectory optimization.\nMuch of the early work in trajectory optimization was focused on computing rocket thrust profiles, both in a vacuum and in the atmosphere. This early research discovered many basic principles that are still used today. \nAnother successful application was the climb to altitude trajectories for the early jet aircraft. Because of the high drag associated with the transonic drag region and the low thrust of early jet aircraft, trajectory optimization was the key to maximizing climb to altitude performance. Optimal control based trajectories were responsible for some of the world records. 
In these situations, the pilot followed a Mach versus altitude schedule based on optimal control solutions.\nOne of the important early problems in trajectory optimization was that of the singular arc, where Pontryagin's maximum principle fails to yield a complete solution. An example of a problem with singular control is the optimization of the thrust of a missile flying at a constant altitude and which is launched at low speed. Here the problem is one of bang-bang control at maximum possible thrust until the singular arc is reached. Then the solution to the singular control provides a lower variable thrust until burnout. At that point, bang-bang control provides that the control or thrust go to its minimum value of zero. This solution is the foundation of the boost-sustain rocket motor profile widely used today to maximize missile performance.\nApplications.\nThere are a wide variety of applications for trajectory optimization, primarily in robotics: industry, manipulation, walking, path-planning, and aerospace. It can also be used for modeling and estimation.\nRobotic manipulators.\nDepending on the configuration, open-chain robotic manipulators require a degree of trajectory optimization. For instance, a robotic arm with 7 joints and 7 links (7-DOF) is a redundant system where one Cartesian position of an end-effector can correspond to an infinite number of joint angle positions; this redundancy can be used to optimize a trajectory to, for example, avoid any obstacles in the workspace or minimize the torque in the joints.\nQuadrotor helicopters.\nTrajectory optimization is often used to compute trajectories for quadrotor helicopters. These applications typically use highly specialized algorithms.\nOne interesting application shown by the U.Penn GRASP Lab is computing a trajectory that allows a quadrotor to fly through a hoop as it is thrown. 
Another, this time by the ETH Zurich Flying Machine Arena, involves two quadrotors tossing a pole back and forth between them, with it balanced like an inverted pendulum. The problem of computing minimum-energy trajectories for a quadcopter has also been studied recently.\nManufacturing.\nTrajectory optimization is used in manufacturing, particularly for controlling chemical processes or for computing the desired path for robotic manipulators.\nWalking robots.\nThere are a variety of different applications for trajectory optimization within the field of walking robotics. For example, one paper used trajectory optimization of bipedal gaits on a simple model to show that walking is energetically favorable for moving at a low speed and running is energetically favorable for moving at a high speed.\nLike in many other applications, trajectory optimization can be used to compute a nominal trajectory, around which a stabilizing controller is built.\nTrajectory optimization can be applied to detailed motion planning of complex humanoid robots, such as Atlas.\nFinally, trajectory optimization can be used for path-planning of robots with complicated dynamics constraints, using reduced complexity models.\nAerospace.\nFor tactical missiles, the flight profiles are determined by the thrust and lift histories. These histories can be controlled by a number of means including such techniques as using an angle of attack command history or an altitude/downrange schedule that the missile must follow. Each combination of missile design factors, desired missile performance, and system constraints results in a new set of optimal control parameters.\nTrajectory optimization techniques.\nThe techniques for solving any optimization problem can be divided into two categories: indirect and direct. An indirect method works by analytically constructing the necessary and sufficient conditions for optimality, which are then solved numerically. 
A direct method attempts a direct numerical solution by constructing a sequence of continually improving approximations to the optimal solution.\nThe optimal control problem is an infinite-dimensional optimization problem, since the decision variables are functions, rather than real numbers. All solution techniques perform transcription, a process by which the trajectory optimization problem (optimizing over functions) is converted into a constrained parameter optimization problem (optimizing over real numbers). Generally, this constrained parameter optimization problem is a non-linear program, although in special cases it can be reduced to a quadratic program or linear program.\nSingle shooting.\nSingle shooting is the simplest type of trajectory optimization technique. The basic idea is similar to how you would aim a cannon: pick a set of parameters for the trajectory, simulate the entire thing, and then check to see if you hit the target. The entire trajectory is represented as a single segment, with a single constraint, known as a defect constraint, requiring that the final state of the simulation matches the desired final state of the system. Single shooting is effective for problems that are either simple or have an extremely good initialization. Both the indirect and direct formulation tend to have difficulties otherwise.\nMultiple shooting.\nMultiple shooting is a simple extension to single shooting that renders it far more effective. Rather than representing the entire trajectory as a single simulation (segment), the algorithm breaks the trajectory into many shorter segments, and a defect constraint is added between each. The result is a large sparse non-linear program, which tends to be easier to solve than the small dense programs produced by single shooting.\nDirect collocation.\nDirect collocation methods work by approximating the state and control trajectories using polynomial splines. These methods are sometimes referred to as direct transcription. 
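The cannon analogy for single shooting can be sketched as follows: the single decision variable is the launch angle, the trajectory is simulated forward, and the defect is the landing miss distance, driven to zero here by bisection rather than a general NLP solver. All physical constants are illustrative assumptions:

```python
import math

# Single-shooting sketch: choose a launch angle, simulate the whole shot
# (forward Euler with quadratic drag), and measure the defect at the end.
def miss(angle, speed=50.0, drag=0.001, target=120.0, dt=1e-3):
    """Return landing_x - target; a root of this function hits the target."""
    x, y = 0.0, 0.0
    vx, vy = speed * math.cos(angle), speed * math.sin(angle)
    while y >= 0.0:
        v = math.hypot(vx, vy)
        vx -= drag * v * vx * dt               # drag opposes horizontal motion
        vy -= (9.81 + drag * v * vy) * dt      # gravity plus drag
        x += vx * dt
        y += vy * dt
    return x - target

# Drive the defect to zero by bisection on an angle bracket where it
# changes sign (a short shot at a low angle, an overshoot near 45 degrees).
lo, hi = 0.05, math.pi / 4
for _ in range(50):
    mid = 0.5 * (lo + hi)
    if miss(lo) * miss(mid) <= 0.0:
        hi = mid
    else:
        lo = mid

best = 0.5 * (lo + hi)
print(round(math.degrees(best), 2), "degrees")  # angle whose defect is ~0
```

A production single-shooting code would hand the same defect function to a root finder or NLP solver over many parameters; the structure of "simulate, then penalize the defect" is unchanged.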
Trapezoidal collocation is a commonly used low-order direct collocation method. The dynamics, path objective, and control are all represented using linear splines, and the dynamics are satisfied using trapezoidal quadrature. Hermite-Simpson Collocation is a common medium-order direct collocation method. The state is represented by a cubic-Hermite spline, and the dynamics are satisfied using Simpson quadrature.\nOrthogonal collocation.\nOrthogonal collocation is technically a subset of direct collocation, but the implementation details are so different that it can reasonably be considered its own set of methods. Orthogonal collocation differs from direct collocation in that it typically uses high-order splines, and each segment of the trajectory might be represented by a spline of a different order. The name comes from the use of orthogonal polynomials in the state and control splines.\nPseudospectral discretization.\nIn pseudospectral discretization the entire trajectory is represented by a collection of basis functions in the time domain (independent variable). The basis functions need not be polynomials. Pseudospectral discretization is also known as spectral collocation. When used to solve a trajectory optimization problem whose solution is smooth, a pseudospectral method will achieve spectral (exponential) convergence. If the trajectory is not smooth, the convergence is still very fast, faster than Runge-Kutta methods.\nTemporal Finite Elements.\nIn 1990 Dewey H. Hodges and Robert R. Bless proposed a weak Hamiltonian finite element method for optimal control problems. The idea was to derive a weak variational form of first order necessary conditions for optimality, discretise the time domain into finite intervals and use a simple zero order polynomial representation of states, controls and adjoints over each interval.\nDifferential dynamic programming.\nDifferential dynamic programming is a bit different from the other techniques described here. 
In particular, it does not cleanly separate the transcription and the optimization. Instead, it does a sequence of iterative forward and backward passes along the trajectory. Each forward pass satisfies the system dynamics, and each backward pass satisfies the optimality conditions for control. Eventually, this iteration converges to a trajectory that is both feasible and optimal.\nComparison of techniques.\nThere are many techniques to choose from when solving a trajectory optimization problem. There is no best method, but some methods might do a better job on specific problems. This section provides a rough understanding of the trade-offs between methods.\nIndirect vs. direct methods.\nWhen solving a trajectory optimization problem with an indirect method, you must explicitly construct the adjoint equations and their gradients. This is often difficult to do, but it gives an excellent accuracy metric for the solution. Direct methods are much easier to set up and solve, but do not have a built-in accuracy metric. As a result, direct methods are more widely used, especially in non-critical applications. Indirect methods still have a place in specialized applications, particularly aerospace, where accuracy is critical.\nOne place where indirect methods have particular difficulty is on problems with path inequality constraints. These problems tend to have solutions for which the constraint is partially active. When constructing the adjoint equations for an indirect method, the user must explicitly write down when the constraint is active in the solution, which is difficult to know a priori. One solution is to use a direct method to compute an initial guess, which is then used to construct a multi-phase problem where the constraint is prescribed. The resulting problem can then be solved accurately using an indirect method.\nShooting vs. 
collocation.\nSingle shooting methods are best used for problems where the control is very simple (or there is an extremely good initial guess). An example is a satellite mission planning problem where the only control is the magnitude and direction of an initial impulse from the engines.\nMultiple shooting tends to be good for problems with relatively simple control, but complicated dynamics. Although path constraints can be used, they make the resulting nonlinear program relatively difficult to solve.\nDirect collocation methods are good for problems where the accuracy of the control and the state are similar. These methods tend to be less accurate than others (due to their low order), but are particularly robust for problems with difficult path constraints.\nOrthogonal collocation methods are best for obtaining high-accuracy solutions to problems where the accuracy of the control trajectory is important. Some implementations have trouble with path constraints. These methods are particularly good when the solution is smooth.", "Automation-Control": 0.7713271379, "Qwen2": "Yes"} {"id": "30729045", "revid": "35887757", "url": "https://en.wikipedia.org/wiki?curid=30729045", "title": "International Conference on Autonomous Agents and Multiagent Systems", "text": "The International Conference on Autonomous Agents and Multiagent Systems or AAMAS is the leading scientific conference for research in the areas of artificial intelligence, autonomous agents, and multiagent systems. It is annually organized by a non-profit organization called the International Foundation for Autonomous Agents and Multiagent Systems (IFAAMAS).\nHistory.\nThe AAMAS conference is a merger of three major international conferences/workshops, namely the International Conference on Autonomous Agents (AGENTS), the International Conference on Multi-Agent Systems (ICMAS), and the International Workshop on Agent Theories, Architectures, and Languages (ATAL).
As such, the joint conference provides a single forum for discussing research in these areas.\nActivities.\nBesides the main program, which consists of a main track, an industry and applications track, and a couple of special area tracks, AAMAS also hosts over 20 workshops (e.g., AOSE, COIN, DALT, and ProMAS) and many tutorials. There is also a demonstration session and a doctoral symposium. Finally, each year AAMAS presents several awards, most notably the IFAAMAS Influential Paper Award. Its proceedings are available online.", "Automation-Control": 0.9957734346, "Qwen2": "Yes"} {"id": "55696911", "revid": "40728885", "url": "https://en.wikipedia.org/wiki?curid=55696911", "title": "Decentralized partially observable Markov decision process", "text": "The decentralized partially observable Markov decision process (Dec-POMDP) is a model for coordination and decision-making among multiple agents. It is a probabilistic model that can consider uncertainty in outcomes, sensors and communication (i.e., costly, delayed, noisy or nonexistent communication).\nIt is a generalization of a Markov decision process (MDP) and a partially observable Markov decision process (POMDP) to consider multiple decentralized agents.\nDefinition.\nFormal definition.\nA Dec-POMDP is a 7-tuple formula_1, where\nAt each time step, each agent takes an action formula_13, the state updates based on the transition function formula_14 (using the current state and the joint action), each agent receives an observation based on the observation function formula_15 (using the next state and the joint action), and a reward is generated for the whole team based on the reward function formula_16. The goal is to maximize the expected cumulative reward over a finite or infinite number of steps. These time steps repeat until some given horizon (the finite-horizon case) or forever (the infinite-horizon case).
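One time step of the Dec-POMDP dynamics just described can be sketched with a toy two-agent example. The states, actions, observations and table values below are invented for illustration, not taken from any standard benchmark; the point is only the signature of one step: a joint action maps the state forward via the transition function, each agent receives only its own observation, and a single team reward is produced.

```python
import random

# Illustrative two-agent Dec-POMDP tables (hypothetical values).
# T: (state, joint_action) -> distribution over next states
# O: (next_state, joint_action) -> (observation_agent1, observation_agent2)
# R: (state, joint_action) -> team reward
T = {
    ("s0", ("a", "a")): {"s1": 1.0},
    ("s0", ("a", "b")): {"s0": 1.0},
    ("s1", ("a", "a")): {"s1": 0.5, "s0": 0.5},
}
O = {
    ("s0", ("a", "a")): ("o_none", "o_none"),
    ("s0", ("a", "b")): ("o_none", "o_none"),
    ("s1", ("a", "a")): ("o_goal", "o_goal"),
}
R = {
    ("s0", ("a", "a")): 1.0,
    ("s0", ("a", "b")): 0.0,
    ("s1", ("a", "a")): 2.0,
}

def step(state, joint_action, rng):
    """One Dec-POMDP time step: sample the next state, emit per-agent
    observations, and return the single team reward."""
    dist = T[(state, joint_action)]
    r, acc, next_state = rng.random(), 0.0, None
    for s2, p in dist.items():
        acc += p
        if r <= acc:
            next_state = s2
            break
    obs = O[(next_state, joint_action)]
    return next_state, obs, R[(state, joint_action)]

rng = random.Random(0)
s2, obs, reward = step("s0", ("a", "a"), rng)  # -> "s1", shared goal observation, reward 1.0
```

An expected discounted return would then accumulate these team rewards step by step, weighted by the discount factor in the infinite-horizon case.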
The discount factor formula_17 keeps the cumulative reward finite in the infinite-horizon case (formula_18).", "Automation-Control": 0.9324265718, "Qwen2": "Yes"} {"id": "42368490", "revid": "18872885", "url": "https://en.wikipedia.org/wiki?curid=42368490", "title": "Robust principal component analysis", "text": "Robust Principal Component Analysis (RPCA) is a modification of the widely used statistical procedure of principal component analysis (PCA) which works well with respect to \"grossly\" corrupted observations. A number of different approaches exist for Robust PCA, including an idealized version of Robust PCA, which aims to recover a low-rank matrix L0 from highly corrupted measurements M = L0 + S0. This decomposition into low-rank and sparse matrices can be achieved by techniques such as the Principal Component Pursuit method (PCP), Stable PCP, Quantized PCP, Block based PCP, and Local PCP. Optimization methods are then used, such as the Augmented Lagrange Multiplier Method (ALM), the Alternating Direction Method (ADM), Fast Alternating Minimization (FAM), Iteratively Reweighted Least Squares (IRLS), or alternating projections (AP).\nAlgorithms.\nNon-convex method.\nThe 2014 guaranteed algorithm for the robust PCA problem (with the input matrix being formula_1) is an alternating minimization type algorithm. The computational complexity is formula_2, where the input is the superposition of a low-rank matrix (of rank formula_3) and a sparse matrix of dimension formula_4, and formula_5 is the desired accuracy of the recovered solution, i.e., formula_6, where formula_7 is the true low-rank component and formula_8 is the estimated or recovered low-rank component.
Intuitively, this algorithm performs projections of the residual onto the set of low-rank matrices (via the SVD operation) and sparse matrices (via entry-wise hard thresholding) in an alternating manner: that is, a low-rank projection of the difference between the input matrix and the sparse matrix obtained at a given iteration, followed by a sparse projection of the difference between the input matrix and the low-rank matrix obtained in the previous step, iterating the two steps until convergence.\nThis alternating projections algorithm was later improved by an accelerated version, coined AccAltProj. The acceleration is achieved by applying a tangent space projection before projecting the residual onto the set of low-rank matrices. This trick improves the computational complexity to formula_9 with a much smaller constant in front, while maintaining the theoretically guaranteed linear convergence.\nAnother fast version of the accelerated alternating projections algorithm is IRCUR. It uses the structure of the CUR decomposition within the alternating projections framework to dramatically reduce the computational complexity of RPCA to formula_10\nConvex relaxation.\nThis method consists of relaxing the rank constraint formula_11 in the optimization problem to the nuclear norm formula_12 and the sparsity constraint formula_13 to the formula_14-norm formula_15. The resulting program can be solved using methods such as the method of Augmented Lagrange Multipliers.\nDeep-learning augmented method.\nSome recent works propose RPCA algorithms with learnable/training parameters. Such a learnable/trainable algorithm can be unfolded as a deep neural network whose parameters can be learned via machine learning techniques from a given dataset or problem distribution.
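The alternating projections scheme described above can be sketched in miniature. To stay self-contained, this toy uses a 3x3 matrix whose low-rank part is rank one, replaces the full SVD with a power-iteration rank-1 approximation, and uses a fixed hard threshold chosen by hand for this example; practical AltProj implementations use a truncated SVD and a decaying threshold schedule.

```python
# Toy alternating projections for M = L0 + S0 (rank-1 L0, one sparse spike).
def transpose(A): return [list(col) for col in zip(*A)]
def matvec(A, x): return [sum(a * b for a, b in zip(row, x)) for row in A]
def norm(x): return sum(v * v for v in x) ** 0.5

def rank1_projection(A, iters=100):
    """Best rank-1 approximation of A via power iteration (stand-in for the SVD)."""
    v = [1.0] * len(A[0])
    At = transpose(A)
    for _ in range(iters):
        w = matvec(At, matvec(A, v))
        n = norm(w)
        if n == 0.0:
            break
        v = [x / n for x in w]
    Av = matvec(A, v)
    return [[Av[i] * v[j] for j in range(len(v))] for i in range(len(A))]

def hard_threshold(A, tau):
    """Entry-wise projection onto sparse matrices: keep only large entries."""
    return [[x if abs(x) > tau else 0.0 for x in row] for row in A]

def alt_proj(M, tau, n_iter=100):
    S = hard_threshold(M, tau)
    for _ in range(n_iter):
        L = rank1_projection([[m - s for m, s in zip(mr, sr)] for mr, sr in zip(M, S)])
        S = hard_threshold([[m - l for m, l in zip(mr, lr)] for mr, lr in zip(M, L)], tau)
    return L, S

# L0 = outer([1,2,3], [1,1,1]); S0 puts a spike of 10 at entry (0, 2).
M = [[1.0, 1.0, 11.0], [2.0, 2.0, 2.0], [3.0, 3.0, 3.0]]
L, S = alt_proj(M, tau=3.5)
```

For this planted example the iteration recovers the decomposition: S converges to the single spike and L to the rank-1 background.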
The learned algorithm will have superior performance on the corresponding problem distribution.\nApplications.\nRPCA has many important real-life applications, particularly when the data under study can naturally be modeled as a low-rank plus a sparse contribution. The following examples are inspired by contemporary challenges in computer science; depending on the application, either the low-rank component or the sparse component could be the object of interest:\nVideo surveillance.\nGiven a sequence of surveillance video frames, it is often required to identify the activities that stand out from the background. If we stack the video frames as columns of a matrix M, then the low-rank component L0 naturally corresponds to the stationary background and the sparse component S0 captures the moving objects in the foreground.\nFace recognition.\nImages of a convex, Lambertian surface under varying illuminations span a low-dimensional subspace. This is one of the reasons for the effectiveness of low-dimensional models for imagery data. In particular, it is easy to approximate images of a human face by a low-dimensional subspace. Correctly retrieving this subspace is crucial in many applications such as face recognition and alignment. It turns out that RPCA can be applied successfully to this problem to exactly recover the face.\nResources and libraries.\nLibraries.\nThe LRSLibrary (developed by Andrews Sobral) provides a collection of low-rank and sparse decomposition algorithms in MATLAB. The library was designed for moving object detection in videos, but it can also be used for other computer vision / machine learning tasks.
Currently, the LRSLibrary offers more than 100 algorithms based on \"matrix\" and \"tensor\" methods.", "Automation-Control": 0.6184529066, "Qwen2": "Yes"} {"id": "42405578", "revid": "5837138", "url": "https://en.wikipedia.org/wiki?curid=42405578", "title": "Internet Gold Golden Lines", "text": "Internet Gold Golden Lines (Internet Gold) is a principal communication service group in Israel. The company was founded in 1992 and is headquartered in Israel. It is a subsidiary of Eurocom Communications Ltd., owned by Shaul Elovitch. It has subsidiaries such as B Communications (formerly 012 Smile Communications) and GoldMind Ltd. (formerly Smile.Media Ltd.). The company was formerly known as Euronet Golden Lines Ltd. and changed its name to Internet Gold - Golden Lines Ltd. in 1999.\nBackground.\nThe company operates in four areas: Bezeq Domestic Fixed-line Communications, Pelephone Communications Ltd., Bezeq International Ltd., and D.B.S. Satellite Service Ltd.\nAs of April 4, 2013, the company had a market capitalization of $789.67 million with an enterprise value of $3.79 billion.\nThe company’s subsidiary, B Communications, is the controlling shareholder of Bezeq, holding a 31.37% interest; Bezeq is the largest communication service provider in Israel. The company owns a 75.3% interest in its subsidiary, 012 Smile Communications Ltd, which is one of Israel’s major Internet and international telephony service providers and the largest provider of enterprise/IT integration services. In 2010, 012 Smile Communications completed the acquisition of all shares of Golden Lines Ltd. Smile Media Ltd.
is 100% owned by Internet Gold and is engaged in the Internet portal and e-commerce business.", "Automation-Control": 0.942532599, "Qwen2": "Yes"} {"id": "10095795", "revid": "7903804", "url": "https://en.wikipedia.org/wiki?curid=10095795", "title": "Soccer robot", "text": "A soccer robot is a specialized autonomous mobile robot that is used to play variants of soccer.\nThe main organised competitions are the RoboCup and FIRA tournaments, played each year.\nThe RoboCup contest currently has a number of soccer leagues:\nAdditionally, there is a RoboCupJunior league for younger students.\nqfix Soccer robot.\nThe qfix soccer robot \"Terminator\" is an omnidrive robot that can be used for RoboCup Junior. It includes a kicker and a dribbler, as well as a controller board with an Atmel controller.\nThe robot can be programmed using the GNU GCC compiler.\nGraupner RC-SOCCERBOT.\nThe Graupner \"RC-SOCCERBOT\" is a mobile robot platform developed by qfix which can be used as a radio-controlled toy playing soccer with ping-pong balls. As users gain more experience in robotics, they can also implement C++ programs on the robot.", "Automation-Control": 0.969907999, "Qwen2": "Yes"} {"id": "10099552", "revid": "28481209", "url": "https://en.wikipedia.org/wiki?curid=10099552", "title": "Zakai equation", "text": "In filtering theory the Zakai equation is a linear stochastic partial differential equation for the un-normalized density of a hidden state. In contrast, the Kushner equation gives a non-linear stochastic partial differential equation for the normalized density of the hidden state. In principle either approach allows one to estimate the state of a dynamical system from noisy measurements, even when the system is non-linear (thus generalizing the earlier results of Wiener and Kalman for linear systems and solving a central problem in estimation theory).
The application of this approach to a specific engineering situation may be problematic, however, as these equations are quite complex. The Zakai equation is a bilinear stochastic partial differential equation. It was named after Moshe Zakai.\nOverview.\nAssume the state of the system evolves according to\nand a noisy measurement of the system state is available:\nwhere formula_3 are independent Wiener processes. Then the unnormalized conditional probability density formula_4 of the state at time t is given by the Zakai equation:\nwhere the operator formula_6\nAs previously mentioned, formula_7 is an unnormalized density and thus does not necessarily integrate to 1. After solving for formula_7, integration and normalization can be done if desired (an extra step not required in the Kushner approach).\nNote that if the last term on the right-hand side is omitted (by choosing h identically zero), the result is a nonstochastic PDE: the familiar Fokker–Planck equation, which describes the evolution of the state when no measurement information is available.", "Automation-Control": 0.9560777545, "Qwen2": "Yes"} {"id": "23347838", "revid": "398607", "url": "https://en.wikipedia.org/wiki?curid=23347838", "title": "UNIO Satu Mare", "text": "UNIO, established in 1911, is a large company specialised in machine building, mechanical assembly works, and the assembly, commissioning and servicing of hydraulic, pneumatic and electric equipment for the mining, energy and assembly industries.
It is one of the largest companies in Satu Mare.\nIn January 2005, UNIO signed a contract with the German industrial company KUKA for the manufacturing of several assembly lines for the carmaker Mercedes-Benz.", "Automation-Control": 0.80112046, "Qwen2": "Yes"} {"id": "25264546", "revid": "27823944", "url": "https://en.wikipedia.org/wiki?curid=25264546", "title": "Evolutionary developmental robotics", "text": "Evolutionary developmental robotics (evo-devo-robo for short) refers to methodologies that systematically integrate evolutionary robotics, epigenetic robotics and morphogenetic robotics to study the evolution, physical and mental development, and learning of natural intelligent systems in robotic systems. The field was formally suggested and fully discussed in a published paper and further discussed in a published dialogue.\nThe theoretical foundation of evo-devo-robo includes evolutionary developmental biology (evo-devo), evolutionary developmental psychology, developmental cognitive neuroscience, etc. Further discussions on evolution, development and learning in robotics and design can be found in a number of papers, including papers on hardware systems and computing tissues.", "Automation-Control": 0.747458756, "Qwen2": "Yes"} {"id": "21295680", "revid": "35498457", "url": "https://en.wikipedia.org/wiki?curid=21295680", "title": "Robotics middleware", "text": "Robotics middleware is middleware to be used in complex robot control software systems.\nIt can be described as \"software glue\" that makes it easier for robot builders to focus on their specific problem area.\nRobotics middleware projects.\nA wide variety of projects for robotics middleware exist, but none of them dominates - and in fact many robotic systems do not use any middleware.
Middleware products rely on a wide range of different standards, technologies, and approaches that make their use and interoperation difficult, and some developers may prefer to integrate their systems themselves.\nPlayer Project.\nThe Player Project (formerly the \"Player/Stage Project\") is a project to create free software for research into robotics and sensor systems. Its components include the \"Player\" network server and the \"Stage\" robot platform simulators. Although accurate statistics are hard to obtain, Player is one of the most popular open-source robot interfaces in research and post-secondary education. Most of the major intelligent robotics journals and conferences regularly publish papers featuring real and simulated robot experiments using Player and Stage.\nRT-middleware.\nRT-middleware is a common platform standard for robots based on distributed object technology. RT-middleware supports the construction of various networked robotic systems by the integration of various network-enabled robotic elements called RT-Components. The specification standard of RT-Components is discussed and defined by the Object Management Group (OMG).\nUrbi.\nUrbi is an open-source, cross-platform software platform in C++ used to develop applications for robotics and complex systems. It is based on the UObject distributed C++ component architecture. It also includes the urbiscript orchestration language, which is a parallel and event-driven script language. UObject components can be plugged into urbiscript and appear as native objects that can be scripted to specify their interactions and data exchanges.
UObjects can be linked to the urbiscript interpreter, or executed as autonomous processes in \"remote\" mode, either in another thread, another process, a machine on the local network, or a machine on a distant network.\nMIRO.\nMiro is a distributed object-oriented framework for mobile robot control, based on CORBA (Common Object Request Broker Architecture) technology.\nThe Miro core components have been developed with the aid of ACE (Adaptive Communications Environment), an object-oriented multi-platform framework for OS-independent interprocess, network and real-time communication. They use TAO (The ACE ORB) as their ORB (Object Request Broker), a CORBA implementation designed for high-performance and real-time applications.\nCurrently supported platforms include Pioneers, the B21, some soccer robots and various robotic sensors.\nOrca.\nOrca describes its goals as:\nThey also state: \"To be successful, we think that a framework with such objectives must be: general, flexible and extensible; sufficiently robust, high-performance and full-featured for use in commercial applications, yet sufficiently simple for experimentation in university research environments.\"\nThey describe their approach as:\nOrca software is released under LGPL and GPL licenses.\nOpenRDK.\nOpenRDK is an open-source software framework for robotics for developing loosely coupled modules. It provides transparent concurrency management, inter-process (via sockets) and intra-process (via shared memory) blackboard-based communication, and a linking technique with input/output data ports that supports conceptual system design. Modules for connecting to simulators and generic robot drivers are provided.\nRock.\nRock (Robot Construction Kit) is a software framework for the development of robotic systems. The underlying component model is based on the Orocos RTT (Real Time Toolkit).
Rock provides all the tools required to set up and run high-performance and reliable robotic systems for a wide variety of applications in research and industry. It contains a rich collection of ready-to-use drivers and modules for use in your own system, and can easily be extended by adding new components.\nISAAC SDK / Simulation.\nThe NVIDIA Isaac Software Development Kit (SDK) is a developer toolbox for accelerating the development and deployment of Artificial Intelligence-powered robots. The SDK includes the Isaac Robot Engine, packages with high-performance robotics algorithms (to perform perception and navigation), and hardware reference applications. Isaac Sim is a virtual robotics laboratory and a high-fidelity 3D world simulator. It accelerates research, design, and development in robotics by reducing cost and risk. Developers can quickly and easily train and test their robots in detailed, highly realistic scenarios. An open-source community version is available on GitHub, with details of the supported hardware platform (including a bill of materials) for the Kaya robot.\nROS.\nROS (Robot Operating System) is a collection of software frameworks for robot software development on a heterogeneous computer cluster. ROS provides standard operating system services such as hardware abstraction, low-level device control, implementation of commonly used functionality, message-passing between processes, and package management.\nYARP.\nYARP is an open-source software package, written in C++, for interconnecting sensors, processors, and actuators in robots.\nDDX.\nDDX (Dynamic Data eXchange) is (Linux/BSD/Unix) middleware developed by CSIRO to provide a lightweight real-time publish/subscribe service to distributed robot controllers. DDX allows a coalition of programs to share data at run-time through an efficient shared-memory mechanism. Multiple machines can be linked by means of a global naming service and, when needed, data is multicast across machines.
DDX was developed to automate a number of large mining machines, including draglines, LHD trucks, excavators and rock-breakers.", "Automation-Control": 0.8840990067, "Qwen2": "Yes"} {"id": "5425217", "revid": "7611264", "url": "https://en.wikipedia.org/wiki?curid=5425217", "title": "Full state feedback", "text": "Full state feedback (FSF), or pole placement, is a method employed in feedback control system theory to place the closed-loop poles of a plant in pre-determined locations in the s-plane. Placing poles is desirable because the location of the poles corresponds directly to the eigenvalues of the system, which control the characteristics of the response of the system. The system must be controllable in order to implement this method.\nPrinciple.\nIf the closed-loop dynamics can be represented by the state space equation (see State space (controls))\nwith output equation\nthen the poles of the system transfer function are the roots of the characteristic equation given by\nFull state feedback is utilized by commanding the input vector formula_4. Consider an input proportional (in the matrix sense) to the state vector,\nSubstituting into the state space equations above, we have\nThe poles of the FSF system are given by the characteristic equation of the matrix formula_8, formula_9. Comparing the terms of this equation with those of the desired characteristic equation yields the values of the feedback matrix formula_10 which force the closed-loop eigenvalues to the pole locations specified by the desired characteristic equation.\nExample of FSF.\nConsider a system given by the following state space equations:\nThe uncontrolled system has open-loop poles at formula_12 and formula_13. These poles are the eigenvalues of the formula_14 matrix and they are the roots of formula_15. Suppose, for considerations of the response, we wish the controlled system eigenvalues to be located at formula_12 and formula_17, which are not the poles we currently have.
The desired characteristic equation is then formula_18, from formula_19.\nFollowing the procedure given above, the FSF controlled system characteristic equation is\nwhere\nUpon setting this characteristic equation equal to the desired characteristic equation, we find\nTherefore, setting formula_5 forces the closed-loop poles to the desired locations, affecting the response as desired.\nThis only works for single-input systems. Multiple-input systems will have a formula_10 matrix that is not unique. Choosing the best formula_10 values is therefore not trivial. A linear-quadratic regulator might be used for such applications.", "Automation-Control": 0.9984340668, "Qwen2": "Yes"} {"id": "5429905", "revid": "1086983627", "url": "https://en.wikipedia.org/wiki?curid=5429905", "title": "Small-world routing", "text": "In network theory, small-world routing refers to routing methods for small-world networks. Networks of this type are peculiar in that relatively short paths exist between any two nodes. Determining these paths, however, can be a difficult problem from the perspective of an individual routing node in the network if no further information is known about the network as a whole.\nGreedy routing.\nNearly every solution to the problem of routing in small-world networks involves the application of greedy routing. This sort of routing depends on a relative reference point by which any node in the path can choose the next node it believes is closest to the destination. That is, there must be something to be greedy about. For example, this could be geographic location, IP address, etc. In the case of Milgram's original small-world experiment, participants knew the location and occupation of the final recipient and could therefore forward messages based on those parameters.\nConstructing a reference base.\nGreedy routing will not readily work when there is no obvious reference base.
This can occur, for example, in overlay networks where information about the destination's location in the underlying network is not available. Friend-to-friend networks are a particular example of this problem. In such networks, trust is ensured by the fact that you only know underlying information about nodes with whom you are already a neighbor.\nOne solution in this case is to impose some sort of artificial addressing on the nodes in such a way that this addressing can be effectively used by greedy routing methods. A 2005 paper by a developer of the Freenet Project discusses how this can be accomplished in friend-to-friend networks. Given the assumption that these networks exhibit small-world properties, often as the result of real-world or acquaintance relationships, it should be possible to recover an embedded Kleinberg small-world graph. This is accomplished by selecting random pairs of nodes and potentially swapping them based on an objective function that minimizes the product of all the distances between any given node and its neighbors.\nAn important problem involved with this solution is the possibility of local minima. This can occur if nodes are in a situation that is optimal only when considering a local neighborhood, while ignoring the possibility of a higher optimality resulting from swaps with distant nodes. In the above paper, the authors proposed a simulated annealing method in which less-than-optimal swaps were made with a small probability. This probability was proportional to the value of making the swaps. Another possible metaheuristic optimization method is a tabu search, which adds a memory to the swap decision.
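The swap-based embedding described above can be sketched as follows. This is an illustrative reconstruction, not the Freenet implementation: nodes of a scrambled ring lattice carry ring addresses, random pairs tentatively swap addresses, and the objective is the log of the product of neighbor distances (a sum of logs), accepted greedily with an occasional uphill move in the simulated-annealing style, while remembering the best addressing seen.

```python
import math, random

def ring_distance(a, b, n):
    """Distance between two addresses on a ring of n positions."""
    d = abs(a - b) % n
    return min(d, n - d)

def objective(pos, edges, n):
    # Log of the product of neighbor distances: the quantity to be minimized.
    return sum(math.log(ring_distance(pos[u], pos[v], n)) for u, v in edges)

def anneal_embedding(edges, n, steps=4000, t0=1.0, seed=1):
    rng = random.Random(seed)
    pos = list(range(n))
    rng.shuffle(pos)                      # scrambled initial addressing
    cur = objective(pos, edges, n)
    best, best_obj = pos[:], cur
    for k in range(steps):
        t = t0 * (1.0 - k / steps) + 1e-9     # simple cooling schedule
        i, j = rng.randrange(n), rng.randrange(n)
        if i == j:
            continue
        pos[i], pos[j] = pos[j], pos[i]       # tentative address swap
        new = objective(pos, edges, n)
        if new <= cur or rng.random() < math.exp((cur - new) / t):
            cur = new
            if new < best_obj:
                best, best_obj = pos[:], new
        else:
            pos[i], pos[j] = pos[j], pos[i]   # revert the swap
    return best, best_obj

# A 20-node ring lattice whose addresses have been scrambled.
n = 20
edges = [(i, (i + 1) % n) for i in range(n)]
best, best_obj = anneal_embedding(edges, n)
```

A tabu variant would additionally keep a short history of recent swaps and exclude those nodes from re-selection for a few rounds.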
In its simplest form, a limited history of past swaps is remembered so that they will be excluded from the list of possible swapping nodes.\nThis method for constructing a reference base can also be adapted to distributed settings, where decisions can only be made at the level of individual nodes that have no knowledge of the overall network. It turns out that the only modification necessary is in the method for selecting pairs of random nodes. In a distributed setting, this is done by having each node periodically send out a random walker terminating at a node to be considered for swapping.\nThe Kleinberg model.\nThe Kleinberg model of a network is effective at demonstrating the effectiveness of greedy small-world routing. The model uses an n x n grid of nodes to represent a network, where each node is connected with an undirected edge to its neighbors. To give it the \"small world\" effect, a number of long-range edges are added to the network that tend to favor nodes closer in distance rather than farther. When adding edges, the probability of connecting some random vertex formula_1 to another random vertex w is proportional to formula_2, where formula_3 is the clustering exponent.\nGreedy routing in the Kleinberg model.\nIt is easy to see that a greedy algorithm, without using the long-range edges, can navigate between random vertices formula_4 on the grid in formula_5 time. By following the guaranteed connections to our neighbors, we can move one unit at a time in the direction of our destination. This is also the case when the clustering exponent formula_3 is large and the \"long range\" edges end up staying very close; we simply do not take advantage of the weaker ties in this model. When formula_7, the long-range edges are uniformly connected at random, which means the long-range edges are \"too random\" to be used efficiently for decentralized search.
Kleinberg has shown that the optimal clustering exponent for this model is formula_8, an inverse-square distribution.\nTo see why this is the case, consider a circle of radius r drawn around the initial node; it will have nodal density formula_9, where n is the number of nodes in the circular area. As this circle is expanded further out, the number of nodes in the given area increases proportionally to formula_10, while the probability of having a random link with any particular node remains proportional to formula_11, meaning the probability of the original node having a weak tie with any node a given distance away is effectively independent of distance. Therefore, it is concluded that with formula_8, long-range edges are evenly distributed over all distances, which is effective for funneling to the final destination.\nSome structured peer-to-peer systems based on DHTs implement variants of Kleinberg's small-world topology to enable efficient routing within peer-to-peer networks with limited node degrees.", "Automation-Control": 0.6097431183, "Qwen2": "Yes"} {"id": "21481156", "revid": "38359508", "url": "https://en.wikipedia.org/wiki?curid=21481156", "title": "Pitch drop-back", "text": "Pitch drop-back is the phenomenon by which an aircraft that is perturbed in flight-path angle from its trim position by a step input exhibits a response indicative of a second-order system.\nA pilot who actuates an elevator input may find that the aircraft then \"droops\" or \"drops back\" to a position further toward the start position. The phenomenon is particularly marked in tilt-rotor aircraft.
Pitch drop-back may be controlled using a Stability Augmentation System or Stability Control and Augmentation System.", "Automation-Control": 0.9737648368, "Qwen2": "Yes"} {"id": "30832164", "revid": "5202324", "url": "https://en.wikipedia.org/wiki?curid=30832164", "title": "Weighting pattern", "text": "A weighting pattern for a linear dynamical system describes the relationship between an input formula_1 and output formula_2. Given the time-variant system described by\nthen the output can be written as\nwhere formula_6 is the weighting pattern for the system. For such a system, the weighting pattern is formula_7 such that formula_8 is the state transition matrix.\nA weighting pattern determines the input-output behavior of a system, but if there exists a realization for this weighting pattern then there exist many realizations that do so.\nLinear time invariant system.\nFor an LTI system, the weighting pattern is:\nwhere formula_10 is the matrix exponential.", "Automation-Control": 0.9344670177, "Qwen2": "Yes"} {"id": "49976857", "revid": "37843727", "url": "https://en.wikipedia.org/wiki?curid=49976857", "title": "Tardiness (scheduling)", "text": "In scheduling, tardiness is a measure of the delay in executing certain operations and earliness is a measure of finishing operations before their due time. The operations may depend on each other and on the availability of equipment to perform them.\nTypical examples include job scheduling in manufacturing and data delivery scheduling in data processing networks.\nIn a manufacturing environment, inventory management considers both tardiness and earliness undesirable. Tardiness involves backlog issues such as customer compensation for delays and loss of goodwill. Earliness incurs expenses for storage of the manufactured items and ties up capital.\nMathematical formulations.\nIn an environment with multiple jobs, let the deadline be formula_1 and the completion time be formula_2 of job formula_3. 
Then for job formula_3\nIn scheduling, common objective functions are formula_8 or weighted versions of these sums, formula_9, where every job comes with a weight formula_10. The weight is a representation of job cost, priority, etc.\nIn a large number of cases the problems of optimizing these functions are NP-hard.", "Automation-Control": 0.9468539357, "Qwen2": "Yes"} {"id": "10353119", "revid": "1165827118", "url": "https://en.wikipedia.org/wiki?curid=10353119", "title": "Realization (systems)", "text": "In systems theory, a realization of a state space model is an implementation of a given input-output behavior. That is, given an input-output relationship, a realization is a quadruple of (time-varying) matrices formula_1 such that\nwith formula_4 describing the input and output of the system at time formula_5.\nLTI System.\nFor a linear time-invariant system specified by a transfer matrix, formula_6, a realization is any quadruple of matrices formula_7 such that formula_8.\nCanonical realizations.\nAny given transfer function which is strictly proper can easily be converted into state-space form by the following approach (this example is for a 4-dimensional, single-input, single-output system):\nGiven a transfer function, expand it to reveal all coefficients in both the numerator and denominator. 
This should result in the following form:\nThe coefficients can now be inserted directly into the state-space model by the following approach:\nThis state-space realization is called controllable canonical form (also known as phase variable canonical form) because the resulting model is guaranteed to be controllable (i.e., because the control enters a chain of integrators, it has the ability to move every state).\nThe transfer function coefficients can also be used to construct another type of canonical form\nThis state-space realization is called observable canonical form because the resulting model is guaranteed to be observable (i.e., because the output exits from a chain of integrators, every state has an effect on the output).\nGeneral System.\n\"D\" = 0.\nIf we have an input formula_14, an output formula_15, and a weighting pattern formula_16 then a realization is any triple of matrices formula_17 such that formula_18 where formula_19 is the state-transition matrix associated with the realization.\nSystem identification.\nSystem identification techniques take the experimental data from a system and output a realization. Such techniques can utilize both input and output data (e.g. eigensystem realization algorithm) or can only include the output data (e.g. frequency domain decomposition). 
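The controllable canonical construction described above can be sketched numerically. This is a minimal version for a strictly proper SISO transfer function with a monic denominator; the low-to-high coefficient ordering is a convention chosen here for illustration.

```python
# Sketch: build controllable-canonical-form (A, B, C, D) for a strictly
# proper SISO transfer function with monic denominator.  Coefficients are
# ordered low power to high power (a convention assumed here).
def controllable_canonical(num, den):
    n = len(den) - 1
    assert den[-1] == 1 and len(num) <= n  # monic and strictly proper
    b = list(num) + [0.0] * (n - len(num))
    A = [[0.0] * n for _ in range(n)]
    for i in range(n - 1):
        A[i][i + 1] = 1.0                  # chain of integrators
    A[n - 1] = [-a for a in den[:n]]       # last row: -a0 ... -a_{n-1}
    B = [[0.0] for _ in range(n)]
    B[n - 1][0] = 1.0                      # control enters the last integrator
    C = [b]                                # numerator coefficients read the states
    D = [[0.0]]
    return A, B, C, D

# G(s) = (s + 2) / ((s + 1)(s + 2)(s + 3)) = (s + 2)/(s^3 + 6 s^2 + 11 s + 6)
A, B, C, D = controllable_canonical([2.0, 1.0], [6.0, 11.0, 6.0, 1.0])
assert A[2] == [-6.0, -11.0, -6.0]
```

The same coefficient lists, transposed appropriately, would yield the observable canonical form.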
Typically an input-output technique would be more accurate, but the input data is not always available.", "Automation-Control": 0.9986945987, "Qwen2": "Yes"} {"id": "39800443", "revid": "967448215", "url": "https://en.wikipedia.org/wiki?curid=39800443", "title": "Intermittent control", "text": "Intermittent control is a feedback control method which not only explains some human control systems but also has applications to control engineering.\nIn the context of control theory, intermittent control provides a spectrum of possibilities between the two extremes of continuous-time and discrete-time control: the control signal consists of a sequence of (continuous-time) parameterised trajectories whose parameters are adjusted intermittently. It is different from discrete-time control in that the control is not constant between samples; it is different from continuous-time control in that the trajectories are reset intermittently. As a class of control theory, intermittent predictive control is more general than continuous control and provides a new paradigm incorporating continuous predictive and optimal control with intermittent, open loop (ballistic) control.\nThere are at least three areas where intermittent control is relevant. Firstly, continuous-time model-based predictive control where the intermittency is associated with on-line optimisation. Secondly, event-driven control systems where the intersample interval is time varying and determined by the event times. Thirdly, explanation of physiological control systems which, in some cases, have an intermittent character. This intermittency may be due to the “computation” in the central nervous system.\nConventional sampled-data control uses a zero-order hold, which produces a piecewise-constant control signal and can be used to give a\nsampled-data implementation which approximates previously-designed continuous-time controller. 
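The zero-order-hold behaviour just described can be sketched with a scalar example: the controller runs only at sampling instants and its output is held constant in between. The plant, gain and rates below are illustrative assumptions, not taken from the article.

```python
# Conventional sampled-data control with a zero-order hold (ZOH):
# a proportional controller for the scalar plant x' = u is evaluated only
# every `steps_per_sample` integration steps and held constant in between.
def simulate_zoh(x0=1.0, kp=2.0, dt=0.001, steps_per_sample=100, n_samples=50):
    x, u, traj = x0, 0.0, []
    for k in range(n_samples * steps_per_sample):
        if k % steps_per_sample == 0:
            u = -kp * x          # controller runs at the sampling instants
        x += u * dt              # ZOH: u is piecewise constant in between
        traj.append(x)
    return traj

traj = simulate_zoh()
assert abs(traj[-1]) < 1e-3      # state regulated toward zero
```

With this sample period each sample multiplies the state by 0.8, so the hold gives a piecewise-linear decay toward zero rather than the exact continuous-time exponential; intermittent control replaces this constant hold with a system-matched intersample trajectory.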
In contrast to conventional sampled-data control, intermittent control explicitly embeds the underlying continuous-time closed-loop system in a \"system-matched\" hold which generates an open-loop intersample control trajectory based on the underlying continuous-time closed-loop control system.\nHistory.\nIntermittent control initially evolved separately in the engineering and physiological literature.\nPhysiological literature.\nThe concept of \"intermittent control\" appeared in a posthumous paper by Kenneth Craik which states “The human operator behaves basically as an intermittent correction servo”. A colleague of Kenneth Craik, Margaret Vince, related the concept of intermittency to the Psychological refractory period and provided experimental verification of intermittency. Fernando Navas and James Stark showed experimentally that human hand movements were synchronised to input signals rather than to an internal clock: in other words, the hand control system is event-driven, not clock-driven. The first detailed mathematical model of intermittency was presented by Peter Neilson, Megan Neilson, and Nicholas O’Dwyer.\nA more recent mathematical model of intermittency is given by Peter Gawthrop, Ian Loram, Martin Lakie and Henrik Gollee.\nEngineering literature.\nIn the context of Control Engineering, the term intermittent control was used by Eric Ronco, Taner Arsan and Peter Gawthrop.\nThey stated that “A conceptual, and practical difficulty with the continuous-time generalised predictive controller is solved by replacing the continuously moving horizon by an intermittently moving horizon. 
This allows slow optimisation to occur concurrently with a fast control action.” The concept of intermittent model predictive control was refined by Peter Gawthrop working with Liuping Wang, who also looked at event-driven intermittent control.\nIn a separate line of development Tomas Estrada, Hai Lin and Panos Antsaklis developed the concept of model-based control with intermittent feedback in the context of a networked control system.", "Automation-Control": 0.999109745, "Qwen2": "Yes"} {"id": "470021", "revid": "1056505121", "url": "https://en.wikipedia.org/wiki?curid=470021", "title": "Berkeley printing system", "text": "The Berkeley printing system is one of several standard architectures for printing on the Unix platform. It originated in 2.10BSD, and is used in BSD derivatives such as FreeBSD, NetBSD, OpenBSD, and DragonFly BSD. A system running this print architecture could traditionally be identified by the use of the user command \"lpr\" as the primary interface to the print system, as opposed to the System V printing system \"lp\" command.\nTypical user commands available to the Berkeley print system are:\nThe \"lpd\" program is the daemon with which those programs communicate.\nThese programs support the line printer daemon protocol, so that other machines on a network can submit jobs to a print queue on a machine running the Berkeley printing system, and so that the Berkeley printing system user commands can submit jobs to machines that support that protocol.", "Automation-Control": 0.9083541632, "Qwen2": "Yes"} {"id": "470223", "revid": "27015025", "url": "https://en.wikipedia.org/wiki?curid=470223", "title": "CSIv2", "text": "In distributed computing, CSIv2 (Common Secure Interoperability Protocol Version 2) is a protocol implementing security features for inter-ORB communication. 
It intends, in part, to address limitations of SSLIOP.\nCSIv2 also facilitates secure EJB-CORBA interoperability.", "Automation-Control": 0.9865598083, "Qwen2": "Yes"} {"id": "2008426", "revid": "1146968393", "url": "https://en.wikipedia.org/wiki?curid=2008426", "title": "Information systems technician", "text": "An information systems technician is a technician whose responsibility is maintaining communications and computer systems.\nDescription.\nInformation systems technicians operate and maintain information systems, facilitating system utilization. In many companies, these technicians assemble data sets and other details needed to build databases. This includes data management, procedure writing, writing job setup instructions, and performing program librarian functions. Information systems technicians assist in designing and coordinating the development of integrated information system databases. Information systems technicians also help maintain Internet and Intranet websites. They decide how information is presented and create digital multimedia and presentation using software and related equipment.\nInformation systems technicians install and maintain multi-platform networking computer environments, a variety of data networks, and a diverse set of telecommunications infrastructures. Information systems technicians schedule information gathering for content in a multiple system environment. Information systems technicians are responsible for the operation, programming, and configuration of many pieces of electronics, hardware and software. ITs often are also tasked to investigate, troubleshoot, and resolve end-user problems. Information systems technicians conduct ongoing assessments of short and long-term hardware and software needs for companies, developing, testing, and implementing new and revised programs.\nInformation systems technicians cooperate with other staff to inventory, maintain and manage computer and communication systems. 
Information systems technicians provide communication links and connectivity to the department in an organization, and carry out equipment modification and installation tasks. This includes:\nAdditionally, information systems technicians can conduct training and provide technical support to end-users, providing this for departments (sometimes across multiple organizations).", "Automation-Control": 0.987958014, "Qwen2": "Yes"} {"id": "49272814", "revid": "28481209", "url": "https://en.wikipedia.org/wiki?curid=49272814", "title": "Mesoscale manufacturing", "text": "Mesoscale manufacturing is the process of creating components and products in the range of approximately 0.1 mm to 5 mm with high accuracy and precision using a wide variety of engineering materials. Mesomanufacturing processes fill the gap between macro- and micromanufacturing processes and overlap both of them (see picture). Other manufacturing technologies are nanoscale ( 0.5 mm).\nApplications.\nApplications of mesomanufacturing include electronics, biotechnology, optics, medicine, avionics, communications, and other areas. Specific applications include mechanical watches, and extremely small motors and bearings; lenses for cameras and other micro parts for mobile telephones; micro-batteries, mesoscale fuel cells, microscale pumps, valves, and mixing devices for microchemical reactors; biomedical implants, microholes for fiber optics; medical devices such as stents and valves; mini nozzles for high-temperature jets; mesoscale molds; desktop- or micro-factories, and many others.\nProcesses.\nManufacturing in the mesoscale can be accomplished by scaling down macroscale manufacturing processes or scaling up nanomanufacturing processes. Macroscale techniques like mill and lathe machining have been successfully used to create features in the range of 25 µm. 
Meso machine tools (mMTs), for example a miniaturized milling machine, extend traditional macroscale techniques to the manufacture of mesoscale products. Because self-excited vibration and fatigue limit conventional machine tools, microassembly and micro- and mesoscale milling were developed to improve the maximum stiffness and dynamic operation of the milling process, which improves the overall performance of manufacturing. The development of mMTs has revealed many challenges that are specific to machining at small scales. These challenges stem from the large influence of grain size at small scales and the necessity of extremely small tolerances for both the machine tools and the measuring tools.\nLaser machining is a traditional technique that uses nanosecond pulses of ultraviolet light to create mesoscale features like holes, fillets, etc. The removal of material during laser machining is proportional to exposure time, and therefore this process can be used to create three-dimensional features.\nA less traditional technique is to use focused ion beam sputtering (FIB) to remove material. This process involves focusing a beam of ions, for example gallium, onto the workpiece, causing material to be removed. FIB sputtering has a relatively low rate of material removal and therefore has limited application.\nElectrical discharge machining (EDM) is another subtractive manufacturing process used in the mesoscale. This process requires that electricity be transferred between the tool electrode and the workpiece, and therefore it can only be used to manufacture materials that conduct electricity. 
One advantage of EDM is that it can be used on hard materials that do not work well in traditional machining processes, such as titanium.", "Automation-Control": 0.9221462607, "Qwen2": "Yes"} {"id": "5207273", "revid": "40820929", "url": "https://en.wikipedia.org/wiki?curid=5207273", "title": "SCSI standalone enclosure services", "text": "SCSI standalone enclosure services is a computer protocol used mainly with disk storage enclosures. It allows a host computer to communicate with the enclosure to access its power, cooling, and other non-data characteristics. \nThe host computer communicates with one or more SCSI Enclosure Services (SES) controllers in the enclosure via a SCSI interface which may be Parallel SCSI, FC-AL, SAS, or SSA. Each SES controller has a SCSI identity (address) and so can accept direct SCSI commands.\nImplemented commands.\nThe following SCSI commands are implemented by standalone enclosure services devices:\nNote 1: The initiator needs to send a SCSI inquiry to interrogate the SCCS bit which says whether the SES controller has this command.", "Automation-Control": 0.949683249, "Qwen2": "Yes"} {"id": "18355561", "revid": "1150665934", "url": "https://en.wikipedia.org/wiki?curid=18355561", "title": "Electrohydraulic forming", "text": "Electrohydraulic forming is a type of process in which an electric arc discharge in liquid is used to convert electrical energy to mechanical energy and change the shape of the workpiece. A capacitor bank delivers a pulse of high current across two electrodes, which are positioned a short distance apart while submerged in a fluid (water or oil). The electric arc discharge rapidly vaporizes the surrounding fluid, creating a shock wave. The workpiece, which is kept in contact with the fluid, is deformed into an evacuated die.\nThe potential forming capabilities of submerged arc discharge processes were recognized as early as the mid-1940s (Yutkin L.A.). 
During the 1950s and early 1960s, the basic process was developed into production systems. This work was principally by and for the aerospace industry. By 1970, forming machines based on submerged arc discharge were available from machine tool builders. A few of the larger aerospace fabricators built machines of their own design to meet specific part fabrication requirements.\nElectrohydraulic forming (EHF) is based on the ultra-high-speed deformation of metal using shockwaves in water. Using the discharge of current from a capacitor bank, an electric arc is generated in water between two electrodes. This electric arc vaporizes the surrounding water, converting electrical energy into an intense shockwave of mechanical energy.\nThe shockwave simultaneously transforms the metal workpiece into a visco-plastic state and accelerates it into a die, enabling the forming of complex shapes at high speeds in cold conditions. All of this happens in a matter of milliseconds; the total cycle time is a few seconds, including the charging time of the system. This process is not limited by size and allows forming of parts up to a few square meters in size. An array of electrodes can be placed over a large workpiece, enabling pressure distribution according to the product’s topology, still using a one-sided die to create complex shapes and fine details.\nVery large capacitor banks are needed to produce the same amount of energy as a modest mass of high explosives, which is expensive for large parts. 
On the other hand, the electrohydraulic method was seen as better suited to automation because of the fine control of multiple, sequential energy discharges and the relative compactness of the electrode-media containment system.\nAdvantages of EHF", "Automation-Control": 0.939940989, "Qwen2": "Yes"} {"id": "31205078", "revid": "27976443", "url": "https://en.wikipedia.org/wiki?curid=31205078", "title": "Liquid Impact Forming", "text": "Liquid Impact Forming is a metalworking process in which the combined use of a stamping press and a liquid medium forms the desired shape on the workpiece. This technique is a synthesis of two metalworking processes; stamping (metalworking) and hydroforming. It is especially suited for the cold forming of tubular structural parts in automotive, railroad and aerospace industries.\nThe process is based on a patent by Stanley Ash from the Greenville Tool & Die Company in Greenville, Michigan.\nProcess.\nLiquid impact forming uses the principles of hydroforming process with conventional stamping equipment. Even though hydroforming offers great advantages over conventional tube stamping through the reduction of manufacturing steps and the reduction of variation in workpieces, it still requires expensive mechanical equipment such as dies to withstand extreme pressures and pressurizing equipment such as pumps and intensifiers. As an alternative to this, the liquid impact forming utilizes the increase in the internal pressure of the liquid inside of a tube during the stamping process, eliminating the need for the use of above mentioned equipment.\nThe process includes the following stages:\n1. A metal tube is filled with a liquid, preferably water and placed between lower and upper die sections of stamping dies.\n2. The ends of the liquid-filled tube are sealed to confine the liquid within the tube at approximately atmospheric pressure.\n3. 
The liquid-filled sealed tube is stamped in a conventional die to form the tube into a desired configuration, such as a box-shaped structural member. The compressive forces produced as the die closes to form stamped tube also compress the liquid within the interior of the sealed tube as it changes shape. Thus, the pressure of the liquid increases as the die closes. As the liquid resists compression, it forces the tube walls outwardly toward the interior surface of the die cavity. Once die sections are fully closed around the sealed tube, the tube walls take the shape of the die cavity.\n4. The remaining liquid is drained from the formed tube.\nApplications and Variations.\nThe liquid impact forming process is especially advantageous for the cold forming of tubular structural parts in automotive, railroad and aerospace industries. It may be used in the cold forming of cylindrical or non-cylindrical parts. It is limited for the applications requiring extensive metal flow or bulging because of the absence of external pressure utilization as in hydroforming.\nOne further variation of the liquid impact forming comprises the use of a change-of-state material in liquid state in order to prevent the tube wall from buckling or wrinkling during piercing. The change-of-state material can be water or a metallic lead-bismuth alloy. The liquid can be frozen before the stamping phase. After stamping, the liquid can be melted and drained from the shaped tube. Other applications of the process would be the piercing or bulging of tubes, which could also include the use of the change-of-state material.", "Automation-Control": 0.9661289454, "Qwen2": "Yes"} {"id": "53127729", "revid": "35837895", "url": "https://en.wikipedia.org/wiki?curid=53127729", "title": "Precision Time Protocol Industry Profile", "text": "Industrial automation systems consisting of several distributed controllers need a precise synchronization for commands, events and process data. 
\nFor instance, motors for newspaper printing are synchronized within some 5 microseconds to ensure that the color pixels in the different cylinders come within 0.1 mm at a paper speed of some 20 m/s. Similar requirements exist in high-power semiconductors (e.g. for converting between AC and DC grids) and in drive-by-wire vehicles (e.g. cars with no mechanical steering wheel). \nThis synchronisation is provided by the communication network, in most cases Industrial Ethernet.\nMany ad-hoc synchronization schemes exist, so IEEE published a standard Precision Time Protocol IEEE 1588 or \"PTP\", which allows sub-microsecond synchronization of clocks.\nPTP is formulated generally, so concrete applications need a stricter profile. In particular, PTP does not specify how the clocks should operate when the network is duplicated for better resilience to failures.\nThe PTP Industrial Profile (PIP) is a standard of the IEC 62439-3 that specifies in its Annex C two Precision Time Protocol IEEE 1588 / IEC 61588 profiles, L3E2E and L2P2P, to synchronize network clocks with an accuracy of 1 μs and provide fault-tolerance against clock failures.\nThe IEC 62439-3 PTP profiles are applicable to most Industrial Ethernet networks, for synchronized drives, robotics, vehicular technology and other applications that require precise time distribution, not necessarily using redundant networks.\nThe IEC 62439-3 profile L2P2P has been adopted as IEC/IEEE 61850-9-3 by the power utility industry to support precise time stamping of voltage and current measurement for differential protection, wide area monitoring and protection, busbar protection and event recording.\nThe IEC 62439-3 PTP profiles can be used to ensure deterministic operation of critical functions in the automation system itself, for instance precise starting of tasks, resource reservation and deadline supervision. 
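The sub-microsecond synchronisation that these profiles target rests on the delay request-response arithmetic of IEEE 1588, in which a slave computes its clock offset from four timestamps under the protocol's own symmetric-path-delay assumption. The nanosecond values below are illustrative.

```python
def ptp_offset_and_delay(t1, t2, t3, t4):
    """Basic IEEE 1588 delay request-response arithmetic (nanoseconds):
    t1 = Sync sent by master, t2 = Sync received by slave,
    t3 = Delay_Req sent by slave, t4 = Delay_Req received by master.
    Assumes a symmetric one-way path delay."""
    offset = ((t2 - t1) - (t4 - t3)) / 2       # slave clock minus master clock
    mean_path_delay = ((t2 - t1) + (t4 - t3)) / 2
    return offset, mean_path_delay

# Illustrative numbers: the slave clock runs 500 ns ahead of the master
# and the one-way path delay is 200 ns.
t1 = 1_000_000
t2 = t1 + 200 + 500        # path delay plus slave offset
t3 = t2 + 10_000
t4 = t3 + 200 - 500        # path delay minus slave offset
offset, delay = ptp_offset_and_delay(t1, t2, t3, t4)
assert offset == 500 and delay == 200
```

Transparent clocks in the bridges correct these timestamps for residence time, which is what keeps the accumulated inaccuracy bounded across many hops.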
\nThe IEC 62439-3 Annexes belong to the Parallel Redundancy Protocol and High-availability Seamless Redundancy standard suite for high-availability automation networks. However, this specification also applies to networks that have no redundancy and do not use PRP or HSR. \nTopology.\nThe PIP relies on the IEEE 1588 topology, consisting of grandmaster clocks (GC), ordinary clocks (OC), boundary clocks (BC), transparent clocks (TC) and hybrid clocks (HC = TC&OC).\nFor redundancy, a PIP network contains several clocks that are master-capable. Normally, the best master clock algorithm ensures that only one grandmaster broadcasts the time. \nIn redundant networks, and especially in PRP, several masters can be active at the same time; the slave then chooses its master. \nMain features.\nIEC 62439-3 Annex C uses the following IEEE Std 1588 options:\nPerformance.\nIEC 62439-3 Annex C aims at an accuracy of better than 1 μs after crossing 15 bridges with transparent clocks.\nIt assumes that all network elements (bridges, routers, media converters, links) support PTP with a given performance:\nBy relying on these guaranteed values, the network engineer can calculate the time inaccuracy at different nodes of the network and place the clocks, especially the grandmaster clocks, suitably. \nIEC TR 61850-90-4 (Network engineering guidelines) gives advice on the use of IEC/IEEE 61850-9-3 in substation automation networks.\nIEEE 1588 settings.\nIEC 62439-3 Annex C restricts the parameters of IEEE Std 1588 to the following values:\nAdditions to IEEE Std 1588.\nIEC 62439-3 Annex C specifies requirements in addition to IEEE 1588:\nStandard owners.\nThis protocol has been developed by the IEC SC65C WG15 in the framework of IEC 62439, which applies to all IEC industrial networks. 
\nTo avoid parallel standards in IEC and IEEE in the field of grid automation, the L2P2P profile specific to grid automation (previously IEC 62439-3 Annex B) has been placed under the umbrella of the IEC&IEEE Joint Development 61850-9-3.\nTechnical responsibility rests with IEC SC65C WG15, which is committed to keeping the IEC 62439-3 profile L2P2P and IEC/IEEE 61850-9-3 aligned.", "Automation-Control": 0.7231826782, "Qwen2": "Yes"} {"id": "5465213", "revid": "14423028", "url": "https://en.wikipedia.org/wiki?curid=5465213", "title": "Coefficient diagram method", "text": "In control theory, the coefficient diagram method (CDM) is an algebraic approach applied to a polynomial loop in the parameter space, where a special diagram called a \"coefficient diagram\" is used as the vehicle to carry the necessary information, and as the criterion of good design. The performance of the closed-loop system is monitored by the coefficient diagram. \nThe most notable advantages of CDM can be listed as follows:\nIt is usually required that the controller for a given plant should be designed under some practical limitations.\nThe controller is desired to be of minimum degree, minimum phase (if possible) and stable. It must have sufficient bandwidth and respect power rating limitations. If the controller is designed without considering these limitations, the robustness property will be very poor, even though the stability and time response requirements are met. A CDM controller designed while considering all these constraints is of the lowest degree, has a convenient bandwidth, and yields a unit step time response without overshoot. These properties guarantee robustness, sufficient damping of disturbance effects, and low cost. \nAlthough the main principles of CDM have been known since the 1950s, the first systematic method was proposed by Shunji Manabe. 
He developed a new method that easily builds a target characteristic polynomial to meet the desired time response. CDM is an algebraic approach combining classical and modern control theories and uses polynomial representation in the mathematical expression. The advantages of the classical and modern control techniques are integrated with the basic principles of this method, which is derived by making use of the previous experience and knowledge of the controller design. Thus, an efficient and fertile control method has appeared as a tool with which control systems can be designed without needing much experience and without confronting many problems.\nMany control systems have been designed successfully using CDM. It is very easy to design a controller under the conditions of stability, time domain performance and robustness. The close relations between these conditions and coefficients of the characteristic polynomial can be simply determined. This means that CDM is effective not only for control system design but also for controller parameters tuning.", "Automation-Control": 0.9059582949, "Qwen2": "Yes"} {"id": "47813958", "revid": "45382375", "url": "https://en.wikipedia.org/wiki?curid=47813958", "title": "Rule-based DFM analysis for direct metal laser sintering", "text": "Rule based DFM analysis for direct metal laser sintering. Direct metal laser sintering (DMLS) is one type of additive manufacturing process that allows layer by layer printing of metal parts having complex geometries directly from 3D CAD data. It uses a high-energy laser to sinter powdered metal under computer control, binding the material together to create a solid structure. 
DMLS is a net shape process and allows the creation of highly complex and customized parts with no extra cost incurred for their complexity.\nDMLS is used to fabricate complex metal parts that are difficult to produce using traditional manufacturing processes, which gives immense freedom to the designer while designing the component. However, there are certain Design for Manufacturability (DFM) considerations that should be taken care of while designing the parts to be printed. DFM provides guidance to the design team in making the product structure more compliant with the given manufacturing process. It removes the wall between the design and manufacturing phases of product development, thus enabling designers to take advantage of all the inherent cost and other benefits available in the manufacturing process. The early consideration of DFM principles and guidelines can lead to significant cost and time savings in the final development of the product. Some of the common guidelines for DMLS are:\nSize.\nThe size of the part that can be printed depends upon the printer that is being used. With the current technology a maximum build size of 228 × 228 × 304 mm can be achieved. Hence, the size of the part to be printed should be within the required dimensions. DMLS has a minimum sintering width (dependent on laser diameter) varying from 0.6 mm to 0.9 mm. This defines the minimum external feature size of the part, and thus designs with external features of smaller dimensions must be avoided.\nAccuracy.\nThe accuracy and surface roughness of the part depend on the powder grain size, which ranges between 50 μm and 100 μm. The layer thickness, which lies between 0.02 mm and 0.05 mm, determines the resolution in the vertical direction. 
Therefore, the regions of the part which require high accuracy should be designed with a planned allowance of 0.1 mm to 0.5 mm, and secondary finishing and/or machining operations should then be used to achieve the required accuracy.\nOverhangs.\nIn DMLS, the powder bed supports the parts and keeps them in place. However, support structures are explicitly required for most downward-facing surfaces that make an angle of less than 45 degrees with the powder bed. This is because the powder bed alone is not sufficient to hold the liquid phase of the metal that is created when the laser is scanning the powder. Support structures are also required to restrict curling/warping of the melted powder due to high temperature gradients. Overhangs with angles of less than 45 degrees should be avoided, if possible, at the design stage. The main advantage of this is to reduce material usage and the post-processing requirement of removing support structures from the designed components.\nHeight.\nThe total number of layers required to build the whole part is directly proportional to the height of the part measured along the build direction. Every layer of the part to be printed requires laying a tightly compacted thin layer of powdered material using a roller, tracing of the laser in the horizontal plane according to the 3D data fed to the machine, and incremental lowering of the powder bed for the successive layer to be laid. These processes require a significant amount of time; thus redesigning the product for a smaller height may save manufacturing time greatly. The build orientation should be chosen so that the height of the part along the build direction is minimized.\nAnisotropy.\nThe main direction of heat flow, which is generated by the laser at the top, is along the build direction because the powder bed lying at the bottom is the major heat sink. 
The layered addition of material and the directional heat flow in DMLS cause microstructural grains to grow along the build direction, resulting in anisotropic properties: a structure printed through DMLS is weaker along the build direction. This anisotropy can be removed by heat treatment, but such processes are highly energy-intensive and costly. Hence, it is advisable to account for the anisotropy at the very beginning of designing such structural parts, so that the direction of largest stress in the structure lies in the horizontal plane.\nComplexity.\nBeing an additive manufacturing technique, DMLS does not incur any extra cost for the complexity of the part; the build volume, together with the number of layers, determines the production cost and time. DMLS eliminates the need for tool production, but such technologies do not benefit from economies of scale. It is therefore recommended to design parts with a minimum of superfluous volume, building only the relevant geometry. Furthermore, parts should be designed to avoid assembly requirements, because printing sub-assemblies with intricate geometries is now possible.", "Automation-Control": 0.9123007655, "Qwen2": "Yes"} {"id": "1376887", "revid": "18779361", "url": "https://en.wikipedia.org/wiki?curid=1376887", "title": "Logical clock", "text": "A logical clock is a mechanism for capturing chronological and causal relationships in a distributed system. Distributed systems often have no physically synchronous global clock. In many applications (such as distributed GNU make), if two processes never interact, the lack of synchronization is unobservable, and in these applications it is enough for the processes to agree on the event ordering (i.e., a logical clock) rather than on the wall-clock time.
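The event-ordering idea above can be sketched with a minimal counter-based (Lamport-style) logical clock in Python; the class and method names here are illustrative, not from any particular library:

```python
class LamportClock:
    """Counter-based logical clock for one process."""
    def __init__(self):
        self.time = 0

    def tick(self):
        # Any local or send event advances the clock.
        self.time += 1
        return self.time

    def receive(self, msg_time):
        # On receipt, jump past the timestamp carried by the message.
        self.time = max(self.time, msg_time) + 1
        return self.time

# Two processes; p1 sends a message to p2.
p1, p2 = LamportClock(), LamportClock()
t_send = p1.tick()           # p1 stamps the outgoing message with 1
t_recv = p2.receive(t_send)  # p2 jumps to max(0, 1) + 1 == 2
```

The resulting timestamps respect causality (a send always precedes the matching receive), even though neither process ever consults wall-clock time.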
The first logical clock implementation, the Lamport timestamps, was proposed by Leslie Lamport in 1978 (Turing Award in 2013).\nLocal vs global time.\nIn logical clock systems each process has two data structures: \"logical local time\" and \"logical global time\". Logical local time is used by the process to mark its own events, and logical global time is the local information about global time. A special protocol is used to update logical local time after each local event, and logical global time when processes exchange data.\nApplications.\nLogical clocks are useful in computation analysis, distributed algorithm design, individual event tracking, and exploring computational progress.\nAlgorithms.\nSome noteworthy logical clock algorithms are:", "Automation-Control": 0.9381537437, "Qwen2": "Yes"} {"id": "40578121", "revid": "3125232", "url": "https://en.wikipedia.org/wiki?curid=40578121", "title": "Impedance control", "text": "Impedance control is an approach to dynamic control relating force and position. It is often used in applications in which a manipulator interacts with its environment and the force-position relation is of concern. Examples of such applications include humans interacting with robots, where the force produced by the human relates to how fast the robot should move or stop. Simpler control methods, such as position control or torque control, perform poorly when the manipulator experiences contact, so impedance control is commonly used in these settings.\nMechanical impedance is the ratio of force output to motion input. This is analogous to electrical impedance, which is the ratio of voltage output to current input (e.g. resistance is voltage divided by current). A \"spring constant\" defines the force output for a displacement (extension or compression) of the spring. A \"damping constant\" defines the force output for a velocity input.
If we control the impedance of a mechanism, we are controlling the force of resistance to external motions imposed by the environment.\nMechanical admittance is the inverse of impedance: it defines the motion that results from a force input. If a mechanism applies a force to the environment, the environment will move, or not move, depending on its properties and the force applied. For example, a marble sitting on a table will react very differently to a given force than will a log floating in a lake.\nThe key idea behind the method is to treat the environment as an admittance and the manipulator as an impedance. It assumes the postulate that \"no controller can make the manipulator appear to the environment as anything other than a physical system.\"\nThis rule of thumb can also be stated as: \"in the most common case in which the environment is an admittance (e.g. a mass, possibly kinematically constrained) that relation should be an impedance, a function, possibly nonlinear, dynamic, or even discontinuous, specifying the force produced in response to a motion imposed by the environment.\"\nPrinciple.\nImpedance control does not simply regulate the force or position of a mechanism. Instead, it regulates the relationship between force on the one hand and position, velocity and acceleration on the other, i.e. the impedance of the mechanism. It takes a position (or velocity or acceleration) as input and produces a force as output. The inverse of impedance is admittance, which takes a force as input and imposes a position.\nIn effect, the controller imposes a spring-mass-damper behavior on the mechanism by maintaining a dynamic relationship between force formula_1 and position, velocity and acceleration formula_2: formula_3, with formula_4 being friction and formula_5 being static force.\nMasses (formula_6) and springs (with stiffness formula_7) are energy-storing elements, whereas a damper (with damping formula_8) is an energy-dissipating device.
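The spring-mass-damper relationship can be evaluated numerically; a minimal Python sketch (omitting the friction term formula_4 and static-force term formula_5, with illustrative parameter values) is:

```python
def impedance_force(M, D, K, x, v, a):
    """Force output of a mass-spring-damper impedance:
    F = M*a + D*v + K*x (friction and static-force terms omitted)."""
    return M * a + D * v + K * x

# A stiff, lightly damped mechanism displaced by 1 cm while moving
# at 0.1 m/s resists mostly through the spring term:
F = impedance_force(M=1.0, D=5.0, K=100.0, x=0.01, v=0.1, a=0.0)
# F = 0.0 + 0.5 + 1.0 = 1.5 (newtons)
```

Raising the stiffness K makes the mechanism resist displacement more strongly, while raising D makes it resist velocity, which is exactly the trade-off an impedance controller tunes.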
If we can control impedance, we are able to control the energy exchanged during interaction, i.e. the work being done. Impedance control is therefore interaction control.\nNote that mechanical systems are inherently multi-dimensional: a typical robot arm can place an object in three dimensions (formula_9 coordinates) and in three orientations (e.g. roll, pitch, yaw). In theory, an impedance controller can cause the mechanism to exhibit a multi-dimensional mechanical impedance. For example, the mechanism might act very stiff along one axis and very compliant along another. By compensating for the kinematics and inertias of the mechanism, we can orient those axes arbitrarily and in various coordinate systems. For example, we might cause a robotic part holder to be very stiff tangentially to a grinding wheel, while being very compliant (controlling force with little concern for position) along the radial axis of the wheel.\nMathematical Basics.\nJoint space.\nAn uncontrolled robot can be expressed in Lagrangian formulation as\n\boldsymbol{\tau} = \boldsymbol{M}(\boldsymbol{q})\ddot{\boldsymbol{q}} + \boldsymbol{c}(\boldsymbol{q},\dot{\boldsymbol{q}}) + \boldsymbol{g}(\boldsymbol{q}) + \boldsymbol{h}(\boldsymbol{q},\dot{\boldsymbol{q}}) + \boldsymbol{\tau}_{\mathrm{ext}},\nwhere formula_10 denotes joint angular position, formula_11 is the symmetric and positive-definite inertia matrix, formula_12 the Coriolis and centrifugal torque, formula_13 the gravitational torque, formula_14 includes further torques from, e.g., inherent stiffness, friction, etc., and formula_15 summarizes all the external forces from the environment.
The actuation torque formula_16 on the left side is the input variable to the robot.\nOne may propose a control law of the following form:\n\boldsymbol{\tau} = \boldsymbol{K}(\boldsymbol{q}_\mathrm{d}-\boldsymbol{q}) + \boldsymbol{D}(\dot{\boldsymbol{q}}_\mathrm{d}-\dot{\boldsymbol{q}}) + \hat{\boldsymbol{M}}(\boldsymbol{q})\ddot{\boldsymbol{q}}_\mathrm{d} + \hat{\boldsymbol{c}}(\boldsymbol{q},\dot{\boldsymbol{q}}) + \hat{\boldsymbol{g}}(\boldsymbol{q}) + \hat{\boldsymbol{h}}(\boldsymbol{q},\dot{\boldsymbol{q}}),\n\nwhere formula_17 denotes the desired joint angular position, formula_18 and formula_19 are the control parameters, and formula_20, formula_21, formula_22, and formula_23 are the internal model of the corresponding mechanical terms.\nInserting the control law into the robot dynamics gives the closed-loop system (controlled robot):\nformula_24\nLetting formula_25, one obtains\nformula_26\nSince the matrices formula_18 and formula_19 have the dimensions of stiffness and damping, they are commonly referred to as the stiffness and damping matrix, respectively. Clearly, the controlled robot presents a multi-dimensional mechanical impedance (mass-spring-damper) to the environment, which acts on it through formula_15.\nTask space.\nThe same principle also applies in task space. An uncontrolled robot has the following task-space representation in Lagrangian formulation:\nformula_30,\nwhere formula_10 denotes joint angular position, formula_32 task-space position, formula_33 the symmetric and positive-definite task-space inertia matrix. The terms formula_34, formula_35, formula_36, and formula_37 are the generalized forces of the Coriolis and centrifugal term, the gravitation, further nonlinear terms, and environmental contacts. Note that this representation only applies to robots with non-redundant kinematics.
The generalized force formula_38 on the left side corresponds to the input torque of the robot.\nAnalogously, one may propose the following control law:\nformula_39\nwhere formula_40 denotes the desired task-space position, formula_41 and formula_42 are the task-space stiffness and damping matrices, and formula_43, formula_44, formula_45, and formula_46 are the internal model of the corresponding mechanical terms.\nSimilarly, one has\nformula_47,\n\nas the closed-loop system, which is again a multi-dimensional mechanical impedance to the environment (formula_37). Thus, one can choose the desired impedance (mainly stiffness) in the task space. For example, one may want the controlled robot to act very stiff along one direction and relatively compliant along the others by setting\nformula_49\nassuming the task space is a three-dimensional Euclidean space. The damping matrix formula_42 is usually chosen such that the closed-loop system is stable.\nApplications.\nImpedance control is used in robotics as a general strategy for sending commands to a robot arm and end effector that take into account the non-linear kinematics and dynamics of the object being manipulated.", "Automation-Control": 0.7238376737, "Qwen2": "Yes"} {"id": "43955598", "revid": "32381689", "url": "https://en.wikipedia.org/wiki?curid=43955598", "title": "Door control unit", "text": "In automotive electronics, a door control unit (DCU) is a generic term for an embedded system that controls a number of electrical systems associated with an advanced motor vehicle. A modern motor vehicle contains a number of ECUs (electronic control units), and the door control unit is one of the minor ones.\nThe door control unit is responsible for controlling and monitoring various electronic accessories in a vehicle's door. Since most vehicles have more than one door, a separate DCU may be present in each door, or a single centralised unit may be provided.
A DCU associated with the driver's door has some additional functionality. These additional features result from the more complex functions, such as locking, the driver door switch pad and child-lock switches, that are associated with the driver's door. In most cases the driver door module acts as the master and the others act as slaves in the communication protocol.\nFeatures controlled by door control units.\nIn some advanced motor vehicles, luxury features such as puddle lamps and BLIS (Blind Spot Information System) are also supported by DCUs.", "Automation-Control": 0.8879134655, "Qwen2": "Yes"} {"id": "26460150", "revid": "7903804", "url": "https://en.wikipedia.org/wiki?curid=26460150", "title": "Iso-damping", "text": "Iso-damping is a desirable system property referring to a state in which the open-loop phase Bode plot is flat, i.e., the phase derivative with respect to frequency is zero, at a given frequency called the \"tangent frequency\", formula_1. At the \"tangent frequency\" the Nyquist curve of the open-loop system tangentially touches the sensitivity circle and the phase Bode plot is locally flat, which implies that the system will be more robust to gain variations. For systems that exhibit the iso-damping property, the overshoots of the closed-loop step responses remain almost constant for different values of the controller gain, ensuring that the closed-loop system is robust to gain variations.\nThe iso-damping property can be expressed as formula_2\nwhere formula_3 is the tangent frequency and formula_4 is the open-loop system transfer function.\nBode's ideal transfer function.\nIn the middle of the 20th century, Bode proposed the first idea involving the use of fractional-order controllers in a feedback problem, through what is known as Bode's ideal transfer function. Bode proposed that the ideal shape of the Nyquist plot of the open-loop frequency response is a straight line in the complex plane, which provides theoretically infinite gain margin.
The ideal open-loop transfer function is given by\nformula_5\nwhere formula_6 is the desired gain crossover frequency and formula_7 is the slope of the ideal cut-off characteristic.\nThe Bode diagrams of formula_8, formula_9, are very simple. The amplitude curve is a straight line of constant slope formula_10 dB/dec, and the phase curve is a horizontal line at formula_11 rad. The Nyquist curve is simply a straight line through the origin at an angle of formula_12 rad.\nThe major benefit of this structure is iso-damping, i.e. an overshoot that is independent of the payload and the system gain. The use of fractional elements to describe Bode's ideal control loop is one of the most promising applications of fractional calculus in the process control field. Bode's ideal control-loop frequency response has the shape of a fractional integrator and provides the iso-damping property around the gain crossover frequency, because the phase margin and the maximum overshoot are defined by one parameter only (the fractional power of formula_13) and are independent of the open-loop gain.\nBode's ideal loop transfer function is probably the first design method that addressed robustness explicitly.", "Automation-Control": 0.9615806341, "Qwen2": "Yes"} {"id": "741104", "revid": "23375635", "url": "https://en.wikipedia.org/wiki?curid=741104", "title": "Intelligent control", "text": "Intelligent control is a class of control techniques that use various artificial intelligence computing approaches such as neural networks, Bayesian probability, fuzzy logic, machine learning, reinforcement learning, evolutionary computation and genetic algorithms.\nOverview.\nIntelligent control can be divided into the following major sub-domains:\nNew control techniques are created continuously as new models of intelligent behavior are devised and computational methods are developed to support them.\nNeural network controller.\nNeural networks have been used to solve problems in almost all spheres of
science and technology. Neural network control basically involves two steps:\nIt has been shown that a feedforward network with nonlinear, continuous and differentiable activation functions has universal approximation capability. Recurrent networks have also been used for system identification. Given a set of input-output data pairs, system identification aims to form a mapping among these data pairs; such a network is supposed to capture the dynamics of the system. For the control part, deep reinforcement learning has shown its ability to control complex systems.\nBayesian controllers.\nBayesian probability has produced a number of algorithms that are in common use in many advanced control systems, serving as state-space estimators of some variables that are used in the controller.\nThe Kalman filter and the particle filter are two examples of popular Bayesian control components. The Bayesian approach to controller design often requires a significant effort in deriving the so-called system model and measurement model, which are the mathematical relationships linking the state variables to the sensor measurements available in the controlled system. In this respect, it is very closely linked to the system-theoretic approach to control design.", "Automation-Control": 0.9973819256, "Qwen2": "Yes"} {"id": "25717385", "revid": "239610", "url": "https://en.wikipedia.org/wiki?curid=25717385", "title": "Constrained-layer damping", "text": "Constrained-layer damping is a mechanical engineering technique for suppressing vibration. Typically, a viscoelastic or other damping material is sandwiched between two sheets of stiff material that lack sufficient damping by themselves.
The result is that any vibration induced on either side of the constraining layers (the two stiffer sheets on the outside) is trapped, and eventually dissipated, in the viscoelastic middle layer.", "Automation-Control": 0.7143988609, "Qwen2": "Yes"} {"id": "6055566", "revid": "1461430", "url": "https://en.wikipedia.org/wiki?curid=6055566", "title": "S-graph", "text": "The S-graph framework is an approach to solving batch process scheduling problems in chemical plants. The S-graph is suited to problems with a non-intermediate storage (NIS) policy, which often appears in chemical production, but it is also capable of solving problems with an unlimited intermediate storage (UIS) policy.\nOverview.\nThe S-graph representation exploits problem-specific knowledge to develop efficient scheduling algorithms. In the scheduling problem, there are products and a set of tasks that have to be performed to produce a product. There are dependencies between the tasks, and every task has a set of equipment that can perform it. Different processing times can be set for the same task in different equipment types. It is possible to have multiple pieces of equipment of the same type, or to define changeover times between two tasks performed on a single piece of equipment.\nThere are two types of scheduling problems that can be handled:", "Automation-Control": 0.6411032081, "Qwen2": "Yes"} {"id": "9254398", "revid": "7903804", "url": "https://en.wikipedia.org/wiki?curid=9254398", "title": "Multimedia Acceleration eXtensions", "text": "The Multimedia Acceleration eXtensions or MAX are instruction set extensions to the Hewlett-Packard PA-RISC instruction set architecture (ISA). MAX was developed to improve the performance of multimedia applications that were becoming more prevalent during the 1990s.\nMAX instructions operate on 32- or 64-bit SIMD data types consisting of multiple 16-bit integers packed in general-purpose registers.
The available functionality includes additions, subtractions and shifts.\nThe first version, MAX-1, was for the 32-bit PA-RISC 1.1 ISA. The second version, MAX-2, was for the 64-bit PA-RISC 2.0 ISA.\nNotability.\nThe approach is notable because the set of instructions is much smaller than in other multimedia CPUs, and also more general-purpose. The small size and simplicity of the instruction set reduce the recurring costs of the electronics, as well as the cost and difficulty of the design, while the general-purpose nature of the instructions increases their overall value. These instructions require only small changes to a CPU's arithmetic logic unit. A similar design approach promises to be a successful model for the multimedia instructions of other CPU designs. The set is also small because the CPU already included powerful shift and bit-manipulation instructions: \"shift pair\", which shifts a pair of registers; \"extract\" and \"deposit\" of bit fields; and all the common bit-wise logical operations (and, or, exclusive-or, etc.).\nThis set of multimedia instructions has proven its performance as well: in 1996 the 64-bit \"MAX-2\" instructions enabled real-time MPEG-1 and MPEG-2 video while increasing the area of a RISC CPU by only 0.2%.\nImplementations.\nMAX-1 was first implemented in the PA-7100LC in 1994 and is usually credited as the first SIMD extension to an ISA. MAX-2 was first implemented in the PA-8000 microprocessor released in 1996.\nThe basic approach to the arithmetic in MAX-2 is to \"interrupt the carries\" between the 16-bit subwords and to choose between modular arithmetic and signed or unsigned saturation. This requires only small changes to the arithmetic logic unit.\nMAX-2.\nMAX-2 instructions are register-to-register instructions that operate on multiple integers packed in 64-bit quantities. All have a one-cycle latency in the PA-8000 microprocessor and its derivatives.
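The "interrupted carry" subword arithmetic can be modeled in a few lines of Python; this is a sketch of the semantics of a parallel halfword add with signed saturation (illustrative only, not an exact MAX-2 mnemonic):

```python
MASK16 = 0xFFFF

def hadd_ss(a, b):
    """Add four 16-bit subwords packed in 64-bit words independently,
    with carries between subwords 'interrupted' and results clamped
    by signed saturation (hypothetical model of the semantics)."""
    out = 0
    for i in range(0, 64, 16):
        x = (a >> i) & MASK16
        y = (b >> i) & MASK16
        # Reinterpret the 16-bit patterns as signed values.
        if x >= 0x8000:
            x -= 0x10000
        if y >= 0x8000:
            y -= 0x10000
        s = max(-0x8000, min(0x7FFF, x + y))  # signed saturation
        out |= (s & MASK16) << i
    return out

# 0x7FFF + 1 saturates at 0x7FFF instead of wrapping to -0x8000.
assert hadd_ss(0x7FFF, 0x0001) == 0x7FFF
```

Modular (wrap-around) arithmetic would simply skip the clamping step, which is the choice the hardware makes selectable.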
Memory accesses are via the standard 64-bit loads and stores.\nThe \"MIX\" and \"PERMH\" instructions are a notable innovation because they permute the subwords of registers without accessing memory, which can substantially speed up many operations.", "Automation-Control": 0.8296667933, "Qwen2": "Yes"} {"id": "25839999", "revid": "7903804", "url": "https://en.wikipedia.org/wiki?curid=25839999", "title": "Computer-automated design", "text": "Design automation usually refers to electronic design automation, or to design automation in the sense of a product configurator. Extending computer-aided design (CAD), automated design and computer-automated design (CAutoD) are more concerned with a broader range of applications, such as automotive engineering, civil engineering, composite material design, control engineering, dynamic system identification and optimization, financial systems, industrial equipment, mechatronic systems, steel construction, structural optimisation, and the invention of novel systems.\nThe concept of CAutoD perhaps first appeared in 1963, in the IBM Journal of Research and Development, where a computer program was written.\nMore recently, traditional CAD simulation is seen to be transformed into CAutoD by biologically-inspired machine learning, including heuristic search techniques such as evolutionary computation and swarm intelligence algorithms.\nGuiding designs by performance improvements.\nTo meet the ever-growing demand for quality and competitiveness, iterative physical prototyping is now often replaced by 'digital prototyping' of a 'good design', which aims to meet multiple objectives such as maximised output, energy efficiency, highest speed and cost-effectiveness. The design problem concerns both finding the best design within a known range (i.e., through 'learning' or 'optimisation') and finding a new and better design beyond the existing ones (i.e., through creation and invention).
This is equivalent to a search problem in an almost certainly multidimensional (multivariate), multi-modal space with a single (or weighted) objective or with multiple objectives.\nNormalized objective function: cost vs. fitness.\nUsing single-objective CAutoD as an example, if the objective function, expressed either as a cost function formula_1 or, inversely, as a fitness function formula_2, is differentiable under practical constraints in the multidimensional space, the design problem may be solved analytically. Finding the parameter sets that result in a zero first-order derivative and that satisfy the second-order derivative conditions would reveal all local optima. Comparing the values of the performance index at all the local optima, together with those at all boundary parameter sets, would then lead to the global optimum, whose corresponding parameter set will represent the best design. In practice, however, the optimization usually involves multiple objectives, and the analysis involving derivatives is a lot more complex.\nDealing with practical objectives.\nIn practice, the objective value may be noisy or even non-numerical, and hence its gradient information may be unreliable or unavailable. This is particularly true when the problem is multi-objective. At present, many designs and refinements are mainly made through a manual trial-and-error process with the help of a CAD simulation package. Usually, such \"a posteriori\" learning or adjustments need to be repeated many times until a 'satisfactory' or 'optimal' design emerges.\nExhaustive search.\nIn theory, this adjustment process can be automated by computerised search, such as exhaustive search.
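An exhaustive search over a two-parameter design space might be sketched as follows; the cost surface here is an assumed illustration standing in for a CAD performance index, not taken from the article:

```python
import itertools

def cost(x, y):
    # Hypothetical cost surface with its minimum at (0.3, -0.1).
    return (x - 0.3) ** 2 + (y + 0.1) ** 2

# 21 grid points per axis over [-1, 1]; with n parameters this means
# 21**n evaluations, which is why exhaustive search scales exponentially.
grid = [i / 10 - 1.0 for i in range(21)]
best = min(itertools.product(grid, repeat=2), key=lambda p: cost(*p))
# best is the grid point closest to the true optimum (0.3, -0.1)
```

Every grid point is evaluated, so the global optimum on the grid is found with certainty, but only at a cost that grows exponentially with the number of design parameters.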
Since exhaustive search is an exponential-time algorithm, it may not deliver solutions in practice within a limited period of time.\nSearch in polynomial time.\nOne approach to virtual engineering and automated design is evolutionary computation, such as evolutionary algorithms.\nEvolutionary algorithms.\nTo reduce the search time, the biologically-inspired evolutionary algorithm (EA) can be used instead, which is a (non-deterministic) polynomial-time algorithm. An EA-based multi-objective \"search team\" can be interfaced with an existing CAD simulation package in batch mode. The EA encodes the design parameters (encoding being necessary if some parameters are non-numerical) in order to refine multiple candidates through parallel and interactive search. In the search process, 'selection' is performed using 'survival of the fittest' \"a posteriori\" learning. To obtain the next 'generation' of candidate solutions, some parameter values are exchanged between two candidates (by an operation called 'crossover') and new values are introduced (by an operation called 'mutation'). In this way, the evolutionary technique makes use of past trial information in a manner similar to a human designer.\nEA-based optimal design can start from the designer's existing design database or from an initial generation of candidate designs obtained randomly. A number of finely evolved top-performing candidates will represent several automatically optimized digital prototypes.\nThere are websites that demonstrate interactive evolutionary algorithms for design. EndlessForms.com allows you to evolve 3D objects online and have them 3D printed.
PicBreeder.org allows you to do the same for 2D images.", "Automation-Control": 0.7852261662, "Qwen2": "Yes"} {"id": "25851504", "revid": "15996738", "url": "https://en.wikipedia.org/wiki?curid=25851504", "title": "SprutCAM", "text": "SprutCAM is a computer-aided manufacturing (CAM) application that provides off-line programming of various CNC machines used for cutting, wire electrical discharge machining (EDM), and 2-, 3-, and multi-axis (CNC Swiss-type lathe) machining.\nThe program was developed by SprutCAM Tech Ltd, based in Limassol, Cyprus.\nSprutCAM supports only Microsoft Windows 10/11.\nHistory.\nSPRUT Technology was founded in 1997 by Alexander Kharadziev, who recruited a team of engineers to build a company for developing CAx software; it released its own product, SprutCAM, in 1997. The company was relocated in 2021, and its headquarters is now in Limassol, Cyprus.\nVersion history.\nPast Versions\nCurrent Version:\nSystem requirements.\nThe system requirements for SprutCAM:\nComputer aided design products.\nSprutCAM works with associative CAD geometry and toolpaths; this allows updated toolpaths to be obtained quickly when geometry or machining parameters are modified.\nFile format.\nSprutCAM opens and saves the following file formats:\nThe following formats can be exported and converted to IGES for import, via a CAD plugin running as a background process:", "Automation-Control": 0.7802222967, "Qwen2": "Yes"} {"id": "2255218", "revid": "24619723", "url": "https://en.wikipedia.org/wiki?curid=2255218", "title": "Catalog server", "text": "A catalog server provides a single point of access that allows users to centrally search for information across a distributed network. In other words, it indexes databases, files and information across a large network and allows keyword, Boolean and other searches.
A catalog server is a standard solution for providing a comprehensive search service for an intranet, extranet or even the Internet.", "Automation-Control": 0.9818986654, "Qwen2": "Yes"} {"id": "51457325", "revid": "15996738", "url": "https://en.wikipedia.org/wiki?curid=51457325", "title": "Machine tending", "text": "Machine tending refers to the automated operation of industrial machine tools in a manufacturing plant, primarily using robot automation systems. While loading and unloading is the primary function of machine tending systems, the robot often performs other valuable functions within the automation system, such as part inspection, blow-off, washing, deburring, sorting, packaging and gauging.\nBenefits of machine tending systems include:\nBecause of the sophistication, functionality, and costs associated with machine tending systems, most manufacturers require a capital approval process, in which executive management must approve the purchase, prior to investing in these systems. Typically, an ROI (return on investment) is calculated to justify the purchase.", "Automation-Control": 0.9988349676, "Qwen2": "Yes"} {"id": "38532595", "revid": "5846", "url": "https://en.wikipedia.org/wiki?curid=38532595", "title": "Driveway alarm", "text": "A driveway alarm is a device designed to detect people or vehicles entering a property via the driveway. A driveway alarm is often integrated as a component of a system that automatically performs a task or alerts homeowners to an unexpected intruder or visitor. Driveway alarms can be a vital component of security, automated lighting control, home control, energy efficiency, and other useful systems. \nOverview.\nA driveway alarm always consists of two components: a sensor and a receiver. The sensor detects people or vehicles on the driveway, and the receiver alerts the user or owner to this detection.
Wireless driveway alarms consist of three components: a sensor, a receiver and a transmitter, which sends the wireless alert signal to the receiver. \nA driveway alarm may be connected to a burglar alarm that alerts the homeowner or a security service after it detects an intruder; such a system may also trigger a security camera. Driveway alarms are widely used in domestic and commercial settings, especially in retail drive-thru applications. Some of the more popular residential and commercial applications include motion-activated outdoor lighting systems, motion-sensor street lamps and motion-sensor lanterns.\nSensors.\nThree types of sensors are commonly used in driveway alarm systems:\nWireless and wired systems.\nDriveway alarms come in both wired and wireless packages. Infrared and rubber-hose systems are generally sold as wireless units, while magnetic-probe systems are equally available as hard-wired and wireless systems. The choice between a wireless and a wired system is typically decided by the environment in which the unit is being installed. The most common driveway alarms sold are wireless infrared units.", "Automation-Control": 0.7004386187, "Qwen2": "Yes"} {"id": "4271848", "revid": "7611264", "url": "https://en.wikipedia.org/wiki?curid=4271848", "title": "Boost controller", "text": "In turbocharged internal combustion engines, a boost controller is a device sometimes used to increase the boost pressure produced by the turbocharger. It achieves this by reducing the boost pressure seen by the wastegate.\nOperation.\nThe purpose of a boost controller is to reduce the boost pressure seen by the wastegate's reference port, in order to trick the wastegate into allowing higher boost pressures than it was designed for.\nMany boost controllers use a needle valve that is opened and closed by an electric solenoid. By varying the pulse width supplied to the solenoid, the solenoid valve can be commanded to be open a certain percentage of the time.
This effectively alters the flow rate of air through the valve, changing the amount of air that is bled off instead of going to the wastegate's reference port. Solenoids may require small-diameter restrictors to be installed in the air control lines to limit airflow and even out the on/off nature of their operation. Two-port solenoid bleed systems with a PID controller tend to be common on factory turbocharged cars.\nAn alternative design uses a stepper motor. These designs allow fine control of airflow based on the position and speed of the motor, but may have low total airflow capability. Some systems use a solenoid in conjunction with a stepper motor, with the stepper motor providing fine control and the solenoid coarse control.\nControl system.\nMost modern designs are \"electronic boost controllers\" that use an electronic control unit to control the boost via a solenoid or stepper motor. The operating principle is the same as that of older \"manual boost controllers\": to control the air pressure presented to the wastegate actuator. Electronic controllers add greater flexibility in the management of boost pressures, compared with manual controllers.\nThe actuation of an electronic boost controller can be managed by one of two control systems:\nAdvantages.\nBy keeping the wastegate closed more often, a boost controller causes more of the exhaust gas to be routed through the turbocharger, thus reducing turbo lag and lowering the boost threshold.\nDisadvantages.\nRegardless of the effectiveness of the boost controller, wastegate actuator springs that are too soft can cause the wastegate to open before desired. This is due to the exhaust gas backpressure pushing against the wastegate valve itself, causing the valve to open regardless of the actuator.
Therefore, there is an upper limit to the effectiveness of a boost controller for a given spring stiffness in the wastegate actuator.\nTo prevent excessive boost pressures in the event of a failure, the boost controller needs to be designed such that failure modes do not result in pressure being bled off. For instance, a solenoid-type boost controller should direct all air to the wastegate when it is in the non-energized position (the common failure mode for a solenoid). Otherwise, the boost controller could get stuck in a position that lets no boost pressure reach the wastegate, causing boost to quickly rise out of control.\nAlso, the electronic systems, extra hoses, solenoids and control systems add cost and complexity. Nonetheless, in recent times most automobile manufacturers use boost controllers on their turbocharged engines.\nAlternatives.\nIn the past, boost pressures were controlled by restricting or bleeding off some intake air before it reached the intake manifold. Designs which restrict the intake air can use a butterfly valve in the intake to restrict airflow as desired boost is approached. Designs which bleed off the intake air functioned similarly to a blowoff valve, but on a constant basis to maintain the desired boost pressure. These methods are rarely used in modern systems due to the large sacrifices in efficiency, heat, and reliability.\nVariable geometry turbochargers can be used to manage boost levels, negating the need for an external boost controller. Also, the wastegate itself has a similar function to a boost controller, in that it is used to manage the turbocharger's boost pressure.", "Automation-Control": 0.8292115927, "Qwen2": "Yes"} {"id": "56329781", "revid": "27015025", "url": "https://en.wikipedia.org/wiki?curid=56329781", "title": "Adamant Namiki Precision Jewel Co", "text": "Adamant Namiki Precision Jewel Co., Ltd.
( アダマンド並木精密宝石 Adamant Namiki Seimitsu Houseki Kabushiki-gaisha) is a Japanese precision components manufacturer based in Tokyo, Japan.\nOverview.\nIn 1939, Namiki Precision Jewel Co., Ltd. started business as a manufacturer of synthetic sapphire jewel bearings for electrical measuring instruments. It later began selling these jewel bearings for use in watches in the 1960s. In 1957, Adamant Shoji (renamed Adamant Kogyo Co., Ltd. in 1959, and Adamant Co., Ltd. in 2014) was founded as a spin-off of Namiki as a result of business practices of the time. Thereafter, Namiki developed its product lineup primarily focusing on industrial jewel components, DC coreless motors, and medical devices. Meanwhile, Adamant Shoji’s business focused mainly on optical communication components. In 2017, Namiki and Adamant mutually agreed to unite their specialties to take their technologies and products to the next level. As a result, Adamant Namiki Precision Jewel Co., Ltd. was established on January 1, 2018.\nHistory.\nSee Namiki Precision Jewel Co., Ltd. and Adamant Co., Ltd. for the history of each company prior to the January 1, 2018 intragroup merger.\nNamiki merged with its subsidiary company, Adamant Co., Ltd., on January 1, 2018 and changed its name to Adamant Namiki Precision Jewel Co., Ltd.\nPrimary business.\nIndustrial jewel components.\nAdamant Namiki uses integrated manufacturing, handling its products from the raw material, to processing, through to polishing. Industrial jewels, such as diamond, sapphire, and ruby, are used for jewel bearings, sapphire substrates, exterior watch parts, semiconductor wire bonding capillaries, nozzles, LTCC(Co-fired ceramic) and so on.\nFor sapphire product growth, Adamant Namiki employs the highly productive EFG method. 
\nIn 2021, the company succeeded in developing a mass production method for 2-inch diamond wafers.\nAdamant Namiki also supplies ceramic parts, combining precision processing and various molding technologies, such as injection, powder press, and CIP molding, to provide a diverse range of products.\nOptical components.\nAdamant Namiki offers optical components with a focus on ferrules, sleeves, and connectors. \nA ferrule is a component that links optical fibers together; high-precision (less than 1 micrometer) processing technology is required to ensure the secure connection of optical fiber cores that are only several micrometers across. \nAdamant Namiki also combines its high-precision processing and assembly technologies to provide optical device components such as receptacles and pigtails, optical switches using MEMS (microelectromechanical systems) technology, and optical devices such as variable attenuators.\nDC coreless motors.\nSince developing the smallest coreless motor of the time in 1973, Adamant Namiki has been consistently producing miniature DC coreless motors. The core components have evolved with high-precision processing technology, and a more optimal magnetic circuit has produced a high-efficiency motor. In 2009, Adamant Namiki successfully developed the world’s smallest DC brushless motor at 0.6 mm in diameter.
The company also produces motor units such as its de-energized locking system, micro robot servo, multi-finger robotic hand, and micro mechanisms.\nMedical devices.\nSince the start of OEM manufacturing of its computer-controlled infusion pump in 1987, Adamant Namiki has established a line of business that offers medical device development, manufacturing, and service.", "Automation-Control": 0.8386597633, "Qwen2": "Yes"} {"id": "15399117", "revid": "84417", "url": "https://en.wikipedia.org/wiki?curid=15399117", "title": "Electromechanical modeling", "text": "The purpose of electromechanical modeling is to model and simulate an electromechanical system, such that its physical parameters can be examined before the actual system is built. The major objectives of electromechanical modeling are parameter estimation, using estimation theory coupled with physical experiments, and physical realization, through proper evaluation of the stability criteria of the overall system. A theory-driven mathematical model can be applied to other systems to judge the performance of the joint system as a whole. This is a well-known and proven technique for designing large control systems for industrial as well as academic multi-disciplinary complex systems. Recently, this technique has also been employed in MEMS technology.\nDifferent types of mathematical modeling.\nThe modeling of purely mechanical systems is mainly based on the Lagrangian, which is a function of the generalized coordinates and the associated velocities. If all forces are derivable from a potential, then the time behavior of the dynamical system is completely determined. For simple mechanical systems, the Lagrangian is defined as the difference of the kinetic energy and the potential energy.\nThere exists a similar approach for electrical systems. By means of the electrical coenergy and well-defined power quantities, the equations of motion are uniquely defined.
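As a compact restatement of the standard mechanical formulation described above (this is textbook material, not taken from any particular source cited here), for generalized coordinates q_i the Lagrangian and the resulting equations of motion read:

```latex
L(q,\dot q) = T(q,\dot q) - V(q),
\qquad
\frac{\mathrm{d}}{\mathrm{d}t}\!\left(\frac{\partial L}{\partial \dot q_i}\right)
  - \frac{\partial L}{\partial q_i} = Q_i ,
```

where T is the kinetic energy, V the potential energy, and Q_i the generalized forces not derivable from a potential. When all forces derive from a potential, Q_i = 0 and the time behavior is completely determined, as stated above; in the electrical analogue, the coenergy plays the role of T.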
The currents of the inductors and the voltage drops across the capacitors play the role of the generalized coordinates. All constraints, for instance those caused by the Kirchhoff laws, are eliminated from the considerations. After that, a suitable transfer function, which eventually governs the behavior of the system, can be derived from the system parameters.\nIn consequence, we have quantities (kinetic and potential energy, generalized forces) which determine the mechanical part and quantities (coenergy, powers) for the description of the electrical part. This offers a combination of the mechanical and electrical parts by means of an energy approach. As a result, an extended Lagrangian format is produced.\nReferences.", "Automation-Control": 0.8972985148, "Qwen2": "Yes"} {"id": "71184026", "revid": "16185737", "url": "https://en.wikipedia.org/wiki?curid=71184026", "title": "IEEE Transactions on Robotics", "text": "IEEE Transactions on Robotics is a bimonthly peer-reviewed scientific journal published by the Institute of Electrical and Electronics Engineers (IEEE). It covers all aspects of robotics and is sponsored by the IEEE Robotics and Automation Society. The editor-in-chief is Kevin Lynch (Northwestern University).\nPublication History.\nThe journal was established in 1985 as the IEEE Journal on Robotics and Automation, but changed its name in 1989 to IEEE Transactions on Robotics and Automation.
In 2004 the journal split into IEEE Transactions on Automation Science and Engineering and IEEE Transactions on Robotics.\nAbstracting and indexing.\nThe journal is abstracted and indexed in:\nAccording to the \"Journal Citation Reports\", the journal has a 2021 impact factor of 6.835, ranking it 7th out of 30 journals in the category \"Robotics\"", "Automation-Control": 0.9973012209, "Qwen2": "Yes"} {"id": "9156983", "revid": "7903804", "url": "https://en.wikipedia.org/wiki?curid=9156983", "title": "Bonding protocol", "text": "Bonding protocol (short for \"Bandwidth On Demand Interoperability Group\") is a generic name for a method of bonding or aggregation of multiple physical links to form a single logical link. Bonding is the term often used in Linux implementations: on Windows based systems the term teaming is often used, and between network-devices we talk about link aggregation, LAG and Link Aggregation Control Protocol.", "Automation-Control": 0.986761868, "Qwen2": "Yes"} {"id": "24822862", "revid": "35498457", "url": "https://en.wikipedia.org/wiki?curid=24822862", "title": "Robotino", "text": "Robotino is a mobile robot system made by \"Festo Didactic\", and used for educational, training and research purposes.\nOperation.\nRobotino is based on an omnidirectional drive assembly, which enables the system to roam freely. The robot is controlled by an industry-standard PC system, which is powerful enough to plan routes for fully autonomous driving. Via a WLAN-Link, Robotino can send all sensor readings to an external PC. In the other direction, control commands can be issued by the external PC. This way, control programs can run on the external PC or on Robotino directly. Mixed mode or shared control are also possible.\nFor users with little prior robotics knowledge, Robotino can be readily programmed in its “native” programming environment RobotinoView II. 
More experienced programmers may find it useful that the robot can also be programmed in C, C++, Java, .NET, Matlab, Simulink, Labview and Microsoft Robotics Developer Studio.\nHardware.\nThe omnidirectional drive consists of three Mecanum wheels, all of which are individually controllable. These wheels are arranged at angles of 120°. Robotino has a bumper sensor around its circumference, infrared distance sensors, a color camera with VGA resolution, optical wheel encoders, power measurement for the entire system and the various motors, as well as a battery voltage monitor. Moreover, as optional additional sensors, Robotino can be equipped with a precise laser scanner, a gyroscope, and an indoor positioning system (created by Evolution Robotics). For signal input and output Robotino has several interfaces:\nPower is supplied by two 12V/5Ah lead-acid batteries or optionally by two 12V/9Ah NiMH batteries.", "Automation-Control": 0.9750017524, "Qwen2": "Yes"} {"id": "24828603", "revid": "31831", "url": "https://en.wikipedia.org/wiki?curid=24828603", "title": "Emergency Detection System", "text": "An Emergency Detection System (EDS) is a system that is used on crewed rocket missions. It monitors critical launch vehicle and spacecraft systems and issues status, warning and abort commands to the crew during their mission to low Earth orbit. It can trigger the Launch Abort System which will take the astronauts to safety.\nReferences.\nNASA Technical Note TN-D6487 (pdf), pp. 3, 6, vii", "Automation-Control": 0.8853066564, "Qwen2": "Yes"} {"id": "52020537", "revid": "508734", "url": "https://en.wikipedia.org/wiki?curid=52020537", "title": "Ji-Feng Zhang", "text": "Ji-Feng Zhang (born 1963) was born in Shandong, China. 
He is currently the vice-chair of the technical board of the International Federation of Automatic Control (IFAC), the vice-president of the Systems Engineering Society of China (SESC), the vice-president of the Chinese Association of Automation (CAA), the chair of the Technical Committee on Control Theory of the CAA, and the editor-in-chief of both \"All About Systems and Control\" and the \"Journal of Systems Science and Mathematical Sciences\".\nBiography.\nJi-Feng Zhang was born in September 1963 in Shandong, China. He received a B.S. degree in mathematics from Shandong University in 1985, and M.S. and Ph.D. degrees in control theory and stochastic systems from the Institute of Systems Science (ISS), Chinese Academy of Sciences (CAS), in 1988 and 1991, respectively. From November 1991 to December 1992, he was a postdoctoral fellow at McGill University, Canada. From December 1996 to February 1998, he was with the Chinese University of Hong Kong. Since 1985 he has been with the ISS, CAS, where he is now a Guan Zhaozhi Chair Professor of the Academy of Mathematics and Systems Science (AMSS) and the director of the ISS.\nContributions to the field.\nJob history.\nZhang has served as a vice-chair of the technical board of IFAC (2014–present), convener of the Systems Science Discipline, Academic Degree Committee of the State Council, China (2009–present), vice-president of the Systems Engineering Society of China (2010–present), vice-president of the Chinese Association of Automation (CAA, 2014–present), chair of the Technical Committee on Control Theory (TCCT), CAA (2010–present), standing member of the Chinese Mathematical Society (2008–2015), vice-president of the Beijing Mathematical Society, China (2006–2013), member of the board of governors, IEEE Control Systems Society (2013), member of the steering committee, Asian Control Association (2009–2014), vice-general secretary of CAA (2002–2008), vice-chair of TCCT, CAA (2002–2007), general secretary of TCCT, CAA
(1993–2002), senior member of IEEE (1997–2013), member of the IFAC Technical Committee on Modeling, identification and Signal Processing (2009–present).\nHe also has been a general co-chair of the 32nd and 33rd Chinese Control Conference (2013, 2014), program chair/co-chair of the 17th IFAC Symposium on System Identification (2015), the 30th Chinese Control Conference (2011), the 9th World Congress on Intelligent Control and Automation, Beijing, China (2012), the IEEE International Conference on Control Applications, part of the IEEE Multi-Conference on Systems and Control (2012), vice-chair of the 20th IFAC World Congress (2017), and an organizing committee co-chair of the 21st-26th Chinese Control Conferences (2002–2007), the 1st-4th Chinese-Swedish Conference on Control (2003–2008), the 1st-8th Conference on Frontier Problems in Systems and Control (2000–2008), and a finance co-chair of the 48th Conference on Decision and Control (2009).\nHe is/was the founding editor-in-chief of \"All About Systems and Control\" (2014–present), editor-in-chief of the \"Journal of Systems Science and Mathematical Sciences\" (2014–present), managing editor of \"Journal of Systems Science and Complexity\" (2007–2014), deputy editor-in-chief of the following journals: \"Science China: Information Sciences\" (2014–present), \"Scientia Sinica: Informationis\" (2014–present), \"Journal of Systems Science and Mathematical Sciences\" (2004–2013), \"Acta Automatica Sinica\" (2005–2010), \"Control Theory and Applications\" (2008–2013), \"Systems Engineering: Theory and Practice\" (2011–present); and associate editor or an editorial board member of the following journals: \"IEEE Transactions on Automatic Control\" (2007–2009), \"SIAM\" \"Journal on Control and Optimization\" (2008–2013), \"Aerospace Control and Application\" (2008–present), \"Mathematics in Practice and Theory\" (2006–2013), \"Acta Automatica Sinica\" (1999–2010), \"Control Theory and Applications\" (2003–2008), \"Journal 
of Control Theory and Applications\" (2003–2008), and the \"Journal of Shandong University (Engineering Science)\" (2011–2015).\nResearch areas.\nZhang’s current research interests are system identification, adaptive control, stochastic systems, and multi-agent systems.\nSystem identification.\nHe made original contributions to system identification, including the estimation of the orders, time-delays and parameters of stochastic systems. He gave a criterion for time-delay estimation with which one can obtain a strongly consistent time-delay estimate. He and his co-authors initiated the research on the parameter identification and adaptive control of systems with quantized observations, and investigated the optimal adaptive control and identification errors, time complexity, optimal input design, and the impact of disturbances and unmodeled dynamics on identification accuracy and complexity in both stochastic and deterministic frameworks. With a series of significant results, he has established a solid framework for the identification and adaptive control of uncertain systems with quantized information. This is of great importance for many practical systems, especially when digital communication is needed.\nAdaptive control.\nHe investigated the capability of robust and adaptive control in dealing with uncertainty, and revealed that to capture the intrinsic limitations of adaptive control, it is necessary to use sup-types of transient and persistent performance, rather than limsup-types, which reflect only the asymptotic behavior of a system. This indicates that the intimate interaction and inherent conflict between identification and control result in a certain performance lower bound which does not approach the nominal performance even when the system varies very slowly.
For nonlinear hybrid stochastic systems with unknown jump-Markov parameters, he and his co-authors used the Wonham nonlinear filter to estimate the unknown parameters and presented an estimation error bound, which is a basic tool and plays an important role in the performance analysis of adaptive control of nonlinear hybrid stochastic systems. He also attacked a series of hard problems related to global output-feedback control of nonlinear stochastic systems with inverse dynamics, including practical output-feedback risk-sensitive control, robust adaptive stabilization, and a small-gain theorem for general nonlinear stochastic systems. Unlike those in the existing literature, the systems considered in his work are complicated enough that control design for them is considerably more difficult. He developed a set of powerful methods and obtained many innovative results. The work represents an accomplishment for both the field of stochastic nonlinear stabilization and the backstepping method.\nStochastic multi-agent systems.\nIn the control of stochastic multi-agent systems, Zhang thoroughly studied the interaction of interest-coupled decision-makers and the uncertainty of individual behavior, which is the prominent characteristic of multi-agent systems (MASs). He made a systematic study of the sample path behavior of the closed-loop system in relation to Nash Equilibria (NE) and a substantial contribution to the developing theory of Nash Certainty Equivalence (NCE) for large population stochastic dynamic games. He introduced the concepts of asymptotic Nash equilibrium in probability and almost surely, and elucidated the relationship between these concepts, which provides necessary tools for analyzing the optimality of decentralized control laws. With respect to decentralized quadratic-type and tracking-type performance indices, he used Nash Certainty Equivalence to develop decentralized optimal controls, and proved the optimality of the closed-loop systems.
He also initiated the study of the consensusability and formability of MAS and obtained necessary and sufficient conditions which reflect the intrinsic relationships between consensusability/formability and the agents’ dynamics, admissible control sets and communication topologies. These works are of great significance, since they break through the framework of conventional control theory and extend the methodology and tools of stochastic adaptive control theory to the analysis of MAS.\nIndex-coupled example.\nThe multi-agent systems Zhang studied can be used to describe engineering or economic systems. The uncertainty in his work is a kind of random noise appearing in the agent’s dynamic model. Brownian agent swarm systems are such examples, where the acceleration of each agent depends not only on its own state variables (e.g. position, velocity, and energy), its control, and Gaussian white noise, but also on the population position average. The dynamic equations are coupled together via the population position average. Other interest or performance index-coupled examples can be found in wireless communication networks and stock markets. In a wireless communication network, the rate of change of the received power for each user depends on its neighbors’ powers, its control, and random noise. Each user makes its own power control strategy to ensure the signal-to-interference ratio approaches a desired level. This can be formulated by a linear model with constant parameters and a coupled-index group, in which each user’s dynamics involve its neighbors, the system parameters, and a constant background noise intensity. In a stock market with investors, suppose that the profit of each investor is influenced by his own recent profits and the profits of his neighbors, and each investor wants to stay around the average value.
Then, the problem can likewise be described by a linear model with constant parameters and a coupled-interest index group.\nPublications and awards.\nZhang was elected as a Fellow of the Institute of Electrical and Electronics Engineers (IEEE) and as a Fellow of the International Federation of Automatic Control (IFAC). He was the second-place winner of the State Natural Science Award (China) in both 2010 and 2015. Zhang has also received the Distinguished Young Scholar Fund from the National Natural Science Foundation of China in 1997; the First Prize of the Young Scientist Award of CAS in 1995; Excellent Chinese Doctoral Dissertation Supervisor in 2009; Excellent Graduate Student Supervisor of the Chinese Academy of Sciences (CAS) in 2007, 2008 and 2009; the Best Paper award of the 7th Asian Control Conference in 2009; and the Guan Zhaozhi Best Paper award of the 23rd Chinese Control Conference in 2004.\nHe has published 2 books and over 110 journal papers and 70 conference papers, in journals such as \"IEEE Transactions on Automatic Control\", \"Automatica\", and \"SIAM Journal on Control and Optimization\". He has 5 papers listed in \"Highly Cited Papers\" by the ISI Web of Knowledge, Essential Science Indicators from Aug 2007 to Aug 2015.
A fully functional agile tooling laboratory consists of CNC milling, turning and routing equipment. It can also include additive manufacturing platforms (such as fused filament fabrication, selective laser sintering, stereolithography, and direct metal laser sintering), hydroforming, vacuum forming, die casting, stamping, injection molding and welding equipment.\nAgile tooling is similar to rapid tooling, which uses additive manufacturing to make tools or tooling quickly, either directly by making parts that serve as the actual tools or tooling components, such as mold inserts; or indirectly by producing patterns that are in turn used in a secondary process to produce the actual tools. Another similar technique is prototype tooling, where molds, dies and other devices are used to produce prototypes. Rapid manufacturing, and specifically rapid tooling technologies, are earlier in their development than rapid prototyping (RP) technologies, and are often extensions of RP.\nThe aim of all toolmaking is to catch design errors early in the design process, improve product design, reduce product cost, and reduce time to market.\nUsers.\nHundreds of universities and research centers around the globe are investing in additive manufacturing equipment in order to be positioned to make prototypes and tactile representations of real parts. Few have fully committed to the concept of using additive manufacturing (AM) to create manufacturing tools (fixturing, clamps, molds, dies, patterns, negatives, etc.). AM experts seem to agree that tooling is a large and mostly untapped market. Deloitte University Press estimated that in 2012 alone, the AM tooling market was worth $1.2 billion.
At that point in the development cycle of AM Tooling, much of the work was performed under the guise of “let’s try it and see what happens”.\nIndustry applications.\nAdditive manufacturing, still in its infancy today, requires manufacturing firms to be flexible, ever-improving users of all available technologies in order to remain competitive. Advocates of additive manufacturing also predict that this arc of technological development will counter globalization, as end users will do much of their own manufacturing rather than engage in trade to buy products from other people and corporations. The real integration of the newer additive technologies into commercial production, however, is more a matter of complementing traditional subtractive methods rather than displacing them entirely.\nAutomotive – approaching niche vehicle markets (making fewer than 100,000 vehicles), rather than high production volumes\nAircraft – the U.S. aircraft industry operates in an environment where production volumes are relatively low and resulting product costs are relatively high. Agile tooling can be applied in the early design stage of the development cycle to minimize the high cost of redesign.\nMedical – cast tooling would benefit a great deal from agile tooling. However, the cost for the tooling may still be significantly greater than the cost of a cast piece, with high lead times. Since only several dozen or several hundred metal parts are needed, the challenge for mass production is still prevalent.
A balance between these four areas – quantity, design, material, and speed – is key to designing and producing a fully functional product.", "Automation-Control": 0.964370966, "Qwen2": "Yes"} {"id": "19079798", "revid": "28481209", "url": "https://en.wikipedia.org/wiki?curid=19079798", "title": "State-transition matrix", "text": "In control theory, the state-transition matrix is a matrix whose product with the state vector formula_1 at an initial time formula_2 gives formula_1 at a later time formula_4. The state-transition matrix can be used to obtain the general solution of linear dynamical systems.\nLinear systems solutions.\nThe state-transition matrix is used to find the solution to a general state-space representation of a linear system in the following form\nwhere formula_6 are the states of the system, formula_7 is the input signal, formula_8 and formula_9 are matrix functions, and formula_10 is the initial condition at formula_2. Using the state-transition matrix formula_12, the solution is given by:\nThe first term is known as the zero-input response and represents how the system's state would evolve in the absence of any input. The second term is known as the zero-state response and defines how the inputs impact the system.\nPeano–Baker series.\nThe most general transition matrix is given by the Peano–Baker series\nwhere formula_15 is the identity matrix. The series converges uniformly and absolutely to a solution that exists and is unique.\nOther properties.\nThe state transition matrix formula_16 satisfies the following relationships:\n1. It is continuous and has continuous derivatives.\n2. It is never singular; in fact formula_17 and formula_18, where formula_19 is the identity matrix.\n3. formula_20 for all formula_4.\n4. formula_22 for all formula_23.\n5. It satisfies the differential equation formula_24 with initial conditions formula_25.\n6.
The state-transition matrix formula_12, given by\nwhere the formula_28 matrix formula_29 is the fundamental solution matrix that satisfies\n7. Given the state formula_32 at any time formula_33, the state at any other time formula_4 is given by the mapping\nEstimation of the state-transition matrix.\nIn the time-invariant case, we can define formula_16, using the matrix exponential, as formula_37. \nIn the time-variant case, the state-transition matrix formula_38 can be estimated from the solutions of the differential equation formula_39 with initial conditions formula_40 given by formula_41, formula_42, ..., formula_43. The corresponding solutions provide the formula_44 columns of matrix formula_38. Now, from property 4, \nformula_46 for all formula_47. The state-transition matrix must be determined before analysis on the time-varying solution can continue.", "Automation-Control": 0.8539856076, "Qwen2": "Yes"} {"id": "32546455", "revid": "7903804", "url": "https://en.wikipedia.org/wiki?curid=32546455", "title": "Microextrusion", "text": "Microextrusion is a microforming extrusion process performed at the submillimeter range. Like extrusion, material is pushed through a die orifice, but the resulting product's cross section can fit through a 1mm square. Several microextrusion processes have been developed since microforming was envisioned in 1990. Forward (ram and billet move in the same direction) and backward (ram and billet move in the opposite direction) microextrusion were first introduced, with forward rod-backward cup and double cup extrusion methods developing later. Regardless of method, one of the greatest challenges of creating a successful microextrusion machine is the manufacture of the die and ram. 
\"The small size of the die and ram, along with the stringent accuracy requirement, needs suitable manufacturing processes.\" Additionally, as Fu and Chan pointed out in a 2013 state-of-the-art technology review, several issues must still be resolved before microextrusion and other microforming technologies can be implemented more widely, including deformation load and defects, forming system stability, mechanical properties, and other size-related effects on the crystallite (grain) structure and boundaries.\nDevelopment and use.\nMicroextrusion is an outgrowth of microforming, a science that was in its infancy in the early 1990s. In 2002, Engel \"et al.\" expressed that up to that point, only a few research experiments involving micro-deep drawing and extruding processes had been attempted, citing limitations in shearing on billets and difficulties in tool manufacturing and handling. By the mid- to late 2000s, researchers were working on issues such as billet flow, interfacial friction, extrusion force, and size effects, \"the deviations from the expected results that occur when the dimension of a workpiece or sample is reduced.\" Most recently, research into using ultrafine-grained material at higher formation temperatures and applying ultrasonic vibration to the process has pushed the science further. However, before bulk production of microparts such as pins, screws, fasteners, connectors, and sockets using microforming and microextrusion techniques can occur, more research into billet production, transportation, positioning, and ejection are required.\nMicroextrusion techniques have also been applied to bioceramic and plastic extrusion and the manufacture of components for resorbable and implantable medical devices, from bioresorbable stents to controlled drug release systems. \nMicroextrusion processes.\nLike normal macro-level extrusion, several similar microextrusion processes have been described over the years. 
The most basic processes were forward (direct) and backward (indirect) microextrusion. In forward microextrusion, the ram (which propels the billet) and the billet move in the same direction, while in backward microextrusion the ram and billet move in opposite directions. These in turn have been applied to specialized applications such as the manufacture of microbillets, brass micropins, microgear shafts, and microcondensers. However, other processes have been applied to microextrusion, including forward rod–backward cup extrusion and double cup (one forward, one backward) extrusion.\nStrengths and limitations.\nStrengths of microextrusion over other manufacturing processes include its ability to create very complex cross-sections, preserve chemical properties, condition physical properties, and process materials which are delicate or dependent on physical or chemical properties. However, microextrusion has some limitations, primarily related to the need for improvement of the relatively young process. Dixit and Das described it thus in 2012:\nWith the scaling down of dimensions and increasing geometric complexity of objects, currently available technologies and systems may not be able to meet the development needs. New measuring devices, principles and instrumentation, tolerance rules, and procedures have to be developed. Materials databases with detailed information on various materials and their properties/interface properties including microstructures and size effect would be very useful for product innovation and process design. More studies are necessary on micro/nanowear and damages/failures of the micromanufacturing tools. The forming limits for different types of materials at the microlevel must be prescribed.
More specific considerations must be incorporated into the design of machines that are scaled down for microforming to meet engineering applications and requirements.", "Automation-Control": 0.9772025943, "Qwen2": "Yes"} {"id": "32573740", "revid": "31364895", "url": "https://en.wikipedia.org/wiki?curid=32573740", "title": "Hautus lemma", "text": "In control theory and in particular when studying the properties of a linear time-invariant system in state space form, the Hautus lemma (after Malo L. J. Hautus), also commonly known as the Popov-Belevitch-Hautus test or PBH test, can prove to be a powerful tool. This result appeared first in and. Today it can be found in most textbooks on control theory.\nThe main result.\nThere exist multiple forms of the lemma.\nHautus Lemma for controllability.\nThe Hautus lemma for controllability says that given a square matrix formula_1 and a formula_2 the following are equivalent:\nHautus Lemma for stabilizability.\nThe Hautus lemma for stabilizability says that given a square matrix formula_1 and a formula_2 the following are equivalent:\nHautus Lemma for observability.\nThe Hautus lemma for observability says that given a square matrix formula_1 and a formula_17 the following are equivalent:\nHautus Lemma for detectability.\nThe Hautus lemma for detectability says that given a square matrix formula_1 and a formula_17 the following are equivalent:", "Automation-Control": 0.999640286, "Qwen2": "Yes"} {"id": "1971624", "revid": "7611264", "url": "https://en.wikipedia.org/wiki?curid=1971624", "title": "Pilot valve", "text": "A pilot valve is a small valve that controls a limited-flow control feed to a separate piloted valve. Typically, this separate valve controls a high pressure or high flow feed. 
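The controllability form of the Hautus lemma above lends itself to a direct numerical check: the pair (A, B) is controllable exactly when the matrix [λI − A, B] has full row rank at every eigenvalue λ of A. A minimal sketch, assuming numpy; the example matrices A and B are illustrative assumptions, not taken from the article:

```python
import numpy as np

def pbh_controllable(A, B, tol=1e-9):
    """PBH test: (A, B) is controllable iff rank([lam*I - A, B]) = n
    for every eigenvalue lam of A."""
    n = A.shape[0]
    for lam in np.linalg.eigvals(A):
        M = np.hstack([lam * np.eye(n) - A, B])
        if np.linalg.matrix_rank(M, tol=tol) < n:
            return False  # lam is an uncontrollable mode
    return True

# Illustrative example: a double integrator, controllable from its input.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
```

Restricting the same rank check to eigenvalues with nonnegative real part would give the stabilizability form of the lemma; applying it to (Aᵀ, Cᵀ) gives the observability form.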
Pilot valves are useful because they allow a small and easily operated feed to control a much higher pressure or higher flow feed, which would otherwise require a much larger force to operate; indeed, this is even useful when a solenoid is used to operate the valve.\nPilot valves are often used in critical applications (e.g., emergency and SIS controls) and are human-operated. They can be set up as a push-to-activate or dead man's switch.", "Automation-Control": 0.9975332618, "Qwen2": "Yes"} {"id": "18436210", "revid": "1078161314", "url": "https://en.wikipedia.org/wiki?curid=18436210", "title": "Massera's lemma", "text": "In stability theory and nonlinear control, Massera's lemma, named after José Luis Massera, deals with the construction of the Lyapunov function to prove the stability of a dynamical system. The lemma appears in as the first lemma in section 12, and in more general form in as lemma 2. In 2004, Massera's original lemma for single variable functions was extended to the multivariable case, and the resulting lemma was used to prove the stability of switched dynamical systems, where a common Lyapunov function describes the stability of multiple modes and switching signals.\nMassera's original lemma.\nMassera’s lemma is used in the construction of a converse Lyapunov function of the following form (also known as the integral construction)\nfor an asymptotically stable dynamical system whose stable trajectory starting from formula_2\nThe lemma states:\nLet formula_3 be a positive, continuous, strictly decreasing function with formula_4 as formula_5. Let formula_6 be a positive, continuous, nondecreasing function. Then there exists a function formula_7 such that\nExtension to multivariable functions.\nMassera's lemma for single variable functions was extended to the multivariable case by Vu and Liberzon.\nLet formula_3 be a positive, continuous, strictly decreasing function with formula_4 as formula_5. 
Let formula_6 be a positive, continuous, nondecreasing function. Then there exists a differentiable function formula_7 such that", "Automation-Control": 0.8770262003, "Qwen2": "Yes"} {"id": "18436459", "revid": "28481209", "url": "https://en.wikipedia.org/wiki?curid=18436459", "title": "Extended Kalman filter", "text": "In estimation theory, the extended Kalman filter (EKF) is the nonlinear version of the Kalman filter which linearizes about an estimate of the current mean and covariance. In the case of well defined transition models, the EKF has been considered the \"de facto\" standard in the theory of nonlinear state estimation, navigation systems and GPS.\nHistory.\nThe papers establishing the mathematical foundations of Kalman type filters were published between 1959 and 1961. The Kalman filter is the optimal linear estimator for \"linear\"\nsystem models with additive independent white noise in both the transition and the measurement systems.\nUnfortunately, in engineering, most systems are \"nonlinear\", so attempts were made to apply\nthis filtering method to nonlinear systems; most of this work was done at NASA Ames. The EKF adapted techniques from calculus, namely multivariate Taylor series expansions, to linearize a model about a working point. If the system model (as described below) is not well known or is inaccurate, then Monte Carlo methods, especially particle filters, are employed for estimation. Monte Carlo techniques predate the existence of the EKF but are more computationally expensive for any moderately dimensioned state-space.\nFormulation.\nIn the extended Kalman filter, the state transition and observation models don't need to be linear functions of the state but may instead be differentiable functions.\nHere w\"k\" and v\"k\" are the process and observation noises which are both assumed to be zero mean multivariate Gaussian noises with covariance Q\"k\" and R\"k\" respectively. 
u\"k\" is the control vector.\nThe function \"f\" can be used to compute the predicted state from the previous estimate and similarly the function \"h\" can be used to compute the predicted measurement from the predicted state. However, \"f\" and \"h\" cannot be applied to the covariance directly. Instead, a matrix of partial derivatives (the Jacobian) is computed.\nAt each time step, the Jacobian is evaluated with current predicted states. These matrices can be used in the Kalman filter equations. This process essentially linearizes the non-linear function around the current estimate.\nSee the Kalman Filter article for notational remarks.\nDiscrete-time predict and update equations.\nThe notation formula_3 represents the estimate of formula_4 at time \"n\" given observations up to and including time .\nUpdate.\nwhere the state transition and observation matrices are defined to be the following Jacobians\nDisadvantages and alternatives.\nUnlike its linear counterpart, the extended Kalman filter in general is \"not\" an optimal estimator (it is optimal if the measurement and the state transition model are both linear, as in that case the extended Kalman filter is identical to the regular one). In addition, if the initial estimate of the state is wrong, or if the process is modeled incorrectly, the filter may quickly diverge, owing to its linearization. Another problem with the extended Kalman filter is that the estimated covariance matrix tends to underestimate the true covariance matrix and therefore risks becoming inconsistent in the statistical sense without the addition of \"stabilising noise\".\nMore generally, one should consider the infinite-dimensional nature of the nonlinear filtering problem and the inadequacy of a simple mean and variance-covariance estimator to fully represent the optimal filter.
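The predict–update cycle described above can be sketched in a few lines: the estimate is propagated through the nonlinear model, while the covariance is propagated through the Jacobians. This is a minimal sketch assuming numpy; the constant-velocity model, the scalar range measurement, and all numbers below are illustrative assumptions, not the article's notation:

```python
import numpy as np

def ekf_step(x, P, u, z, f, h, F_jac, H_jac, Q, R):
    """One discrete-time EKF cycle: nonlinear model for the estimate,
    Jacobian-based linearization for the covariance."""
    # Predict
    x_pred = f(x, u)
    F = F_jac(x, u)                      # Jacobian at the current estimate
    P_pred = F @ P @ F.T + Q
    # Update
    H = H_jac(x_pred)                    # Jacobian at the predicted state
    y = z - h(x_pred)                    # innovation (measurement residual)
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)  # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Illustrative model: constant-velocity motion, range-to-origin measurement.
f = lambda x, u: np.array([x[0] + 0.1 * x[1], x[1] + 0.1 * u])
h = lambda x: np.array([np.hypot(x[0], x[1])])
F_jac = lambda x, u: np.array([[1.0, 0.1], [0.0, 1.0]])
H_jac = lambda x: np.array([[x[0], x[1]]]) / np.hypot(x[0], x[1])
```

Evaluating `F_jac` and `H_jac` at the current and predicted estimates, respectively, is exactly the linearization step the text describes.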
The extended Kalman filter may also give poor performance even for very simple one-dimensional systems such as the cubic sensor, where the optimal filter can be bimodal and, having a rich structure, cannot be effectively represented by a single mean and variance estimator; the quadratic sensor poses similar problems.\nIn such cases, projection filters have been studied as an alternative and have also been applied to navigation. Other general nonlinear filtering methods like full particle filters may be considered in this case. \nThat said, the extended Kalman filter can give reasonable performance, and is arguably the de facto standard in navigation systems and GPS.\nGeneralizations.\nContinuous-time extended Kalman filter.\nModel\nInitialize\nPredict-Update\nUnlike the discrete-time extended Kalman filter, the prediction and update steps are coupled in the continuous-time extended Kalman filter.\nDiscrete-time measurements.\nMost physical systems are represented as continuous-time models while discrete-time measurements are frequently taken for state estimation via a digital processor. Therefore, the system model and measurement model are given by\nwhere formula_11.\nInitialize\nPredict\nwhere\nUpdate\nwhere\nThe update equations are identical to those of the discrete-time extended Kalman filter.\nHigher-order extended Kalman filters.\nThe above recursion is a first-order extended Kalman filter (EKF). Higher-order EKFs may be obtained by retaining more terms of the Taylor series expansions. For example, second- and third-order EKFs have been described. However, higher-order EKFs tend to provide performance benefits only when the measurement noise is small.\nNon-additive noise formulation and equations.\nThe typical formulation of the EKF involves the assumption of additive process and measurement noise. This assumption, however, is not necessary for EKF implementation.
Instead, consider a more general system of the form:\nHere w\"k\" and v\"k\" are the process and observation noises which are both assumed to be zero mean multivariate Gaussian noises with covariance Q\"k\" and R\"k\" respectively. Then the covariance prediction and innovation equations become\nwhere the matrices formula_23 and formula_24 are Jacobian matrices:\nThe predicted state estimate and measurement residual are evaluated at the mean of the process and measurement noise terms, which is assumed to be zero. Otherwise, the non-additive noise formulation is implemented in the same manner as the additive noise EKF.\nImplicit extended Kalman filter.\nIn certain cases, the observation model of a nonlinear system cannot be solved for formula_27, but can be expressed by the implicit function:\nwhere formula_29 are the noisy observations.\nThe conventional extended Kalman filter can be applied with the following substitutions:\nwhere:\nHere the original observation covariance matrix formula_33 is transformed, and the innovation formula_34 is defined differently. The Jacobian matrix formula_33 is defined as before, but determined from the implicit observation model formula_36.\nModifications.\nIterated extended Kalman filter.\nThe iterated extended Kalman filter improves the linearization of the extended Kalman filter by recursively modifying the centre point of the Taylor expansion. This reduces the linearization error at the cost of increased computational requirements.\nRobust extended Kalman filter.\nThe extended Kalman filter arises by linearizing the signal model about the current state estimate and using the linear Kalman filter to predict the next estimate. This attempts to produce a locally optimal filter; however, it is not necessarily stable because the solutions of the underlying Riccati equation are not guaranteed to be positive definite. One way of improving performance is the faux algebraic Riccati technique, which trades off optimality for stability.
The familiar structure of the extended Kalman filter is retained but stability is achieved by selecting a positive definite solution to a faux algebraic Riccati equation for the gain design.\nAnother way of improving extended Kalman filter performance is to employ the H-infinity results from robust control. Robust filters are obtained by adding a positive definite term to the design Riccati equation. The additional term is parametrized by a scalar which the designer may tweak to achieve a trade-off between mean-square-error and peak error performance criteria.\nInvariant extended Kalman filter.\nThe invariant extended Kalman filter (IEKF) is a modified version of the EKF for nonlinear systems possessing symmetries (or \"invariances\"). It combines the advantages of both the EKF and the recently introduced symmetry-preserving filters. Instead of using a linear correction term based on a linear output error, the IEKF uses a geometrically adapted correction term based on an invariant output error; in the same way, the gain matrix is not updated from a linear state error, but from an invariant state error. The main benefit is that the gain and covariance equations converge to constant values on a much larger set of trajectories than just equilibrium points, as is the case for the EKF, which results in better convergence of the estimation.\nUnscented Kalman filters.\nA nonlinear Kalman filter which shows promise as an improvement over the EKF is the unscented Kalman filter (UKF). In the UKF, the probability density is approximated by a deterministic sampling of points which represent the underlying distribution as a Gaussian. The nonlinear transformation of these points is intended to be an estimate of the posterior distribution, the moments of which can then be derived from the transformed samples. The transformation is known as the unscented transform.
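The unscented transform just described can be sketched directly: choose 2n+1 weighted sigma points that match the Gaussian's mean and covariance, push each point through the nonlinearity, and recompute the moments from the transformed samples. A minimal sketch assuming numpy; the weights follow the basic (unscaled) formulation with a free parameter kappa, and the linear sanity-check map below is an illustrative assumption:

```python
import numpy as np

def unscented_transform(mean, cov, g, kappa=1.0):
    """Propagate a Gaussian (mean, cov) through a nonlinearity g using
    2n+1 deterministically chosen sigma points (basic formulation)."""
    n = len(mean)
    S = np.linalg.cholesky((n + kappa) * cov)   # matrix square root
    sigma = [mean] + [mean + S[:, i] for i in range(n)] \
                   + [mean - S[:, i] for i in range(n)]
    w = np.full(2 * n + 1, 0.5 / (n + kappa))   # weights sum to 1
    w[0] = kappa / (n + kappa)
    Y = np.array([g(p) for p in sigma])         # propagate each point
    y_mean = w @ Y
    y_cov = sum(wi * np.outer(y - y_mean, y - y_mean)
                for wi, y in zip(w, Y))
    return y_mean, y_cov
```

For a linear map g(x) = Ax the transform recovers A·mean and A·cov·Aᵀ exactly, which makes a convenient sanity check on an implementation.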
The UKF tends to be more robust and more accurate than the EKF in its estimation of error in all directions.\n\"The extended Kalman filter (EKF) is probably the most widely used estimation algorithm for nonlinear systems. However, more than 35 years of experience in the estimation community has shown that it is difficult to implement, difficult to tune, and only reliable for systems that are almost linear on the time scale of the updates. Many of these difficulties arise from its use of linearization.\"\nA 2012 paper includes simulation results which suggest that some published variants of the UKF fail to be as accurate as the Second Order Extended Kalman Filter (SOEKF), also known as the augmented Kalman filter. The SOEKF predates the UKF by approximately 35 years, with the moment dynamics first described by Bass et al. The difficulty in implementing any Kalman-type filter for nonlinear state transitions stems from the numerical stability issues required for precision; however, the UKF does not escape this difficulty in that it uses linearization as well, namely linear regression. The stability issues for the UKF generally stem from the numerical approximation to the square root of the covariance matrix, whereas the stability issues for both the EKF and the SOEKF stem from possible issues in the Taylor series approximation along the trajectory.\nEnsemble Kalman Filter.\nThe UKF was in fact predated by the Ensemble Kalman filter, invented by Evensen in 1994.
It has the advantage over the UKF that the number of ensemble members used can be much smaller than the state dimension, allowing for applications in very high-dimensional systems, such as weather prediction, with state-space sizes of a billion or more.\nFuzzy Kalman Filter.\nA fuzzy Kalman filter with a new method to represent possibility distributions was recently proposed, replacing probability distributions with possibility distributions in order to obtain a genuine possibilistic filter, enabling the use of non-symmetric process and observation noises as well as higher inaccuracies in both process and observation models.", "Automation-Control": 0.9863591194, "Qwen2": "Yes"} {"id": "13259772", "revid": "41840956", "url": "https://en.wikipedia.org/wiki?curid=13259772", "title": "SVN Notifier", "text": "SVN Notifier is a tool to monitor one's Subversion project repository for changes. SVN Notifier notifies the user about recent commits and helps keep the local working copy up to date; the user can review all the changes and update the local copy right from the application. It is free software released under the GNU General Public License. It uses SVN, TortoiseSVN and Microsoft .NET Framework 2.0.\nComparison with other tools.\n\"SVN Notifier\" differs from the tool CommitMonitor in that it watches repository URLs via working copies rather than directly. \"SVN Notifier\" is tightly integrated with TortoiseSVN. It implements the \"monitoring/notification\" feature only (which is missing in TortoiseSVN) and thus has a very simple user interface.", "Automation-Control": 0.9473178387, "Qwen2": "Yes"} {"id": "73325021", "revid": "7611264", "url": "https://en.wikipedia.org/wiki?curid=73325021", "title": "Comparison of platforms for software agents", "text": "There are several platforms for software agents, also called agent development toolkits, which can facilitate the development of multi-agent systems.
On these platforms, software agents are implemented as independent threads which communicate with each other using agent communication languages. Below is a chart intended to capture many of the features that are important to such platforms.", "Automation-Control": 0.7739176154, "Qwen2": "Yes"} {"id": "91182", "revid": "44878732", "url": "https://en.wikipedia.org/wiki?curid=91182", "title": "System analysis", "text": "System analysis in the field of electrical engineering characterizes electrical systems and their properties. System analysis can be used to represent almost anything from population growth to audio speakers; electrical engineers often use it because of its direct relevance to many areas of their discipline, most notably signal processing, communication systems and control systems.\nCharacterization of systems.\nA system is characterized by how it responds to input signals. In general, a system has one or more input signals and one or more output signals. Therefore, one natural characterization of systems is by how many inputs and outputs they have:\nIt is often useful (or necessary) to break up a system into smaller pieces for analysis. Therefore, we can regard a SIMO system as multiple SISO systems (one for each output), and similarly for a MIMO system. By far, the greatest amount of work in system analysis has been with SISO systems, although many parts inside SISO systems have multiple inputs (such as adders).\nSignals can be continuous or discrete in time, as well as continuous or discrete in the values they take at any given time:\nWith this categorization of signals, a system can then be characterized as to which type of signals it deals with:\nAnother way to characterize systems is by whether their output at any given time depends only on the input at that time or perhaps on the input at some time in the past (or in the future!).\nAnalog systems with memory may be further classified as \"lumped\" or \"distributed\".
The difference can be explained by considering the meaning of memory in a system. Future output of a system with memory depends on future input and a number of state variables, such as values of the input or output at various times in the past. If the number of state variables necessary to describe future output is finite, the system is lumped; if it is infinite, the system is distributed.\nFinally, systems may be characterized by certain properties which facilitate their analysis:\nThere are many methods of analysis developed specifically for linear time-invariant (\"LTI\") deterministic systems. Unfortunately, in the case of analog systems, none of these properties are ever perfectly achieved. Linearity implies that operation of a system can be scaled to arbitrarily large magnitudes, which is not possible. Time-invariance is violated by aging effects that can change the outputs of analog systems over time (usually years or even decades). Thermal noise and other random phenomena ensure that the operation of any analog system will have some degree of stochastic behavior. Despite these limitations, however, it is usually reasonable to assume that deviations from these ideals will be small.\nLTI systems.\nAs mentioned above, there are many methods of analysis developed specifically for Linear time-invariant systems (LTI systems). This is due to their simplicity of specification. An LTI system is completely specified by its transfer function (which is a rational function for digital and lumped analog LTI systems). Alternatively, we can think of an LTI system being completely specified by its frequency response. A third way to specify an LTI system is by its characteristic linear differential equation (for analog systems) or linear difference equation (for digital systems). Which description is most useful depends on the application.\nThe distinction between lumped and distributed LTI systems is important. 
A lumped LTI system is specified by a finite number of parameters, be it the zeros and poles of its transfer function, or the coefficients of its differential equation, whereas specification of a distributed LTI system requires a complete function, or partial differential equations.", "Automation-Control": 0.9958639145, "Qwen2": "Yes"} {"id": "6329960", "revid": "33467233", "url": "https://en.wikipedia.org/wiki?curid=6329960", "title": "LAPACK++", "text": "LAPACK++, the Linear Algebra PACKage in C++, is a computer software library of algorithms for numerical linear algebra that solves systems of linear equations and eigenvalue problems.\nIt supports various matrix classes for vectors, non-symmetric matrices, SPD matrices, symmetric matrices, banded, triangular, and tridiagonal matrices. However, it does not include all of the capabilities of the original LAPACK library. \nHistory.\nThe original LAPACK++ (up to v1.1a) was written by R. Pozo et al. at the University of Tennessee and Oak Ridge National Laboratory.\nIn 2000, R. Pozo et al. left the project, with the project's web page stating that LAPACK++ would be superseded by the Template Numerical Toolkit (TNT).\nThe current LAPACK++ (versions 1.9 onwards) started off as a fork from the original LAPACK++.
There are extensive fixes and changes, such as more wrapper functions for LAPACK and BLAS routines.", "Automation-Control": 0.7959795594, "Qwen2": "Yes"} {"id": "44455145", "revid": "43051325", "url": "https://en.wikipedia.org/wiki?curid=44455145", "title": "Automation engineering", "text": "Automation engineering is the provision of automated solutions to physical activities and industries.\nAutomation engineer.\nAutomation engineers are experts who have the knowledge and ability to design, create, develop and manage machines and systems, for example, factory automation, process automation and warehouse automation.\nAutomation technicians are also involved.\nScope.\nAutomation engineering is the integration of standard engineering fields. It applies automatic control to operate systems and machines, reducing human effort and time and increasing accuracy. Automation engineers design and service systems ranging from electromechanical devices to high-speed robotics and programmable logic controllers (PLCs).\nWork and career after graduation.\nGraduates can work for both government and private-sector entities in industrial production, and for companies that create and use automation systems, for example in the paper, automotive, and food and agricultural industries, in water treatment, and in the oil and gas sector, such as refineries and power plants.\nJob Description.\nAutomation engineers can design, program, simulate and test automated machinery and processes, and are usually employed in industries such as the energy sector, car manufacturing facilities, food processing plants, and robotics.
Automation engineers are responsible for creating detailed design specifications and other documents, developing automation based on specific requirements for the process involved, and conforming to international standards such as IEC 61508, local standards, and other process-specific guidelines and specifications; they also simulate, test and commission electronic equipment for automation.", "Automation-Control": 0.9999102354, "Qwen2": "Yes"} {"id": "47752705", "revid": "910180", "url": "https://en.wikipedia.org/wiki?curid=47752705", "title": "Automation technician", "text": "Automation technicians repair and maintain the computer-controlled systems and robotic devices used within industrial and commercial facilities to reduce human intervention and maximize efficiency. Their duties require knowledge of electronics, mechanics and computers. Automation technicians perform routine diagnostic checks on automated systems, monitor automated systems, isolate problems and perform repairs. If a problem occurs, the technician needs to be able to troubleshoot the issue and determine if the problem is mechanical, electrical or from the computer systems controlling the process. Once the issue has been diagnosed, the technician must repair or replace any necessary components, such as a sensor or electrical wiring. In addition to troubleshooting, automation technicians design and service control systems ranging from electromechanical devices and systems to high-speed robotics and programmable logic controllers (PLCs). These types of systems include robotic assembly devices, conveyors, batch mixers, electrical distribution systems, and building automation systems. These machines and systems are often found within industrial and manufacturing plants, such as food processing facilities.
Alternate job titles include field technician, bench technician, robotics technician, PLC technician, production support technician and maintenance technician.\nEducation and training.\nAutomation technician programs integrate computer programming with mechanics, electronics and process controls. They also commonly include coursework in hydraulics, pneumatics, programmable logic controllers, electrical circuits, electrical machinery and human-machine interfaces. Typical courses include math, communications, circuits, digital devices and electrical controls. Other courses include robotics, automation, electrical motor controls, programmable logic controllers, and computer-aided design. Good math and analytic skills are essential to understand automated systems and isolate problems. In addition to programming, automation technicians are expected to become proficient with various instruments and hand tools for troubleshooting, such as electrical multimeters, signal analyzers, and frequency counters.\nEmployers generally prefer applicants who have completed an automation technician certificate or associate degree. These programs can be completed at colleges and universities in either an in-class or online format. Some colleges, such as George Brown College, offer an online automation technician program that uses simulation software, LogixSim, to complete automation lab projects and assignments.\nOther relevant credentials to become an automation technician include mechatronics, robotics, and PLCs. Up-to-date credentials and certifications can enhance employment opportunities and keep technicians current with the latest technological developments.
In addition to colleges and universities, other organizations and companies also offer credential programs in automation, including equipment manufacturers such as Rockwell and professional associations, such as the Electronics Technicians Association, Robotics Industries Association and the Manufacturing Skill Standards Council.\nCareer prospects.\nCareer opportunities for automation technicians include a wide range of manufacturing and service industries such as automotive, pharmaceutical, power distribution, food processing, mining, and transportation. Other career prospects include areas such as machine assembly, troubleshooting and testing, systems integration, application support, maintenance, component testing and assembly, automation programming, robot maintenance and programming, technical sales and services.\nTypical job-related activities may involve:\nExperienced automation technicians with advanced training may become specialists or troubleshooters who help other technicians diagnose difficult problems, or work with engineers in designing equipment and developing maintenance procedures. Automation technicians with leadership ability also may eventually become maintenance supervisors or service managers. Due to the highly specialized skills and knowledge required, there are many opportunities available to automation technicians in the service sector where there is a great demand for contract and sub-contract work with smaller manufacturing and distribution companies. Some experienced automation technicians open their own design, installation and maintenance companies. They can also become wholesalers or retailers of automation equipment, including inside and outside sales of automation equipment and systems. \nBecause of their familiarity with control systems and equipment, automation technicians are particularly well qualified to become manufacturers' sales representatives.
Other related opportunities include customer service, quality-control, quality-assurance and consulting.", "Automation-Control": 0.9990227818, "Qwen2": "Yes"} {"id": "12953353", "revid": "16185737", "url": "https://en.wikipedia.org/wiki?curid=12953353", "title": "IEEE Transactions on Control Systems Technology", "text": "The IEEE Transactions on Control Systems Technology is published bimonthly by the IEEE Control Systems Society. The journal publishes papers, letters, tutorials, surveys, and perspectives on control systems technology. The editor-in-chief is Prof. Andrea Serrani (Ohio State University). According to the \"Journal Citation Reports\", the journal has a 2019 impact factor of 5.312.", "Automation-Control": 0.9993864298, "Qwen2": "Yes"} {"id": "12961208", "revid": "1069822031", "url": "https://en.wikipedia.org/wiki?curid=12961208", "title": "Single-pass bore finishing", "text": "Single-pass bore finishing is a machining process similar to honing to finish a bore, except the tool only takes a single pass. The process was originally developed to improve bore quality in cast iron workpieces.\nProcess.\nThis process uses multiple diamond-plated, barrel-shaped tools to finish a bore. The tool has a single layer of diamonds bonded to the tool, with about half of each diamond exposed. These special tools are made to a specific diameter and are only meant to open up the hole to that size.\nThe tools are usually mounted in a dedicated bore finishing machine, however they can also be mounted in a milling machine. In either case the tool, workpiece, or both are rotated and the tool is plunged into the bore and removed. The part is then transferred to the next station or a larger tool is mounted and a larger bore machined, and the process repeated until the desired bore geometry is reached. 
The number of tools required to achieve the desired bore size depends on the workpiece material, the amount of stock to be removed and geometrical requirements, with four to six tool pieces being common. Each tool is progressively larger than the last, but in diminishing increments; as the stock removal is reduced, so is the tool's diamond grit size.\nThe process is similar to honing, in that the tool follows the existing center line of the bore. To make sure the tool follows the existing center line, the tool, workpiece, or both are allowed to float. Usually just the workpiece is floated, but both pieces may be floated to get the tightest tolerances; however, this greatly increases complexity. For workpieces that are larger than approximately it may be more feasible to float the tool. The process can achieve a size tolerance of and a geometry tolerance of in production.\nMachine tool.\nSingle-pass bore finishing is not usually done in a milling machine for several reasons. Firstly, most milling machines have only one spindle, so changing the tool more than four to six times can increase cycle times significantly. Secondly, most workpieces that require this process are made on horizontal machining centers (HMC), which reduces float-ability due to gravity. Thirdly, the lubrication may not be sufficient, which can lead to material build-up between diamonds, diminishing the tool's effectiveness. Finally, if any chips remain from previous operations they can ruin the tool.\nInstead, a dedicated machine tool is typically used. It has four to eight spindles and usually a rotary table. The cycle time for this type of setup is determined by the longest individual operation, which in this situation is determined by how long it takes to plunge and retract the tool through the bore. 
Throughput can be increased by completing two workpieces on each cycle; this is achieved by having two identical stations for each tool size so that two workpieces can be operated on concurrently.\nAdvantages and disadvantages.\nThere is little downtime due to tool changes because tools usually last from tens of thousands of passes to over a million. The perishable tool cost can be as low as 0.01 USD per bore for very large quantity runs. To make the process cost-effective, minimum runs would be on the order of one to two hundred parts with several runs each year.\nSingle-pass bore finishing is not well suited for blind holes because the tool has a tapered lead on it which prevents the bottom of the hole from being finished. The process can be performed on blind holes, but it requires an alternative tool design and suitable manufacturing conditions. A better alternative is ID grinding.\nCommonly processed materials include soft and hard steels, aluminum, bronze, brass, ceramics, and chrome. Note that gummy grades of stainless steel, aluminum, and all but the hardest grades of plastic are much tougher for this process. The gumminess problem can be overcome with special oil-based cutting fluids. Also, the process does not work well on thin-walled workpieces owing to their tendency to expand when the tool is inserted.\nThis method of bore finishing is better suited for bores with relatively low length-to-diameter ratios, usually less than 2:1. However, if there are cross-holes, or other interruptions in the bore, then a ratio greater than 2:1 is possible, because swarf and fluids may be expelled via these routes. This process is also not well suited for surfaces that require cross-hatching.", "Automation-Control": 0.8275164366, "Qwen2": "Yes"} {"id": "3269334", "revid": "42584677", "url": "https://en.wikipedia.org/wiki?curid=3269334", "title": "Plant operator", "text": "A plant operator is an employee who supervises the operation of an industrial plant. 
The term is usually applied to workers employed in utilities, wastewater treatment plants, power plants or chemical plants such as gas extraction facilities, petrochemical or oil refineries.\nModern industrial plants are generally highly automated, with control of the plant's processes centralised in a control room from which valves, gauges, alarms and switches may be operated. Employees working in these environments are sometimes known as control room, panel or board operators - conversely, workers carrying out field operations may be known as 'outside operators'. Generally, operators are assigned to a particular unit, on which they are responsible for a certain function or area of equipment. Operators are also often responsible for ensuring work is being done in a safe manner, including managing 'permit to work' systems covering other workers.", "Automation-Control": 0.6453838348, "Qwen2": "Yes"} {"id": "3270224", "revid": "1170500315", "url": "https://en.wikipedia.org/wiki?curid=3270224", "title": "List of SIP software", "text": "This list of SIP software documents notable software applications which use Session Initiation Protocol (SIP) as a voice over IP (VoIP) protocol.\nServers.\nFree and open-source license.\nA SIP server, also known as a SIP proxy, manages all SIP calls within a network and takes responsibility for receiving requests from user agents for the purpose of placing and terminating calls.", "Automation-Control": 0.9910950661, "Qwen2": "Yes"} {"id": "1273491", "revid": "6727347", "url": "https://en.wikipedia.org/wiki?curid=1273491", "title": "Exponential stability", "text": "In control theory, a continuous linear time-invariant system (LTI) is exponentially stable if and only if the system has eigenvalues (i.e., the poles of input-to-output systems) with strictly negative real parts. (i.e., in the left half of the complex plane). 
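As a sketch, this continuous-time eigenvalue criterion can be checked numerically. The helper below and the two 2×2 matrices (a damped and an undamped harmonic oscillator in state-space form) are illustrative examples, not systems taken from the article:

```python
import cmath

def eigenvalues_2x2(a, b, c, d):
    # Roots of the characteristic polynomial
    # lambda^2 - (a + d)*lambda + (a*d - b*c) = 0 of [[a, b], [c, d]].
    tr, det = a + d, a * d - b * c
    disc = cmath.sqrt(tr * tr - 4 * det)
    return (tr + disc) / 2, (tr - disc) / 2

def is_exponentially_stable_ct(a, b, c, d):
    # Continuous-time LTI: exponentially stable iff every eigenvalue
    # has a strictly negative real part.
    return all(lam.real < 0 for lam in eigenvalues_2x2(a, b, c, d))

# Damped oscillator x'' + x' + x = 0: eigenvalues (-1 ± i*sqrt(3))/2.
print(is_exponentially_stable_ct(0, 1, -1, -1))  # True
# Undamped oscillator x'' + x = 0: poles at ±i, on the imaginary axis.
print(is_exponentially_stable_ct(0, 1, -1, 0))   # False
```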
A discrete-time input-to-output LTI system is exponentially stable if and only if the poles of its transfer function lie strictly within the unit circle centered on the origin of the complex plane. A system that is not LTI is exponentially stable if its state converges to equilibrium at least as fast as some exponentially decaying bound.\nExponential stability is a form of asymptotic stability, valid for more general dynamical systems.\nPractical consequences.\nAn exponentially stable LTI system is one that will not \"blow up\" (i.e., give an unbounded output) when given a finite input or non-zero initial condition. Moreover, if the system is given a fixed, finite input (i.e., a step), then any resulting oscillations in the output will decay at an exponential rate, and the output will tend asymptotically to a new final, steady-state value. If the system is instead given a Dirac delta impulse as input, then induced oscillations will die away and the system will return to its previous value. If oscillations do not die away, or the system does not return to its original output when an impulse is applied, the system is instead marginally stable.\nExample exponentially stable LTI systems.\nThe graph on the right shows the impulse response of two similar systems. The green curve is the response of the system with impulse response formula_1, while the blue represents the system formula_2. Although one response is oscillatory, both return to the original value of 0 over time. \nReal-world example.\nImagine putting a marble in a ladle. It will settle itself into the lowest point of the ladle and, unless disturbed, will stay there. Now imagine giving the marble a push, which is an approximation to a Dirac delta impulse. The marble will roll back and forth but eventually resettle in the bottom of the ladle. 
Drawing the horizontal position of the marble over time would give a gradually diminishing sinusoid rather like the blue curve in the image above.\nA step input in this case requires supporting the marble away from the bottom of the ladle, so that it cannot roll back. It will stay in the same position and will not, as would be the case if the system were only marginally stable or entirely unstable, continue to move away from the bottom of the ladle under this constant force equal to its weight.\nIt is important to note that in this example the system is not stable for all inputs. Give the marble a big enough push, and it will fall out of the ladle, stopping only when it reaches the floor. For some systems, therefore, it is proper to state that a system is exponentially stable \"over a certain range of inputs\".", "Automation-Control": 0.9956436157, "Qwen2": "Yes"} {"id": "1273629", "revid": "38627444", "url": "https://en.wikipedia.org/wiki?curid=1273629", "title": "Marginal stability", "text": "In the theory of dynamical systems and control theory, a linear time-invariant system is marginally stable if it is neither asymptotically stable nor unstable. Roughly speaking, a system is stable if it always returns to and stays near a particular state (called the steady state), and is unstable if it goes farther and farther away from any state, without being bounded. A marginal system, sometimes referred to as having neutral stability, is between these two types: when displaced, it does not return to near a common steady state, nor does it go away from where it started without limit.\nMarginal stability, like instability, is a feature that control theory seeks to avoid; we wish that, when perturbed by some external force, a system will return to a desired state. 
This necessitates the use of appropriately designed control algorithms.\nIn econometrics, the presence of a unit root in observed time series, rendering them marginally stable, can lead to invalid regression results regarding effects of the independent variables upon a dependent variable, unless appropriate techniques are used to convert the system to a stable system.\nContinuous time.\nA homogeneous continuous linear time-invariant system is marginally stable if and only if the real part of every pole (eigenvalue) in the system's transfer function is non-positive, one or more poles have zero real part, and all poles with zero real part are simple roots (i.e. the poles on the imaginary axis are all distinct from one another). In contrast, if all the poles have strictly negative real parts, the system is instead asymptotically stable. If the system is neither stable nor marginally stable, it is unstable.\nIf the system is in state space representation, marginal stability can be analyzed by deriving the Jordan normal form: the system is marginally stable if and only if the Jordan blocks corresponding to poles with zero real part are scalar.\nDiscrete time.\nA homogeneous discrete-time linear time-invariant system is marginally stable if and only if the greatest magnitude of any of the poles (eigenvalues) of the transfer function is 1, and the poles with magnitude equal to 1 are all distinct. That is, the transfer function's spectral radius is 1. If the spectral radius is less than 1, the system is instead asymptotically stable.\nA simple example involves a single first-order linear difference equation: suppose a state variable \"x\" evolves according to x_t = a·x_(t−1), with parameter \"a\" > 0. If the system is perturbed to the value x_0, its subsequent sequence of values is x_0, a·x_0, a^2·x_0, and so on. If \"a\" < 1 these numbers shrink toward zero; if \"a\" > 1 they get larger and larger without bound. 
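The regimes of this difference equation can be verified with a short simulation; the values of \"a\" and the initial perturbation below are illustrative, and the borderline case is included as a sketch of marginal behavior:

```python
def trajectory(a, x0, steps):
    # Iterate the first-order difference equation x_t = a * x_(t-1)
    # starting from a perturbation x0.
    xs = [x0]
    for _ in range(steps):
        xs.append(a * xs[-1])
    return xs

print(trajectory(0.5, 1.0, 4))  # decays: [1.0, 0.5, 0.25, 0.125, 0.0625]
print(trajectory(2.0, 1.0, 4))  # grows without bound: [1.0, 2.0, 4.0, 8.0, 16.0]
print(trajectory(1.0, 1.0, 4))  # borderline: stays at [1.0, 1.0, 1.0, 1.0, 1.0]
```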
But if \"a\" = 1, the numbers do neither of these: instead, all future values of \"x\" equal the perturbed value. Thus the case \"a\" = 1 exhibits marginal stability.\nSystem response.\nA marginally stable system is one that, if given an impulse of finite magnitude as input, will not \"blow up\" and give an unbounded output, but neither will the output return to zero. A bounded offset or oscillations in the output will persist indefinitely, and so there will in general be no final steady-state output. If a continuous system is given an input at a frequency equal to the frequency of a pole with zero real part, the system's output will increase indefinitely (this is known as pure resonance). This explains why for a system to be BIBO stable, the real parts of the poles have to be strictly negative (and not just non-positive).\nA continuous system having imaginary poles, i.e. having zero real part in the pole(s), will produce sustained oscillations in the output. For example, an undamped second-order system such as the suspension system in an automobile (a mass–spring–damper system) from which the damper has been removed, with an ideal (frictionless) spring, will in theory oscillate forever once disturbed. Another example is a frictionless pendulum. A system with a pole at the origin is also marginally stable, but in this case there will be no oscillation in the response, as the imaginary part is also zero (\"jw\" = 0 means \"w\" = 0 rad/sec). An example of such a system is a mass on a surface with friction. When a sideways impulse is applied, the mass will move and never return to zero. 
Due to friction, however, the mass will come to rest, and the sideways movement will remain bounded.\nSince the locations of the marginal poles must be \"exactly\" on the imaginary axis or unit circle (for continuous time and discrete time systems respectively) for a system to be marginally stable, this situation is unlikely to occur in practice unless marginal stability is an inherent theoretical feature of the system.\nStochastic dynamics.\nMarginal stability is also an important concept in the context of stochastic dynamics. For example, some processes may follow a random walk, given in discrete time as x_t = x_(t−1) + e_t, where e_t is an i.i.d. error term. This equation has a unit root (a value of 1 for the eigenvalue of its characteristic equation), and hence exhibits marginal stability, so special time series techniques must be used in empirically modeling a system containing such an equation.\nMarginally stable Markov processes are those that possess null recurrent classes.", "Automation-Control": 0.9328565598, "Qwen2": "Yes"} {"id": "9055760", "revid": "46016783", "url": "https://en.wikipedia.org/wiki?curid=9055760", "title": "Sumitomo Heavy Industries", "text": " (SHI) is an integrated manufacturer of industrial machinery, automatic weaponry, ships, bridges and steel structure, equipment for environmental protection, including recycling, power transmission equipment, plastic molding machines, laser processing systems, particle accelerators, material handling systems, cancer diagnostic and treatment equipment and others.\nHistory.\nIn 1888, a company was formed to provide equipment repair services to the Besshi copper mine. Almost 50 years later, in 1934, the company incorporated as Sumitomo Machinery Co., Ltd. to manufacture machinery for the steel and transportation industries in support of that period of rapid economic growth.\nIn 1969, Sumitomo Machinery Co., Ltd. merged with Uraga Heavy Industries Co., Ltd. to create Sumitomo Heavy Industries, Ltd. 
The company continues to innovate and expand to meet the demands of new markets. Today, Sumitomo Heavy Industries manufactures injection molding machines, laser systems, semiconductor machinery and liquid crystal production machinery.\nIn 1979, the company famously built the \"Seawise Giant\", an Ultra Large Crude Carrier (ULCC) supertanker, the longest ship ever built.\nAs of 2021, it was reported that SHI has ceased making light machine guns for the JSDF due to bleak economic prospects.", "Automation-Control": 0.9914690852, "Qwen2": "Yes"} {"id": "15702071", "revid": "9215586", "url": "https://en.wikipedia.org/wiki?curid=15702071", "title": "Evolutionary acquisition of neural topologies", "text": "Evolutionary acquisition of neural topologies (EANT/EANT2) is an evolutionary reinforcement learning method that evolves both the topology and weights of artificial neural networks. It is closely related to the works of Angeline et al. and Stanley and Miikkulainen. Like the work of Angeline et al., the method uses a type of parametric mutation that comes from evolution strategies and evolutionary programming (now using the most advanced form of the evolution strategies CMA-ES in EANT2), in which adaptive step sizes are used for optimizing the weights of the neural networks. Similar to the work of Stanley (NEAT), the method starts with minimal structures which gain complexity along the evolution path.\nContribution of EANT to neuroevolution.\nDespite sharing these two properties, the method has the following important features which distinguish it from previous works in neuroevolution.\nIt introduces a genetic encoding called common genetic encoding (CGE) that handles both direct and indirect encoding of neural networks within the same theoretical framework. 
The encoding has important properties that make it suitable for evolving neural networks: \nThese properties have been formally proven.\nFor evolving the structure and weights of neural networks, an evolutionary process is used, where the \"exploration\" of structures is executed at a larger timescale (structural exploration), and the \"exploitation\" of existing structures is done at a smaller timescale (structural exploitation). In the structural exploration phase, new neural structures are developed by gradually adding new structures to an initially minimal network that is used as a starting point. In the structural exploitation phase, the weights of the currently available structures are optimized using an evolution strategy.\nPerformance.\nEANT has been tested on some benchmark problems such as the double-pole balancing problem, and the RoboCup keepaway benchmark. In all the tests, EANT was found to perform very well. Moreover, a newer version of EANT, called EANT2, was tested on a visual servoing task and found to outperform NEAT and the traditional iterative Gauss–Newton method. Further experiments include results on a classification problem ", "Automation-Control": 0.834140718, "Qwen2": "Yes"} {"id": "18437430", "revid": "7903804", "url": "https://en.wikipedia.org/wiki?curid=18437430", "title": "Exploration problem", "text": "In robotics, the exploration problem deals with the use of a robot to maximize knowledge of a particular area. The exploration problem arises in robotic mapping and search & rescue situations, where an environment might be dangerous or inaccessible to humans.\nOverview.\nThe exploration problem naturally arises in situations in which a robot is utilized to survey an area that is dangerous or inaccessible for humans. The field of robotic exploration draws on information gathering and decision theory, and has been studied as far back as the 1950s. 
\nThe earliest work in robotic exploration was done in the context of simple finite state automata known as bandits, where algorithms were designed to distinguish and map different states in a finite state automaton. Since then, the primary emphasis has shifted to robotic systems development, where robots guided by exploration algorithms have been used to survey volcanoes, assist in search and rescue, and map abandoned mines. Current state-of-the-art systems include advanced techniques for active localization, simultaneous localization and mapping (SLAM) based exploration, and multi-agent cooperative exploration.\nInformation gain.\nThe key concept in the exploration problem is the notion of information gain, that is, the amount of knowledge acquired while pushing the frontiers. A probabilistic measure of information gain is defined by the entropy H(p) = −Σ_x p(x) log p(x).\nThe function H(p) is maximized if \"p\" is a uniform distribution and minimized when \"p\" is a point mass distribution. By minimizing the expected entropy of the belief, information gain is maximized.", "Automation-Control": 0.9283407927, "Qwen2": "Yes"} {"id": "66256", "revid": "28481209", "url": "https://en.wikipedia.org/wiki?curid=66256", "title": "Proportional–integral–derivative controller", "text": "A proportional–integral–derivative controller (PID controller or three-term controller) is a control loop mechanism employing feedback that is widely used in industrial control systems and a variety of other applications requiring continuously modulated control. A PID controller continuously calculates an \"error value\" formula_1 as the difference between a desired setpoint (SP) and a measured process variable (PV) and applies a correction based on proportional, integral, and derivative terms (denoted \"P\", \"I\", and \"D\" respectively), hence the name.\nPID systems automatically apply accurate and responsive correction to a control function. 
An everyday example is the cruise control on a car, where ascending a hill would lower speed if constant engine power were applied. The controller's PID algorithm restores the measured speed to the desired speed with minimal delay and overshoot by increasing the power output of the engine in a controlled manner.\nThe first theoretical analysis and practical application of PID was in the field of automatic steering systems for ships, developed from the early 1920s onwards. It was then used for automatic process control in the manufacturing industry, where it was widely implemented in pneumatic and then electronic controllers. Today the PID concept is used universally in applications requiring accurate and optimized automatic control.\nFundamental operation.\nThe distinguishing feature of the PID controller is the ability to use the three \"control terms\" of proportional, integral and derivative influence on the controller output to apply accurate and optimal control. The block diagram on the right shows the principles of how these terms are generated and applied. It shows a PID controller, which continuously calculates an \"error value\" formula_1 as the difference between a desired setpoint formula_3 and a measured process variable formula_4: formula_5, and applies a correction based on proportional, integral, and derivative terms. The controller attempts to minimize the error over time by adjustment of a \"control variable\" formula_6, such as the opening of a control valve, to a new value determined by a weighted sum of the control terms.\nIn this model:\nTuning – The balance of these effects is achieved by loop tuning to produce the optimal control function. The tuning constants are shown below as \"K\" and must be derived for each control application, as they depend on the response characteristics of the complete loop external to the controller. 
These are dependent on the behavior of the measuring sensor, the final control element (such as a control valve), any control signal delays, and the process itself. Approximate values of constants can usually be initially entered knowing the type of application, but they are normally refined, or tuned, by \"bumping\" the process in practice by introducing a setpoint change and observing the system response.\nControl action – The mathematical model and practical loop above both use a \"direct\" control action for all the terms, which means an increasing positive error results in an increasing positive control output correction. The system is called \"reverse\" acting if it is necessary to apply negative corrective action. For instance, if the valve in the flow loop were 100–0% open for 0–100% control output, the controller action would have to be reversed. Some process control schemes and final control elements require this reverse action. An example would be a valve for cooling water, where the fail-safe mode, in the case of signal loss, would be 100% opening of the valve; therefore 0% controller output needs to cause 100% valve opening.\nMathematical form.\nThe overall control function is u(t) = K_p·e(t) + K_i·∫_0^t e(τ) dτ + K_d·de(t)/dt, where K_p, K_i, and K_d, all non-negative, denote the coefficients for the proportional, integral, and derivative terms respectively (sometimes denoted \"P\", \"I\", and \"D\").\nIn the \"standard form\" of the equation (see later in article), K_i and K_d are respectively replaced by K_p/T_i and K_p·T_d; the advantage of this being that T_i and T_d have some understandable physical meaning, as they represent an integration time and a derivative time respectively. K_p·T_d is the time constant with which the controller will attempt to approach the set point. K_p/T_i determines how long the controller will tolerate the output being consistently above or below the set point. 
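As a sketch, the control function above can be implemented in discrete time by accumulating the error for the integral term and differencing it for the derivative term. The class and its names are illustrative, not from any particular library:

```python
class PID:
    # Discrete-time PID: u = Kp*e + Ki*sum(e)*dt + Kd*(de/dt).
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = None  # no derivative on the first sample

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Pure proportional control (Ki = Kd = 0) simply scales the error:
p_only = PID(kp=2.0, ki=0.0, kd=0.0, dt=0.1)
print(p_only.update(setpoint=1.0, measurement=0.0))  # 2.0
```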
\nSelective use of control terms.\nAlthough a PID controller has three control terms, some applications need only one or two terms to provide appropriate control. This is achieved by setting the unused parameters to zero and is called a PI, PD, P, or I controller in the absence of the other control actions. PI controllers are fairly common in applications where derivative action would be sensitive to measurement noise, but the integral term is often needed for the system to reach its target value.\nApplicability.\nThe use of the PID algorithm does not guarantee optimal control of the system or its control stability. Situations may occur where there are excessive delays: the measurement of the process value is delayed, or the control action does not apply quickly enough. In these cases lead–lag compensation may be required for the controller to be effective. The response of the controller can be described in terms of its responsiveness to an error, the degree to which the system overshoots a setpoint, and the degree of any system oscillation. But the PID controller is broadly applicable since it relies only on the response of the measured process variable, not on knowledge or a model of the underlying process.\nHistory.\nOrigins.\nContinuous control, before PID controllers were fully understood and implemented, has one of its origins in the centrifugal governor, which uses rotating weights to control a process. This was invented by Christiaan Huygens in the 17th century to regulate the gap between millstones in windmills depending on the speed of rotation, and thereby compensate for the variable speed of grain feed.\nWith the invention of the low-pressure stationary steam engine there was a need for automatic speed control, and James Watt’s self-designed \"conical pendulum\" governor, a set of revolving steel balls attached to a vertical spindle by link arms, came to be an industry standard. 
This was based on the millstone-gap control concept.\nRotating-governor speed control, however, was still variable under conditions of varying load, where the shortcoming of what is now known as proportional control alone was evident. The error between the desired speed and the actual speed would increase with increasing load. In the 19th century, the theoretical basis for the operation of governors was first described by James Clerk Maxwell in 1868 in his now-famous paper \"On Governors\". He explored the mathematical basis for control stability, and progressed a good way towards a solution, but made an appeal for mathematicians to examine the problem. The problem was examined further in 1874 by Edward Routh, Charles Sturm, and in 1895, Adolf Hurwitz, all of whom contributed to the establishment of control stability criteria.\nIn subsequent applications, speed governors were further refined, notably by American scientist Willard Gibbs, who in 1872 theoretically analyzed Watt's conical pendulum governor.\nAbout this time, the invention of the Whitehead torpedo posed a control problem that required accurate control of the running depth. Use of a depth pressure sensor alone proved inadequate, and a pendulum that measured the fore and aft pitch of the torpedo was combined with depth measurement to become the pendulum-and-hydrostat control. Pressure control provided only a proportional control that, if the control gain was too high, would become unstable and go into overshoot with considerable instability of depth-holding. The pendulum added what is now known as derivative control, which damped the oscillations by detecting the torpedo dive/climb angle and thereby the rate-of-change of depth. 
This development (named by Whitehead as \"The Secret\" to give no clue to its action) was around 1868.\nAnother early example of a PID-type controller was developed by Elmer Sperry in 1911 for ship steering, though his work was intuitive rather than mathematically-based.\nIt was not until 1922, however, that a formal control law for what we now call PID or three-term control was first developed using theoretical analysis, by Russian American engineer Nicolas Minorsky. Minorsky was researching and designing automatic ship steering for the US Navy and based his analysis on observations of a helmsman. He noted the helmsman steered the ship based not only on the current course error but also on past error, as well as the current rate of change; this was then given a mathematical treatment by Minorsky.\nHis goal was stability, not general control, which simplified the problem significantly. While proportional control provided stability against small disturbances, it was insufficient for dealing with a steady disturbance, notably a stiff gale (due to steady-state error), which required adding the integral term. Finally, the derivative term was added to improve stability and control.\nTrials were carried out on the USS \"New Mexico\", with the controllers controlling the \"angular velocity\" (not the angle) of the rudder. PI control yielded sustained yaw (angular error) of ±2°. Adding the D element yielded a yaw error of ±1/6°, better than most helmsmen could achieve.\nThe Navy ultimately did not adopt the system due to resistance by personnel. Similar work was carried out and published by several others in the 1930s.\nIndustrial control.\nThe wide use of feedback controllers did not become feasible until the development of wideband high-gain amplifiers to use the concept of negative feedback. This had been developed in telephone engineering electronics by Harold Black in the late 1920s, but not published until 1934. 
Independently, Clesson E Mason of the Foxboro Company in 1930 invented a wide-band pneumatic controller by combining the nozzle and flapper high-gain pneumatic amplifier, which had been invented in 1914, with negative feedback from the controller output. This dramatically increased the linear range of operation of the nozzle and flapper amplifier, and integral control could also be added by the use of a precision bleed valve and a bellows generating the integral term. The result was the \"Stabilog\" controller which gave both proportional and integral functions using feedback bellows. The integral term was called \"Reset\". Later the derivative term was added by a further bellows and adjustable orifice.\nFrom about 1932 onwards, the use of wideband pneumatic controllers increased rapidly in a variety of control applications. Air pressure was used for generating the controller output, and also for powering process modulating devices such as diaphragm-operated control valves. They were simple low maintenance devices that operated well in harsh industrial environments and did not present explosion risks in hazardous locations. They were the industry standard for many decades until the advent of discrete electronic controllers and distributed control systems (DCSs).\nWith these controllers, a pneumatic industry signaling standard of was established, which had an elevated zero to ensure devices were working within their linear characteristic and represented the control range of 0-100%.\nIn the 1950s, when high gain electronic amplifiers became cheap and reliable, electronic PID controllers became popular, and the pneumatic standard was emulated by 10-50 mA and 4–20 mA current loop signals (the latter became the industry standard). 
Pneumatic field actuators are still widely used because of the advantages of pneumatic energy for control valves in process plant environments.\nMost modern PID controls in industry are implemented as computer software in DCSs, programmable logic controllers (PLCs), or discrete compact controllers.\nElectronic analog controllers.\nElectronic analog PID control loops were often found within more complex electronic systems, for example, the head positioning of a disk drive, the power conditioning of a power supply, or even the movement-detection circuit of a modern seismometer. Discrete electronic analog controllers have been largely replaced by digital controllers using microcontrollers or FPGAs to implement PID algorithms. However, discrete analog PID controllers are still used in niche applications requiring high-bandwidth and low-noise performance, such as laser-diode controllers.\nControl loop example.\nConsider a robotic arm that can be moved and positioned by a control loop. An electric motor may lift or lower the arm, depending on forward or reverse power applied, but power cannot be a simple function of position because of the inertial mass of the arm, forces due to gravity, external forces on the arm such as a load to lift or work to be done on an external object.\nBy measuring the position (PV), and subtracting it from the setpoint (SP), the error (e) is found, and from it the controller calculates how much electric current to supply to the motor (MV).\nProportional.\nThe obvious method is proportional control: the motor current is set in proportion to the existing error. However, this method fails if, for instance, the arm has to lift different weights: a greater weight needs a greater force applied for the same error on the down side, but a smaller force if the error is low on the upside. 
That's where the integral and derivative terms play their part.\nIntegral.\nAn integral term increases action in relation not only to the error but also the time for which it has persisted. So, if the applied force is not enough to bring the error to zero, this force will be increased as time passes. A pure \"I\" controller could bring the error to zero, but it would be both slow to react at the start (because the action is small at first and needs time to become significant) and brutal at the end (the action increases as long as the error is positive, even if the error has started to approach zero).\nApplying too much integral when the error is small and decreasing will lead to overshoot. After overshooting, if the controller were to apply a large correction in the opposite direction and repeatedly overshoot the desired position, the output would oscillate around the setpoint in either a constant, growing, or decaying sinusoid. If the amplitude of the oscillations increases with time, the system is unstable. If they decrease, the system is stable. If the oscillations remain at a constant magnitude, the system is marginally stable.\nDerivative.\nA derivative term does not consider the magnitude of the error (meaning it cannot bring it to zero: a pure D controller cannot bring the system to its setpoint), but the rate of change of error, trying to bring this rate to zero. It aims at flattening the error trajectory into a horizontal line, damping the force applied, and so reduces overshoot (error on the other side because of too great applied force).\nControl damping.\nIn the interest of achieving a controlled arrival at the desired position (SP) in a timely and accurate way, the controlled system needs to be critically damped. A well-tuned position control system will also apply the necessary currents to the controlled motor so that the arm pushes and pulls as necessary to resist external forces trying to move it away from the required position.
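The effect of the integral term described above can be sketched with a toy simulation (the plant model, load, and gains below are illustrative assumptions, not from the article): a proportional-only controller settles with a residual error against a constant load, while adding integral action removes it.

```python
def simulate(kp, ki, steps=5000, dt=0.01, setpoint=1.0, load=0.5):
    """Toy integrator plant (e.g. the arm's drive) opposed by a constant load."""
    position, integral = 0.0, 0.0
    for _ in range(steps):
        error = setpoint - position
        integral += error * dt
        u = kp * error + ki * integral      # controller output (motor current)
        position += dt * (u - load)         # plant: output minus constant load
    return setpoint - position              # remaining error

err_p  = simulate(kp=2.0, ki=0.0)   # P only: settles at error = load/Kp = 0.25
err_pi = simulate(kp=2.0, ki=1.0)   # PI: error driven to (nearly) zero
```

With P only, the loop balances where Kp·e equals the load, leaving e = load/Kp; the integral term keeps accumulating until that residual is gone.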
The setpoint itself may be generated by an external system, such as a PLC or other computer system, so that it continuously varies depending on the work that the robotic arm is expected to do. A well-tuned PID control system will enable the arm to meet these changing requirements to the best of its capabilities.\nResponse to disturbances.\nIf a controller starts from a stable state with zero error (PV = SP), then further changes by the controller will be in response to changes in other measured or unmeasured inputs to the process that affect the process, and hence the PV. Variables that affect the process other than the MV are known as disturbances. Generally, controllers are used to reject disturbances and to implement setpoint changes. A change in load on the arm constitutes a disturbance to the robot arm control process.\nApplications.\nIn theory, a controller can be used to control any process that has a measurable output (PV), a known ideal value for that output (SP), and an input to the process (MV) that will affect the relevant PV. Controllers are used in industry to regulate temperature, pressure, force, feed rate, flow rate, chemical composition (component concentrations), weight, position, speed, and practically every other variable for which a measurement exists.\nController theory.\nThe PID control scheme is named after its three correcting terms, whose sum constitutes the manipulated variable (MV). The proportional, integral, and derivative terms are summed to calculate the output of the PID controller. Defining formula_6 as the controller output, the final form of the PID algorithm is\nwhere\nEquivalently, the transfer function in the Laplace domain of the PID controller is\nwhere formula_31 is the complex frequency.\nProportional term.\nThe proportional term produces an output value that is proportional to the current error value. 
The proportional response can be adjusted by multiplying the error by a constant \"K\"p, called the proportional gain constant.\nThe proportional term is given by\nA high proportional gain results in a large change in the output for a given change in the error. If the proportional gain is too high, the system can become unstable (see the section on loop tuning). In contrast, a small gain results in a small output response to a large input error, and a less responsive or less sensitive controller. If the proportional gain is too low, the control action may be too small when responding to system disturbances. Tuning theory and industrial practice indicate that the proportional term should contribute the bulk of the output change.\nSteady-state error.\nThe steady-state error is the difference between the desired final output and the actual one. Because a non-zero error is required to drive it, a proportional controller generally operates with a steady-state error. Steady-state error (SSE) is proportional to the process gain and inversely proportional to proportional gain. SSE may be mitigated by adding a compensating bias term to the setpoint AND output or corrected dynamically by adding an integral term.\nIntegral term.\nThe contribution from the integral term is proportional to both the magnitude of the error and the duration of the error. The integral in a PID controller is the sum of the instantaneous error over time and gives the accumulated offset that should have been corrected previously. The accumulated error is then multiplied by the integral gain (\"K\"i) and added to the controller output.\nThe integral term is given by\nThe integral term accelerates the movement of the process towards setpoint and eliminates the residual steady-state error that occurs with a pure proportional controller. 
However, since the integral term responds to accumulated errors from the past, it can cause the present value to overshoot the setpoint value (see the section on loop tuning).\nDerivative term.\nThe derivative of the process error is calculated by determining the slope of the error over time and multiplying this rate of change by the derivative gain \"K\"d. The magnitude of the contribution of the derivative term to the overall control action is termed the derivative gain, \"K\"d.\nThe derivative term is given by\nDerivative action predicts system behavior and thus improves settling time and stability of the system. An ideal derivative is not causal, so that implementations of PID controllers include an additional low-pass filtering for the derivative term to limit the high-frequency gain and noise. Derivative action is seldom used in practice though – by one estimate in only 25% of deployed controllers – because of its variable impact on system stability in real-world applications.\nLoop tuning.\n\"Tuning\" a control loop is the adjustment of its control parameters (proportional band/gain, integral gain/reset, derivative gain/rate) to the optimum values for the desired control response. Stability (no unbounded oscillation) is a basic requirement, but beyond that, different systems have different behavior, different applications have different requirements, and requirements may conflict with one another.\nEven though there are only three parameters and it is simple to describe in principle, PID tuning is a difficult problem because it must satisfy complex criteria within the limitations of PID control. 
Accordingly, there are various methods for loop tuning, and more sophisticated techniques are the subject of patents; this section describes some traditional, manual methods for loop tuning.\nDesigning and tuning a PID controller appears to be conceptually intuitive, but can be hard in practice, if multiple (and often conflicting) objectives, such as short transient and high stability, are to be achieved. PID controllers often provide acceptable control using default tunings, but performance can generally be improved by careful tuning, and performance may be unacceptable with poor tuning. Usually, initial designs need to be adjusted repeatedly through computer simulations until the closed-loop system performs or compromises as desired.\nSome processes have a degree of nonlinearity, so parameters that work well at full-load conditions do not work when the process is starting up from no load. This can be corrected by gain scheduling (using different parameters in different operating regions).\nStability.\nIf the PID controller parameters (the gains of the proportional, integral and derivative terms) are chosen incorrectly, the controlled process input can be unstable; i.e., its output diverges, with or without oscillation, and is limited only by saturation or mechanical breakage. Instability is caused by \"excess\" gain, particularly in the presence of significant lag.\nGenerally, stabilization of response is required and the process must not oscillate for any combination of process conditions and setpoints, though sometimes marginal stability (bounded oscillation) is acceptable or desired.\nMathematically, the origins of instability can be seen in the Laplace domain.\nThe closed-loop transfer function is:\nwhere formula_36 is the PID transfer function and formula_37 is the plant transfer function. A system is \"unstable\" where the closed-loop transfer function diverges for some formula_31. This happens in situations where formula_39. 
Typically, this happens when formula_40 with a 180-degree phase shift. Stability is guaranteed when formula_41 for frequencies that suffer high phase shifts. A more general formalism of this effect is known as the Nyquist stability criterion.\nOptimal behavior.\nThe optimal behavior on a process change or setpoint change varies depending on the application.\nTwo basic requirements are \"regulation\" (disturbance rejection – staying at a given setpoint) and \"command tracking\" (implementing setpoint changes). These terms refer to how well the controlled variable tracks the desired value. Specific criteria for command tracking include rise time and settling time. Some processes must not allow an overshoot of the process variable beyond the setpoint if, for example, this would be unsafe. Other processes must minimize the energy expended in reaching a new setpoint.\nOverview of tuning methods.\nThere are several methods for tuning a PID loop. The most effective methods generally involve developing some form of process model and then choosing P, I, and D based on the dynamic model parameters. Manual tuning methods can be relatively time-consuming, particularly for systems with long loop times.\nThe choice of method depends largely on whether the loop can be taken offline for tuning, and on the response time of the system. If the system can be taken offline, the best tuning method often involves subjecting the system to a step change in input, measuring the output as a function of time, and using this response to determine the control parameters.\nManual tuning.\nIf the system must remain online, one tuning method is to first set formula_42 and formula_43 values to zero. Increase the formula_44 until the output of the loop oscillates; then set formula_44 to approximately half that value for a \"quarter amplitude decay\"-type response. 
Then increase formula_42 until any offset is corrected in sufficient time for the process, but not until too great a value causes instability. Finally, increase formula_43, if required, until the loop is acceptably quick to reach its reference after a load disturbance. Too much formula_43 causes excessive response and overshoot. A fast PID loop tuning usually overshoots slightly to reach the setpoint more quickly; however, some systems cannot accept overshoot, in which case an overdamped closed-loop system is required, which in turn requires a formula_44 setting significantly less than half that of the formula_44 setting that was causing oscillation.\nZiegler–Nichols method.\nAnother heuristic tuning method is known as the Ziegler–Nichols method, introduced by John G. Ziegler and Nathaniel B. Nichols in the 1940s. As in the method above, the formula_42 and formula_43 gains are first set to zero. The proportional gain is increased until it reaches the ultimate gain, formula_53, at which the output of the loop starts to oscillate constantly. formula_53 and the oscillation period formula_55 are used to set the gains as follows:\nThese gains apply to the ideal, parallel form of the PID controller. When applied to the standard PID form, only the integral and derivative gains formula_42 and formula_43 are dependent on the oscillation period formula_55.\nCohen–Coon parameters.\nThis method was developed in 1953 and is based on a first-order + time delay model. Similar to the Ziegler–Nichols method, a set of tuning parameters were developed to yield a closed-loop response with a decay ratio of formula_59. Arguably the biggest problem with these parameters is that a small change in the process parameters could potentially cause a closed-loop system to become unstable.\nRelay (Åström–Hägglund) method.\nPublished in 1984 by Karl Johan Åström and Tore Hägglund, the relay method temporarily operates the process using bang-bang control and measures the resultant oscillations. 
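The Ziegler–Nichols rules above map the measured ultimate gain formula_53 and period formula_55 to controller gains; a small helper using the widely published PID-row factors (0.6·Ku, Ti = Tu/2, Td = Tu/8) might look like:

```python
def ziegler_nichols_pid(ku, tu):
    """Classic Z-N 'PID' row: returns ideal parallel-form gains (Kp, Ki, Kd)."""
    kp = 0.6 * ku        # proportional gain
    ti = tu / 2.0        # integral time
    td = tu / 8.0        # derivative time
    return kp, kp / ti, kp * td   # Ki = Kp/Ti, Kd = Kp*Td

kp, ki, kd = ziegler_nichols_pid(ku=10.0, tu=2.0)
# → Kp = 6.0, Ki = 6.0, Kd = 1.5
```

As the text notes, these values are for the ideal parallel form; for the standard form only the integral and derivative times depend on formula_55.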
The output is switched (as if by a relay, hence the name) between two values of the control variable. The values must be chosen so the process will cross the setpoint, but they need not be 0% and 100%; by choosing suitable values, dangerous oscillations can be avoided.\nAs long as the process variable is below the setpoint, the control output is set to the higher value. As soon as it rises above the setpoint, the control output is set to the lower value. Ideally, the output waveform is nearly square, spending equal time above and below the setpoint. The period and amplitude of the resultant oscillations are measured, and used to compute the ultimate gain and period, which are then fed into the Ziegler–Nichols method.\nSpecifically, the ultimate period formula_55 is assumed to be equal to the observed period, and the ultimate gain is computed as formula_61, where \"a\" is the amplitude of the process variable oscillation and \"b\" is the amplitude of the control output change which caused it.\nThere are numerous variants on the relay method.\nFirst order with dead time model.\nThe transfer function for a first-order process, with dead time, is:\nformula_62\nwhere kp is the process gain, τp is the time constant, θ is the dead time, and u(s) is a step change input. Converting this transfer function to the time domain results in:\nformula_63\nusing the same parameters found above.\nIt is important when using this method to apply a large enough step change input that the output can be measured; however, too large a step change can affect the process stability. Additionally, a larger step change ensures that the output does not change due to a disturbance (for best results, try to minimize disturbances when performing the step test).\nOne way to determine the parameters for the first-order process is using the 63.2% method. In this method, the process gain (kp) is equal to the change in output divided by the change in input.
The dead time (θ) is the amount of time between when the step change occurred and when the output first changed. The time constant (τp) is the amount of time it takes for the output to reach 63.2% of the new steady-state value after the step change. One downside to using this method is that it can take a while to reach a new steady-state value if the process has large time constants.\nTuning software.\nMost modern industrial facilities no longer tune loops using the manual calculation methods shown above. Instead, PID tuning and loop optimization software are used to ensure consistent results. These software packages gather data, develop process models, and suggest optimal tuning. Some software packages can even develop tuning by gathering data from reference changes.\nMathematical PID loop tuning induces an impulse in the system and then uses the controlled system's frequency response to design the PID loop values. In loops with response times of several minutes, mathematical loop tuning is recommended, because trial and error can take days just to find a stable set of loop values. Optimal values are harder to find. Some digital loop controllers offer a self-tuning feature in which very small setpoint changes are sent to the process, allowing the controller itself to calculate optimal tuning values.\nAnother approach calculates initial values via the Ziegler–Nichols method, and uses a numerical optimization technique to find better PID coefficients.\nOther formulas are available to tune the loop according to different performance criteria. Many patented formulas are now embedded within PID tuning software and hardware modules.\nAdvances in automated PID loop tuning software also deliver algorithms for tuning PID Loops in a dynamic or non-steady state (NSS) scenario. 
The software models the dynamics of a process, through a disturbance, and calculates PID control parameters in response.\nLimitations.\nWhile PID controllers are applicable to many control problems, and often perform satisfactorily without any improvements or only coarse tuning, they can perform poorly in some applications and do not in general provide \"optimal\" control. The fundamental difficulty with PID control is that it is a feedback control system, with \"constant\" parameters, and no direct knowledge of the process, and thus overall performance is reactive and a compromise. While PID control is the best controller for an observer without a model of the process, better performance can be obtained by overtly modeling the actor of the process without resorting to an observer.\nPID controllers, when used alone, can give poor performance when the PID loop gains must be reduced so that the control system does not overshoot, oscillate or hunt about the control setpoint value. They also have difficulties in the presence of non-linearities, may trade off regulation versus response time, do not react to changing process behavior (say, the process changes after it has warmed up), and have lag in responding to large disturbances.\nThe most significant improvement is to incorporate feed-forward control with knowledge about the system, and to use the PID only to control error. Alternatively, PIDs can be modified in more minor ways, such as by changing the parameters (either gain scheduling in different use cases or adaptively modifying them based on performance), improving measurement (higher sampling rate, precision, and accuracy, and low-pass filtering if necessary), or cascading multiple PID controllers.\nLinearity and symmetry.\nPID controllers work best when the loop to be controlled is linear and symmetric.
Thus, their performance in non-linear and asymmetric systems is degraded.\nA non-linear valve, for instance, in a flow control application, will result in variable loop sensitivity, requiring dampened action to prevent instability. One solution is the use of the valve's non-linear characteristic in the control algorithm to compensate for this.\nAn asymmetric application, for example, is temperature control in HVAC systems using only active heating (via a heating element), where there is only passive cooling available. When it is desired to lower the controlled temperature the heating output is off, but there is no active cooling due to control output. Any overshoot of rising temperature can therefore only be corrected slowly; it cannot be forced downward by the control output. In this case the PID controller could be tuned to be over-damped, to prevent or reduce overshoot, but this reduces performance by increasing the settling time of a rising temperature to the set point. The inherent degradation of control quality in this application could be solved by application of active cooling.\nNoise in derivative term.\nA problem with the derivative term is that it amplifies higher frequency measurement or process noise that can cause large amounts of change in the output. It is often helpful to filter the measurements with a low-pass filter in order to remove higher-frequency noise components. As low-pass filtering and derivative control can cancel each other out, the amount of filtering is limited. Therefore, low noise instrumentation can be important. A nonlinear median filter may be used, which improves the filtering efficiency and practical performance. In some cases, the differential band can be turned off with little loss of control. 
This is equivalent to using the PID controller as a PI controller.\nModifications to the algorithm.\nThe basic PID algorithm presents some challenges in control applications that have been addressed by minor modifications to the PID form.\nIntegral windup.\nOne common problem resulting from ideal PID implementations is integral windup. Following a large change in setpoint, the integral term can accumulate an error larger than the maximal value for the regulation variable (windup); the system then overshoots, and the output continues to increase until this accumulated error is unwound. This problem can be addressed by disabling the integration until the PV has entered the controllable region, preventing the integral term from accumulating above or below pre-determined bounds, or back-calculating the integral term to constrain the controller output within feasible bounds.\nOvershooting from known disturbances.\nFor example, a PID loop is used to control the temperature of an electric resistance furnace where the system has stabilized. Now when the door is opened and something cold is put into the furnace the temperature drops below the setpoint. The integral function of the controller tends to compensate for error by introducing another error in the positive direction. This overshoot can be avoided by freezing the integral function after the door is opened, for the time the control loop typically needs to reheat the furnace.\nPI controller.\nA PI controller (proportional-integral controller) is a special case of the PID controller in which the derivative (D) of the error is not used.\nThe controller output is given by\nwhere formula_65 is the error or deviation of actual measured value (PV) from the setpoint (SP).\nA PI controller can be modelled easily in software such as Simulink or Xcos using a \"flow chart\" box involving Laplace operators:\nwhere\nSetting a value for formula_70 is often a trade-off between decreasing overshoot and increasing settling time.\nThe lack of derivative action may make the system more steady in the steady state in the case of noisy data.
This is because derivative action is more sensitive to higher-frequency terms in the inputs.\nWithout derivative action, a PI-controlled system is less responsive to real (non-noise) and relatively fast alterations in state and so the system will be slower to reach setpoint and slower to respond to perturbations than a well-tuned PID system may be.\nDeadband.\nMany PID loops control a mechanical device (for example, a valve). Mechanical maintenance can be a major cost and wear leads to control degradation in the form of either stiction or backlash in the mechanical response to an input signal. The rate of mechanical wear is mainly a function of how often a device is activated to make a change. Where wear is a significant concern, the PID loop may have an output deadband to reduce the frequency of activation of the output (valve). This is accomplished by modifying the controller to hold its output steady if the change would be small (within the defined deadband range). The calculated output must leave the deadband before the actual output will change.\nSetpoint step change.\nThe proportional and derivative terms can produce excessive movement in the output when a system is subjected to an instantaneous step increase in the error, such as a large setpoint change. In the case of the derivative term, this is due to taking the derivative of the error, which is very large in the case of an instantaneous step change. As a result, some PID algorithms incorporate modifications such as setpoint ramping (moving the setpoint gradually from its old value to the new one using a linear or first-order ramp), basing the derivative term on the process variable rather than on the error, or setpoint weighting (applying different multipliers to the setpoint in the proportional and derivative terms).\nFeed-forward.\nThe control system performance can be improved by combining the feedback (or closed-loop) control of a PID controller with feed-forward (or open-loop) control. Knowledge about the system (such as the desired acceleration and inertia) can be fed forward and combined with the PID output to improve the overall system performance. The feed-forward value alone can often provide the major portion of the controller output.
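A minimal sketch of this combination (a velocity loop with acceleration feed-forward; the names, gains, and feed-forward scale factor are illustrative assumptions, not from the article):

```python
def controller_output(target_vel, desired_accel, measured_vel, state,
                      kp=2.0, ki=0.5, kff=1.0, dt=0.01):
    """Feed-forward on the commanded acceleration plus a PI correction."""
    error = target_vel - measured_vel
    state["i"] += error * dt
    feedback = kp * error + ki * state["i"]   # closed-loop correction
    feedforward = kff * desired_accel         # open-loop term from the motion profile
    return feedforward + feedback

state = {"i": 0.0}
u = controller_output(target_vel=1.0, desired_accel=3.0,
                      measured_vel=1.0, state=state)
# with zero velocity error, the output is pure feed-forward
```

Note that the feed-forward path never sees the measurement, so it cannot destabilize the loop; the feedback terms only work on the residual error.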
The PID controller primarily has to compensate for whatever difference or \"error\" remains between the setpoint (SP) and the system response to the open-loop control. Since the feed-forward output is not affected by the process feedback, it can never cause the control system to oscillate, thus improving the system response without affecting stability. Feed forward can be based on the setpoint and on extra measured disturbances. Setpoint weighting is a simple form of feed forward.\nFor example, in most motion control systems, in order to accelerate a mechanical load under control, more force is required from the actuator. If a velocity loop PID controller is being used to control the speed of the load and command the force being applied by the actuator, then it is beneficial to take the desired instantaneous acceleration, scale that value appropriately and add it to the output of the PID velocity loop controller. This means that whenever the load is being accelerated or decelerated, a proportional amount of force is commanded from the actuator regardless of the feedback value. The PID loop in this situation uses the feedback information to change the combined output to reduce the remaining difference between the process setpoint and the feedback value. Working together, the combined open-loop feed-forward controller and closed-loop PID controller can provide a more responsive control system.\nBumpless operation.\nPID controllers are often implemented with a \"bumpless\" initialization feature that recalculates the integral accumulator term to maintain a consistent process output through parameter changes. 
A partial implementation is to store the integral gain times the error rather than storing the error and postmultiplying by the integral gain, which prevents discontinuous output when the I gain is changed, but not the P or D gains.\nOther improvements.\nIn addition to feed-forward, PID controllers are often enhanced through methods such as PID gain scheduling (changing parameters in different operating conditions), fuzzy logic, or computational verb logic. Further practical application issues can arise from instrumentation connected to the controller. A high enough sampling rate, measurement precision, and measurement accuracy are required to achieve adequate control performance. Another method for improving the PID controller is to increase its degrees of freedom by using fractional-order integrals and derivatives; the order of the integrator and differentiator adds flexibility to the controller.\nCascade control.\nOne distinctive advantage of PID controllers is that two PID controllers can be used together to yield better dynamic performance. This is called cascaded PID control. Two controllers are in cascade when they are arranged so that one regulates the set point of the other. A PID controller acts as the outer loop controller, which controls the primary physical parameter, such as fluid level or velocity. The other controller acts as the inner loop controller, which reads the output of the outer loop controller as its setpoint, usually controlling a more rapidly changing parameter, such as flow rate or acceleration. It can be mathematically proven that the working frequency of the controller is increased and the time constant of the object is reduced by using cascaded PID controllers.\nFor example, a temperature-controlled circulating bath has two PID controllers in cascade, each with its own thermocouple temperature sensor.
The outer controller controls the temperature of the water using a thermocouple located far from the heater, where it accurately reads the temperature of the bulk of the water. The error term of this PID controller is the difference between the desired bath temperature and measured temperature. Instead of controlling the heater directly, the outer PID controller sets a heater temperature goal for the inner PID controller. The inner PID controller controls the temperature of the heater using a thermocouple attached to the heater. The inner controller's error term is the difference between this heater temperature setpoint and the measured temperature of the heater. Its output controls the actual heater to stay near this setpoint.\nThe proportional, integral, and differential terms of the two controllers will be very different. The outer PID controller has a long time constant – all the water in the tank needs to heat up or cool down. The inner loop responds much more quickly. Each controller can be tuned to match the physics of the system \"it\" controls – heat transfer and thermal mass of the whole tank or of just the heater – giving better total response.\nAlternative nomenclature and forms.\nStandard versus parallel (ideal) form.\nThe form of the PID controller most often encountered in industry, and the one most relevant to tuning algorithms is the \"standard form\". In this form the formula_44 gain is applied to the formula_72, and formula_73 terms, yielding:\nwhere\nIn this standard form, the parameters have a clear physical meaning. In particular, the inner summation produces a new single error value which is compensated for future and past errors. The proportional error term is the current error. The derivative components term attempts to predict the error value at formula_76 seconds (or samples) in the future, assuming that the loop control remains unchanged. 
The integral component adjusts the error value to compensate for the sum of all past errors, with the intention of completely eliminating them in formula_75 seconds (or samples). The resulting compensated single error value is then scaled by the single gain formula_44 to compute the control variable.\nIn the parallel form, shown in the controller theory section\nthe gain parameters are related to the parameters of the standard form through formula_81 and formula_82. This parallel form, where the parameters are treated as simple gains, is the most general and flexible form. However, it is also the form where the parameters have the weakest relationship to physical behaviors and is generally reserved for theoretical treatment of the PID controller. The standard form, despite being slightly more complex mathematically, is more common in industry.\nReciprocal gain, a.k.a. proportional band.\nIn many cases, the manipulated variable output by the PID controller is a dimensionless fraction between 0 and 100% of some maximum possible value, and the translation into real units (such as pumping rate or watts of heater power) is outside the PID controller. The process variable, however, is in dimensioned units such as temperature. It is common in this case to express the gain formula_44 not as \"output per degree\", but rather in the reciprocal form of a \"proportional band\" formula_84, which is \"degrees per full output\": the range over which the output changes from 0 to 1 (0% to 100%). Beyond this range, the output is saturated, full-off or full-on. The narrower this band, the higher the proportional gain.\nBasing derivative action on PV.\nIn most commercial control systems, derivative action is based on process variable rather than error. That is, a change in the setpoint does not affect the derivative action. This is because the digitized version of the algorithm produces a large unwanted spike when the setpoint is changed. 
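This modification amounts to a one-line change in an update loop (a sketch with illustrative names and gains): the derivative is taken on the measurement, with its sign flipped relative to the error derivative, so a setpoint step never enters the D term.

```python
def pid_update(setpoint, pv, state, kp=1.0, ki=0.1, kd=0.5, dt=0.1):
    error = setpoint - pv
    state["i"] += ki * error * dt
    # derivative on measurement: -d(PV)/dt instead of d(error)/dt,
    # so a setpoint step produces no derivative kick
    d_pv = (pv - state["prev_pv"]) / dt
    state["prev_pv"] = pv
    return kp * error + state["i"] - kd * d_pv

state = {"i": 0.0, "prev_pv": 0.0}
u1 = pid_update(setpoint=0.0, pv=0.0, state=state)
u2 = pid_update(setpoint=5.0, pv=0.0, state=state)  # setpoint step: no D spike
```

With derivative-on-error, the same step would add an extra Kd·(5 − 0)/dt = 25 to the output; here only the P and I terms respond.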
If the setpoint is constant then changes in the PV will be the same as changes in error. Therefore, this modification makes no difference to the way the controller responds to process disturbances.\nBasing proportional action on PV.\nMost commercial control systems offer the \"option\" of also basing the proportional action solely on the process variable. This means that only the integral action responds to changes in the setpoint. The modification to the algorithm does not affect the way the controller responds to process disturbances.\nBasing proportional action on PV eliminates the instant and possibly very large change in output caused by a sudden change to the setpoint. Depending on the process and tuning this may be beneficial to the response to a setpoint step.\nKing describes an effective chart-based method.\nLaplace form.\nSometimes it is useful to write the PID regulator in Laplace transform form:\nHaving the PID controller written in Laplace form and having the transfer function of the controlled system makes it easy to determine the closed-loop transfer function of the system.\nSeries/interacting form.\nAnother representation of the PID controller is the series, or \"interacting\" form\nwhere the parameters are related to the parameters of the standard form through\nwith\nThis form essentially consists of a PD and PI controller in series. As the integral is required to calculate the controller's bias this form provides the ability to track an external bias value which is required to be used for proper implementation of multi-controller advanced control schemes.\nDiscrete implementation.\nThe analysis for designing a digital implementation of a PID controller in a microcontroller (MCU) or FPGA device requires the standard form of the PID controller to be \"discretized\". Approximations for first-order derivatives are made by backward finite differences. 
formula_6 and formula_1 are discretized with a sampling period formula_94, where k is the sample index.\nDifferentiating both sides of the PID equation using Newton's notation gives:\nformula_95\nDerivative terms are approximated as:\nSo,\nApplying the backward difference again gives:\nBy simplifying and regrouping terms of the above equation, an algorithm for an implementation of the discretized PID controller in an MCU is finally obtained:\nor:\ns.t. formula_101\nNote: this method in fact solves formula_102, where formula_103 is a constant independent of t. This constant is useful when start and stop control of the regulation loop is needed. For instance, setting Kp, Ki and Kd to 0 will keep u(t) constant. Likewise, when regulation is started on a system where the error is already close to 0 but u(t) is non-zero, it prevents the output from being driven to 0.\nPseudocode.\nHere is a simple, explicit pseudocode implementation:\n previous_error := 0\n integral := 0\n loop:\n error := setpoint − measured_value\n proportional := error\n integral := integral + error × dt\n derivative := (error − previous_error) / dt\n output := Kp × proportional + Ki × integral + Kd × derivative\n previous_error := error\n wait(dt)\n goto loop\nHere is a more compact, less explicit loop that implements the same PID algorithm:\n A0 := Kp + Ki*dt + Kd/dt\n A1 := -Kp - 2*Kd/dt\n A2 := Kd/dt\n error[2] := 0 // e(t-2)\n error[1] := 0 // e(t-1)\n error[0] := 0 // e(t)\n output := u0 // Usually the current value of the actuator\n loop:\n error[2] := error[1]\n error[1] := error[0]\n error[0] := setpoint − measured_value\n output := output + A0 * error[0] + A1 * error[1] + A2 * error[2]\n wait(dt)\n goto loop\nHere, Kp is a dimensionless number, Ki is expressed in formula_104 and Kd is expressed in s. When performing regulation where the actuator and the measured value are not in the same unit (e.g.,
temperature regulation using a motor controlling a valve), Kp, Ki and Kd may be corrected by a unit conversion factor. It may also be useful to express Ki in its reciprocal form (integration time). The above implementation also permits an I-only controller, which may be useful in some cases.\nIn the real world, this is D-to-A converted and passed into the process under control as the manipulated variable (MV). The current error is stored elsewhere for re-use in the next differentiation; the program then waits until dt seconds have passed since start, and the loop begins again, reading in new values for the PV and the setpoint and calculating a new value for the error.\nNote that for real code, the use of \"wait(dt)\" might be inappropriate because it doesn't account for time taken by the algorithm itself during the loop, or more importantly, any preemption delaying the algorithm.\nA common issue when using formula_43 is the response to the derivative of a rising or falling edge of the setpoint as shown below:\nA typical workaround is to filter the derivative action using a low-pass filter of time constant formula_106 where formula_107:\nA variant of the above algorithm using an infinite impulse response (IIR) filter for the derivative:\n A0 := Kp + Ki*dt\n A1 := -Kp\n error[2] := 0 // e(t-2)\n error[1] := 0 // e(t-1)\n error[0] := 0 // e(t)\n output := u0 // Usually the current value of the actuator\n A0d := Kd/dt\n A1d := -2.0*Kd/dt\n A2d := Kd/dt\n N := 5\n tau := Kd / (Kp*N) // IIR filter time constant\n alpha := dt / (2*tau)\n d0 := 0\n d1 := 0\n fd0 := 0\n fd1 := 0\n loop:\n error[2] := error[1]\n error[1] := error[0]\n error[0] := setpoint − measured_value\n // PI\n output := output + A0 * error[0] + A1 * error[1]\n // Filtered D\n d1 := d0\n d0 := A0d * error[0] + A1d * error[1] + A2d * error[2]\n fd1 := fd0\n fd0 := ((alpha) / (alpha + 1)) * (d0 + d1) - ((alpha - 1) / (alpha + 1)) * fd1\n output := output + fd0\n wait(dt)\n goto loop", "Automation-Control":
0.8586223722, "Qwen2": "Yes"} {"id": "66294", "revid": "7852030", "url": "https://en.wikipedia.org/wiki?curid=66294", "title": "Reinforcement learning", "text": "Reinforcement learning (RL) is an area of machine learning concerned with how intelligent agents ought to take actions in an environment in order to maximize the notion of cumulative reward. Reinforcement learning is one of three basic machine learning paradigms, alongside supervised learning and unsupervised learning.\nReinforcement learning differs from supervised learning in not needing labelled input/output pairs to be presented, and in not needing sub-optimal actions to be explicitly corrected. Instead the focus is on finding a balance between exploration (of uncharted territory) and exploitation (of current knowledge).\nThe environment is typically stated in the form of a Markov decision process (MDP), because many reinforcement learning algorithms for this context use dynamic programming techniques. The main difference between the classical dynamic programming methods and reinforcement learning algorithms is that the latter do not assume knowledge of an exact mathematical model of the MDP and they target large MDPs where exact methods become infeasible.\nIntroduction.\nDue to its generality, reinforcement learning is studied in many disciplines, such as game theory, control theory, operations research, information theory, simulation-based optimization, multi-agent systems, swarm intelligence, and statistics. 
In the operations research and control literature, reinforcement learning is called \"approximate dynamic programming,\" or \"neuro-dynamic programming.\" The problems of interest in reinforcement learning have also been studied in the theory of optimal control, which is concerned mostly with the existence and characterization of optimal solutions, and algorithms for their exact computation, and less with learning or approximation, particularly in the absence of a mathematical model of the environment. In economics and game theory, reinforcement learning may be used to explain how equilibrium may arise under bounded rationality.\nBasic reinforcement learning is modeled as a Markov decision process (MDP):\nThe purpose of reinforcement learning is for the agent to learn an optimal, or nearly-optimal, policy that maximizes the \"reward function\" or other user-provided reinforcement signal that accumulates from the immediate rewards. This is similar to processes that appear to occur in animal psychology. For example, biological brains are hardwired to interpret signals such as pain and hunger as negative reinforcements, and interpret pleasure and food intake as positive reinforcements. In some circumstances, animals can learn to engage in behaviors that optimize these rewards. This suggests that animals are capable of reinforcement learning.\nA basic reinforcement learning agent interacts with its environment in discrete time steps. At each time step, the agent receives the current state formula_10 and reward formula_11. It then chooses an action formula_12 from the set of available actions, which is subsequently sent to the environment. The environment moves to a new state formula_13 and the reward formula_14 associated with the \"transition\" formula_15 is determined.
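This discrete-time interaction loop can be sketched in Python; the toy environment and placeholder policy below are invented for illustration and are not part of any standard API:

```python
import random

class CounterEnv:
    """Invented toy environment: the state is a step counter, and the
    reward for a transition is +1 if the agent chose action 1."""
    def reset(self):
        self.t = 0
        return self.t

    def step(self, action):
        self.t += 1
        reward = 1.0 if action == 1 else 0.0
        return self.t, reward   # next state s_{t+1} and reward r_{t+1}

def choose_action(state):
    return random.choice((0, 1))   # placeholder policy

env = CounterEnv()
state = env.reset()
total_reward = 0.0
for _ in range(10):                 # discrete time steps
    action = choose_action(state)   # agent picks a_t from available actions
    state, reward = env.step(action)  # environment returns s_{t+1}, r_{t+1}
    total_reward += reward          # accumulate the immediate rewards
```

A real agent would replace `choose_action` with a learned policy that uses the observed states and rewards.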
The goal of a reinforcement learning agent is to learn a \"policy\": formula_16, formula_17 which maximizes the expected cumulative reward.\nFormulating the problem as an MDP assumes the agent directly observes the current environmental state; in this case the problem is said to have \"full observability\". If the agent only has access to a subset of states, or if the observed states are corrupted by noise, the agent is said to have \"partial observability\", and formally the problem must be formulated as a Partially observable Markov decision process. In both cases, the set of actions available to the agent can be restricted. For example, the state of an account balance could be restricted to be positive; if the current value of the state is 3 and the state transition attempts to reduce the value by 4, the transition will not be allowed.\nWhen the agent's performance is compared to that of an agent that acts optimally, the difference in performance gives rise to the notion of \"regret\". In order to act near optimally, the agent must reason about the long-term consequences of its actions (i.e., maximize future income), although the immediate reward associated with this might be negative.\nThus, reinforcement learning is particularly well-suited to problems that include a long-term versus short-term reward trade-off. It has been applied successfully to various problems, including energy storage operation, robot control, elevator scheduling, telecommunications, photovoltaic generators dispatch, backgammon, checkers and Go (AlphaGo).\nTwo elements make reinforcement learning powerful: the use of samples to optimize performance and the use of function approximation to deal with large environments. 
Thanks to these two key components, reinforcement learning can be used in large environments in the following situations:\nThe first two of these problems could be considered planning problems (since some form of model is available), while the last one could be considered to be a genuine learning problem. However, reinforcement learning converts both planning problems to machine learning problems.\nExploration.\nThe exploration vs. exploitation trade-off has been most thoroughly studied through the multi-armed bandit problem and for finite state space MDPs in Burnetas and Katehakis (1997).\nReinforcement learning requires clever exploration mechanisms; randomly selecting actions, without reference to an estimated probability distribution, shows poor performance. The case of (small) finite MDPs is relatively well understood. However, due to the lack of algorithms that scale well with the number of states (or scale to problems with infinite state spaces), simple exploration methods are the most practical.\nOne such method is formula_18-greedy, where formula_19 is a parameter controlling the amount of exploration vs. exploitation. With probability formula_20, exploitation is chosen, and the agent chooses the action that it believes has the best long-term effect (ties between actions are broken uniformly at random). Alternatively, with probability formula_18, exploration is chosen, and the action is chosen uniformly at random. 
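The formula_18-greedy rule just described can be sketched as follows; representing the action-value estimates as a dictionary is an assumption for illustration:

```python
import random

def epsilon_greedy(q_values, epsilon):
    """Select an action using the epsilon-greedy rule.
    q_values maps each available action to its current value estimate."""
    if random.random() < epsilon:            # explore with probability epsilon
        return random.choice(list(q_values))
    best = max(q_values.values())            # otherwise exploit
    best_actions = [a for a, v in q_values.items() if v == best]
    return random.choice(best_actions)       # break ties uniformly at random

q = {"left": 0.2, "right": 0.8}
action = epsilon_greedy(q, epsilon=0.1)      # usually "right", sometimes "left"
```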
formula_18 is usually a fixed parameter but can be adjusted either according to a schedule (making the agent explore progressively less), or adaptively based on heuristics.\nAlgorithms for control learning.\nEven if the issue of exploration is disregarded and even if the state were observable (assumed hereafter), the problem remains of using past experience to find out which actions lead to higher cumulative rewards.\nCriterion of optimality.\nPolicy.\nThe agent's action selection is modeled as a map called \"policy\":\nThe policy map gives the probability of taking action formula_5 when in state formula_3. There are also deterministic policies.\nState-value function.\nThe value function formula_27 is defined as the \"expected return\" starting from state formula_3, i.e. formula_29, and then successively following policy formula_30. Hence, roughly speaking, the value function estimates \"how good\" it is to be in a given state.\nwhere the random variable formula_32 denotes the return, and is defined as the sum of future discounted rewards:\nwhere formula_11 is the reward at step formula_2 and formula_36 is the discount rate. Since gamma is less than 1, events in the distant future are weighted less than events in the immediate future.\nThe algorithm must find a policy with maximum expected return. From the theory of MDPs it is known that, without loss of generality, the search can be restricted to the set of so-called \"stationary\" policies. A policy is \"stationary\" if the action-distribution returned by it depends only on the last state visited (from the observing agent's history). The search can be further restricted to \"deterministic\" stationary policies. A \"deterministic stationary\" policy deterministically selects actions based on the current state.
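Concretely, a deterministic stationary policy over a finite state set is just a lookup table from states to actions; a minimal sketch, with invented state and action names:

```python
# A deterministic stationary policy for a small finite MDP, written as a
# plain mapping from states to actions (the names are illustrative only).
policy = {
    "low_battery": "recharge",
    "full_battery": "search",
}

def act(state):
    return policy[state]   # the action depends only on the current state
```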
Since any such policy can be identified with a mapping from the set of states to the set of actions, these policies can be identified with such mappings with no loss of generality.\nBrute force.\nThe brute force approach entails two steps:\nOne problem with this is that the number of policies can be large, or even infinite. Another is that the variance of the returns may be large, which requires many samples to accurately estimate the return of each policy.\nThese problems can be ameliorated if we assume some structure and allow samples generated from one policy to influence the estimates made for others. The two main approaches for achieving this are value function estimation and direct policy search.\nValue function.\nValue function approaches attempt to find a policy that maximizes the return by maintaining a set of estimates of expected returns for some policy (usually either the \"current\" [on-policy] or the optimal [off-policy] one).\nThese methods rely on the theory of Markov decision processes, where optimality is defined in a sense that is stronger than the above one: A policy is called optimal if it achieves the best-expected return from \"any\" initial state (i.e., initial distributions play no role in this definition). Again, an optimal policy can always be found amongst stationary policies.\nTo define optimality in a formal manner, define the value of a policy formula_30 by\nwhere formula_32 stands for the return associated with following formula_30 from the initial state formula_3. Defining formula_42 as the maximum possible value of formula_43, where formula_30 is allowed to change,\nA policy that achieves these optimal values in each state is called \"optimal\". 
Clearly, a policy that is optimal in this strong sense is also optimal in the sense that it maximizes the expected return formula_46, since formula_47, where formula_48 is a state randomly sampled from the distribution formula_49 of initial states (so formula_50).\nAlthough state-values suffice to define optimality, it is useful to define action-values. Given a state formula_3, an action formula_5 and a policy formula_30, the action-value of the pair formula_54 under formula_30 is defined by\nwhere formula_32 now stands for the random return associated with first taking action formula_5 in state formula_3 and following formula_30, thereafter.\nThe theory of MDPs states that if formula_61 is an optimal policy, we act optimally (take the optimal action) by choosing the action from formula_62 with the highest value at each state, formula_3. The \"action-value function\" of such an optimal policy (formula_64) is called the \"optimal action-value function\" and is commonly denoted by formula_65. In summary, the knowledge of the optimal action-value function alone suffices to know how to act optimally.\nAssuming full knowledge of the MDP, the two basic approaches to compute the optimal action-value function are value iteration and policy iteration. Both algorithms compute a sequence of functions formula_66 (formula_67) that converge to formula_65. Computing these functions involves computing expectations over the whole state-space, which is impractical for all but the smallest (finite) MDPs. In reinforcement learning methods, expectations are approximated by averaging over samples and using function approximation techniques to cope with the need to represent value functions over large state-action spaces.\nMonte Carlo methods.\nMonte Carlo methods can be used in an algorithm that mimics policy iteration. Policy iteration consists of two steps: \"policy evaluation\" and \"policy improvement\".\nMonte Carlo is used in the policy evaluation step. 
In this step, given a stationary, deterministic policy formula_30, the goal is to compute the function values formula_70 (or a good approximation to them) for all state-action pairs formula_54. Assume (for simplicity) that the MDP is finite, that sufficient memory is available to accommodate the action-values, and that the problem is episodic: after each episode, a new one starts from some random initial state. Then, the estimate of the value of a given state-action pair formula_54 can be computed by averaging the sampled returns that originated from formula_54 over time. Given sufficient time, this procedure can thus construct a precise estimate formula_74 of the action-value function formula_75. This finishes the description of the policy evaluation step.\nIn the policy improvement step, the next policy is obtained by computing a \"greedy\" policy with respect to formula_74: given a state formula_3, this new policy returns an action that maximizes formula_78. In practice, lazy evaluation can defer the computation of the maximizing actions to when they are needed.\nProblems with this procedure include:\n1. The procedure may spend too much time evaluating a suboptimal policy.\n2. It uses samples inefficiently in that a long trajectory improves the estimate only of the \"single\" state-action pair that started the trajectory.\n3. When the returns along the trajectories have \"high variance\", convergence is slow.\n4. It works in episodic problems only.\n5. It works in small, finite MDPs only.\nTemporal difference methods.\nThe first problem is corrected by allowing the procedure to change the policy (at some or all states) before the values settle. This too may be problematic, as it might prevent convergence. Most current algorithms do this, giving rise to the class of \"generalized policy iteration\" algorithms.
Many \"actor-critic\" methods belong to this category.\nThe second issue can be corrected by allowing trajectories to contribute to any state-action pair in them. This may also help to some extent with the third problem, although a better solution when returns have high variance is Sutton's temporal difference (TD) methods, which are based on the recursive Bellman equation. The computation in TD methods can be incremental (when after each transition the memory is changed and the transition is thrown away), or batch (when the transitions are batched and the estimates are computed once based on the batch). Batch methods, such as the least-squares temporal difference method, may use the information in the samples better, while incremental methods are the only choice when batch methods are infeasible due to their high computational or memory complexity. Some methods try to combine the two approaches. Methods based on temporal differences also overcome the fourth issue.\nAnother problem specific to TD methods comes from their reliance on the recursive Bellman equation. Most TD methods have a so-called formula_79 parameter formula_80 that can continuously interpolate between Monte Carlo methods that do not rely on the Bellman equations and the basic TD methods that rely entirely on the Bellman equations. This can be effective in mitigating this issue.\nFunction approximation methods.\nIn order to address the fifth issue, \"function approximation methods\" are used. \"Linear function approximation\" starts with a mapping formula_81 that assigns a finite-dimensional vector to each state-action pair. Then, the action values of a state-action pair formula_54 are obtained by linearly combining the components of formula_83 with some \"weights\" formula_84:\nThe algorithms then adjust the weights, instead of adjusting the values associated with the individual state-action pairs.
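A minimal sketch of this linear scheme, in which the featurization and the weight vector are invented for illustration:

```python
# Linear action-value approximation: phi maps a state-action pair to a
# finite-dimensional feature vector, and the estimated value is a linear
# combination of those features with a weight vector. The block-one-hot
# featurization below is a toy example, not a standard construction.
N_ACTIONS = 2
N_FEATURES = 3   # features per action

def phi(state, action):
    """Place the state's features in the slot belonging to the action."""
    features = [0.0] * (N_ACTIONS * N_FEATURES)
    features[action * N_FEATURES:(action + 1) * N_FEATURES] = state
    return features

weights = [0.5, -0.2, 0.1, 0.0, 0.3, 0.7]   # adjusted by the learning algorithm

def q_hat(state, action):
    # linear combination of the feature components with the weights
    return sum(w * f for w, f in zip(weights, phi(state, action)))

s = [1.0, 2.0, 3.0]
value = q_hat(s, 0)   # 0.5*1.0 - 0.2*2.0 + 0.1*3.0 ≈ 0.4
```

Learning then updates only the six weights, however many state-action pairs exist.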
Methods based on ideas from nonparametric statistics (which can be seen to construct their own features) have been explored.\nValue iteration can also be used as a starting point, giving rise to the Q-learning algorithm and its many variants, including deep Q-learning methods, in which a neural network is used to represent Q, with various applications in stochastic search problems.\nThe problem with using action-values is that they may need highly precise estimates of the competing action values, which can be hard to obtain when the returns are noisy, though this problem is mitigated to some extent by temporal difference methods. Using the so-called compatible function approximation method compromises generality and efficiency.\nDirect policy search.\nAn alternative method is to search directly in (some subset of) the policy space, in which case the problem becomes a case of stochastic optimization. The two approaches available are gradient-based and gradient-free methods.\nGradient-based methods (\"policy gradient methods\") start with a mapping from a finite-dimensional (parameter) space to the space of policies: given the parameter vector formula_84, let formula_87 denote the policy associated with formula_84. Defining the performance function by\nunder mild conditions this function will be differentiable as a function of the parameter vector formula_84. If the gradient of formula_91 were known, one could use gradient ascent. Since an analytic expression for the gradient is not available, only a noisy estimate is available. Such an estimate can be constructed in many ways, giving rise to algorithms such as Williams' REINFORCE method (which is known as the likelihood ratio method in the simulation-based optimization literature). Policy search methods have been used in the robotics context. Many policy search methods may get stuck in local optima (as they are based on local search).\nA large class of methods avoids relying on gradient information.
These include simulated annealing, cross-entropy search or methods of evolutionary computation. Many gradient-free methods can achieve (in theory and in the limit) a global optimum.\nPolicy search methods may converge slowly given noisy data. For example, this happens in episodic problems when the trajectories are long and the variance of the returns is large. Value-function based methods that rely on temporal differences might help in this case. In recent years, \"actor–critic methods\" have been proposed and performed well on various problems.\nModel-based algorithms.\nFinally, all of the above methods can be combined with algorithms that first learn a model. For instance, the Dyna algorithm learns a model from experience, and uses that model to provide additional modelled transitions for a value function, in addition to the real transitions. Such methods can sometimes be extended to the use of non-parametric models, such as when the transitions are simply stored and 'replayed' to the learning algorithm.\nModels can also be used in other ways than updating a value function. For instance, in model predictive control the model is used to update the behavior directly.\nTheory.\nBoth the asymptotic and finite-sample behaviors of most algorithms are well understood. Algorithms with provably good online performance (addressing the exploration issue) are known.\nEfficient exploration of MDPs is given in Burnetas and Katehakis (1997). Finite-time performance bounds have also appeared for many algorithms, but these bounds are expected to be rather loose and thus more work is needed to better understand the relative advantages and limitations.\nFor incremental algorithms, asymptotic convergence issues have been settled.
Temporal-difference-based algorithms converge under a wider set of conditions than was previously possible (for example, when used with arbitrary, smooth function approximation).\nResearch.\nResearch topics include: \nComparison of reinforcement learning algorithms.\nAssociative reinforcement learning.\nAssociative reinforcement learning tasks combine facets of stochastic learning automata tasks and supervised learning pattern classification tasks. In associative reinforcement learning tasks, the learning system interacts in a closed loop with its environment.\nDeep reinforcement learning.\nThis approach extends reinforcement learning by using a deep neural network, without explicitly designing the state space. The work on learning ATARI games by Google DeepMind increased attention to deep reinforcement learning, or end-to-end reinforcement learning.\nAdversarial deep reinforcement learning.\nAdversarial deep reinforcement learning is an active area of research in reinforcement learning focusing on vulnerabilities of learned policies. In this research area, some studies initially showed that reinforcement learning policies are susceptible to imperceptible adversarial manipulations. While some methods have been proposed to overcome these susceptibilities, the most recent studies have shown that these proposed solutions are far from providing an accurate representation of the current vulnerabilities of deep reinforcement learning policies.\nFuzzy reinforcement learning.\nBy introducing fuzzy inference in RL, approximating the state-action value function with fuzzy rules in continuous space becomes possible. The IF–THEN form of fuzzy rules makes this approach suitable for expressing the results in a form close to natural language.
Extending FRL with Fuzzy Rule Interpolation allows the use of reduced-size sparse fuzzy rule-bases to emphasize cardinal rules (most important state-action values).\nInverse reinforcement learning.\nIn inverse reinforcement learning (IRL), no reward function is given. Instead, the reward function is inferred given an observed behavior from an expert. The idea is to mimic observed behavior, which is often optimal or close to optimal.\nSafe reinforcement learning.\nSafe reinforcement learning (SRL) can be defined as the process of learning policies that maximize the expectation of the return in problems in which it is important to ensure reasonable system performance and/or respect safety constraints during the learning and/or deployment processes.", "Automation-Control": 0.6576714516, "Qwen2": "Yes"} {"id": "23898325", "revid": "9755426", "url": "https://en.wikipedia.org/wiki?curid=23898325", "title": "Yaw bearing", "text": "The yaw bearing is the most crucial and cost-intensive component of a yaw system found on modern horizontal axis wind turbines. The yaw bearing must cope with enormous static and dynamic loads and moments during the wind turbine operation, and provide smooth rotation characteristics for the orientation of the nacelle under all weather conditions. It also has to be corrosion- and wear-resistant and extremely long-lasting. It should last for the service life of the wind turbine while being cost effective.\nHistory.\nWindmills of the 18th century began implementing rotatable nacelles to capture wind coming from different directions. The yaw systems of these \"primitive\" windmills were surprisingly similar to the ones on modern wind turbines. The nacelles rotated by means of wind-driven yaw drives known as fantails, or by animal power, and were mounted on the windmill towers by means of an axial gliding bearing. \nThese gliding bearings consisted of multiple gliding blocks fixed on the windmill tower structure.
These blocks maintained sliding contact with a gliding ring on the nacelle. The gliding blocks were wooden cube-like pieces with a convex gliding surface covered with animal fat, or even lined with copper (or brass) sheet as a friction reduction means. These wooden blocks were fixed in wooden slots, carved in the wooden bearing substructure, by means of nails or wedges and were carefully leveled to create a flat surface where the nacelle gliding ring could glide. Despite the lubrication, the gliding blocks would wear quite often and would have to be exchanged. This operation was relatively simple due to the wedge-based connection between substructure and gliding blocks. The gliding blocks were further locked via movable locking devices which, in a different form, remain as a technical solution in modern gliding yaw bearings. \nThe gliding ring of the windmill nacelle was made from multiple wooden parts and, despite the old construction techniques, was usually quite level, allowing the nacelle to rotate smoothly around the tower axis. \nThe \"hybrid yaw bearing system\" combines the solutions old windmills used. This system comprises multiple removable radial gliding pads in combination with an axial roller bearing.\nTypes.\nThe main categories of yaw bearings are: \nRoller yaw bearing.\nThe roller yaw bearing is a common technical yaw bearing solution followed by many wind turbine manufacturers as it offers low turning friction and smooth rotation of the nacelle. The low turning friction permits the implementation of slightly smaller yaw drives (compared to the gliding bearing solution), but on the other hand requires a yaw braking system. \nSome manufacturers use a plurality of smaller yaw drives (usually six) to facilitate easy replacement. Such a configuration with a plurality of yaw drives often offers the possibility of active yaw braking using differential torque from the yaw drives.
In this case half of the yaw drives apply a small amount of torque for clockwise rotation and the other half apply torque in the opposite direction and then activate the internal magnetic brakes of the electric motor. In this way the pinion-gear rim backlash is eliminated and the nacelle is fixed in place.\nGliding yaw bearing.\nThe gliding yaw bearing is a combined axial and radial bearing, which serves as a rotatable connection between the wind turbine nacelle and the tower. Contrary to the old windmill concept, the modern yaw bearings support the nacelle also from below, thus restraining the nacelle from being rotated about the Y-axis, due to the moments induced by the upper half of the rotor sweep disk, and about the X-axis, due to the torque of the drive train (i.e. rotor, shaft, generator, etc.). \nPrincipally, the simplest way to accomplish the yaw bearing tasks with gliding elements is with two gliding planes for the axial loads (top and bottom) and a radial gliding surface for the radial loads. Consequently, the gliding yaw bearing comprises three general surfaces covered with multiple gliding pads. These gliding pads come in sliding contact with a steel disk, which is usually equipped with gear teeth to form a gliding-disk/gear-rim. The teeth may be located at the inner or the outer cylindrical face of the disk, while the arrangement of the gliding pads and their exact number and location vary strongly among the existing designs. To assemble the gliding yaw bearings, their cages are split into several segments that are assembled together during wind turbine installation or manufacturing. \nIn its simplest form, the gliding yaw bearing uses pads (usually made out of polymers) distributed around the three contact surfaces to provide a proper guiding system for the radial and axial movement with a relatively low friction coefficient. Such systems are economical and very robust but do not allow individual adjustment of the axial and radial gliding elements.
Individual adjustment importantly minimizes the axial and radial \"play\" of the gliding bearing due to manufacturing tolerances as well as due to wear of the gliding pads during operation. \nTo solve this problem, yaw systems incorporate pre-tensioned gliding bearings. These bearings have gliding pads that are pressed via pressure elements against the gliding disk to stabilize the nacelle against undesirable movement. The pressure elements can be simple steel springs, pneumatic or hydraulic pre-tension elements, etc. The use of pneumatic or hydraulic pre-tension elements allows active control of the yaw bearing pre-tension, which provides a yaw brake function.\nWear and lubrication.\nIn all gliding bearings, wear is a concern, as is lubrication. Conventional gliding yaw bearings incorporate gliding elements manufactured out of polymer plastics such as polyoxymethylene plastic (POM) or polyamide (PA). To reduce friction and wear, and to avoid stick-slip effects (often present in such high-friction, slow-moving systems), lubrication is often introduced. This solution generally solves the gliding issues, but introduces more components to the system and increases the general complexity (e.g., difficult maintenance procedures for removing used lubricant). Some wind turbine manufacturers now use self-lubricating gliding elements instead of a central lubrication system. These gliding elements are manufactured from low-friction materials or composites (e.g., polytetrafluoroethylene (PTFE, Teflon)) that allow reliable operation of dry (non-lubricated) gliding yaw systems.\nMaintenance and repair.\nDespite the fact that the gliding yaw bearings and their components are designed and constructed to last the service life of the wind turbine, it should be possible to replace worn-out yaw bearing gliding elements or other components of the yaw system. To allow for replaceability of worn-out components, the yaw systems are designed in segments.
Usually one or more gliding planes comprise several sub-elements that contain a number of gliding elements (radial, axial, or a combination). These sub-elements can be individually removed and repaired, refitted, or replaced. In this way the yaw bearing can be serviced without disassembling the whole gliding yaw bearing (which, in the case of a roller yaw bearing, would require disassembly of the whole wind turbine). This reparability, offered by the segmented design of the gliding yaw bearing, is one of the most important advantages of this system over the roller yaw bearing solution. \nThe only remaining issue is the replacement of the gliding elements of the gliding yaw bearing surface that is not segmented. This is usually the top axial surface of the gliding bearing, which constantly supports the weight of the whole nacelle-rotor assembly. For the gliding elements of this gliding surface to be replaced, the nacelle-rotor assembly must be lifted by an external crane. An alternative solution to this problem is the use of mechanical or hydraulic jacks able to partially or fully lift the nacelle-rotor assembly while the gliding yaw bearing is still in place. In this way, and by providing a small clearance between the gliding elements and the gliding disk, it is possible to exchange the sliding elements without dismantling the gliding yaw bearing.\nBearing Adjustment.\nWhen the wind turbine nacelle is positioned on the tower and the yaw bearing assembly is completed, it is necessary to adjust the pressure on the individual gliding pads of the bearing. This is necessary in order to avoid uneven wear of the gliding pads and excessive loading on some sectors of the yaw bearing. To achieve this, an adjustment mechanism is necessary that enables technicians to adjust the contact pressure of each individual gliding element in a controllable and secure way.
The most common solution is the utilization of bottom bearing plates equipped with large openings, which accommodate the adjustable gliding bearing systems. These adjustable gliding bearings comprise a gliding unit (i.e. gliding pad) and an adjustable pressure distribution plate. Several spring (pre-tension) elements are located between the gliding pad and the pressure plate. The vertical position of the pressure plates is usually controlled by an adjustment screw. This adjustment screw presses against the pressure plate while being retained by a counter-pressure support plate, fixed on the bearing assembly with strong bolts. In this way it is possible to apply various levels of contact pressure among the different gliding pads and therefore to ensure that each gliding component of the yaw bearing arrangement is performing as anticipated.", "Automation-Control": 0.8105364442, "Qwen2": "Yes"} {"id": "62443864", "revid": "28481209", "url": "https://en.wikipedia.org/wiki?curid=62443864", "title": "Neural tangent kernel", "text": "In the study of artificial neural networks (ANNs), the neural tangent kernel (NTK) is a kernel that describes the evolution of deep artificial neural networks during their training by gradient descent. It allows ANNs to be studied using theoretical tools from kernel methods.\nIn general, a kernel is a positive-semidefinite symmetric function of two inputs which represents some notion of similarity between the two inputs. The NTK is a specific kernel derived from a given neural network; in general, when the neural network parameters change during training, the NTK evolves as well. However, in the limit of large layer width the NTK becomes constant, revealing a duality between training the wide neural network and kernel methods: gradient descent in the infinite-width limit is fully equivalent to kernel gradient descent with the NTK.
As a result, using gradient descent to minimize least-square loss for neural networks yields the same mean estimator as ridgeless kernel regression with the NTK. This duality enables simple closed form equations describing the training dynamics, generalization, and predictions of wide neural networks.\nThe NTK was introduced in 2018 by Arthur Jacot, Franck Gabriel and Clément Hongler, who used it to study the convergence and generalization properties of fully connected neural networks. Later works extended the NTK results to other neural network architectures. In fact, the phenomenon behind NTK is not specific to neural networks and can be observed in generic nonlinear models, usually by a suitable scaling.\nMain results (informal).\nLet formula_1 denote the scalar function computed by a given neural network with parameters formula_2 on input formula_3. Then the neural tangent kernel is defined asformula_4Since it is written as a dot product between mapped inputs (with the gradient of the neural network function serving as the feature map), we are guaranteed that the NTK is symmetric and positive semi-definite. The NTK is thus a valid kernel function.\nConsider a fully connected neural network whose parameters are chosen i.i.d. according to any mean-zero distribution. This random initialization of formula_2 induces a distribution over formula_1 whose statistics we will analyze, both at initialization and throughout training (gradient descent on a specified dataset). We can visualize this distribution via a neural network ensemble which is constructed by drawing many times from the initial distribution over formula_1 and training each draw according to the same training procedure.\nThe number of neurons in each layer is called the layer’s width. Consider taking the width of every hidden layer to infinity and training the neural network with gradient descent (with a suitably small learning rate). 
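The NTK definition above can be checked numerically. The following sketch assumes a made-up one-hidden-layer tanh network (the width, weights, and sample inputs are illustrative, not from the source) and computes its empirical NTK as a Gram matrix of parameter gradients:

```python
import numpy as np

# Empirical NTK of a toy one-hidden-layer network f(x) = v . tanh(w * x)
# on scalar inputs (hypothetical example): K(x, x') = grad_f(x) . grad_f(x').
rng = np.random.default_rng(0)
width = 512
w = rng.standard_normal(width)                    # input-to-hidden weights
v = rng.standard_normal(width) / np.sqrt(width)   # hidden-to-output weights

def grad_f(x):
    """Gradient of f(x) = v . tanh(w * x) with respect to (w, v)."""
    h = np.tanh(w * x)
    return np.concatenate([v * (1.0 - h ** 2) * x,  # df/dw_j
                           h])                      # df/dv_j

xs = [-1.0, 0.3, 1.2]
G = np.stack([grad_f(x) for x in xs])   # rows: the feature map grad_f(x)
K = G @ G.T                             # empirical NTK Gram matrix
```

Because K is a Gram matrix of the gradient feature map, symmetry and positive semi-definiteness hold by construction, as stated above.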
In this infinite-width limit, several nice properties emerge:\nApplications.\nRidgeless kernel regression and kernel gradient descent.\nKernel methods are machine learning algorithms which use only pairwise relations between input points. Kernel methods do not depend on the concrete values of the inputs; they only depend on the relations between the inputs and other inputs (such as the training set). These pairwise relations are fully captured by the kernel function: a symmetric, positive-semidefinite function of two inputs which represents some notion of similarity between the two inputs. A fully equivalent condition is that there exists some feature map formula_14 such that the kernel function can be written as a dot product of the mapped inputsformula_15The properties of a kernel method depend on the choice of kernel function. (Note that formula_16 may have higher dimension than formula_17.) As a relevant example, consider linear regression. This is the task of estimating formula_18 given formula_19 samples formula_20 generated from formula_21, where each formula_22 is drawn according to some input data distribution. In this setup, formula_18 is the weight vector which defines the true function formula_24; we wish to use the training samples to develop a model formula_25 which approximates formula_18. We do this by minimizing the mean-square error between our model and the training samples:formula_27There exists an explicit solution for formula_25 which minimizes the squared error: formula_29, where formula_30 is the matrix whose columns are the training inputs, and formula_31 is the vector of training outputs. Then, the model can make predictions on new inputs: formula_32.\nHowever, this result can be rewritten as: formula_33. Note that this dual solution is expressed solely in terms of the inner products between inputs. 
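The dual rewriting just described can be verified on made-up data: with fewer samples than dimensions, the minimum-norm least-squares predictor and its dual form, which uses only inner products between inputs, give identical predictions (data and dimensions below are illustrative assumptions):

```python
import numpy as np

# Primal vs dual linear regression on synthetic data (illustrative sketch).
rng = np.random.default_rng(1)
d, n = 8, 5                        # more dimensions than samples
X = rng.standard_normal((d, n))    # columns are training inputs
y = rng.standard_normal(n)

# Primal: minimum-norm interpolating weight vector w = X (X^T X)^+ y
w = X @ np.linalg.pinv(X.T @ X) @ y

x_new = rng.standard_normal(d)
primal_pred = w @ x_new

# Dual: uses only inner products K = X^T X and k_i = <x_i, x_new>
K = X.T @ X
k = X.T @ x_new
dual_pred = k @ np.linalg.pinv(K) @ y
```

The two predictions agree, confirming that the fit can be expressed entirely through inner products between inputs.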
This motivates extending linear regression to settings in which, instead of directly taking inner products between inputs, we first transform the inputs according to a chosen feature map and then evaluate the inner products between the transformed inputs. As discussed above, this can be captured by a kernel function formula_34, since all kernel functions are inner products of feature-mapped inputs. This yields the ridgeless kernel regression estimator:formula_35If the kernel matrix formula_36 is singular, one uses the Moore-Penrose pseudoinverse. The regression equations are called \"ridgeless\" because they lack a ridge regularization term.\nIn this view, linear regression is a special case of kernel regression with the identity feature map: formula_37. Equivalently, kernel regression is simply linear regression in the feature space (i.e. the range of the feature map defined by the chosen kernel). Note that kernel regression is typically a \"nonlinear\" regression in the input space, which is a major strength of the algorithm.\nJust as it’s possible to perform linear regression using iterative optimization algorithms such as gradient descent, one can perform kernel regression using kernel gradient descent. This is equivalent to performing gradient descent in the feature space. It’s known that if the weight vector is initialized close to zero, least-squares gradient descent converges to the minimum-norm solution, i.e., the final weight vector has the minimum Euclidean norm of all the interpolating solutions. In the same way, kernel gradient descent yields the minimum-norm solution with respect to the RKHS norm. 
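The kernel-gradient-descent picture above can be sketched numerically. The RBF kernel, learning rate, and training set below are made-up illustrations; iterating the function-space update for the square loss drives the fit toward the interpolating (ridgeless) solution:

```python
import numpy as np

# Kernel gradient descent with a toy RBF kernel (all data made up).
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0],
              [2.0, 2.0], [-1.0, 1.0], [1.0, -2.0]])
y = np.array([1.0, -1.0, 0.5, 2.0, -0.5, 1.5])

def rbf(A, B, gamma=0.5):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

K = rbf(X, X)
alpha = np.zeros(len(y))    # start from the zero function
lr = 0.1
for _ in range(20000):
    f = K @ alpha           # current outputs on the training set
    alpha -= lr * (f - y)   # function-space gradient step for 0.5*||f - y||^2
```

At convergence the fitted values K @ alpha match the training labels, i.e. the data are interpolated, consistent with the minimum-RKHS-norm characterization above.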
This is an example of the implicit regularization of gradient descent.\nThe NTK gives a rigorous connection between the inference performed by infinite-width ANNs and that performed by kernel methods: when the loss function is the least-squares loss, the inference performed by an ANN is in expectation equal to ridgeless kernel regression with respect to the NTK. This suggests that the performance of large ANNs in the NTK parametrization can be replicated by kernel methods for suitably chosen kernels.\nOverparametrization, interpolation, and generalization.\nIn overparametrized models, the number of tunable parameters exceeds the number of training samples. In this case, the model is able to memorize (perfectly fit) the training data. Therefore, overparametrized models interpolate the training data, achieving essentially zero training error.\nKernel regression is typically viewed as a non-parametric learning algorithm, since there are no explicit parameters to tune once a kernel function has been chosen. An alternate view is to recall that kernel regression is simply linear regression in feature space, so the “effective” number of parameters is the dimension of the feature space. Therefore, studying kernels with high-dimensional feature maps can provide insights about strongly overparametrized models.\nAs an example, consider the problem of generalization. According to classical statistics, memorization should cause models to fit noisy signals in the training data, harming their performance on unseen data. To mitigate this, machine learning algorithms often introduce regularization to mitigate noise-fitting tendencies. Surprisingly, modern neural networks (which tend to be strongly overparametrized) seem to generalize well, even in the absence of explicit regularization. To study the generalization properties of overparametrized neural networks, one can exploit the infinite-width duality with ridgeless kernel regression. 
Recent works have derived equations describing the expected generalization error of high-dimensional kernel regression; these results immediately explain the generalization of sufficiently wide neural networks trained to convergence on least-squares.\nConvergence to a global minimum.\nFor a convex loss functional formula_38 with a global minimum, if the NTK remains positive-definite during training, the loss of the ANN formula_39 converges to that minimum as formula_40. This positive-definiteness property has been shown in a number of cases, yielding the first proofs that large-width ANNs converge to global minima during training.\nExtensions and limitations.\nThe NTK can be studied for various ANN architectures, in particular convolutional neural networks (CNNs), recurrent neural networks (RNNs) and transformers. In such settings, the large-width limit corresponds to letting the number of parameters grow, while keeping the number of layers fixed: for CNNs, this involves letting the number of channels grow.\nIndividual parameters of a wide neural network in the kernel regime change negligibly during training. However, this implies that infinite-width neural networks cannot exhibit feature learning, which is widely considered to be an important property of realistic deep neural networks. This is not a generic feature of infinite-width neural networks and is largely due to a specific choice of the scaling by which the width is taken to the infinite limit; indeed several works have found alternate infinite-width scaling limits of neural networks in which there is no duality with kernel regression and feature learning occurs during training. 
Others introduce a \"neural tangent hierarchy\" to describe finite-width effects, which may drive feature learning.\nNeural Tangents is a free and open-source Python library used for computing and doing inference with the infinite width NTK and neural network Gaussian process (NNGP) corresponding to various common ANN architectures. In addition, there exists a scikit-learn compatible implementation of the infinite width NTK for Gaussian processes called scikit-ntk.\nDetails.\nWhen optimizing the parameters formula_41 of an ANN to minimize an empirical loss through gradient descent, the NTK governs the dynamics of the ANN output function formula_42 throughout the training.\nCase 1: Scalar output.\nAn ANN with scalar output consists of a family of functions formula_43 parametrized by a vector of parameters formula_41.\nThe NTK is a kernel formula_45 defined byformula_46In the language of kernel methods, the NTK formula_47 is the kernel associated with the feature map formula_48. To see how this kernel drives the training dynamics of the ANN, consider a dataset formula_49 with scalar labels formula_50 and a loss function formula_51. Then the associated empirical loss, defined on functions formula_52, is given byformula_53When the ANN formula_54 is trained to fit the dataset (i.e. 
minimize formula_55) via continuous-time gradient descent, the parameters formula_56 evolve through the ordinary differential equation:\nDuring training, the ANN output function follows an evolution differential equation given in terms of the NTK:\nThis equation shows how the NTK drives the dynamics of formula_59 in the space of functions formula_60 during training.\nCase 2: Vector output.\nAn ANN with vector output of size formula_61 consists of a family of functions formula_62 parametrized by a vector of parameters formula_41.\nIn this case, the NTK formula_64 is a matrix-valued kernel, with values in the space of formula_65 matrices, defined byformula_66Empirical risk minimization proceeds as in the scalar case, with the difference being that the loss function takes vector inputs formula_67. The training of formula_68 through continuous-time gradient descent yields the following evolution in function space driven by the NTK:formula_69This generalizes the equation shown in case 1 for scalar outputs.\nInterpretation.\nThe NTK formula_70 represents the influence of the loss gradient formula_71 with respect to example formula_72 on the evolution of ANN output formula_73 through a gradient descent step: in the scalar case, this readsformula_74In particular, each data point formula_75 influences the evolution of the output formula_73 for each formula_3 throughout the training, in a way that is captured by the NTK formula_70.\nWide fully-connected ANNs have a deterministic NTK, which remains constant throughout training.\nConsider an ANN with fully-connected layers formula_79 of widths formula_80, so that formula_81, where formula_82 is the composition of an affine transformation formula_83 with the pointwise application of a nonlinearity formula_84, where formula_2 parametrizes the maps formula_86.
The parameters formula_41 are initialized randomly, in an independent, identically distributed way.\nAs the widths grow, the NTK's scale is affected by the exact parametrization of the formula_83's and by the parameter initialization. This motivates the so-called NTK parametrization formula_89. This parametrization ensures that if the parameters formula_41 are initialized as standard normal variables, the NTK has a finite nontrivial limit. In the large-width limit, the NTK converges to a deterministic (non-random) limit formula_91, which stays constant in time.\nThe NTK formula_91 is explicitly given by formula_93, where formula_94 is determined by the set of recursive equations:\nwhere formula_96 denotes the kernel defined in terms of the Gaussian expectation:\nIn this formula the kernels formula_98 are the ANN's so-called activation kernels.\nWide fully connected networks are linear in their parameters throughout training.\nThe NTK describes the evolution of neural networks under gradient descent in function space. Dual to this perspective is an understanding of how neural networks evolve in parameter space, since the NTK is defined in terms of the gradient of the ANN's outputs with respect to its parameters. In the infinite width limit, the connection between these two perspectives becomes especially interesting. The NTK remaining constant throughout training at large widths co-occurs with the ANN being well described throughout training by its first order Taylor expansion around its parameters at initialization:", "Automation-Control": 0.944147706, "Qwen2": "Yes"} {"id": "17707632", "revid": "42522270", "url": "https://en.wikipedia.org/wiki?curid=17707632", "title": "Margin (machine learning)", "text": "In machine learning the margin of a single data point is defined to be the distance from the data point to a decision boundary. Note that there are many distances and decision boundaries that may be appropriate for certain datasets and goals. 
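The distance-to-boundary definition above can be sketched with made-up numbers: the margin of a labeled example is its signed distance to the decision boundary, here a hypothetical hyperplane w · x + b = 0.

```python
import numpy as np

# Margins of labeled points with respect to a made-up hyperplane w . x + b = 0.
w = np.array([1.0, 1.0])
b = -1.0
X = np.array([[2.0, 2.0], [0.0, 0.0], [1.5, 0.0]])
y = np.array([1, -1, 1])

# Signed distance to the boundary, times the label: positive means the point
# is on the correct side; the smallest value is the classifier's margin.
margins = y * (X @ w + b) / np.linalg.norm(w)
```

Here all three margins are positive (every point is correctly classified), and the smallest one belongs to the point closest to the boundary.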
A margin classifier is a classifier that explicitly utilizes the margin of each example while learning a classifier. There are theoretical justifications (based on the VC dimension) as to why maximizing the margin (under some suitable constraints) may be beneficial for machine learning and statistical inference algorithms.\nThere are many hyperplanes that might classify the data. One reasonable choice as the best hyperplane is the one that represents the largest separation, or margin, between the two classes. So we choose the hyperplane so that the distance from it to the nearest data point on each side is maximized. If such a hyperplane exists, it is known as the \"maximum-margin hyperplane\" and the linear classifier it defines is known as a \"maximum margin classifier\"; or equivalently, the \"perceptron of optimal stability.\"", "Automation-Control": 0.6516609192, "Qwen2": "Yes"} {"id": "2020708", "revid": "28481209", "url": "https://en.wikipedia.org/wiki?curid=2020708", "title": "Adaptive control", "text": "Adaptive control is the control method used by a controller which must adapt to a controlled system with parameters which vary, or are initially uncertain. For example, as an aircraft flies, its mass will slowly decrease as a result of fuel consumption; a control law is needed that adapts itself to such changing conditions. Adaptive control is different from robust control in that it does not need \"a priori\" information about the bounds on these uncertain or time-varying parameters; robust control guarantees that if the changes are within given bounds the control law need not be changed, while adaptive control is concerned with the control law changing itself.\nParameter estimation.\nThe foundation of adaptive control is parameter estimation, which is a branch of system identification. Common methods of estimation include recursive least squares and gradient descent.
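One of the two estimation methods just named, recursive least squares, can be sketched as an online update law. The two-parameter plant, initial covariance, and noiseless measurements below are illustrative assumptions, not from the source:

```python
import numpy as np

# Recursive least squares estimating theta in y = theta^T phi from a stream
# of regressor/measurement pairs (toy, noiseless plant).
theta_true = np.array([2.0, -0.5])
theta_hat = np.zeros(2)
P = np.eye(2) * 100.0    # large initial covariance: little prior confidence
lam = 1.0                # forgetting factor (1.0 = no forgetting)

rng = np.random.default_rng(3)
for _ in range(200):
    phi = rng.standard_normal(2)          # persistently exciting regressor
    y = theta_true @ phi                  # measurement
    k = P @ phi / (lam + phi @ P @ phi)   # gain
    theta_hat = theta_hat + k * (y - theta_hat @ phi)
    P = (P - np.outer(k, phi @ P)) / lam
```

With a persistently exciting regressor the estimate converges to the true parameter vector.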
Both of these methods provide update laws that are used to modify estimates in real-time (i.e., as the system operates). Lyapunov stability is used to derive these update laws and show convergence criteria (typically persistent excitation; relaxations of this condition are studied in Concurrent Learning adaptive control). Projection and normalization are commonly used to improve the robustness of estimation algorithms.\nClassification of adaptive control techniques.\nIn general, one should distinguish between:\nas well as between\nDirect methods are ones wherein the estimated parameters are those directly used in the adaptive controller. In contrast, indirect methods are those in which the estimated parameters are used to calculate required controller parameters. Hybrid methods rely on both estimation of parameters and direct modification of the control law.\nThere are several broad categories of feedback adaptive control (classification can vary):\nSome special topics in adaptive control can be introduced as well:\nIn recent times, adaptive control has been merged with intelligent techniques such as fuzzy and neural networks to bring forth new concepts such as fuzzy adaptive control.\nApplications.\nWhen designing adaptive control systems, special consideration of convergence and robustness issues is necessary. Lyapunov stability is typically used to derive control adaptation laws and show convergence.\nUsually these methods adapt the controllers to both the process statics and dynamics. In special cases the adaptation can be limited to the static behavior alone, leading to adaptive control based on characteristic curves for the steady-states or to extremum value control, optimizing the steady state. Hence, there are several ways to apply adaptive control algorithms.\nA particularly successful application of adaptive control has been adaptive flight control.
This body of work has focused on guaranteeing stability of a model reference adaptive control scheme using Lyapunov arguments. Several successful flight-test demonstrations have been conducted, including fault tolerant adaptive control.", "Automation-Control": 0.9936728477, "Qwen2": "Yes"} {"id": "1216721", "revid": "2810812", "url": "https://en.wikipedia.org/wiki?curid=1216721", "title": "Wiener filter", "text": "In signal processing, the Wiener filter is a filter used to produce an estimate of a desired or target random process by linear time-invariant (LTI) filtering of an observed noisy process, assuming known stationary signal and noise spectra, and additive noise. The Wiener filter minimizes the mean square error between the estimated random process and the desired process.\nDescription.\nThe goal of the Wiener filter is to compute a statistical estimate of an unknown signal using a related signal as an input and filtering that known signal to produce the estimate as an output. For example, the known signal might consist of an unknown signal of interest that has been corrupted by additive noise. The Wiener filter can be used to filter out the noise from the corrupted signal to provide an estimate of the underlying signal of interest. The Wiener filter is based on a statistical approach, and a more statistical account of the theory is given in the minimum mean square error (MMSE) estimator article.\nTypical deterministic filters are designed for a desired frequency response. However, the design of the Wiener filter takes a different approach. One is assumed to have knowledge of the spectral properties of the original signal and the noise, and one seeks the linear time-invariant filter whose output would come as close to the original signal as possible. 
Wiener filters are characterized by the following:\nThis filter is frequently used in the process of deconvolution; for this application, see Wiener deconvolution.\nWiener filter solutions.\nLet formula_1 be an unknown signal which must be estimated from a measurement signal formula_2, where alpha is a tunable parameter: formula_3 is known as prediction, formula_4 as filtering, and formula_5 as smoothing (see the chapter on Wiener filtering for more details). \nThe Wiener filter problem has solutions for three possible cases: one where a noncausal filter is acceptable (requiring an infinite amount of both past and future data), the case where a causal filter is desired (using an infinite amount of past data), and the finite impulse response (FIR) case where only input data is used (i.e. the result or output is not fed back into the filter as in the IIR case). The first case is simple to solve but is not suited for real-time applications. Wiener's main accomplishment was solving the case where the causality requirement is in effect; Norman Levinson gave the FIR solution in an appendix of Wiener's book.\nNoncausal solution.\nwhere formula_7 are spectral densities. Provided that formula_8 is optimal, then the minimum mean-square error equation reduces to \nand the solution formula_8 is the inverse two-sided Laplace transform of formula_11.\nCausal solution.\nwhere\nThis general formula is complicated and deserves a more detailed explanation. To write down the solution formula_23 in a specific case, one should follow these steps:\nFinite impulse response Wiener filter for discrete series.\nThe causal finite impulse response (FIR) Wiener filter, instead of using some given data matrix X and output vector Y, finds optimal tap weights by using the statistics of the input and output signals.
It populates the input matrix X with estimates of the auto-correlation of the input signal (T) and populates the output vector Y with estimates of the cross-correlation between the output and input signals (V).\nIn order to derive the coefficients of the Wiener filter, consider the signal \"w\"[\"n\"] being fed to a Wiener filter of order (number of past taps) \"N\" and with coefficients formula_34. The output of the filter is denoted \"x\"[\"n\"] which is given by the expression\nThe residual error is denoted \"e\"[\"n\"] and is defined as \"e\"[\"n\"] = \"x\"[\"n\"] − \"s\"[\"n\"] (see the corresponding block diagram). The Wiener filter is designed so as to minimize the mean square error (MMSE criterion), which can be stated concisely as follows:\nwhere formula_37 denotes the expectation operator. In the general case, the coefficients formula_38 may be complex and may be derived for the case where \"w\"[\"n\"] and \"s\"[\"n\"] are complex as well. With a complex signal, the matrix to be solved is a Hermitian Toeplitz matrix, rather than a symmetric Toeplitz matrix. For simplicity, the following considers only the case where all these quantities are real. The mean square error (MSE) may be rewritten as:\nTo find the vector formula_40 which minimizes the expression above, calculate its derivative with respect to each formula_41\nAssuming that \"w\"[\"n\"] and \"s\"[\"n\"] are each stationary and jointly stationary, the sequences formula_43 and formula_44, known respectively as the autocorrelation of \"w\"[\"n\"] and the cross-correlation between \"w\"[\"n\"] and \"s\"[\"n\"], can be defined as follows:\nThe derivative of the MSE may therefore be rewritten as:\nNote that for real formula_47, the autocorrelation is symmetric:formula_48Setting the derivative equal to zero results in:\nwhich can be rewritten (using the above symmetric property) in matrix form\nThese equations are known as the Wiener–Hopf equations.
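These equations can be sketched numerically. The signal model below (a smoothed random target observed in additive noise) is a made-up illustration: the autocorrelation and cross-correlation are estimated from samples, the Toeplitz system is solved directly, and the resulting FIR filter reduces the mean square error relative to the raw observation.

```python
import numpy as np

# FIR Wiener filter via the Wiener-Hopf equations T a = V (toy signal model).
rng = np.random.default_rng(4)
n = 20000
s = np.convolve(rng.standard_normal(n), np.ones(5) / 5, mode="same")  # target
w = s + 0.5 * rng.standard_normal(n)                                  # observation

N = 8  # filter order
R = np.array([np.dot(w[: n - k], w[k:]) / (n - k) for k in range(N)])  # autocorr of w
V = np.array([np.dot(w[: n - k], s[k:]) / (n - k) for k in range(N)])  # cross-corr w,s

T = np.array([[R[abs(i - j)] for j in range(N)] for i in range(N)])  # Toeplitz matrix
a = np.linalg.solve(T, V)                                            # filter taps

x = np.convolve(w, a)[:n]            # filtered estimate of s
mse_raw = np.mean((w - s) ** 2)
mse_filtered = np.mean((x - s) ** 2)
```

A dense solve is used here for clarity; in practice the Toeplitz structure allows a much cheaper recursive solution.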
The matrix T appearing in the equation is a symmetric Toeplitz matrix. Under suitable conditions on formula_51, these matrices are known to be positive definite and therefore non-singular, yielding a unique solution to the determination of the Wiener filter coefficient vector, formula_52. Furthermore, there exists an efficient algorithm to solve such Wiener–Hopf equations known as the Levinson–Durbin algorithm, so an explicit inversion of T is not required.\nIn some articles, the cross-correlation function is defined in the opposite way:formula_53Then, the formula_54 matrix will contain formula_55; this is just a difference in notation.\nWhichever notation is used, note that for real formula_56:formula_57\nRelationship to the least squares filter.\nThe realization of the causal Wiener filter closely resembles the solution to the least squares estimate, except in the signal processing domain. The least squares solution, for input matrix formula_58 and output vector formula_59 is\nThe FIR Wiener filter is related to the least mean squares filter, but minimizing the error criterion of the latter does not rely on cross-correlations or auto-correlations. Its solution converges to the Wiener filter solution.\nComplex signals.\nFor complex signals, the derivation of the complex Wiener filter is performed by minimizing formula_61 = formula_62. This involves computing partial derivatives with respect to both the real and imaginary parts of formula_38, and requiring them both to be zero.\nThe resulting Wiener–Hopf equations are:\nwhich can be rewritten in matrix form:\nNote here that:formula_66\nThe Wiener coefficient vector is then computed as:formula_67\nApplications.\nThe Wiener filter has a variety of applications in signal processing, image processing, control systems, and digital communications. These applications generally fall into one of four main categories:\nFor example, the Wiener filter can be used in image processing to remove noise from a picture.
For example, applying the Mathematica function:\ncodice_1 to the first image on the right produces the filtered image below it.\nIt is commonly used to denoise audio signals, especially speech, as a preprocessor before speech recognition.\nHistory.\nThe filter was proposed by Norbert Wiener during the 1940s and published in 1949. The discrete-time equivalent of Wiener's work was derived independently by Andrey Kolmogorov and published in 1941. Hence the theory is often called the \"Wiener–Kolmogorov\" filtering theory (\"cf.\" Kriging). The Wiener filter was the first statistically designed filter to be proposed and subsequently gave rise to many others, including the Kalman filter.
The matrix formula_3 \nneed not be symmetric.\nLinear dynamical systems can be solved exactly, in contrast to most nonlinear ones. Occasionally, a nonlinear system can be solved exactly by a change of variables to a linear system. Moreover, the solutions of (almost) any nonlinear system can be well-approximated by an equivalent linear system near its fixed points. Hence, understanding linear systems and their solutions is a crucial first step to understanding the more complex nonlinear systems.\nSolution of linear dynamical systems.\nIf the initial vector formula_15\nis aligned with a right eigenvector formula_16 of \nthe matrix formula_3, the dynamics are simple\nwhere formula_19 is the corresponding eigenvalue;\nthe solution of this equation is \nas may be confirmed by substitution.\nIf formula_3 is diagonalizable, then any vector in an formula_1-dimensional space can be represented by a linear combination of the right and left eigenvectors (denoted formula_23) of the matrix formula_3.\nTherefore, the general solution for formula_9 is \na linear combination of the individual solutions for the right\neigenvectors\nSimilar considerations apply to the discrete mappings.\nClassification in two dimensions.\nThe roots of the characteristic polynomial det(A - λI) are the eigenvalues of A. The signs of these roots, formula_28, and their relation to each other may be used to determine the stability of the dynamical system. \nFor a 2-dimensional system, the characteristic polynomial is of the form formula_30 where formula_31 is the trace and formula_32 is the determinant of A. Thus the two roots are in the form:\nand formula_35 and formula_36. Thus if formula_37 then the eigenvalues are of opposite sign, and the fixed point is a saddle. If formula_38 then the eigenvalues are of the same sign. Therefore, if formula_39 both are positive and the point is unstable, and if formula_40 then both are negative and the point is stable.
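The trace-determinant rules above can be collected into a small classifier; the example matrices are made up, and degenerate boundary cases (zero trace, determinant, or discriminant) are glossed over.

```python
import numpy as np

# Sketch of the trace-determinant classification of the fixed point of a
# 2-D linear system x' = A x (boundary cases are not handled).
def classify(A):
    tr, det = np.trace(A), np.linalg.det(A)
    if det < 0:
        return "saddle"                       # eigenvalues of opposite sign
    disc = tr ** 2 - 4 * det                  # discriminant of lambda^2 - tr*lambda + det
    kind = "node" if disc >= 0 else "spiral"  # real vs complex eigenvalues
    return ("stable " if tr < 0 else "unstable ") + kind

# Example: rotation combined with expansion gives an unstable spiral.
A = np.array([[0.5, -1.0], [1.0, 0.5]])
label = classify(A)
```

For instance, a diagonal matrix with entries of opposite sign is classified as a saddle, and one with two negative entries as a stable node.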
The discriminant indicates whether the fixed point is a node or a spiral (i.e. whether the eigenvalues are real or complex).", "Automation-Control": 0.8798577189, "Qwen2": "Yes"} {"id": "27015376", "revid": "910180", "url": "https://en.wikipedia.org/wiki?curid=27015376", "title": "Automatic server discovery", "text": "Automatic server discovery is a software licensing feature that allows client applications to find license servers automatically on the network, thus eliminating the need for end users to manually configure server information and allowing system administrators to perform their tasks more easily and efficiently. With, for example, more than 70 machines, manual configuration would take considerable time.\nAutomatic server discovery often uses multicast UDP to send broadcasts, to which available license servers respond with information about their network location. When a license server is discovered, the information is cached locally on the client machine, so automatic server discovery does not have to be performed at each application startup.\nThe newest version of NTP also supports automatic server discovery. There are three schemes provided by NTPv4:\nAutomatic server discovery typically works only on local networks, and will not work on WAN or VPN connections.", "Automation-Control": 0.9911415577, "Qwen2": "Yes"} {"id": "2211723", "revid": "7903804", "url": "https://en.wikipedia.org/wiki?curid=2211723", "title": "Sampled data system", "text": "In systems science, a sampled-data system is a control system in which a continuous-time plant is controlled with a digital device. Under periodic sampling, the sampled-data system is time-varying but also periodic; thus, it may be modeled by a simplified discrete-time system obtained by discretizing the plant.
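Discretizing the plant, as mentioned above, is commonly done with zero-order-hold formulas. A minimal sketch, with illustrative plant matrices and sampling period, using the standard augmented-matrix-exponential trick:

```python
import numpy as np
from scipy.linalg import expm

# Continuous-time plant dx/dt = A x + B u, sampled with period T
# under a zero-order hold on the input (illustrative matrices).
A = np.array([[0.0, 1.0],
              [0.0, -1.0]])
B = np.array([[0.0],
              [1.0]])
T = 0.1  # sampling period (illustrative)

# Standard trick: exponentiate the augmented matrix [[A, B], [0, 0]]
# to get Ad = e^{AT} and Bd = (integral_0^T e^{As} ds) B in one step.
n, m = A.shape[0], B.shape[1]
M = np.zeros((n + m, n + m))
M[:n, :n] = A
M[:n, n:] = B
E = expm(M * T)
Ad, Bd = E[:n, :n], E[:n, n:]

print(Ad)  # discrete-time model: x[k+1] = Ad x[k] + Bd u[k]
```

As the article notes, this discrete model describes the state only at the sampling instants; the inter-sample behavior is not captured.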
However, this discrete model does not capture the inter-sample behavior of the real system, which may be critical in a number of applications.\nThe analysis of sampled-data systems incorporating full-time information leads to challenging control problems with a rich mathematical structure. Many of these problems have only been solved recently.", "Automation-Control": 1.0000050068, "Qwen2": "Yes"} {"id": "34366633", "revid": "844888612", "url": "https://en.wikipedia.org/wiki?curid=34366633", "title": "Class kappa-ell function", "text": "In control theory, it is often required to check if a nonautonomous system is stable or not. To cope with this it is necessary to use some special comparison functions. Class formula_1 functions belong to this family:\nDefinition: A continuous function formula_2 is said to belong to class formula_1 if:", "Automation-Control": 0.7467296124, "Qwen2": "Yes"} {"id": "29324", "revid": "5066583", "url": "https://en.wikipedia.org/wiki?curid=29324", "title": "Signal processing", "text": "Signal processing is an electrical engineering subfield that focuses on analyzing, modifying and synthesizing \"signals\", such as sound, images, potential fields, seismic signals, altimetry processing, and scientific measurements. Signal processing techniques are used to optimize transmissions, digital storage efficiency, correcting distorted signals, subjective video quality and to also detect or pinpoint components of interest in a measured signal.\nHistory.\nAccording to Alan V. Oppenheim and Ronald W. Schafer, the principles of signal processing can be found in the classical numerical analysis techniques of the 17th century. They further state that the digital refinement of these techniques can be found in the digital control systems of the 1940s and 1950s.\nIn 1948, Claude Shannon wrote the influential paper \"A Mathematical Theory of Communication\" which was published in the \"Bell System Technical Journal\". 
The paper laid the groundwork for later development of information communication systems and the processing of signals for transmission.\nSignal processing matured and flourished in the 1960s and 1970s, and digital signal processing became widely used with specialized digital signal processor chips in the 1980s.\nDefinition of a signal.\nA signal is a function formula_1, where this function is either\nCategories.\nAnalog.\nAnalog signal processing is for signals that have not been digitized, as in most 20th-century radio, telephone, and television systems. This involves linear electronic circuits as well as nonlinear ones. The former are, for instance, passive filters, active filters, additive mixers, integrators, and delay lines. Nonlinear circuits include compandors, multipliers (frequency mixers, voltage-controlled amplifiers), voltage-controlled filters, voltage-controlled oscillators, and phase-locked loops.\nContinuous time.\nContinuous-time signal processing is for signals that vary over a continuous domain (apart from isolated discontinuities).\nThe methods of signal processing include the time domain, frequency domain, and complex frequency domain. This approach mainly covers the modeling of linear time-invariant continuous systems, the integral of a system's zero-state response, setting up the system function, and the continuous-time filtering of deterministic signals.\nDiscrete time.\nDiscrete-time signal processing is for sampled signals, defined only at discrete points in time, and as such quantized in time but not in magnitude.\n\"Analog discrete-time signal processing\" is a technology based on electronic devices such as sample and hold circuits, analog time-division multiplexers, analog delay lines and analog feedback shift registers.
This technology was a predecessor of digital signal processing (see below), and is still used in advanced processing of gigahertz signals.\nThe concept of discrete-time signal processing also refers to a theoretical discipline that establishes a mathematical basis for digital signal processing, without taking quantization error into consideration.\nDigital.\nDigital signal processing is the processing of digitized discrete-time sampled signals. Processing is done by general-purpose computers or by digital circuits such as ASICs, field-programmable gate arrays or specialized digital signal processors (DSP chips). Typical arithmetical operations include fixed-point and floating-point, real-valued and complex-valued, multiplication and addition. Other typical operations supported by the hardware are circular buffers and lookup tables. Examples of algorithms are the fast Fourier transform (FFT), finite impulse response (FIR) filter, Infinite impulse response (IIR) filter, and adaptive filters such as the Wiener and Kalman filters.\nNonlinear.\nNonlinear signal processing involves the analysis and processing of signals produced from nonlinear systems and can be in the time, frequency, or spatiotemporal domains. Nonlinear systems can produce highly complex behaviors including bifurcations, chaos, harmonics, and subharmonics which cannot be produced or analyzed using linear methods. \nPolynomial signal processing is a type of non-linear signal processing, where polynomial systems may be interpreted as conceptually straightforward extensions of linear systems to the non-linear case.\nStatistical.\nStatistical signal processing is an approach which treats signals as stochastic processes, utilizing their statistical properties to perform signal processing tasks. Statistical techniques are widely used in signal processing applications. 
For example, one can model the probability distribution of noise incurred when photographing an image, and construct techniques based on this model to reduce the noise in the resulting image.\nApplication fields.\nIn communication systems, signal processing may occur at:", "Automation-Control": 0.604272902, "Qwen2": "Yes"} {"id": "15291723", "revid": "6908984", "url": "https://en.wikipedia.org/wiki?curid=15291723", "title": "Hierarchical control system", "text": "A hierarchical control system (HCS) is a form of control system in which a set of devices and governing software is arranged in a hierarchical tree. When the links in the tree are implemented by a computer network, then that hierarchical control system is also a form of networked control system.\nOverview.\nA human-built system with complex behavior is often organized as a hierarchy. For example, a command hierarchy has among its notable features the organizational chart of superiors, subordinates, and lines of organizational communication. Hierarchical control systems are organized similarly to divide the decision making responsibility.\nEach element of the hierarchy is a linked node in the tree. Commands, tasks and goals to be achieved flow down the tree from superior nodes to subordinate nodes, whereas sensations and command results flow up the tree from subordinate to superior nodes. Nodes may also exchange messages with their siblings. The two distinguishing features of a hierarchical control system are related to its layers.\nBesides artificial systems, an animal's control systems are proposed to be organized as a hierarchy. 
In perceptual control theory, which postulates that an organism's behavior is a means of controlling its perceptions, the organism's control systems are suggested to be organized in a hierarchical pattern as their perceptions are constructed so.\nControl system structure.\nThe accompanying diagram is a general hierarchical model which shows functional manufacturing levels using computerised control of an industrial control system.\nReferring to the diagram;\nApplications.\nManufacturing, robotics and vehicles.\nAmong the robotic paradigms is the hierarchical paradigm in which a robot operates in a top-down fashion, heavy on planning, especially motion planning. Computer-aided production engineering has been a research focus at NIST since the 1980s. Its Automated Manufacturing Research Facility was used to develop a five layer production control model. In the early 1990s DARPA sponsored research to develop distributed (i.e. networked) intelligent control systems for applications such as military command and control systems. NIST built on earlier research to develop its Real-Time Control System (RCS) and Real-time Control System Software which is a generic hierarchical control system that has been used to operate a manufacturing cell, a robot crane, and an automated vehicle.\nIn November 2007, DARPA held the Urban Challenge. The winning entry, Tartan Racing employed a hierarchical control system, with layered mission planning, motion planning, behavior generation, perception, world modelling, and mechatronics.\nArtificial intelligence.\nSubsumption architecture is a methodology for developing artificial intelligence that is heavily associated with behavior based robotics. This architecture is a way of decomposing complicated intelligent behavior into many \"simple\" behavior modules, which are in turn organized into layers. Each layer implements a particular goal of the software agent (i.e. system as a whole), and higher layers are increasingly more abstract. 
Each layer's goal subsumes that of the underlying layers, e.g. the decision to move forward by the eat-food layer takes into account the decision of the lowest obstacle-avoidance layer. Behavior need not be planned by a superior layer, rather behaviors may be triggered by sensory inputs and so are only active under circumstances where they might be appropriate.\nReinforcement learning has been used to acquire behavior in a hierarchical control system in which each node can learn to improve its behavior with experience.\nJames Albus, while at NIST, developed a theory for intelligent system design named the Reference Model Architecture (RMA), which is a hierarchical control system inspired by RCS. Albus defines each node to contain these components.\nAt its lowest levels, the RMA can be implemented as a subsumption architecture, in which the world model is mapped directly to the controlled process or real world, avoiding the need for a mathematical abstraction, and in which time-constrained reactive planning can be implemented as a finite state machine. Higher levels of the RMA however, may have sophisticated mathematical world models and behavior implemented by automated planning and scheduling. Planning is required when certain behaviors cannot be triggered by current sensations, but rather by predicted or anticipated sensations, especially those that come about as result of the node's actions.", "Automation-Control": 0.8693055511, "Qwen2": "Yes"} {"id": "15291863", "revid": "877724446", "url": "https://en.wikipedia.org/wiki?curid=15291863", "title": "Real-time Control System Software", "text": "The Real-time Control System (RCS) is a software system developed by NIST based on the Real-time Control System Reference Model Architecture, that implements a generic Hierarchical control system. 
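A generic hierarchical control loop of the kind described above (commands flowing down the tree, results flowing back up) can be sketched in a few lines. The class and names here are hypothetical illustrations, not the NIST RCS API:

```python
class Node:
    """One node in a hierarchical control tree: commands flow down
    to subordinate nodes, and command results flow back up."""

    def __init__(self, name, subordinates=()):
        self.name = name
        self.subordinates = list(subordinates)

    def execute(self, command):
        # A leaf node acts on the command itself; an interior node
        # decomposes the command and delegates it to its subordinates.
        if not self.subordinates:
            return [f"{self.name}: did {command}"]
        results = []
        for sub in self.subordinates:
            results.extend(sub.execute(f"{command}/{sub.name}"))
        return results  # results flowing back up the tree

# A tiny manufacturing-cell hierarchy (illustrative names).
cell = Node("cell", [Node("robot"), Node("conveyor")])
print(cell.execute("make-part"))
```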
The RCS Software Library is an archive of free C++, Java and Ada code, scripts, tools, makefiles, and documentation developed to aid programmers of software to be used in real-time control systems (especially those using the Reference Model Architecture for Intelligent Systems Design).\nIntroduction.\nRCS has been used in automated manufacturing, robotics, and automated vehicle research at NIST. The software consists of a C++ library and GUI and configuration tools written in a variety of software languages. The Software Library is offering the following RCS tools:", "Automation-Control": 0.9975748658, "Qwen2": "Yes"} {"id": "10897444", "revid": "37167220", "url": "https://en.wikipedia.org/wiki?curid=10897444", "title": "Riveting machine", "text": "A riveting machine is used to automatically set (squeeze) rivets in order to join materials together. The riveting machine offers greater consistency, productivity, and lower cost when compared to manual riveting.\nTypes.\nAutomatic feed riveting machines include a hopper and feed track which automatically delivers and presents the rivet to the setting tools which overcomes the need for the operator to position the rivet. The downward force required to deform the rivet with an automatic riveting machine is created by a motor and flywheel combination, pneumatic cylinder, or hydraulic cylinder. Manual feed riveting machines usually have a mechanical lever to deliver the setting force from a foot pedal or hand lever.\nRiveting machines can be sub-divided into two broad groups — impact riveting machines and orbital (or radial) riveting machines.\nImpact riveting.\nImpact riveting machines set the rivet by driving the rivet downwards, through the materials to be joined and on into a forming tool (known as a rollset). This action causes the end of the rivet to roll over in the rollset which causes the end of the rivet to flare out and thus join the materials together. 
Impact riveting machines are very fast and a cycle time of 0.5 seconds is typical.\nOrbital riveting.\nOrbital riveting machines have a spinning forming tool (known as a peen) that is gradually lowered into the rivet, spreading the material of the rivet into a desired shape depending upon the design of the tool. Orbital forming machines offer the user more control over the riveting cycle, but the trade-off is in cycle time, which can be 2 or 3 seconds.\nThere are different types of riveting machines, each with unique features and benefits. The orbital riveting process is different from impact riveting and spiralform riveting: orbital riveting requires less downward force than impact or spiral riveting, and orbital riveting tooling typically lasts longer.\nOrbital riveting machines are used in a wide range of applications including brake linings for commercial vehicles, aircraft, and locomotives, textile and leather goods, metal brackets, window and door furniture, latches and even mobile phones. Many materials can be riveted together using orbital riveting machines, including delicate and brittle materials and sensitive electrical or electronic components.\nThe orbital riveting process uses a forming tool mounted at a 3 or 6° angle. The forming tool contacts the material and then presses it while rotating until the final form is achieved. The final form often has height and/or diameter specifications.\nPneumatic orbital riveting machines typically provide downward force in the range. Hydraulic orbital riveting machines typically provide downward force in the range.\nRadial (Spiralform) riveting.\nRadial riveting is subtly different from orbital forming. Where high-quality joints are demanded, however, radial riveting is usually the appropriate procedure because of its low cycle time, the low force required, and the high-quality results obtained.\nThe riveting peen describes a rose-petal path.
The rivet is deformed in three directions: radially outwards, radially inwards, and tangentially.\nExcellent surface structure of the closing head: with the radial riveting process, the tool itself does not rotate. The friction between tool and workpiece is thus at a minimum. The result is an excellent surface structure.\nLow workpiece loading: even bakelite or ceramic parts can be riveted. Lateral forces are negligible. Clamping is usually unnecessary.\nRollerform riveting.\nRollerforming is a subset of orbital forming. Rollerforming uses the same powerhead as orbital forming but, instead of a peen, has multiple wheels that circle the workpiece and join two similar or dissimilar materials with a seamless, smooth bond as the rollers press downward or inward on the piece.\nAutomatic drilling and riveting machine.\nThese machines take the automation one step further by clamping the material and drilling or countersinking the hole in addition to riveting. They are commonly used in the aerospace industry because of the large number of holes and rivets required to assemble the aircraft skin.\nApplications.\nRiveting machines are used in a wide range of applications including brake linings for commercial vehicles, aircraft, and locomotives, textile and leather goods, metal brackets, window and door furniture, latches and even mobile phones. Many materials can be riveted together using riveting machines, including delicate and brittle materials and sensitive electrical or electronic components.", "Automation-Control": 0.9982761741, "Qwen2": "Yes"} {"id": "30125368", "revid": "7903804", "url": "https://en.wikipedia.org/wiki?curid=30125368", "title": "Deneb Robotics", "text": "Deneb Robotics was a company founded by Scott Walter, Jay Harrison, Nathan Yoffa, and Rakesh Mahajan in 1985. The company pioneered graphics-based 3D factory simulation software and digital manufacturing tools.
Application areas ranged from concept development to shop-floor implementation, including off-line programming. Deneb Robotics is known for its IGRIP, Quest, Ultra and VirtualNC software packages.\nThe company was acquired by Dassault Systèmes in 1997 and is branded DELMIA. Major companies such as Boeing and General Motors used Deneb’s suite of tools to optimize their design and manufacturing processes.", "Automation-Control": 0.998909533, "Qwen2": "Yes"} {"id": "49033623", "revid": "28481209", "url": "https://en.wikipedia.org/wiki?curid=49033623", "title": "Robust fuzzy programming", "text": "Robust fuzzy programming (ROFP) is a mathematical optimization approach for dealing with optimization problems under uncertainty. The approach was first introduced in 2012 by Pishvaee, Razmi & Torabi in the journal Fuzzy Sets and Systems. ROFP enables decision makers to benefit from the capabilities of both fuzzy mathematical programming and robust optimization approaches. In 2016, Pishvaee and Fazli took a significant step forward by extending the ROFP approach to handle flexibility of constraints and goals.
ROFP is able to achieve a \"robust solution\" for an optimization problem under uncertainty.\nDefinition of robust solution.\nA robust solution is defined as one that has both \"feasibility robustness\" and \"optimality robustness\". Feasibility robustness means that the solution remains feasible for (almost) all possible values of the uncertain parameters and flexibility degrees of the constraints; optimality robustness means that the objective value of the solution remains close to the optimal value, or has minimum (undesirable) deviation from it, for (almost) all possible values of the uncertain parameters and flexibility degrees on the target values of the goals.\nClassification of ROFP methods.\nAs fuzzy mathematical programming is categorized into \"Possibilistic programming\" and \"Flexible programming\", ROFP can likewise be classified into:\nThe first category is used to deal with imprecise input parameters in optimization problems, while the second is employed to cope with flexible constraints and goals. The last category is capable of handling both uncertain parameters and flexibility in goals and constraints.\nFrom another point of view, the ROFP models developed in the literature can be classified into three categories according to their degree of conservatism against uncertainty. These categories include:\nHard worst-case ROFP is the most conservative of the ROFP methods, since it provides maximum safety or immunity against uncertainty: it leaves no chance of infeasibility, keeping the solution feasible for all possible values of the uncertain parameters. Regarding optimality robustness, this method minimizes the worst possible value of the objective function (min-max logic). Soft worst-case ROFP behaves similarly to the hard worst-case method regarding optimality robustness, but does not enforce the constraints in their extreme worst case.
Lastly, realistic method establishes a reasonable trade-off between the robustness, the cost of robustness and other objectives such as improving the average system performance (cost-benefit logic).\nApplications.\nROFP is successfully implemented in different practical application areas such as the following ones.", "Automation-Control": 0.9860630631, "Qwen2": "Yes"} {"id": "16133497", "revid": "1754504", "url": "https://en.wikipedia.org/wiki?curid=16133497", "title": "Festo", "text": "Festo is a German automation company based in Esslingen am Neckar, Germany. Festo produces and sells pneumatic and electrical control systems and drive technology for factories and process automation. Festo Didactic also offers industrial education and consultation services and is one of the sponsors and partners of the WorldSkills Mechatronics Competitions. Sales subsidiaries, distribution centres and factories of Festo are located in 61 countries worldwide. The company was named after its founders Albert Fezer and Gottlieb Stoll.\nAnimal robots.\nFesto is known for making moving robots that move like animals, such as the seagull-like SmartBird, jellyfish, butterflies and the BionicKangaroo. In 2018 they also added a flying fox and a rolling spider to the list. Festo calls their Bionic Flying Fox an “ultra-lightweight flying object with intelligent kinematics.”", "Automation-Control": 0.6594321728, "Qwen2": "Yes"} {"id": "9676663", "revid": "1153089999", "url": "https://en.wikipedia.org/wiki?curid=9676663", "title": "STANKIN", "text": "The Moscow State University of Technology \"STANKIN\" (MSUT \"STANKIN\") (Russian: Московский Государственный Технологический Университет \"СТАНКИН\" (МГТУ \"СТАНКИН\")), previously the Moscow Machine and Tool Institute (Russian: Московский станкоинструментальный институт, tr. 
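The hard worst-case (min-max) logic described above can be illustrated with a tiny epigraph reformulation over a finite scenario set; all the data here are invented for illustration:

```python
import numpy as np
from scipy.optimize import linprog

# Minimize the worst case of c^T x over a finite set of cost scenarios,
# subject to x1 + x2 >= 1, x >= 0 (min-max logic via an epigraph variable t):
#   min t   s.t.   c_k^T x <= t for every scenario k.
scenarios = np.array([[1.0, 3.0],   # possible cost vectors c_k (invented)
                      [2.0, 1.0]])

# Decision variables: [x1, x2, t]; the objective picks out t.
c = np.array([0.0, 0.0, 1.0])
# Rows: c_k^T x - t <= 0 for each scenario, and -(x1 + x2) <= -1.
A_ub = np.vstack([np.hstack([scenarios, -np.ones((2, 1))]),
                  [[-1.0, -1.0, 0.0]]])
b_ub = np.array([0.0, 0.0, -1.0])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 3)

print(res.x[:2], res.x[2])  # robust x and its worst-case cost
```

Hedging both scenarios equally gives x = (2/3, 1/3) with worst-case cost 5/3, better than either corner of the feasible set under the min-max criterion.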
\"Moskovsky stankoinstrumental'ny institut\"), the name of which is still preserved in the acronym STANKIN (Russian: СТАНКИН), is a Russian technical higher education institution founded in 1930. Today STANKIN trains specialists in machinery, robotics, CNC's, electronics, automation and control systems, economics of enterprises, informatics and measurement systems.\nHistory.\nThe university was founded as the Moscow Machine Tool Institute in 1930 to provide the machine tool industry with qualified specialists.\nToday STANKIN is a scientific industrial complex with the Technological Design Institute of Informatic of the Russian Academy of Sciences (RSA). There is a network of scientific, educational and industrial centers. It has relations with universities and firms from Austria, Brazil, Germany, Hungary, Italy, China, USA, South Korea and other countries.\nThere are more than 600 professors and scientists working at Stankin today. Stankin has its own newspaper \"Stankinovskiy vestnik\" and a peer-reviewed journal with an international editorial board \"Vestnik MGTU STANKIN\" from 2009 \"(English: MSUT STANKIN Messenger)\" included in Web of Science databases.\nEducation.\nStudies and corresponding research cover the areas of automation of technological processes and manufactures; engineering ecology and security in machine building; information and marketing in machine building; information systems;\nquality of production and management; computerization of computation durability of machine building construction; computer modeling in instrumental technics; computer control system in production and business; tool engineering and computer modeling;\nlaser technology; presses and metal treatment technology; metal cutting machines and tools; design and computer modeling of plastic deformation system; robotics and mechatronic systems; system production of automated technological machinery; computer-aided design systems; metrological production system; computer control 
machine tools; technological information of automated manufacturing technology; technology and business in no-waste production; technology and management in instrumental manufacturing; mechanical engineering; physics of high concentrated energy; economy and control of production.", "Automation-Control": 0.9963892102, "Qwen2": "Yes"} {"id": "383703", "revid": "1219859", "url": "https://en.wikipedia.org/wiki?curid=383703", "title": "Root locus analysis", "text": "In control theory and stability theory, root locus analysis is a graphical method for examining how the roots of a system change with variation of a certain system parameter, commonly a gain within a feedback system. This is a technique used as a stability criterion in the field of classical control theory developed by Walter R. Evans which can determine stability of the system. The root locus plots the poles of the closed loop transfer function in the complex \"s\"-plane as a function of a gain parameter (see pole–zero plot).\nEvans also invented in 1948 an analog computer to compute root loci, called a \"Spirule\" (after \"spiral\" and \"slide rule\"); it found wide use before the advent of digital computers.\nUses.\nIn addition to determining the stability of the system, the root locus can be used to design the damping ratio (\"ζ\") and natural frequency (\"ω\"\"n\") of a feedback system. Lines of constant damping ratio can be drawn radially from the origin and lines of constant natural frequency can be drawn as arccosine whose center points coincide with the origin. By selecting a point along the root locus that coincides with a desired damping ratio and natural frequency, a gain \"K\" can be calculated and implemented in the controller. 
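The gain-selection idea just described can be sketched numerically by sweeping the gain, computing the closed-loop poles, and keeping the gain whose dominant complex pole pair best matches a target damping ratio. The sketch below uses the article's later example, with characteristic polynomial s^3 + 3s^2 + (5 + K)s + (1 + 3K); the target damping ratio is illustrative:

```python
import numpy as np

def closed_loop_poles(K):
    # Characteristic equation 1 + K*G(s)H(s) = 0 for
    # G(s)H(s) = (s + 3)/(s^3 + 3 s^2 + 5 s + 1):
    # s^3 + 3 s^2 + (5 + K) s + (1 + 3 K) = 0.
    return np.roots([1.0, 3.0, 5.0 + K, 1.0 + 3.0 * K])

target_zeta = 0.7  # desired damping ratio (illustrative)
best_K, best_err = None, np.inf
for K in np.arange(0.1, 220.0, 0.1):
    poles = closed_loop_poles(K)
    complex_poles = poles[np.abs(poles.imag) > 1e-9]
    if complex_poles.size == 0:
        continue  # an all-real pole set has no complex pair to damp
    p = complex_poles[np.argmax(complex_poles.real)]  # dominant complex pole
    zeta = -p.real / np.abs(p)  # damping ratio of that pole pair
    if abs(zeta - target_zeta) < best_err:
        best_K, best_err = K, abs(zeta - target_zeta)

print(best_K, best_err)
```

As the article cautions, this presumes the dominant pair actually dominates the response; the final design should still be simulated.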
More elaborate techniques of controller design using the root locus are available in most control textbooks: for instance, lag, lead, PI, PD and PID controllers can be designed approximately with this technique.\nThe definition of the damping ratio and natural frequency presumes that the overall feedback system is well approximated by a second order system; i.e. the system has a dominant pair of poles. This is often not the case, so it is good practice to simulate the final design to check if the project goals are satisfied.\nDefinition.\nThe root locus of a feedback system is the graphical representation in the complex \"s\"-plane of the possible locations of its closed-loop poles for varying values of a certain system parameter. The points that are part of the root locus satisfy the angle condition. The value of the parameter for a certain point of the root locus can be obtained using the magnitude condition.\nSuppose there is a feedback system with input signal formula_1 and output signal formula_2. The forward path transfer function is formula_3; the feedback path transfer function is formula_4.\nFor this system, the closed-loop transfer function is given by\nThus, the closed-loop poles of the closed-loop transfer function are the roots of the characteristic equation formula_6. The roots of this equation may be found wherever formula_7.\nIn systems without pure delay, the product formula_8 is a rational polynomial function and may be expressed as\nwhere formula_10 are the formula_11 zeros, formula_12 are the formula_13 poles, and formula_14 is a scalar gain. Typically, a root locus diagram will indicate the transfer function's pole locations for varying values of the parameter formula_14. 
A root locus plot will be all those points in the \"s\"-plane where formula_7 for any value of formula_14.\nThe factoring of formula_14 and the use of simple monomials means the evaluation of the rational polynomial can be done with vector techniques that add or subtract angles and multiply or divide magnitudes. The vector formulation arises from the fact that each monomial term formula_19 in the factored formula_8 represents the vector from formula_21 to formula_22 in the s-plane. The polynomial can be evaluated by considering the magnitudes and angles of each of these vectors.\nAccording to vector mathematics, the angle of the result of the rational polynomial is the sum of all the angles in the numerator minus the sum of all the angles in the denominator. So to test whether a point in the \"s\"-plane is on the root locus, only the angles to all the open loop poles and zeros need be considered. This is known as the angle condition.\nSimilarly, the magnitude of the result of the rational polynomial is the product of all the magnitudes in the numerator divided by the product of all the magnitudes in the denominator. It turns out that the calculation of the magnitude is not needed to determine if a point in the s-plane is part of the root locus because formula_14 varies and can take an arbitrary real value. For each point of the root locus a value of formula_14 can be calculated. This is known as the magnitude condition.\nThe root locus only gives the location of closed loop poles as the gain formula_14 is varied. The value of formula_14 does not affect the location of the zeros. The open-loop zeros are the same as the closed-loop zeros.\nAngle condition.\nA point formula_22 of the complex \"s\"-plane satisfies the angle condition if\nwhich is the same as saying that\nthat is, the sum of the angles from the open-loop zeros to the point formula_22 (measured per zero w.r.t. 
a horizontal running through that zero) minus the angles from the open-loop poles to the point formula_22 (measured per pole w.r.t. a horizontal running through that pole) has to be equal to formula_32, or 180 degrees. Note that these interpretations should not be mistaken for the angle differences between the point formula_22 and the zeros/poles.\nMagnitude condition.\nA value of formula_14 satisfies the magnitude condition for a given formula_22 point of the root locus if\nwhich is the same as saying that\nSketching root locus.\nUsing a few basic rules, the root locus method can plot the overall shape of the path (locus) traversed by the roots as the value of formula_14 varies. The plot of the root locus then gives an idea of the stability and dynamics of this feedback system for different values of formula_14. The rules are the following:\nLet \"P\" be the number of poles and \"Z\" be the number of zeros:\nThe asymptotes intersect the real axis at formula_41 (which is called the centroid) and depart at angle formula_42 given by:\nwhere formula_45 is the sum of all the locations of the poles, formula_46 is the sum of all the locations of the explicit zeros and formula_47 denotes that we are only interested in the real part.\nThe breakaway points are located at the roots of the following equation:\nOnce you solve for \"z\", the real roots give you the breakaway/reentry points. Complex roots correspond to a lack of breakaway/reentry.\nPlotting root locus.\nGiven the general closed-loop denominator rational polynomial\nthe characteristic equation can be simplified to\nThe solutions of formula_22 to this equation are the root loci of the closed-loop transfer function.\nExample.\nGiven\nwe will have the characteristic equation\nThe following MATLAB code will plot the root locus of the closed-loop transfer function as formula_14 varies using the described manual method as well as the codice_1 built-in function:\n% Manual method\nK_array = (0:0.1:220).'; % .' 
is the transpose operator; see the MATLAB documentation.\nNK = length(K_array);\nx_array = zeros(NK, 3);\ny_array = zeros(NK, 3);\nfor nK = 1:NK\n K = K_array(nK);\n C = [1, 3, (5 + K), (1 + 3*K)];\n r = roots(C).';\n x_array(nK,:) = real(r);\n y_array(nK,:) = imag(r);\nend\nfigure;\nplot(x_array, y_array);\ngrid on;\n% Built-in method\nsys = tf([1, 3], [1, 3, 5, 1]);\nfigure;\nrlocus(sys);\n\"z\"-plane versus \"s\"-plane.\nThe root locus method can also be used for the analysis of sampled data systems by computing the root locus in the \"z\"-plane, the discrete counterpart of the \"s\"-plane. The equation \"z\" = \"e\"^(\"sT\") maps continuous \"s\"-plane poles (not zeros) into the \"z\"-domain, where \"T\" is the sampling period. The stable, left half \"s\"-plane maps into the interior of the unit circle of the \"z\"-plane, with the \"s\"-plane origin equating to \"|z|\" = 1 (because \"e\"^0 = 1). A diagonal line of constant damping in the \"s\"-plane maps around a spiral from (1,0) in the \"z\"-plane as it curves in toward the origin. The Nyquist aliasing criterion is expressed graphically in the \"z\"-plane by the \"x\"-axis, where \"ωT\" = π. The line of constant damping just described spirals in indefinitely, but in sampled data systems frequency content is aliased down to lower frequencies by integral multiples of the Nyquist frequency. That is, the sampled response appears as a lower frequency and better damped as well, since the root in the \"z\"-plane maps equally well to the first loop of a different, better damped spiral curve of constant damping.
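This mapping is easy to check numerically. The sketch below (with an arbitrary illustrative sampling period of T = 0.1 s) confirms that a stable left-half-plane pole lands inside the unit circle, the origin lands on it, and an unstable pole lands outside:

```python
import cmath

def s_to_z(s, T):
    """Map a continuous s-plane pole to the z-plane via z = exp(s*T)."""
    return cmath.exp(s * T)

T = 0.1  # sampling period in seconds (illustrative value)

# A stable (left half-plane) pole maps strictly inside the unit circle.
z_stable = s_to_z(complex(-2.0, 15.0), T)
assert abs(z_stable) < 1.0

# The s-plane origin maps to |z| = 1, since e^0 = 1.
assert abs(s_to_z(0j, T)) == 1.0

# An unstable (right half-plane) pole maps outside the unit circle.
z_unstable = s_to_z(complex(0.5, 3.0), T)
assert abs(z_unstable) > 1.0
```

Sweeping the imaginary part of a pole with fixed negative real part traces out the spiral of constant damping described above.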
Many other interesting and relevant mapping properties can be described, not least that \"z\"-plane controllers, having the property that they may be directly implemented from the \"z\"-plane transfer function (zero/pole ratio of polynomials), can be imagined graphically on a \"z\"-plane plot of the open-loop transfer function, and immediately analyzed utilizing root locus.\nSince root locus is a graphical angle technique, root locus rules work the same in the \"z\" and \"s\" planes.\nThe idea of a root locus can be applied to many systems where a single parameter is varied. For example, it is useful to sweep any system parameter for which the exact value is uncertain in order to determine the system's behavior.", "Automation-Control": 0.971286118, "Qwen2": "Yes"} {"id": "9532096", "revid": "7903804", "url": "https://en.wikipedia.org/wiki?curid=9532096", "title": "Software independence", "text": "The term \"software independence\" (SI) was coined by Dr. Ron Rivest and NIST researcher John Wack. A software independent voting machine is one whose tabulation record does not rely solely on software.
The goal of an SI system is to definitively determine whether all votes were recorded legitimately or in error.\nThe technical definition of SI is:\n\"A voting system is software-independent if an undetected change or error in its software cannot cause an undetectable change or error in an election outcome.\"\nSI has been redefined as a global property for a tabulation of votes rather than of each individual vote, aiming to detect rather than prevent error and fraud through human processes.\nTGDC Resolution.\nThe Election Assistance Commission's Technical Guidelines Development Committee adopted an SI resolution for the next iteration of the Voluntary Voting System Guidelines (VVSG):\n\"Election officials and vendors have appropriately responded to the growing complexity of voting systems by adding more stringent access controls, encryption, testing, and physical security to election procedures and systems. The TGDC has considered current threats to voting systems and, at this time, finds that security concerns do not warrant replacing deployed voting systems where EAC Best Practices are used.\"\n\"To provide auditability and proactively address the increasing difficulty of protecting against all prospective threats, the TGDC directs STS to write requirements for the next version of the VVSG requiring the next generation of voting systems to be software independent. 
The TGDC directs STS and HFP to draft usability and accessibility requirements to ensure that all voters can verify the independent voting record.\"\n\"The TGDC further directs STS and Core Requirements and Testing Subcommittees (CRT) to draft requirements to ensure that systems that produce independently verifiable voting records are reliable and provide adequate support for audits.\"\nExample systems.\nExamples of software-independent voting systems are optical scan voting systems and direct recording electronic voting computers (DRE) with a voter verified paper audit trail.", "Automation-Control": 0.6336023808, "Qwen2": "Yes"} {"id": "12693735", "revid": "27823944", "url": "https://en.wikipedia.org/wiki?curid=12693735", "title": "Winnow (algorithm)", "text": "The winnow algorithm is a technique from machine learning for learning a linear classifier from labeled examples. It is very similar to the perceptron algorithm. However, the perceptron algorithm uses an additive weight-update scheme, while Winnow uses a multiplicative scheme that allows it to perform much better when many dimensions are irrelevant (hence its name winnow). It is a simple algorithm that scales well to high-dimensional data. During training, Winnow is shown a sequence of positive and negative examples. From these it learns a decision hyperplane that can then be used to label novel examples as positive or negative. The algorithm can also be used in the online learning setting, where the learning and the classification phase are not clearly separated.\nAlgorithm.\nThe basic algorithm, Winnow1, is as follows. The instance space is formula_1, that is, each instance is described as a set of Boolean-valued features. The algorithm maintains non-negative weights formula_2 for formula_3, which are initially set to 1, one weight for each feature. 
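A minimal Python sketch of this multiplicative scheme follows (an illustration, not the original formulation: it fixes the threshold at n/2, a common choice, uses a promotion multiplier of 2, and sets demoted weights to 0 as in Winnow1):

```python
def winnow1_train(examples, n, theta=None, alpha=2.0):
    """Train Winnow1 on (x, label) pairs, where x is a tuple of n Boolean
    features (0/1) and label is 1 (positive) or 0 (negative).
    Returns the learned weight vector."""
    if theta is None:
        theta = n / 2.0          # common threshold choice (an assumption here)
    w = [1.0] * n                # all weights start at 1
    for x, label in examples:
        pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) >= theta else 0
        if pred == 0 and label == 1:      # promotion: double weights of active features
            w = [wi * alpha if xi else wi for wi, xi in zip(w, x)]
        elif pred == 1 and label == 0:    # demotion: zero weights of active features
            w = [0.0 if xi else wi for wi, xi in zip(w, x)]
    return w

# Target concept: x[0] OR x[1], a 2-literal monotone disjunction over 4 features
data = [((1, 0, 1, 0), 1), ((0, 0, 1, 1), 0), ((0, 1, 0, 1), 1), ((0, 0, 0, 1), 0)]
w = winnow1_train(data * 5, n=4)
pred = lambda x: 1 if sum(wi * xi for wi, xi in zip(w, x)) >= 2.0 else 0
assert pred((1, 0, 0, 0)) == 1 and pred((0, 0, 1, 0)) == 0
```

Because irrelevant features are demoted multiplicatively (here, zeroed), their influence disappears after a few mistakes, which is the behaviour behind the dimension-independent mistake bounds discussed below.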
When the learner is given an example formula_4, it applies the typical prediction rule for linear classifiers:\nHere formula_6 is a real number that is called the \"threshold\". Together with the weights, the threshold defines a dividing hyperplane in the instance space. Good bounds are obtained if formula_7 (see below).\nFor each example with which it is presented, the learner applies the following update rule:\nA typical value for the multiplier \"α\" is 2.\nThere are many variations to this basic approach. \"Winnow2\" is similar except that in the demotion step the weights are divided by \"α\" instead of being set to 0. \"Balanced Winnow\" maintains two sets of weights, and thus two hyperplanes. This can then be generalized for multi-label classification.\nMistake bounds.\nIn certain circumstances, it can be shown that the number of mistakes Winnow makes as it learns has an upper bound that is independent of the number of instances with which it is presented. If the Winnow1 algorithm uses formula_14 and formula_15 on a target function that is a formula_16-literal monotone disjunction given by formula_17, then for any sequence of instances the total number of mistakes is bounded by:\nformula_18.", "Automation-Control": 0.7570683956, "Qwen2": "Yes"} {"id": "46871755", "revid": "6046731", "url": "https://en.wikipedia.org/wiki?curid=46871755", "title": "Hole drilling method", "text": "The hole drilling method is a method for measuring residual stresses in a material. Residual stress occurs in a material in the absence of external loads. Residual stress interacts with the applied loading on the material to affect the overall strength, fatigue, and corrosion performance of the material. Residual stresses are measured through experiments; the hole drilling method is one of the most widely used methods for residual stress measurement.\nThe hole drilling method can measure macroscopic residual stresses near the material surface. The principle is based on drilling of a small hole into the material.
When the material containing residual stress is removed, the remaining material reaches a new equilibrium state. The new equilibrium state has associated deformations around the drilled hole. The deformations are related to the residual stress in the volume of material that was removed through drilling. The deformations around the hole are measured during the experiment using strain gauges or optical methods. The original residual stress in the material is calculated from the measured deformations. The hole drilling method is popular for its simplicity, and it is suitable for a wide range of applications and materials.\nKey advantages of the hole drilling method include rapid preparation, versatility of the technique for different materials, and reliability. Conversely, the hole drilling method is limited in depth of analysis and specimen geometry, and is at least semi-destructive. \nHistory and development.\nThe idea of measuring the residual stress by drilling a hole and registering the change of the hole diameter was first proposed by Mathar in 1934. In 1966 Rendler and Vigness introduced a systematic and repeatable procedure of hole drilling to measure the residual stress. In the following period the method was further developed in terms of drilling techniques, measuring the relieved deformations, and the residual stress evaluation itself. A very important milestone is the use of the finite element method to compute the calibration coefficients and to evaluate the residual stresses from the measured relieved deformations (Schajer, 1981). In particular, this allowed the evaluation of residual stresses that are not constant along the depth. It also brought further possibilities of using the method, e.g., for inhomogeneous materials, coatings, etc. The measurement and evaluation procedure is standardised in the ASTM E837 standard of the American Society for Testing and Materials, which has also contributed to the popularity of the method.
Hole drilling is currently one of the most widespread methods of measuring residual stress. Modern computational methods are used for the evaluation. The method is being developed especially in terms of drilling techniques and the possibilities of measuring the deformations.\nFundamental principles.\nThe hole drilling method of measuring the residual stresses is based on drilling a small hole in the material surface. This relieves the residual stresses, producing associated deformations around the hole. The relieved deformations are measured in at least three independent directions around the hole. The original residual stress in the material is then evaluated based on the measured deformations and using the so-called calibration coefficients. The hole is made by a cylindrical end mill or by alternative techniques. Deformations are most often measured using strain gauges (strain gauge rosettes).\nThe biaxial stress in the surface plane can be measured. The method is often referred to as semi-destructive owing to the small material damage. The method is relatively simple and fast, and the measuring device is usually portable. Disadvantages include the destructive character of the technique, limited resolution, and a lower accuracy of the evaluation in the case of nonuniform stresses or inhomogeneous material properties.\nThe so-called calibration coefficients play an important role in the residual stress evaluation. They are used to convert the relieved deformations to the original residual stress in the material. The coefficients can be theoretically derived for a through hole and a homogeneous stress. Then they depend only on the material properties, hole radius, and the distance from the hole. In the vast majority of practical applications, however, the preconditions for using the theoretically derived coefficients are not met, e.g., the integral deformation over the strain gauge area is not included, the hole is blind instead of through, etc.
Therefore, coefficients taking into account the practical aspects of the measurement are used. They are mostly determined by a numerical computation using the finite element method. They express the relation between the relieved deformations and the residual stresses, taking into account the hole size, hole depth, geometry of the strain gauge rosette, material, and other parameters.\nThe evaluation of the residual stresses depends on the method used to calculate them from the measured relieved deformations. All the evaluation methods are built on the same basic principles. They differ in the preconditions for use, the accuracy requirements on the calibration coefficients, and the possibility of taking additional influences into account. In general, the hole is made in successive steps and the relieved deformations are measured after each step.\nEvaluation methods for the residual stress.\nSeveral methods have been developed for the evaluation of residual stresses from the relieved deformations. The fundamental method is the \"equivalent uniform stress method\". The coefficients for a particular hole diameter, rosette type, and hole depth are published in the ASTM E837 standard. The method is suitable for a constant or slowly changing stress along the depth. It can be used as a guideline for non-constant stresses; in that case, however, the method may give highly distorted results.\nThe most general method is the \"integral method\". It accounts for the influence of the stress relieved at a given depth, which changes with the total depth of the hole. The calibration coefficients are expressed as matrices. The evaluation leads to a system of equations whose solution is a vector of residual stresses at particular depths. A numerical simulation is required to get the calibration coefficients. The integral method and its coefficients are defined in the ASTM E837 standard.\nThere are other evaluation methods that have lower demands on the calibration coefficients and on the evaluation process itself.
These include \"the average stress method\" and \"the incremental strain method\". Both the methods are based on the assumption that the change in deformation is caused solely by the relieved stress on the drilled increment. They are suitable only if there are small changes in the stress profiles. Both the methods give numerically correct results for uniform stresses.\n\"The power series method\" and \"the spline method\" are other modifications of the integral method. They both take into account both the distance of the stress effect from the surface and the total hole depth. Contrary to the integral method, the resulting stress values are approximated by a polynomial or a spline. The power series method is very stable but it cannot capture rapidly changing stress values. The spline method is more stable and less susceptible to errors than the integral method. It can capture the actual stress values better than the power series method. The main disadvantage are the complicated mathematical calculations needed to solve a system of nonlinear equations.\nUsing the hole drilling method.\nThe hole drilling method finds its use in many industrial areas dealing with material production and processing. The most important technologies include heat treatment, mechanical and thermal surface finishing, machining, welding, coating, or manufacturing composites. Despite its relative universality, the method requires these fundamental preconditions to be met: the possibility to drill the material, the possibility to apply the tensometric rosettes (or other means of measuring the deformations), and the knowledge of the material properties. Additional conditions can affect the accuracy and repeatability of the measuring. These include especially the size and shape of the sample, distance of the measured area from the edges, homogeneity of the material, presence of residual stress gradients, etc. 
Hole drilling can be performed in the laboratory or as a field measurement, making it ideal for measuring actual stresses in large components that cannot be moved.", "Automation-Control": 0.6038018465, "Qwen2": "Yes"} {"id": "3003010", "revid": "1147286221", "url": "https://en.wikipedia.org/wiki?curid=3003010", "title": "Electroforming", "text": "Electroforming is a metal forming process in which parts are fabricated through electrodeposition on a model, known in the industry as a mandrel. Conductive (metallic) mandrels are treated to create a mechanical parting layer, or are chemically passivated to limit electroform adhesion to the mandrel and thereby allow its subsequent separation. Non-conductive (glass, silicon, plastic) mandrels require the deposition of a conductive layer prior to electrodeposition. Such layers can be deposited chemically, or using vacuum deposition techniques (e.g., gold sputtering). The outer surface of the mandrel forms the inner surface of the form. \nThe process involves passing direct current through an electrolyte containing salts of the metal being electroformed. The anode is the solid metal being electroformed, and the cathode is the mandrel, onto which the electroform gets plated (deposited). The process continues until the required electroform thickness is achieved. The mandrel is then either separated intact, melted away, or chemically dissolved.\nThe surface of the finished part that was in intimate contact with the mandrel is replicated in fine detail with respect to the original, and is not subject to the shrinkage that would normally be experienced in a foundry cast metal object, or the tool marks of a milled part. The solution side of the part is less well defined, and that loss of definition increases with thickness of the deposit. In extreme cases, where a thickness of several millimetres is required, there is preferential build-up of material on sharp outside edges and corners. 
This tendency can be reduced by shielding or by a process known as periodic reverse, where the electroforming current is reversed for short periods and the excess is preferentially dissolved electrochemically. The finished form can either be the finished part, or can be used in a subsequent process to produce a positive of the original mandrel shape, such as with vinyl records or CD and DVD stamper manufacture.\nIn recent years, due to its ability to replicate a mandrel surface with practically no loss of fidelity, electroforming has taken on new importance in the fabrication of micro- and nano-scale metallic devices and in producing precision injection molds with micro- and nano-scale features for production of non-metallic micro-molded objects.\nProcess.\nIn the basic electroforming process, an electrolytic bath is used to deposit nickel or another electroformable metal onto a conductive surface of a model (mandrel). Once the deposited material has been built up to the desired thickness, the electroform is parted from the substrate. This process allows precise replication of the mandrel surface texture and geometry at low unit cost with high repeatability and excellent process control.\nIf the mandrel is made of a non-conductive material, it can be coated with a thin conductive layer.\nAdvantages and disadvantages.\nThe main advantage of electroforming is that it accurately replicates the external shape of the mandrel. Generally, machining a cavity accurately is more challenging than machining a convex shape; for electroforming, however, the opposite holds true, because the mandrel's exterior can be accurately machined and then used to electroform a precision cavity.\nCompared to other basic metal forming processes (casting, forging, stamping, deep drawing, machining and fabricating), electroforming is very effective when requirements call for extreme tolerances, complexity or light weight.
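The build-up rate of the deposit can be estimated from Faraday's law of electrolysis. The back-of-the-envelope sketch below assumes nickel, 100% current efficiency, and an illustrative current density; it computes the plating time needed to grow a given thickness:

```python
F = 96485.0        # Faraday constant, C/mol
M_NI = 58.69       # molar mass of nickel, g/mol
N_NI = 2           # electrons transferred per Ni(2+) ion reduced
RHO_NI = 8.9       # density of nickel, g/cm^3

def deposition_time_hours(thickness_cm, current_density_A_cm2, efficiency=1.0):
    """Hours of plating needed to electroform a uniform nickel layer of the
    given thickness at the given current density (Faraday's law)."""
    mass_per_cm2 = thickness_cm * RHO_NI                 # g/cm^2 of deposit
    charge_per_cm2 = mass_per_cm2 * N_NI * F / M_NI      # C/cm^2 required
    seconds = charge_per_cm2 / (current_density_A_cm2 * efficiency)
    return seconds / 3600.0

# e.g. a 0.5 mm thick electroform at 0.02 A/cm^2 (illustrative numbers)
t = deposition_time_hours(0.05, 0.02)
print(round(t, 1), "hours")
```

The estimate is linear in thickness, which is why thick electroforms (and the associated edge build-up discussed above) take many hours to grow.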
The precision and resolution inherent in the photo-lithographically produced conductive patterned substrate allow finer geometries to be produced to tighter tolerances while maintaining superior edge definition with a near optical finish. Electroformed metal can be extremely pure, with superior properties over wrought metal due to its refined crystal structure. Multiple layers of electroformed metals can be bonded together, or to different substrate materials, to produce complex structures with \"grown-on\" flanges and bosses.\nTolerances of 1.5 to 3 nanometres have been reported.\nA wide variety of shapes and sizes can be made by electroforming, the principal limitation being the need to part the product from the mandrel. Since the fabrication of a product requires only a single model or mandrel, low production quantities can be made economically.", "Automation-Control": 0.7850214243, "Qwen2": "Yes"} {"id": "73127816", "revid": "28481209", "url": "https://en.wikipedia.org/wiki?curid=73127816", "title": "Exploration-exploitation dilemma", "text": "The exploration-exploitation dilemma, also known as the explore-exploit tradeoff, is a fundamental concept in decision-making that arises in many domains. It is depicted as the balancing act between two opposing strategies. Exploitation involves choosing the best-known option based on past experiences, while exploration involves trying out new options that may lead to better outcomes in the future. Finding the optimal balance between these two strategies is a crucial challenge in many decision-making situations, where the goal is to maximize long-term benefits.\nApplication in machine learning.\nIn the context of machine learning, the exploration-exploitation tradeoff is often encountered in reinforcement learning, a type of machine learning that involves training agents to make decisions based on feedback from the environment.
The agent must decide whether to exploit the current best-known policy or explore new policies to improve its performance. Various algorithms have been developed to address this challenge, such as epsilon-greedy, Thompson sampling, and the upper confidence bound.", "Automation-Control": 0.9989483356, "Qwen2": "Yes"} {"id": "6693240", "revid": "7903804", "url": "https://en.wikipedia.org/wiki?curid=6693240", "title": "Pencil milling", "text": "Pencil milling is a cleanup toolpath generated by computer-aided manufacturing (CAM) programs to machine internal corners and fillets with smaller-radius tools, removing the remaining material that is inaccessible to the larger tools used for previous roughing, semi-finishing, and finishing toolpaths. The name comes from the way that a pencil could naturally be drawn along these corners. It is sometimes called a rolling ball toolpath.\nOften, constant step-over passes are derived from a single pencil pass to create parallel pencil passes that are very good for cleaning up corners and fillets where excess material remains from a bigger cutter.\nGenerating pencil toolpaths.\nThere are several alternative algorithms for generating pencil passes. The method most commonly published in the academic literature involves creating tool surface offsets of the model surfaces and intersecting them to find the common line where the cutter would be in contact with two surfaces at once. An example of this implementation uses the ZMap method described by Park, et al.\nThe industrial method, used in commercial CAM software, differs substantially and works by detecting double-contact points and linking them up into a chain to form a toolpath. A double-contact point is a pair of cutter locations displaced by a tiny distance horizontally, but with a large difference in height or a sudden change in contact point.
These positions can be located very precisely by binary subdivision, where a cutter location created between a pair of close cutter locations will almost always be continuous with one or the other side.", "Automation-Control": 0.9985457659, "Qwen2": "Yes"} {"id": "6547678", "revid": "28481209", "url": "https://en.wikipedia.org/wiki?curid=6547678", "title": "Dual control theory", "text": "Dual control theory is a branch of control theory that deals with the control of systems whose characteristics are initially unknown. It is called \"dual\" because in controlling such a system the controller's objectives are twofold: \nThese two objectives may be partly in conflict.\nIn the context of reinforcement learning, this is known as the exploration-exploitation trade-off (e.g. Multi-armed bandit#Empirical motivation).\nDual control theory was developed by Alexander Aronovich Fel'dbaum in 1960. He showed that in principle the optimal solution can be found by dynamic programming, but this is often impractical; as a result, a number of methods for designing sub-optimal dual controllers have been devised.\nExample.\nTo use an analogy: if you are driving a new car, you want to get to your destination cheaply and smoothly, but you also want to see how well the car accelerates, brakes and steers so as to get a better feel for how to drive it, so you will perform some test manoeuvres for this purpose. Similarly, a dual controller will inject a so-called probing (or exploration) signal into the system that may detract from short-term performance but will improve control in the future.", "Automation-Control": 0.9604145885, "Qwen2": "Yes"} {"id": "8704171", "revid": "589223", "url": "https://en.wikipedia.org/wiki?curid=8704171", "title": "Outline of machines", "text": "Machine – mechanical system that provides the useful application of power to achieve movement. A machine consists of a power source, or engine, and a mechanism or transmission for the controlled use of this power.
The combination of force and movement, known as power, is an important characteristic of a machine.\nMachine theory.\nThe mathematical tools for the analysis of movement in machines:\nMachine elements.\nMovement in a machine is controlled by mechanism elements that shape the forces and movement, and by structural elements that support these mechanisms.\nGeneral machine-related concepts.\nMechanical components.\nAirfoil.\nAirfoil", "Automation-Control": 0.9924964309, "Qwen2": "Yes"} {"id": "545863", "revid": "7903804", "url": "https://en.wikipedia.org/wiki?curid=545863", "title": "Step response", "text": "The step response of a system in a given initial state consists of the time evolution of its outputs when its control inputs are Heaviside step functions. In electronic engineering and control theory, step response is the time behaviour of the outputs of a general system when its inputs change from zero to one in a very short time. The concept can be extended to the abstract mathematical notion of a dynamical system using an evolution parameter.\nFrom a practical standpoint, knowing how the system responds to a sudden input is important because large and possibly fast deviations from the long-term steady state may have extreme effects on the component itself and on other portions of the overall system dependent on this component. In addition, the overall system cannot act until the component's output settles down to some vicinity of its final state, delaying the overall system response. Formally, knowing the step response of a dynamical system gives information on the stability of such a system, and on its ability to reach one stationary state when starting from another.\nFormal mathematical description.\nThis section provides a formal mathematical definition of step response in terms of the abstract mathematical concept of a dynamical system formula_1: all notations and assumptions required for the following description are listed here.
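Before the formal treatment, the notion can be illustrated with the simplest concrete case: a first-order system τ·y' + y = u driven by a unit step, whose step response is the exponential 1 − e^(−t/τ). The sketch below integrates the ODE with forward Euler and compares against that closed form (the time constant τ = 0.5 s is an arbitrary choice):

```python
import math

def step_response_euler(tau, t_end, dt=1e-4):
    """Forward-Euler step response of tau*y' + y = u, with u = 1 for t >= 0
    and zero initial state. Returns a list of (t, y) samples."""
    y, t, out = 0.0, 0.0, []
    while t <= t_end:
        out.append((t, y))
        y += dt * (1.0 - y) / tau   # y' = (u - y)/tau with u = 1
        t += dt
    return out

tau = 0.5
trace = step_response_euler(tau, 3.0)
t_last, y_last = trace[-1]
# The numerical solution tracks the analytic response 1 - exp(-t/tau)
assert abs(y_last - (1.0 - math.exp(-t_last / tau))) < 1e-3
# After about six time constants the output has essentially settled
assert y_last > 0.99
```

The settling behaviour visible here (the output reaching a vicinity of its final value) is exactly what the quantities defined later, such as settling time, make precise.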
\nNonlinear dynamical system.\nFor a general dynamical system, the step response is defined as follows:\nIt is the evolution function when the control inputs (or source term, or forcing inputs) are Heaviside functions: the notation emphasizes this concept showing \"H\"(\"t\") as a subscript.\nLinear dynamical system.\nFor a linear time-invariant (LTI) black box, let formula_9 for notational convenience: the step response can be obtained by convolution of the Heaviside step function control and the impulse response \"h\"(\"t\") of the system itself\nwhich for an LTI system is equivalent to just integrating the latter. Conversely, for an LTI system, the derivative of the step response yields the impulse response:\nHowever, these simple relations are not true for a non-linear or time-variant system.\nTime domain versus frequency domain.\nInstead of frequency response, system performance may be specified in terms of parameters describing time-dependence of response. The step response can be described by the following quantities related to its \"time behavior\",\nIn the case of linear dynamic systems, much can be inferred about the system from these characteristics. Below the step response of a simple two-pole amplifier is presented, and some of these terms are illustrated.\nIn LTI systems, the function that has the steepest slew rate that doesn't create overshoot or ringing is the Gaussian function. This is because it is the only function whose Fourier transform has the same shape.\nFeedback amplifiers.\nThis section describes the step response of a simple negative feedback amplifier shown in Figure 1. The feedback amplifier consists of a main open-loop amplifier of gain \"A\"OL and a feedback loop governed by a feedback factor β. 
This feedback amplifier is analyzed to determine how its step response depends upon the time constants governing the response of the main amplifier, and upon the amount of feedback used.\nA negative-feedback amplifier has gain given by (see negative feedback amplifier):\nwhere \"A\"OL = open-loop gain, \"A\"FB = closed-loop gain (the gain with negative feedback present) and \"β\" = feedback factor.\nWith one dominant pole.\nIn many cases, the forward amplifier can be sufficiently well modeled in terms of a single dominant pole of time constant τ, that is, an open-loop gain given by:\nwith zero-frequency gain \"A\"0 and angular frequency ω = 2π\"f\". This forward amplifier has unit step response\nan exponential approach from 0 toward the new equilibrium value of \"A\"0.\nThe one-pole amplifier's transfer function leads to the closed-loop gain:\nThis closed-loop gain is of the same form as the open-loop gain: a one-pole filter. Its step response is of the same form: an exponential decay toward the new equilibrium value. But the time constant of the closed-loop step function is \"τ\" / (1 + \"β\" \"A\"0), so it is faster than the forward amplifier's response by a factor of 1 + \"β\" \"A\"0:\nAs the feedback factor \"β\" is increased, the step response will get faster, until the original assumption of one dominant pole is no longer accurate. If there is a second pole, then as the closed-loop time constant approaches the time constant of the second pole, a two-pole analysis is needed.\nTwo-pole amplifiers.\nIn the case that the open-loop gain has two poles (two time constants, \"τ\"1, \"τ\"2), the step response is a bit more complicated.
The open-loop gain is given by:\nwith zero-frequency gain \"A\"0 and angular frequency \"ω\" = 2\"πf\".\nAnalysis.\nThe two-pole amplifier's transfer function leads to the closed-loop gain:\nThe time dependence of the amplifier is easy to discover by switching variables to \"s\" = \"j\"ω, whereupon the gain becomes:\nThe poles of this expression (that is, the zeros of the denominator) occur at:\nwhich shows that for large enough values of \"βA\"0 the argument of the square root becomes negative, that is, the square root becomes imaginary, and the pole positions are complex conjugate numbers, either \"s\"+ or \"s\"−; see Figure 2:\nwith\nand\nUsing polar coordinates with the magnitude of the radius to the roots given by |\"s\"| (Figure 2):\nand the angular coordinate φ is given by:\nTables of Laplace transforms show that the time response of such a system is composed of combinations of the two functions:\nwhich is to say, the solutions are damped oscillations in time. In particular, the unit step response of the system is:\nwhich simplifies to\nwhen \"A\"0 tends to infinity and the feedback factor \"β\" is one.\nNotice that the damping of the response is set by ρ, that is, by the time constants of the open-loop amplifier. In contrast, the frequency of oscillation is set by μ, that is, by the feedback parameter through β\"A\"0. Because ρ is a sum of reciprocals of time constants, it is dominated by the \"shorter\" of the two.\nResults.\nFigure 3 shows the time response to a unit step input for three values of the parameter μ. It can be seen that the frequency of oscillation increases with μ, but the oscillations are contained between the two asymptotes set by the exponentials [ 1 − exp(−\"ρt\") ] and [ 1 + exp(−ρt) ]. These asymptotes are determined by ρ and therefore by the time constants of the open-loop amplifier, independent of feedback.\nThe phenomenon of oscillation about the final value is called ringing.
The overshoot is the maximum swing above final value, and clearly increases with μ. Likewise, the undershoot is the minimum swing below final value, again increasing with μ. The settling time is the time for departures from final value to sink below some specified level, say 10% of final value.\nThe dependence of settling time upon μ is not obvious, and the approximation of a two-pole system probably is not accurate enough to make any real-world conclusions about feedback dependence of settling time. However, the asymptotes [ 1 − exp(−\"ρt\") ] and [ 1 + exp(−\"ρt\") ] clearly impact settling time, and they are controlled by the time constants of the open-loop amplifier, particularly the shorter of the two time constants. That suggests that a specification on settling time must be met by appropriate design of the open-loop amplifier.\nThe two major conclusions from this analysis are: first, feedback controls the amplitude of oscillation about the final value for a given open-loop amplifier and given open-loop time constants \"τ\"1 and \"τ\"2; and second, the open-loop amplifier decides settling time, because the asymptotes bounding the ringing are set by its time constants alone.\nAs an aside, it may be noted that real-world departures from this linear two-pole model occur due to two major complications: first, real amplifiers have more than two poles, as well as zeros; and second, real amplifiers are nonlinear, so their step response changes with signal amplitude.\nControl of overshoot.\nHow overshoot may be controlled by appropriate parameter choices is discussed next.\nUsing the equations above, the amount of overshoot can be found by differentiating the step response and finding its maximum value. The result for maximum step response \"S\"max is:\nThe final value of the step response is 1, so the exponential is the actual overshoot itself. It is clear the overshoot is zero if \"μ\" = 0, which is the condition:\nThis quadratic is solved for the ratio of time constants by setting \"x\" = (\"τ\"1 / \"τ\"2)1/2 with the result\nBecause \"β\" \"A\"0 ≫ 1, the 1 in the square root can be dropped, and the result is\nIn words, the first time constant must be much larger than the second.
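Differentiating the underdamped step response gives the standard maximum S_max = 1 + exp(−πρ/μ), so exp(−πρ/μ) is the fractional overshoot. A sketch with assumed parameter values, showing how widening the pole separation τ1 = α βA0 τ2 tames the overshoot:

```python
import math

def overshoot(beta_A0, tau1, tau2):
    """Fractional overshoot exp(-pi*rho/mu) of the underdamped two-pole step
    response, with rho = (1/tau1 + 1/tau2)/2 and
    mu^2 = (1 + beta_A0)/(tau1*tau2) - rho^2.
    Returns 0.0 when the poles are real (no ringing, hence no overshoot)."""
    rho = 0.5 * (1.0 / tau1 + 1.0 / tau2)
    mu2 = (1.0 + beta_A0) / (tau1 * tau2) - rho ** 2
    if mu2 <= 0.0:
        return 0.0
    return math.exp(-math.pi * rho / math.sqrt(mu2))

# Assumed example: beta*A0 = 100, tau2 = 1 us; sweep the separation factor.
bA0, tau2 = 100.0, 1e-6
shoots = [overshoot(bA0, alpha * bA0 * tau2, tau2) for alpha in (0.5, 2.0, 4.0)]
```

With these numbers the overshoot falls monotonically as α grows; the "maximally flat" case α = 2 still shows a small (~4%) step-response overshoot, even though its frequency response has no peaking.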
To be more adventurous than a design allowing for no overshoot, we can introduce a factor \"α\" in the above relation:\nand let α be set by the amount of overshoot that is acceptable.\nFigure 4 illustrates the procedure. Comparing the top panel (α = 4) with the lower panel (α = 0.5) shows that lower values for α increase the rate of response, but also increase the overshoot. The case α = 2 (center panel) is the \"maximally flat\" design that shows no peaking in the Bode gain vs. frequency plot. That design has a rule-of-thumb built-in safety margin to deal with non-ideal realities like multiple poles (or zeros), nonlinearity (signal-amplitude dependence) and manufacturing variations, any of which can lead to too much overshoot. The adjustment of the pole separation (that is, setting α) is the subject of frequency compensation, and one such method is pole splitting.\nControl of settling time.\nThe amplitude of ringing in the step response in Figure 3 is governed by the damping factor exp(−\"ρt\"). That is, if we specify some acceptable step response deviation from final value, say Δ, that is:\nthis condition is satisfied regardless of the value of \"β\" \"A\"OL provided the time is longer than the settling time, say \"t\"S, given by:\nwhere the approximation τ1 ≫ τ2 is applicable because of the overshoot control condition, which makes \"τ\"1 = \"αβA\"OL τ2. Often the settling time condition is referred to by saying the settling period is inversely proportional to the unity gain bandwidth, because 1/(2\"π\" \"τ\"2) is close to this bandwidth for an amplifier with typical dominant pole compensation. However, this result is more precise than this rule of thumb. As an example of this formula, a tolerance of Δ = e−4 ≈ 1.8% gives the settling time condition \"t\"S = 8 \"τ\"2.\nIn general, control of overshoot sets the time constant ratio, and settling time \"t\"S sets τ2.\nSystem Identification using the Step Response: System with two real poles.\nThis method uses significant points of the step response.
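The settling-time condition exp(−ρ t_S) = Δ can be checked numerically. The code assumes ρ ≈ 1/(2τ2), the τ1 ≫ τ2 approximation used above, which gives t_S = 2 τ2 ln(1/Δ):

```python
import math

def settling_time(delta, tau2):
    """Smallest t with exp(-rho*t) <= delta, taking rho ≈ 1/(2*tau2)
    (valid under the overshoot-control condition tau1 >> tau2)."""
    return 2.0 * tau2 * math.log(1.0 / delta)

# A tolerance band of delta = e^-4 ≈ 1.8% gives t_S = 8*tau2,
# matching the worked example in the text.
```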
There is no need to guess tangents to the measured signal. The equations are derived using numerical simulations, determining some significant ratios and fitting parameters of nonlinear equations.\nPhase margin.\nNext, the choice of pole ratio \"τ\"1/\"τ\"2 is related to the phase margin of the feedback amplifier. The procedure outlined in the Bode plot article is followed. Figure 5 is the Bode gain plot for the two-pole amplifier in the range of frequencies up to the second pole position. The assumption behind Figure 5 is that the frequency \"f\"0 dB lies between the lowest pole at \"f\"1 = 1/(2πτ1) and the second pole at \"f\"2 = 1/(2πτ2). As indicated in Figure 5, this condition is satisfied for values of α ≥ 1.\nUsing Figure 5, the frequency (denoted by \"f\"0 dB) is found where the loop gain β\"A\"0 satisfies the unity gain or 0 dB condition, as defined by:\nThe downward leg of the gain plot has a slope of 20 dB/decade; for every factor-of-ten increase in frequency, the gain drops by a factor of ten:\nThe phase margin is the departure of the phase at \"f\"0 dB from −180°. Thus, the margin is:\nBecause \"f\"0 dB / \"f\"1 = \"βA\"0 ≫ 1, the term in \"f\"1 is nearly 90°. That makes the phase margin:\nIn particular, for the case \"α\" = 1, \"φ\"m = 45°, and for \"α\" = 2, \"φ\"m = 63.4°. Sansen recommends \"α\" = 3, \"φ\"m = 71.6° as a \"good safety position to start with\".\nIf α is increased by shortening \"τ\"2, the settling time \"t\"S also is shortened. If \"α\" is increased by lengthening \"τ\"1, the settling time \"t\"S is little altered.
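The phase-margin values quoted above follow from φm = 90° − arctan(1/α): with τ1 = α βA0 τ2 the ratio f0 dB/f2 equals 1/α, while the f1 pole contributes its full 90°. A quick check:

```python
import math

def phase_margin_deg(alpha):
    """Phase margin of the two-pole amplifier when tau1 = alpha*beta*A0*tau2,
    so that f_0dB / f2 = 1/alpha:  phi_m = 180 - 90 - atan(1/alpha) degrees."""
    return 90.0 - math.degrees(math.atan(1.0 / alpha))
```

This reproduces the cases in the text: α = 1 gives 45°, α = 2 gives 63.4°, and Sansen's recommended α = 3 gives 71.6°.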
More commonly, both \"τ\"1 \"and\" \"τ\"2 change, for example if the technique of pole splitting is used.\nAs an aside, for an amplifier with more than two poles, the diagram of Figure 5 still may be made to fit the Bode plots by making \"f\"2 a fitting parameter, referred to as an \"equivalent second pole\" position.\nMinimum phase.\nIn control theory and signal processing, a linear, time-invariant system is said to be minimum-phase if the system and its inverse are causal and stable.\nThe most general causal LTI transfer function can be uniquely factored into a series of an all-pass and a minimum phase system. The system function is then the product of the two parts, and in the time domain the response of the system is the convolution of the two part responses. The difference between a minimum phase and a general transfer function is that a minimum phase system has all of the poles and zeroes of its transfer function in the left half of the s-plane representation (in discrete time, respectively, inside the unit circle of the z-plane). Since inverting a system function turns poles into zeroes and vice versa, and poles to the right of the s-plane imaginary axis (or outside the z-plane unit circle) lead to unstable systems, only the class of minimum phase systems is closed under inversion. Intuitively, the minimum phase part of a general causal system implements its amplitude response with minimum group delay, while its all-pass part corrects its phase response alone to correspond with the original system function.\nThe analysis in terms of poles and zeroes is exact only in the case of transfer functions which can be expressed as ratios of polynomials. In the continuous-time case, such systems translate into networks of conventional, idealized LCR networks.
In discrete time, they conveniently translate into approximations thereof, using addition, multiplication, and unit delay. It can be shown that in both cases, system functions of rational form with increasing order can be used to efficiently approximate any other system function; thus even system functions lacking a rational form, and so possessing an infinitude of poles and/or zeroes, can in practice be implemented as efficiently as any other.\nIn the context of causal, stable systems, we would in theory be free to choose whether the zeroes of the system function are outside of the stable range (to the right or outside) if the closure condition wasn't an issue. However, inversion is of great practical importance, just as theoretically perfect factorizations are in their own right. (Cf. the spectral symmetric/antisymmetric decomposition as another important example, leading e.g. to Hilbert transform techniques.) Many physical systems also naturally tend towards minimum phase response, and sometimes have to be inverted using other physical systems obeying the same constraint.\nInsight is given below as to why this system is called minimum-phase, and why the basic idea applies even when the system function cannot be cast into a rational form that could be implemented.\nInverse system.\nA system formula_1 is invertible if we can uniquely determine its input from its output. I.e., we can find a system formula_2 such that if we apply formula_1 followed by formula_2, we obtain the identity system formula_5. (See Inverse matrix for a finite-dimensional analog). 
That is,\nformula_6\nSuppose that formula_7 is input to system formula_1 and gives output formula_9.\nformula_10\nApplying the inverse system formula_2 to formula_9 gives the following\nformula_13\nSo we see that the inverse system formula_14 allows us to determine uniquely the input formula_7 from the output formula_9.\nDiscrete-time example.\nSuppose that the system formula_1 is a discrete-time, linear, time-invariant (LTI) system described by the impulse response formula_18 for in . Additionally, suppose formula_2 has impulse response formula_20. The cascade of two LTI systems is a convolution. In this case, the above relation is the following:\nformula_21\nwhere formula_22 is the Kronecker delta or the identity system in the discrete-time case. (Changing the order of formula_23 and formula_24 is allowed because of commutativity of the convolution operation.) Note that this inverse system formula_2 need not be unique.\nMinimum phase system.\nWhen we impose the constraints of causality and stability, the inverse system is unique; and the system formula_1 and its inverse formula_2 are called minimum-phase. The causality and stability constraints in the discrete-time case are the following (for time-invariant systems where is the system's impulse response):\nCausality.\nformula_28\nand\nformula_29\nStability.\nformula_30\nand\nformula_31\nSee the article on stability for the analogous conditions for the continuous-time case.\nFrequency analysis.\nDiscrete-time frequency analysis.\nPerforming frequency analysis for the discrete-time case will provide some insight. The time-domain equation is the following:\nformula_32\nApplying the Z-transform gives the following relation in the z-domain\nformula_33\nFrom this relation, we realize that\nformula_34\nFor simplicity, we consider only the case of a rational transfer function . Causality and stability imply that all poles of must be strictly inside the unit circle (See stability). 
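The discrete-time criterion developed here — all poles (and, once the inverse is required to be causal and stable as well, all zeros) strictly inside the unit circle — can be checked numerically. The example systems below are assumed for illustration:

```python
import numpy as np

def is_minimum_phase(b, a, tol=1e-9):
    """True when all zeros (roots of the numerator b) and all poles (roots
    of the denominator a) lie strictly inside the unit circle.
    Coefficients are given in descending powers of z."""
    return bool(np.all(np.abs(np.roots(b)) < 1.0 - tol)
                and np.all(np.abs(np.roots(a)) < 1.0 - tol))

# H(z) = (z - 0.5)/(z - 0.9): zero and pole inside the circle -> minimum phase.
# G(z) = (z - 2.0)/(z - 0.9): zero outside the circle -> not minimum phase.
```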
Suppose\nformula_35\nwhere and are polynomial in . Causality and stability imply that the poles – the roots of – must be strictly inside the unit circle. We also know that\nformula_36\nSo, causality and stability for formula_37 imply that its poles – the roots of – must be inside the unit circle. These two constraints imply that both the zeros and the poles of a minimum phase system must be strictly inside the unit circle.\nContinuous-time frequency analysis.\nAnalysis for the continuous-time case proceeds in a similar manner except that we use the Laplace transform for frequency analysis. The time-domain equation is the following.\nformula_38\nwhere formula_39 is the Dirac delta function. The Dirac delta function is the identity operator in the continuous-time case because of the sifting property with any signal .\nformula_40\nApplying the Laplace transform gives the following relation in the s-plane.\nformula_41\nFrom this relation, we realize that\nformula_42\nAgain, for simplicity, we consider only the case of a rational transfer function . Causality and stability imply that all poles of must be strictly inside the left-half s-plane (See stability). Suppose\nformula_43\nwhere and are polynomial in . Causality and stability imply that the poles – the roots of – must be inside the left-half s-plane. We also know that\nformula_44\nSo, causality and stability for formula_45 imply that its poles – the roots of – must be strictly inside the left-half s-plane. 
These two constraints imply that both the zeros and the poles of a minimum phase system must be strictly inside the left-half s-plane.\nRelationship of magnitude response to phase response.\nA minimum-phase system, whether discrete-time or continuous-time, has an additional useful property that the natural logarithm of the magnitude of the frequency response (the \"gain\" measured in nepers which is proportional to dB) is related to the phase angle of the frequency response (measured in radians) by the Hilbert transform. That is, in the continuous-time case, let\nformula_46\nbe the complex frequency response of system . Then, only for a minimum-phase system, the phase response of is related to the gain by\nformula_47\nwhere formula_48 denotes the Hilbert transform, and, inversely,\nformula_49\nStated more compactly, let\nformula_50\nwhere formula_51 and formula_52 are real functions of a real variable. Then\nformula_53\nand\nformula_54\nThe Hilbert transform operator is defined to be\nformula_55\nAn equivalent corresponding relationship is also true for discrete-time minimum-phase systems.\nMinimum phase in the time domain.\nFor all causal and stable systems that have the same magnitude response, the minimum phase system has its energy concentrated near the start of the impulse response. i.e., it minimizes the following function which we can think of as the delay of energy in the impulse response.\nformula_56\nMinimum phase as minimum group delay.\nFor all causal and stable systems that have the same magnitude response, the minimum phase system has the minimum group delay. The following proof illustrates this idea of minimum group delay.\nSuppose we consider one zero formula_57 of the transfer function formula_58. 
Let's place this zero formula_57 inside the unit circle (formula_60) and see how the group delay is affected.\nformula_61\nSince the zero formula_57 contributes the factor formula_63 to the transfer function, the phase contributed by this term is the following.\nformula_64\nformula_65 contributes the following to the group delay.\nformula_66\nThe denominator and formula_67 are invariant to reflecting the zero formula_57 outside of the unit circle, i.e., replacing formula_57 with formula_70. However, by reflecting formula_57 outside of the unit circle, we increase the magnitude of formula_72 in the numerator. Thus, having formula_57 inside the unit circle minimizes the group delay contributed by the factor formula_63. We can extend this result to the general case of more than one zero since the phase of the multiplicative factors of the form formula_75 is additive. I.e., for a transfer function with formula_76 zeros,\nformula_77\nSo, a minimum phase system with all zeros inside the unit circle minimizes the group delay since the group delay of each individual zero is minimized.\nNon-minimum phase.\nSystems that are causal and stable whose inverses are causal and unstable are known as \"non-minimum-phase\" systems. A given non-minimum phase system will have a greater phase contribution than the minimum-phase system with the equivalent magnitude response.\nMaximum phase.\nA \"maximum-phase\" system is the opposite of a minimum phase system. A causal and stable LTI system is a \"maximum-phase\" system if its inverse is causal and unstable. That is,\nSuch a system is called a \"maximum-phase system\" because it has the maximum group delay of the set of systems that have the same magnitude response. 
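The contrast between minimum and maximum group delay can be made concrete with a two-tap example (assumed here, not from the text). Reflecting a zero across the unit circle leaves the magnitude response unchanged but moves the impulse-response energy later in time:

```python
def energy_delay(h):
    """Sum of n * h[n]^2: a measure of how late the impulse-response
    energy arrives."""
    return sum(n * x * x for n, x in enumerate(h))

# Two sequences with identical magnitude responses,
# |2 + e^{-jw}|^2 = |1 + 2e^{-jw}|^2 = 5 + 4*cos(w):
h_min = [2.0, 1.0]   # zero at z = -1/2, inside the unit circle (minimum phase)
h_max = [1.0, 2.0]   # zero at z = -2, outside the unit circle (maximum phase)
```

Both sequences carry the same total energy, but the minimum-phase one front-loads it, consistent with the energy-concentration property stated above.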
In this set of equal-magnitude-response systems, the maximum phase system will have maximum energy delay.\nFor example, the two continuous-time LTI systems described by the transfer functions\nformula_78\nhave equivalent magnitude responses; however, the second system has a much larger contribution to the phase shift. Hence, in this set, the second system is the maximum-phase system and the first system is the minimum-phase system. These systems are also famously known as nonminimum-phase systems that raise many stability concerns in control. One recent solution to these systems is moving the RHP zeros to the LHP using the PFCD method.\nMixed phase.\nA \"mixed-phase\" system has some of its zeros inside the unit circle and has others outside the unit circle. Thus, its group delay is neither minimum or maximum but somewhere between the group delay of the minimum and maximum phase equivalent system.\nFor example, the continuous-time LTI system described by transfer function\nformula_79\nis stable and causal; however, it has zeros on both the left- and right-hand sides of the complex plane. Hence, it is a \"mixed-phase\" system. To control the transfer functions that include these systems some methods such as internal model controller (IMC), generalized Smith's predictor (GSP) and parallel feedforward control with derivative (PFCD) are proposed.\nLinear phase.\nA linear-phase system has constant group delay. 
Non-trivial linear phase or nearly linear phase systems are also mixed phase.\nState-space representation.\nIn control engineering, model-based fault detection, and system identification, a state-space representation is a mathematical model of a physical system specified as a set of input, output and state variables related by first-order (not involving second derivatives) differential equations or difference equations. Such variables, called state variables, evolve over time in a way that depends on the values they have at any given instant and on the externally imposed values of input variables. Output variables’ values depend on the values of the state variables.\nThe \"state space\" or \"phase space\" is the geometric space in which the variables on the axes are the state variables. The state of the system can be represented as a vector, the state vector, within state space.\nIf the dynamical system is linear, time-invariant, and finite-dimensional, then the differential and algebraic equations may be written in matrix form.\nThe state-space method is characterized by a significant algebraization of general system theory, which makes it possible to use Kronecker vector-matrix structures; these structures can be applied efficiently to systems with or without modulation.\nThe state-space representation (also known as the \"time-domain approach\") provides a convenient and compact way to model and analyze systems with multiple inputs and outputs. With formula_1 inputs and formula_2 outputs, we would otherwise have to write down formula_3 Laplace transforms to encode all the information about a system.
Unlike the frequency domain approach, the use of the state-space representation is not limited to systems with linear components and zero initial conditions.\nThe state-space model can be applied in subjects such as economics, statistics, computer science and electrical engineering, and neuroscience. In econometrics, for example, state-space models can be used to decompose a time series into trend and cycle, compose individual indicators into a composite index, identify turning points of the business cycle, and estimate GDP using latent and unobserved time series. Many applications rely on the Kalman Filter or a state observer to produce estimates of the current unknown state variables using their previous observations.\nState variables.\nThe internal state variables are the smallest possible subset of system variables that can represent the entire state of the system at any given time. The minimum number of state variables required to represent a given system, formula_4, is usually equal to the order of the system's defining differential equation, but not necessarily. If the system is represented in transfer function form, the minimum number of state variables is equal to the order of the transfer function's denominator after it has been reduced to a proper fraction. It is important to understand that converting a state-space realization to a transfer function form may lose some internal information about the system, and may provide a description of a system which is stable, when the state-space realization is unstable at certain points. In electric circuits, the number of state variables is often, though not always, the same as the number of energy storage elements in the circuit such as capacitors and inductors. 
The state variables defined must be linearly independent, i.e., no state variable can be written as a linear combination of the other state variables, or the system cannot be solved.\nLinear systems.\nThe most general state-space representation of a linear system with formula_1 inputs, formula_2 outputs and formula_4 state variables is written in the following form:\nwhere:\nIn this general formulation, all matrices are allowed to be time-variant (i.e. their elements can depend on time); however, in the common LTI case, matrices will be time invariant. The time variable formula_26 can be continuous (e.g. formula_27) or discrete (e.g. formula_28). In the latter case, the time variable formula_29 is usually used instead of formula_26. Hybrid systems allow for time domains that have both continuous and discrete parts. Depending on the assumptions made, the state-space model representation can assume the following forms:\nExample: continuous-time LTI case.\nStability and natural response characteristics of a continuous-time LTI system (i.e., linear with matrices that are constant with respect to time) can be studied from the eigenvalues of the matrix formula_31. The stability of a time-invariant state-space model can be determined by looking at the system's transfer function in factored form. It will then look something like this:\nThe denominator of the transfer function is equal to the characteristic polynomial found by taking the determinant of formula_33,\nThe roots of this polynomial (the eigenvalues) are the system transfer function's poles (i.e., the singularities where the transfer function's magnitude is unbounded). These poles can be used to analyze whether the system is asymptotically stable or marginally stable. 
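The eigenvalue test for stability can be sketched directly. The two-state examples below are assumed for illustration (a damped and an undamped oscillator):

```python
import numpy as np

def is_asymptotically_stable(A):
    """Continuous-time LTI test: x' = Ax is asymptotically stable iff every
    eigenvalue of A has a strictly negative real part."""
    return bool(np.all(np.linalg.eigvals(A).real < 0))

A_damped = np.array([[0.0, 1.0], [-2.0, -3.0]])    # eigenvalues -1 and -2
A_undamped = np.array([[0.0, 1.0], [-1.0, 0.0]])   # eigenvalues +/- j
```

The undamped case has its poles on the imaginary axis, so it fails the strict test: it is only marginally stable, as described above.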
An alternative approach to determining stability, which does not involve calculating eigenvalues, is to analyze the system's Lyapunov stability.\nThe zeros found in the numerator of formula_35 can similarly be used to determine whether the system is minimum phase.\nThe system may still be input–output stable (see BIBO stable) even though it is not internally stable. This may be the case if unstable poles are canceled out by zeros (i.e., if those singularities in the transfer function are removable).\nControllability.\nThe state controllability condition implies that it is possible – by admissible inputs – to steer the states from any initial value to any final value within some finite time window. A continuous time-invariant linear state-space model is controllable if and only if\nwhere rank is the number of linearly independent rows in a matrix, and where \"n\" is the number of state variables.\nObservability.\nObservability is a measure for how well internal states of a system can be inferred by knowledge of its external outputs. The observability and controllability of a system are mathematical duals (i.e., as controllability provides that an input is available that brings any initial state to any desired final state, observability provides that knowing an output trajectory provides enough information to predict the initial state of the system).\nA continuous time-invariant linear state-space model is observable if and only if\nTransfer function.\nThe \"transfer function\" of a continuous time-invariant linear state-space model can be derived in the following way:\nFirst, taking the Laplace transform of \nyields\nNext, we simplify for formula_40, giving\nand thus\nSubstituting for formula_40 in the output equation\ngiving\nAssuming zero initial conditions formula_46 and a single-input single-output (SISO) system, the transfer function is defined as the ratio of output and input formula_47. 
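The rank conditions for controllability and observability can be sketched as follows, using an assumed two-state example:

```python
import numpy as np

def ctrb(A, B):
    """Controllability matrix [B, AB, ..., A^(n-1) B]."""
    blocks = [B]
    for _ in range(A.shape[0] - 1):
        blocks.append(A @ blocks[-1])
    return np.hstack(blocks)

def obsv(A, C):
    """Observability matrix [C; CA; ...; C A^(n-1)] (the dual construction)."""
    blocks = [C]
    for _ in range(A.shape[0] - 1):
        blocks.append(blocks[-1] @ A)
    return np.vstack(blocks)

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
controllable = np.linalg.matrix_rank(ctrb(A, B)) == A.shape[0]
observable = np.linalg.matrix_rank(obsv(A, C)) == A.shape[0]
```

The duality noted in the text is visible in the code: `obsv(A, C)` is the transpose of `ctrb(A.T, C.T)`.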
For a multiple-input multiple-output (MIMO) system, however, this ratio is not defined. Therefore, assuming zero initial conditions, the transfer function matrix is derived from\nusing the method of equating the coefficients which yields\nConsequently, formula_50 is a matrix with the dimension formula_3 which contains transfer functions for each input output combination. Due to the simplicity of this matrix notation, the state-space representation is commonly used for multiple-input, multiple-output systems. The Rosenbrock system matrix provides a bridge between the state-space representation and its transfer function.\nCanonical realizations.\nAny given transfer function which is strictly proper can easily be transferred into state-space by the following approach (this example is for a 4-dimensional, single-input, single-output system):\nGiven a transfer function, expand it to reveal all coefficients in both the numerator and denominator. This should result in the following form:\nThe coefficients can now be inserted directly into the state-space model by the following approach:\nThis state-space realization is called controllable canonical form because the resulting model is guaranteed to be controllable (i.e., because the control enters a chain of integrators, it has the ability to move every state).\nThe transfer function coefficients can also be used to construct another type of canonical form\nThis state-space realization is called observable canonical form because the resulting model is guaranteed to be observable (i.e., because the output exits from a chain of integrators, every state has an effect on the output).\nProper transfer functions.\nTransfer functions which are only proper (and not strictly proper) can also be realised quite easily. The trick here is to separate the transfer function into two parts: a strictly proper part and a constant. 
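Both the controllable canonical construction and the split of a proper transfer function into a constant plus a strictly proper part can be sketched in one routine. Conventions below (descending powers of s, monic denominator) are assumptions, not taken from the text:

```python
import numpy as np

def to_state_space(num, den):
    """Controllable-canonical realization of a proper SISO transfer function
    H(s) = num(s)/den(s), with den monic and deg(num) <= deg(den).
    A proper-but-not-strictly-proper H is first split into a constant D
    plus a strictly proper remainder."""
    num = np.asarray(num, float)
    den = np.asarray(den, float)
    n = len(den) - 1
    num = np.concatenate([np.zeros(n + 1 - len(num)), num])  # pad to degree n
    D = num[0]                         # constant part (0 if strictly proper)
    b = num[1:] - D * den[1:]          # numerator of the strictly proper part
    A = np.zeros((n, n))
    A[:-1, 1:] = np.eye(n - 1)         # chain of integrators
    A[-1, :] = -den[1:][::-1]          # companion (bottom) row
    B = np.zeros((n, 1)); B[-1, 0] = 1.0
    C = b[::-1].reshape(1, n)          # lowest-order coefficient first
    return A, B, C, np.array([[D]])
```

For a strictly proper input the returned D is zero; for a merely proper one, the output depends directly on the input through D, as in the worked example that follows.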
\nThe strictly proper transfer function can then be transformed into a canonical state-space realization using techniques shown above. The state-space realization of the constant is trivially formula_58. Together we then get a state-space realization with matrices \"A\", \"B\" and \"C\" determined by the strictly proper part, and matrix \"D\" determined by the constant.\nHere is an example to clear things up a bit:\nwhich yields the following controllable realization\nNotice how the output also depends directly on the input. This is due to the formula_62 constant in the transfer function.\nFeedback.\nA common method for feedback is to multiply the output by a matrix \"K\" and setting this as the input to the system: formula_63.\nSince the values of \"K\" are unrestricted the values can easily be negated for negative feedback.\nThe presence of a negative sign (the common notation) is merely a notational one and its absence has no impact on the end results.\nbecomes\nsolving the output equation for formula_68 and substituting in the state equation results in\nThe advantage of this is that the eigenvalues of \"A\" can be controlled by setting \"K\" appropriately through eigendecomposition of formula_71.\nThis assumes that the closed-loop system is controllable or that the unstable eigenvalues of \"A\" can be made stable through appropriate choice of \"K\".\nExample.\nFor a strictly proper system \"D\" equals zero. Another fairly common situation is when all states are outputs, i.e. \"y\" = \"x\", which yields \"C\" = \"I\", the Identity matrix. 
This would then result in the simpler equations\nThis reduces the necessary eigendecomposition to just formula_74.\nFeedback with setpoint (reference) input.\nIn addition to feedback, an input, formula_75, can be added such that formula_76.\nbecomes\nsolving the output equation for formula_68 and substituting in the state equation \nresults in\nOne fairly common simplification to this system is removing \"D\", which reduces the equations to\nMoving object example.\nA classical linear system is that of one-dimensional movement of an object (e.g., a cart).\nNewton's laws of motion for an object moving horizontally on a plane and attached to a wall with a spring:\nwhere\nThe state equation would then become\nwhere\nThe controllability test is then\nwhich has full rank for all formula_91 and formula_93. This means that if the initial state of the system is known (formula_87, formula_88, formula_89), and if formula_91 and formula_93 are constants, then there is a force formula_108 that can move the cart to any other position in the system.\nThe observability test is then\nwhich also has full rank.
Therefore, this system is both controllable and observable.\nNonlinear systems.\nThe more general form of a state-space model can be written as two functions.\nThe first is the state equation and the latter is the output equation.\nIf the function formula_112 is a linear combination of states and inputs, then the equations can be written in matrix notation as above.\nThe formula_90 argument to the functions can be dropped if the system is unforced (i.e., it has no inputs).\nPendulum example.\nA classic nonlinear system is a simple unforced pendulum\nwhere\nThe state equations are then\nwhere\nInstead, the state equation can be written in the general form\nThe equilibrium/stationary points of a system are those where formula_127, and so the equilibrium points of a pendulum are those that satisfy\nfor integers \"n\".\nDiscrete system.\nIn theoretical computer science, a discrete system is a system with a countable number of states. Discrete systems may be contrasted with continuous systems, which may also be called analog systems. A finite discrete system is often modeled with a directed graph and is analyzed for correctness and complexity according to computational theory. Because discrete systems have a countable number of states, they may be described in precise mathematical models.\nA computer is a finite-state machine that may be viewed as a discrete system. Because computers are often used to model not only other discrete systems but continuous systems as well, methods have been developed to represent real-world continuous systems as discrete systems.
One such method involves sampling a continuous signal at discrete time intervals.\nHybrid system.\nA hybrid system is a dynamical system that exhibits both continuous and discrete dynamic behavior – a system that can both \"flow\" (described by a differential equation) and \"jump\" (described by a state machine or automaton). Often, the term \"hybrid dynamical system\" is used to distinguish it from hybrid systems in other senses, such as those that combine neural nets and fuzzy logic, or electrical and mechanical drivelines. A hybrid system has the benefit of encompassing a larger class of systems within its structure, allowing for more flexibility in modeling dynamic phenomena.\nIn general, the \"state\" of a hybrid system is defined by the values of the \"continuous variables\" and a discrete \"mode\". The state changes either continuously, according to a flow condition, or discretely according to a \"control graph\". Continuous flow is permitted as long as so-called \"invariants\" hold, while discrete transitions can occur as soon as given \"jump conditions\" are satisfied. Discrete transitions may be associated with \"events\".\nExamples.\nHybrid systems have been used to model several cyber-physical systems, including physical systems with \"impact\", logic-dynamic controllers, and even Internet congestion.\nBouncing ball.\nA canonical example of a hybrid system is the bouncing ball, a physical system with impact. Here, the ball (thought of as a point-mass) is dropped from an initial height and bounces off the ground, dissipating its energy with each bounce. The ball exhibits continuous dynamics between each bounce; however, as the ball impacts the ground, its velocity undergoes a discrete change modeled after an inelastic collision. A mathematical description of the bouncing ball follows.
Let formula_1 be the height of the ball and formula_2 be the velocity of the ball. A hybrid system describing the ball is as follows:\nWhen formula_3, flow is governed by\nformula_4,\nwhere formula_5 is the acceleration due to gravity. These equations state that when the ball is above ground, it is being drawn to the ground by gravity.\nWhen formula_6, jumps are governed by\nformula_7,\nwhere formula_8 is a dissipation factor. This is saying that when the height of the ball is zero (it has impacted the ground), its velocity is reversed and decreased by a factor of formula_9. Effectively, this describes the nature of the inelastic collision.\nThe bouncing ball is an especially interesting hybrid system, as it exhibits Zeno behavior. Zeno behavior has a strict mathematical definition, but can be described informally as the system making an \"infinite\" number of jumps in a \"finite\" amount of time. In this example, each time the ball bounces it loses energy, making the subsequent jumps (impacts with the ground) closer and closer together in time.\nIt is noteworthy that the dynamical model is complete if and only if one adds the contact force between the ground and the ball. Indeed, without forces, one cannot properly define the bouncing ball and the model is, from a mechanical point of view, meaningless. The simplest contact model that represents the interactions between the ball and the ground, is the complementarity relation between the force and the distance (the gap) between the ball and the ground. This is written as\nformula_10\nSuch a contact model does not incorporate magnetic forces, nor gluing effects. When the complementarity relations are in, one can continue to integrate the system after the impacts have accumulated and vanished: the equilibrium of the system is well-defined as the static equilibrium of the ball on the ground, under the action of gravity compensated by the contact force formula_11. 
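The flow and jump rules above can be sketched in a short simulation; the gravitational constant, dissipation factor, and step size below are illustrative choices, not values from the text:

```python
# Hybrid-system sketch of the bouncing ball: continuous "flow" integrated
# with explicit Euler, and a discrete "jump" (velocity reversal scaled by a
# dissipation factor) whenever the ball reaches the ground. The constants
# (G, LAM, dt) are illustrative assumptions.

G = 9.81      # gravitational acceleration [m/s^2]
LAM = 0.8     # dissipation factor in (0, 1): fraction of speed kept per bounce

def simulate(x0, v0, dt=1e-4, t_end=5.0):
    """Return the list of bounce times for a ball dropped from height x0."""
    x, v, t = x0, v0, 0.0
    bounces = []
    while t < t_end:
        # Flow: x' = v, v' = -G while the ball is above the ground.
        x += v * dt
        v += -G * dt
        t += dt
        # Jump: when x <= 0 with downward velocity, reverse and dissipate.
        if x <= 0.0 and v < 0.0:
            x = 0.0
            v = -LAM * v
            bounces.append(t)
    return bounces

bounces = simulate(x0=1.0, v0=0.0)
# Successive inter-bounce intervals shrink geometrically (Zeno behavior):
gaps = [b2 - b1 for b1, b2 in zip(bounces, bounces[1:])]
```

The shrinking gaps illustrate the Zeno behavior discussed next: each flight takes a fixed fraction of the previous one, so the bounce times accumulate at a finite limit.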
One also notices from basic convex analysis that the complementarity relation can equivalently be rewritten as the inclusion into a normal cone, so that the bouncing ball dynamics is a differential inclusion into a normal cone to a convex set. See Chapters 1, 2 and 3 in Acary-Brogliato's book cited below (Springer LNACM 35, 2008). See also the other references on non-smooth mechanics.\nHybrid systems verification.\nThere are approaches to automatically proving properties of hybrid systems (e.g., some of the tools mentioned below). Common techniques for proving safety of hybrid systems are computation of reachable sets, abstraction refinement, and barrier certificates.\nMost verification tasks are undecidable, making general verification algorithms impossible. Instead, the tools are analyzed for their capabilities on benchmark problems. A possible theoretical characterization of this is the existence of algorithms that succeed at hybrid systems verification in all robust cases, implying that many problems for hybrid systems, while undecidable, are at least quasi-decidable.\nOther modeling approaches.\nTwo basic hybrid system modeling approaches can be distinguished: an implicit and an explicit one. The explicit approach is often represented by a hybrid automaton, a hybrid program or a hybrid Petri net. The implicit approach is often represented by guarded equations to result in systems of differential algebraic equations (DAEs) where the active equations may change, for example by means of a hybrid bond graph.\nAs a unified simulation approach for hybrid system analysis, there is a method based on DEVS formalism in which integrators for differential equations are quantized into atomic DEVS models. These methods generate traces of system behaviors in a discrete event system manner, which differ from those of discrete time systems.
Details of this approach can be found in references [Kofman2004] [CF2006] [Nutaro2010] and the software tool PowerDEVS.", "Automation-Control": 0.9791348577, "Qwen2": "Yes"} {"id": "564746", "revid": "1156442449", "url": "https://en.wikipedia.org/wiki?curid=564746", "title": "Closed-loop controller", "text": "A closed-loop controller or feedback controller is a control loop which incorporates feedback, in contrast to an "open-loop controller" or "non-feedback controller".\nA closed-loop controller uses feedback to control states or outputs of a dynamical system. Its name comes from the information path in the system: process inputs (e.g., voltage applied to an electric motor) have an effect on the process outputs (e.g., speed or torque of the motor), which is measured with sensors and processed by the controller; the result (the control signal) is "fed back" as input to the process, closing the loop.\nIn the case of linear feedback systems, a control loop including sensors, control algorithms, and actuators is arranged in an attempt to regulate a variable at a setpoint (SP). An everyday example is the cruise control on a road vehicle, where external influences such as hills would cause speed changes, and the driver has the ability to alter the desired set speed. The PID algorithm in the controller restores the actual speed to the desired speed in an optimum way, with minimal delay or overshoot, by controlling the power output of the vehicle's engine.\nControl systems that include some sensing of the results they are trying to achieve are making use of feedback and can adapt to varying circumstances to some extent. Open-loop control systems do not make use of feedback, and run only in pre-arranged ways.\nClosed-loop controllers have the following advantages over open-loop controllers:\nIn some systems, closed-loop and open-loop control are used simultaneously.
In such systems, the open-loop control is termed \"feedforward\" and serves to further improve reference tracking performance.\nA common closed-loop controller architecture is the PID controller.\nClosed-loop transfer function.\nThe output of the system \"y\"(\"t\") is fed back through a sensor measurement \"F\" to a comparison with the reference value \"r\"(\"t\"). The controller \"C\" then takes the error \"e\" (difference) between the reference and the output to change the inputs \"u\" to the system under control \"P\". This is shown in the figure. This kind of controller is a closed-loop controller or feedback controller.\nThis is called a single-input-single-output (\"SISO\") control system; \"MIMO\" (i.e., Multi-Input-Multi-Output) systems, with more than one input/output, are common. In such cases variables are represented through vectors instead of simple scalar values. For some distributed parameter systems the vectors may be infinite-dimensional (typically functions).\nIf we assume the controller \"C\", the plant \"P\", and the sensor \"F\" are linear and time-invariant (i.e., elements of their transfer function \"C\"(\"s\"), \"P\"(\"s\"), and \"F\"(\"s\") do not depend on time), the systems above can be analysed using the Laplace transform on the variables. This gives the following relations:\nSolving for \"Y\"(\"s\") in terms of \"R\"(\"s\") gives\nThe expression formula_5 is referred to as the \"closed-loop transfer function\" of the system. The numerator is the forward (open-loop) gain from \"r\" to \"y\", and the denominator is one plus the gain in going around the feedback loop, the so-called loop gain. 
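The closed-loop relation Y(s)/R(s) = C(s)P(s) / (1 + C(s)P(s)F(s)) can be checked numerically; the particular C, P, and F used here (a pure-gain controller, a first-order plant, a unity sensor) are illustrative assumptions, not systems from the text:

```python
# Numerical sketch of the closed-loop transfer function:
# T(s) = C(s)P(s) / (1 + C(s)P(s)F(s)), with C, P, F given as callables.

def closed_loop(C, P, F):
    """Build T(s) = C(s)P(s) / (1 + C(s)P(s)F(s))."""
    return lambda s: C(s) * P(s) / (1 + C(s) * P(s) * F(s))

P = lambda s: 1.0 / (s + 1.0)   # first-order plant 1/(s+1)
F = lambda s: 1.0               # unity sensor feedback

# With a large controller gain the loop gain |CPF| is large, and the
# closed-loop DC gain T(0) approaches 1/F(0) = 1: the output tracks r.
T_small = closed_loop(lambda s: 1.0,    P, F)
T_big   = closed_loop(lambda s: 1000.0, P, F)
print(T_small(0.0))  # 0.5
print(T_big(0.0))    # ~0.999
```

The contrast between the two DC gains previews the large-loop-gain tracking argument that follows.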
If formula_6, i.e., it has a large norm for each value of "s", and if formula_7, then "Y"("s") is approximately equal to "R"("s") and the output closely tracks the reference input.\nPID feedback control.\nA proportional–integral–derivative controller (PID controller) is a control loop feedback mechanism widely used in control systems.\nA PID controller continuously calculates an "error value" as the difference between a desired setpoint and a measured process variable and applies a correction based on proportional, integral, and derivative terms. "PID" is an initialism for "Proportional-Integral-Derivative", referring to the three terms operating on the error signal to produce a control signal.\nThe theoretical understanding and application date from the 1920s, and PID controllers are implemented in nearly all analogue control systems; originally in mechanical controllers, and then using discrete electronics and later in industrial process computers.\nThe PID controller is probably the most-used feedback control design.\nIf "u"("t") is the control signal sent to the system, "y"("t") is the measured output, "r"("t") is the desired output, and "e"("t") = "r"("t") − "y"("t") is the tracking error, a PID controller has the general form\nThe desired closed loop dynamics is obtained by adjusting the three gain parameters (proportional, integral and derivative), often iteratively by "tuning" and without specific knowledge of a plant model. Stability can often be ensured using only the proportional term. The integral term permits the rejection of a step disturbance (often a striking specification in process control). The derivative term is used to provide damping or shaping of the response.
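A minimal discrete-time sketch of the PID law, u = Kp·e + Ki·∫e + Kd·de/dt, with the integral and derivative approximated by a running sum and a backward difference; the gains, step size, and simulated first-order plant are illustrative assumptions:

```python
# Discrete-time PID controller driving a first-order plant y' = -y + u
# toward the setpoint r = 1. Gains and the plant are illustrative choices.

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement          # tracking error e = r - y
        self.integral += error * self.dt        # accumulate the integral term
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

dt = 0.01
pid = PID(kp=2.0, ki=1.0, kd=0.05, dt=dt)
y = 0.0
for _ in range(3000):   # simulate 30 seconds
    u = pid.update(1.0, y)
    y += (-y + u) * dt  # Euler step of the plant
```

With these gains the integral term drives the steady-state error to zero, illustrating the step-disturbance rejection mentioned above.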
PID controllers are the most well-established class of control systems; however, they cannot be used in several more complicated cases, especially if MIMO systems are considered.\nApplying Laplace transformation results in the transformed PID controller equation\nwith the PID controller transfer function\nAs an example of tuning a PID controller in the closed-loop system, consider a 1st order plant given by\nwhere the two plant parameters are constants. The plant output is fed back through\nwhere the sensor gain is also a constant. Now if we set formula_14, and formula_15, we can express the PID controller transfer function in series form as\nPlugging the plant, sensor, and controller transfer functions into the closed-loop transfer function, we find that with an appropriate choice of the controller gains the closed-loop transfer function equals 1. With this tuning in this example, the system output follows the reference input exactly.\nHowever, in practice, a pure differentiator is neither physically realizable nor desirable due to amplification of noise and resonant modes in the system. Therefore, a phase-lead compensator type approach or a differentiator with low-pass roll-off are used instead.", "Automation-Control": 0.9979560375, "Qwen2": "Yes"} {"id": "564756", "revid": "6727347", "url": "https://en.wikipedia.org/wiki?curid=564756", "title": "Open-loop controller", "text": "In control theory, an open-loop controller, also called a non-feedback controller, is a control loop part of a control system in which the control action is independent of the "process output", which is the process variable that is being controlled. It does not use feedback to determine if its output has achieved the desired goal of the input command or process setpoint.\nThere are many open-loop controls, such as on/off switching of valves, machinery, lights, motors or heaters, where the control result is known to be approximately sufficient under normal conditions without the need for feedback. The advantage of using open-loop control in these cases is the reduction in component count and complexity.
However, an open-loop system cannot correct any errors that it makes or correct for outside disturbances, and cannot engage in machine learning, unlike a closed-loop control system.\nApplications.\nAn open-loop controller is often used in simple processes because of its simplicity and low cost, especially in systems where feedback is not critical. A typical example would be an older model domestic clothes dryer, for which the length of time is entirely dependent on the judgement of the human operator, with no automatic feedback of the dryness of the clothes.\nFor example, an irrigation sprinkler system, programmed to turn on at set times could be an example of an open-loop system if it does not measure soil moisture as a form of feedback. Even if rain is pouring down on the lawn, the sprinkler system would activate on schedule, wasting water.\nAnother example is a stepper motor used for control of position. Sending it a stream of electrical pulses causes it to rotate by exactly that many steps, hence the name. If the motor was always assumed to perform each movement correctly, without positional feedback, it would be open-loop control. However, if there is a position encoder, or sensors to indicate the start or finish positions, then that is closed-loop control, such as in many inkjet printers. The drawback of open-loop control of steppers is that if the machine load is too high, or the motor attempts to move too quickly, then steps may be skipped. The controller has no means of detecting this and so the machine continues to run slightly out of adjustment until reset. For this reason, more complex robots and machine tools instead use servomotors rather than stepper motors, which incorporate encoders and closed-loop controllers.\nHowever, open-loop control is very useful and economic for well-defined systems where the relationship between input and the resultant state can be reliably modeled by a mathematical formula. 
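This model-inversion idea can be sketched in a few lines: with a trusted input-output model, the open-loop controller computes the input once and never measures the result. The motor constant and load figures below are hypothetical:

```python
# Open-loop control sketch: with a trusted model speed = K * voltage, the
# controller inverts the model to pick the input. The gain K and the
# disturbance value are hypothetical illustrations.

K = 50.0  # assumed motor constant [rpm per volt]

def open_loop_voltage(desired_speed):
    """Invert the model: no sensor, no correction."""
    return desired_speed / K

def actual_speed(voltage, load_drop=0.0):
    """True plant: an unmodeled load slows the motor."""
    return K * voltage - load_drop

v = open_loop_voltage(1500.0)            # command 1500 rpm -> 30 V
print(actual_speed(v))                   # 1500.0 (model matches reality)
print(actual_speed(v, load_drop=200.0))  # 1300.0 (error goes uncorrected)
```

The second call shows the failure mode: an unmodeled disturbance shifts the output and nothing in the controller can notice or correct it.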
For example, determining the voltage to be fed to an electric motor that drives a constant load, in order to achieve a desired speed would be a good application. But if the load were not predictable and became excessive, the motor's speed might vary as a function of the load not just the voltage, and an open-loop controller would be insufficient to ensure repeatable control of the velocity.\nAn example of this is a conveyor system that is required to travel at a constant speed. For a constant voltage, the conveyor will move at a different speed depending on the load on the motor (represented here by the weight of objects on the conveyor). In order for the conveyor to run at a constant speed, the voltage of the motor must be adjusted depending on the load. In this case, a closed-loop control system would be necessary.\nCombination with feedback control.\nA feedback control system, such as a PID controller, can be improved by combining the feedback (or closed-loop control) of a PID controller with feed-forward (or open-loop) control. Knowledge about the system (such as the desired acceleration and inertia) can be fed forward and combined with the PID output to improve the overall system performance. The feed-forward value alone can often provide the major portion of the controller output. The PID controller primarily has to compensate whatever difference or "error" remains between the setpoint (SP) and the system response to the open-loop control. Since the feed-forward output is not affected by the process feedback, it can never cause the control system to oscillate, thus improving the system response without affecting stability. Feed forward can be based on the setpoint and on extra measured disturbances.
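The combination described above can be sketched as a feed-forward term computed from the setpoint plus a small feedback trim; the plant model, gains, and horizon are illustrative assumptions:

```python
# Sketch of feed-forward plus feedback: the feed-forward term inverts the
# nominal plant (y' = -y + u has steady state y = u, so u_ff = r), while a
# small PI term trims the residual error. All numbers are illustrative.

dt = 0.01
r = 1.0            # setpoint
y = 0.0            # plant output
integral = 0.0

def feedforward(setpoint):
    # Nominal model inversion: at steady state u = y, so command u = r.
    return setpoint

for _ in range(5000):   # simulate 50 seconds
    error = r - y
    integral += error * dt
    u = feedforward(r) + 0.5 * error + 0.2 * integral  # FF + PI trim
    y += (-y + u) * dt  # first-order plant y' = -y + u
```

Because the feed-forward term already supplies the steady-state input, the feedback gains can stay small, matching the point above that feed-forward cannot itself cause oscillation.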
Setpoint weighting is a simple form of feed forward.\nFor example, in most motion control systems, in order to accelerate a mechanical load under control, more force is required from the actuator. If a velocity loop PID controller is being used to control the speed of the load and command the force being applied by the actuator, then it is beneficial to take the desired instantaneous acceleration, scale that value appropriately and add it to the output of the PID velocity loop controller. This means that whenever the load is being accelerated or decelerated, a proportional amount of force is commanded from the actuator regardless of the feedback value. The PID loop in this situation uses the feedback information to change the combined output to reduce the remaining difference between the process setpoint and the feedback value. Working together, the combined open-loop feed-forward controller and closed-loop PID controller can provide a more responsive control system in some situations.", "Automation-Control": 0.9964168668, "Qwen2": "Yes"} {"id": "237629", "revid": "7903804", "url": "https://en.wikipedia.org/wiki?curid=237629", "title": "Distributed artificial intelligence", "text": "Distributed Artificial Intelligence (DAI) also called Decentralized Artificial Intelligence is a subfield of artificial intelligence research dedicated to the development of distributed solutions for problems. DAI is closely related to and a predecessor of the field of multi-agent systems. \nMulti-agent systems and distributed problem solving are the two main DAI approaches. There are numerous applications and tools.\nDefinition.\nDistributed Artificial Intelligence (DAI) is an approach to solving complex learning, planning, and decision-making problems. It is embarrassingly parallel, thus able to exploit large scale computation and spatial distribution of computing resources. These properties allow it to solve problems that require the processing of very large data sets. 
DAI systems consist of autonomous learning processing nodes (agents), that are distributed, often at a very large scale. DAI nodes can act independently, and partial solutions are integrated by communication between nodes, often asynchronously. By virtue of their scale, DAI systems are robust and elastic, and by necessity, loosely coupled. Furthermore, DAI systems are built to be adaptive to changes in the problem definition or underlying data sets due to the scale and difficulty in redeployment.\nDAI systems do not require all the relevant data to be aggregated in a single location, in contrast to monolithic or centralized Artificial Intelligence systems which have tightly coupled and geographically close processing nodes. Therefore, DAI systems often operate on sub-samples or hashed impressions of very large datasets. In addition, the source dataset may change or be updated during the course of the execution of a DAI system.\nDevelopment.\nIn 1975 distributed artificial intelligence emerged as a subfield of artificial intelligence that dealt with interactions of intelligent agents. Distributed artificial intelligence systems were conceived as a group of intelligent entities, called agents, that interacted by cooperation, by coexistence or by competition. DAI is categorized into multi-agent systems and distributed problem solving. In multi-agent systems the main focus is how agents coordinate their knowledge and activities. For distributed problem solving the major focus is how the problem is decomposed and the solutions are synthesized.\nGoals.\nThe objectives of Distributed Artificial Intelligence are to solve the reasoning, planning, learning and perception problems of artificial intelligence, especially if they require large data, by distributing the problem to autonomous processing nodes (agents). To reach the objective, DAI requires:\nThere are many reasons for wanting to distribute intelligence or cope with multi-agent systems. 
Mainstream problems in DAI research include the following:\nApproaches.\nTwo types of DAI have emerged: \nDAI can apply a bottom-up approach to AI, similar to the subsumption architecture as well as the traditional top-down\napproach of AI. In addition, DAI can be a vehicle for emergence.\nChallenges.\nThe challenges in Distributed AI are:\nApplications and tools.\nAreas where DAI has been applied are:\nDAI integration in tools has included: \nAgents.\nSystems: Agents and multi-agents.\nNotion of Agents: Agents can be described as distinct entities with standard boundaries and interfaces designed for problem solving.\nNotion of Multi-Agents: a multi-agent system is defined as a network of loosely coupled agents working together as a single entity, like a society, to solve problems that an individual agent cannot.\nSoftware agents.\nThe key concept used in DPS and MABS is the abstraction called software agents. An agent is a virtual (or physical) entity that has an understanding of its environment and acts upon it. An agent is usually able to communicate with other agents in the same system to achieve a common goal that one agent alone could not achieve. This communication system uses an agent communication language.\nA first classification that is useful is to divide agents into:\nWell-recognized agent architectures that describe how an agent is internally structured are:", "Automation-Control": 0.9587164521, "Qwen2": "Yes"} {"id": "56125056", "revid": "1461430", "url": "https://en.wikipedia.org/wiki?curid=56125056", "title": "Spiral optimization algorithm", "text": "In mathematics, the spiral optimization (SPO) algorithm is a metaheuristic inspired by spiral phenomena in nature.\nThe first SPO algorithm was proposed for two-dimensional unconstrained optimization\nbased on two-dimensional spiral models.
This was extended to n-dimensional problems by generalizing the two-dimensional spiral model to an n-dimensional spiral model.\nThere are effective settings for the SPO algorithm: the periodic descent direction setting\nand the convergence setting.\nMetaphor.\nThe motivation for focusing on spiral phenomena was due to the insight that the dynamics that generate logarithmic spirals combine diversification and intensification behavior. The diversification behavior can work for a global search (exploration) and the intensification behavior enables an intensive search around a current found good solution (exploitation).\nAlgorithm.\nThe SPO algorithm is a multipoint search algorithm that requires no objective function gradient, and uses multiple spiral models that can be described as deterministic dynamical systems. As search points follow logarithmic \nspiral trajectories towards the common center, defined as the current best point, better solutions can be found and the common center can be updated.\nThe general SPO algorithm for a minimization problem under the maximum iteration formula_1 (termination criterion) is as follows:\n 0) Set the number of search points formula_2 and the maximum iteration number formula_1.\n 1) Place the initial search points formula_4 and determine the center formula_5, formula_6, and then set formula_7.\n 2) Decide the step rate formula_8 by a rule.\n 3) Update the search points: formula_9\n 4) Update the center: formula_10 where formula_11.\n 5) Set formula_12. If formula_13 is satisfied then terminate and output formula_14. Otherwise, return to Step 2).\nSetting.\nThe search performance depends on setting the composite rotation matrix formula_15, the step rate formula_8, and the initial points formula_17. \nThe following settings are new and effective.\nSetting 1 (Periodic Descent Direction Setting).\nThis setting is an effective setting for high dimensional problems under the maximum iteration formula_1.
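Steps 0) through 5) above can be sketched for the two-dimensional case as a rotation about the current center combined with contraction; the rotation angle, step rate, point count, and test function below are illustrative choices, not prescribed by the text:

```python
# Two-dimensional SPO sketch: each search point spirals toward the common
# center (the best point found so far) via rotation by theta and
# contraction by the step rate r; the center is updated whenever a better
# point appears. Parameters and the sphere test function are illustrative.
import math
import random

def spo_minimize(f, n_points=20, k_max=200, theta=math.pi / 4, r=0.95):
    random.seed(0)  # reproducible initial placement
    pts = [(random.uniform(-5, 5), random.uniform(-5, 5)) for _ in range(n_points)]
    center = min(pts, key=f)          # Step 1): center = best initial point
    c, s = math.cos(theta), math.sin(theta)
    for _ in range(k_max):            # Steps 2)-5)
        new_pts = []
        for (x, y) in pts:
            dx, dy = x - center[0], y - center[1]
            # Step 3): rotate by theta and contract by r about the center.
            rx = r * (c * dx - s * dy) + center[0]
            ry = r * (s * dx + c * dy) + center[1]
            new_pts.append((rx, ry))
        pts = new_pts
        best = min(pts, key=f)
        if f(best) < f(center):       # Step 4): elitist center update
            center = best
    return center

sphere = lambda p: p[0] ** 2 + p[1] ** 2
best = spo_minimize(sphere)
```

Because the center is only replaced by strictly better points, the returned value can never be worse than the best initial sample, and the spiral contraction concentrates the search around it.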
The conditions on formula_15 and formula_17 together ensure that the spiral models generate descent directions periodically. The condition on formula_8 works to utilize the periodic descent directions under the search termination formula_1.\nformula_31\nwhere formula_32. Note that this condition is almost surely satisfied by random placement, and thus no explicit check is actually needed.\nSetting 2 (Convergence Setting).\nThis setting ensures that the SPO algorithm converges to a stationary point under the maximum iteration formula_38. The settings of formula_15 and the initial points formula_17 are the same as in Setting 1 above. The setting of formula_8 is as follows.\nExtended works.\nMany extended studies have been conducted on the SPO due to its simple structure and concept; these studies have helped improve its global search performance and proposed novel applications.", "Automation-Control": 0.9481989741, "Qwen2": "Yes"} {"id": "56146308", "revid": "1162484740", "url": "https://en.wikipedia.org/wiki?curid=56146308", "title": "Wanfeng Auto Holding Group", "text": "Wanfeng Auto Holding Group Co., Ltd., headquartered in Xinchang County, China, is a parts and equipment manufacturer for the automotive industry, aerospace industry, military industry, and energy saving.\nIt manufactures lightweight metal parts: aluminium wheels and magnesium alloy materials, and provides environmental protection coatings, hybrid vehicle and electric vehicle assembly and powertrain systems, technical services and control systems.\nIt is involved in aircraft manufacturing and research & development, airport construction and management and general aviation operations: aerial sightseeing, air sports and flight training services.\nIt supplies industrial automatic gating systems, low pressure and gravity die cast machines and auxiliary equipment, with consulting and maintenance and repair services, and provides automation for auto and heavy industries and industrial robot intelligent equipment.\nIt
provides private equity, hedge fund and finance leasing services.\nIt designs, develops, and constructs real estate properties and services high-rise buildings, multi-story residences, villas, office buildings, community and commercial buildings and government properties.\nIt is amongst the Chinese Private Enterprises Top 500 (#123) and China Auto Industry Top 30 (#18).\nIt is listed on the Shenzhen Stock Exchange and is a part of the SZSE 200 Index.", "Automation-Control": 0.6921766996, "Qwen2": "Yes"} {"id": "1300939", "revid": "7226930", "url": "https://en.wikipedia.org/wiki?curid=1300939", "title": "Infinite-dimensional optimization", "text": "In certain optimization problems the unknown optimal solution might not be a number or a vector, but rather a continuous quantity, for example a function or the shape of a body. Such a problem is an infinite-dimensional optimization problem, because a continuous quantity cannot be determined by a finite number of degrees of freedom.\nExamples.\nInfinite-dimensional optimization problems can be more challenging than finite-dimensional ones. Typically one needs to employ methods from partial differential equations to solve such problems.\nSeveral disciplines which study infinite-dimensional optimization problems are calculus of variations, optimal control and shape optimization.", "Automation-Control": 0.9483268261, "Qwen2": "Yes"} {"id": "67112408", "revid": "7903804", "url": "https://en.wikipedia.org/wiki?curid=67112408", "title": "Empowerment (artificial intelligence)", "text": "Empowerment in the field of artificial intelligence formalises and quantifies (via information theory) the potential an agent perceives that it has to influence its environment. An agent which follows an empowerment maximising policy acts to maximise future options (typically up to some limited horizon).
Empowerment can be used as a (pseudo) utility function that depends only on information gathered from the local environment to guide action, rather than seeking an externally imposed goal, and is thus a form of intrinsic motivation. \nThe empowerment formalism depends on a probabilistic model commonly used in artificial intelligence. An autonomous agent operates in the world by taking in sensory information and acting to change its state, or that of the environment, in a cycle of perceiving and acting known as the perception-action loop. Agent state and actions are modelled by random variables (formula_1) and time (formula_2). The choice of action depends on the current state, and the future state depends on the choice of action, thus the perception-action loop unrolled in time forms a causal Bayesian network.\nDefinition.\nEmpowerment (formula_3) is defined as the channel capacity (formula_4) of the actuation channel of the agent, and is formalised as the maximal possible information flow between the actions of the agent and the effect of those actions some time later. Empowerment can be thought of as the future potential of the agent to affect its environment, as measured by its sensors.\nformula_5\nIn a discrete time model, Empowerment can be computed for a given number of cycles into the future, which is referred to in the literature as 'n-step' empowerment. \nformula_6\nThe unit of empowerment depends on the logarithm base. Base 2 is commonly used in which case the unit is bits.\nContextual Empowerment.\nIn general the choice of action (action distribution) that maximises empowerment varies from state to state. Knowing the empowerment of an agent in a specific state is useful, for example to construct an empowerment maximising policy. State-specific empowerment can be found using the more general formalism for 'contextual empowerment'. formula_4 is a random variable describing the context (e.g.
state).\nformula_8\nApplication.\nEmpowerment maximisation can be used as a pseudo-utility function to enable agents to exhibit intelligent behaviour without requiring the definition of external goals, for example balancing a pole in a cart-pole balancing scenario where no indication of the task is provided to the agent. \nEmpowerment has been applied in studies of collective behaviour and in continuous domains. As is the case with Bayesian methods in general, computation of empowerment becomes expensive as the number of actions and the time horizon extend, but approaches to improve efficiency have led to usage in real-time control. Empowerment has been used for intrinsically motivated reinforcement learning agents playing video games, and in the control of underwater vehicles.", "Automation-Control": 0.8360791206, "Qwen2": "Yes"} {"id": "17132402", "revid": "29463730", "url": "https://en.wikipedia.org/wiki?curid=17132402", "title": "Elmer G. Gilbert", "text": "Elmer Grant Gilbert was an American aerospace engineer and a Professor Emeritus of Aerospace Engineering at the University of Michigan. He received his Ph.D. in Instrumentation Engineering from Michigan in 1957.\nHe was a member of the National Academy of Engineering and a recipient of the 1994 IEEE Control Systems Award (the citation reads: "for pioneering and innovative contributions to linear state space theory and its applications, especially realization and decoupling, as well as to control algorithms") and the 1996 Richard E. Bellman Control Heritage Award from the American Automatic Control Council.", "Automation-Control": 0.8156809807, "Qwen2": "Yes"} {"id": "17132787", "revid": "27015025", "url": "https://en.wikipedia.org/wiki?curid=17132787", "title": "John C. Lozier", "text": "John C. Lozier is a noted American control engineer. He was responsible for the control of the Telstar ground-tracking antenna installed near Andover, Maine, and the Brittany Peninsula in France.
This equipment, involving real-time computer control, enabled the first transatlantic TV operation in 1962.\nLozier served as the president of the American Automatic Control Council from 1960 to 1962 and as the president of the International Federation of Automatic Control from 1972 to 1975.\nHe received numerous awards for his contribution to the field of control theory, including the Richard E. Bellman Control Heritage Award in 1987.", "Automation-Control": 1.0000075102, "Qwen2": "Yes"} {"id": "28451859", "revid": "27823944", "url": "https://en.wikipedia.org/wiki?curid=28451859", "title": "Pneumatic web guides", "text": "A Web is a term used in the Converting Industry that refers to continuous rolls of thin, flat materials like paper, film and plastic. Web guiding systems use a sensor to monitor the position of a web as it enters a production process for lateral tracking. Each type of web guide sensor has an actuator to shift the running web mechanically back on course whenever the sensor detects movement away from the set path. Actuators may be pneumatic or hydraulic cylinders, or some kind of electromechanical device. Because the web may be fragile — particularly at its edge — non-contact sensors are used. \nSensors developed for Web Guiding applications in the Converting Industry may be pneumatic, photoelectric, ultrasonic, or infrared. The system’s controls must process the output signals from the sensors into a form that can drive an actuator. Many controls today are electronic, typically using an amplifier to convert signals from the sensor, then commanding a special servo motor incorporating a lead or ball screw for guiding actuation. Some electromechanical guiding systems also utilize computers. \nPneumatic web guide systems are typically easier to install and operate, and are less expensive than more complex hydraulic and electronic systems.
Pneumatic servo controllers are considered explosion-proof, an advantage in dusty or contaminated environments.", "Automation-Control": 0.7972160578, "Qwen2": "Yes"} {"id": "54150730", "revid": "22619", "url": "https://en.wikipedia.org/wiki?curid=54150730", "title": "Subspace identification method", "text": "In mathematics, specifically in control theory, subspace identification (SID) aims at identifying linear time invariant (LTI) state space models from input-output data. SID does not require that the user parametrize the system matrices before solving a parametric optimization problem and, as a consequence, SID methods do not suffer from problems related to local minima that often lead to unsatisfactory identification results.\nHistory.\nSID methods are rooted in the work of the German mathematician Leopold Kronecker (1823–1891). Kronecker showed that a power series can be written as a rational function when the rank of the Hankel operator that has the power series as its symbol is finite. The rank determines the order of the polynomials of the rational function.\nIn the 1960s the work of Kronecker inspired a number of researchers in the area of Systems and Control, like Ho and Kalman, Silverman and Youla and Tissi, to store the Markov parameters of an LTI system into a finite dimensional Hankel matrix and derive from this matrix an (A,B,C) realization of the LTI system. The key observation was that when the Hankel matrix is properly dimensioned relative to the order of the LTI system, the rank of the Hankel matrix is the order of the LTI system, and the SVD of the Hankel matrix provides a basis of the column space of the observability matrix and of the row space of the controllability matrix of the LTI system. 
Knowledge of these key spaces allows the system matrices to be estimated via linear least squares.\nAn extension to the stochastic realization problem, in which only the auto-correlation (covariance) function of the output of an LTI system driven by white noise is known, was derived by researchers such as Akaike.\nIn the decade 1985–1995, a second generation of SID methods was developed to operate directly on input-output measurements of the LTI system. One such generalization, presented under the name Eigensystem Realization Algorithm (ERA), made use of specific input-output measurements obtained from impulse inputs. It has been used for modal analysis of flexible structures, such as bridges and space structures. Although these methods were demonstrated to work in practice for resonant structures, they did not work well for other types of systems or for inputs other than an impulse. A new impulse to the development of SID methods was the move to operate directly on generic input-output data, avoiding the need to first explicitly compute the Markov parameters or estimate samples of covariance functions prior to realizing the system matrices. Pioneers who contributed to these breakthroughs were Van Overschee and De Moor, introducing the N4SID approach; Verhaegen, introducing the MOESP approach; and Larimore, presenting ST in the framework of Canonical Variate Analysis (CVA).", "Automation-Control": 0.9998479486, "Qwen2": "Yes"} {"id": "17729901", "revid": "1534529", "url": "https://en.wikipedia.org/wiki?curid=17729901", "title": "Miyota (watch movement manufacturer)", "text": "Miyota is a brand of mechanical and quartz watch movements manufactured by Citizen Watch Manufacturing Co., Ltd. (CWMJ), a subsidiary of Citizen Watch.\nIn 1959 Miyota Precision Co., Ltd. was established in the town of Miyota, Nagano Prefecture, Japan as an assembly factory for wristwatches. The company was renamed Miyota Co., Ltd. in 1991 and Citizen Miyota Co., Ltd. 
in 2005. It was merged with Citizen Finetech Co., Ltd. to form Citizen Finetech Miyota Co., Ltd. in 2008, and with Citizen Seimitsu Co., Ltd. to form Citizen Finedevice Co., Ltd. in 2015. The company produces watch parts such as crystal oscillators and bearing jewels, but its watch movement manufacturing business has been transferred to Citizen Watch Manufacturing Co., Ltd.\nIn 2016 a large movement assembly factory (CWMJ Miyota Saku Factory) was opened in Saku, Nagano Prefecture. Most watch brands do not make their own movements in-house, but rather use standard watch movements manufactured by specialized companies like Miyota and ETA SA.\nProducts.\nMechanical movements.\nMiyota produces various 'standard' and 'premium' grade mechanical movements for automatic wristwatches.\nThe \"Miyota 8215\" is an entry-level, non-hacking, twenty-one-jewel, three-hand-with-date automatic wristwatch movement with a uni-directional winding system (left rotation), an accuracy of -20 to +40 seconds per day, and a power reserve of over 40 hours. It allows hand winding. The date window may be placed at the 3 o'clock position (cal. 8215-33A) or at 6 o'clock (cal. 8215-36A). The diameter of the movement is 26 mm and the thickness is 5.67 mm. It beats at 21,600 BPH or 3 Hz (6 half-cycles per second). The movement has a 49° lift angle.\nThe \"Miyota 9015\" is a more sophisticated, hacking, twenty-four-jewel, three-hand-with-date automatic wristwatch movement with a uni-directional winding system (left rotation). The 9015 automatic movement has a beat rate of 28,800 BPH (= 4 Hz), a 51° lift angle and 24 jewels, and features automatic winding, a ≥42-hour power reserve, manual winding and a hack function (stopping the movement of the second hand). The static accuracy rating is −10 to +30 seconds per day (23±2 °C).\nElectric movements.\nMiyota also produces various electrically driven movements for quartz watches. 
These are categorized into standard, slim, small, multi-function, chronograph, and small-second chronograph movements. These movements are used by a variety of watch brands such as Bulova, MVMT and Skagen.\nUsage.\nMiyota movements are used by many watchmakers besides Citizen for some of their watches, among them Xeric, Invicta, Boldr, Bulova, Bernhardt, Casio, Corgeut, and Timex.", "Automation-Control": 0.6477247477, "Qwen2": "Yes"} {"id": "17748770", "revid": "25080539", "url": "https://en.wikipedia.org/wiki?curid=17748770", "title": "Open Agent Architecture", "text": "Open Agent Architecture, or OAA for short, is a framework for integrating a community of heterogeneous software agents in a distributed environment. It is also a research project of the SRI International Artificial Intelligence Center.\nRoughly, the architecture is that a central \"blackboard\" server holds a list of tasks while a group of agents executes these tasks based on their specific capabilities.\nAgents working in the structure of an OAA framework are built to universal communication and functional standards and are based on the Interagent Communication Language. 
The language is platform-independent and allows agents to collaborate by delegating and receiving work requests.\nOpen Agent Architecture was first proposed in the late 1990s and was later used as a foundation for the DARPA-funded CALO artificial intelligence project.", "Automation-Control": 0.9743869305, "Qwen2": "Yes"} {"id": "6574091", "revid": "1161763786", "url": "https://en.wikipedia.org/wiki?curid=6574091", "title": "Gain scheduling", "text": "In control theory, gain scheduling is an approach to control of nonlinear systems that uses a family of linear controllers, each of which provides satisfactory control for a different operating point of the system.\nOne or more observable variables, called the \"scheduling variables\", are used to determine what operating region the system is currently in and to enable the appropriate linear controller. For example, in an aircraft flight control system, the altitude and Mach number might be the scheduling variables, with different linear controller parameters available (and automatically plugged into the controller) for various combinations of these two variables.\nA relatively comprehensive survey of the state of the art in gain scheduling has been published (\"Survey of Gain-Scheduling Analysis & Design\", D. J. Leith and W. E. Leithead).", "Automation-Control": 0.9391537905, "Qwen2": "Yes"} {"id": "2731314", "revid": "1091356944", "url": "https://en.wikipedia.org/wiki?curid=2731314", "title": "Process capability", "text": "Process capability is a measurable property of a process relative to its specification, expressed as a process capability index (e.g., Cpk or Cpm) or as a process performance index (e.g., Ppk or Ppm). 
The output of this measurement is often illustrated by a histogram and by calculations that predict how many parts will be produced out of specification (OOS).\nTwo parts of process capability are: 1) measuring the variability of the output of a process, and 2) comparing that variability with a proposed specification or product tolerance.\nCapabilities.\nThe input of a process usually has one or more measurable characteristics that are used to specify outputs. These can be analyzed statistically; where the output data shows a normal distribution the process can be described by the process mean (average) and the standard deviation.\nA process needs to be established with appropriate process controls in place. A control chart analysis is used to determine whether the process is \"in statistical control\". If the process is not in statistical control then capability has no meaning. Therefore, process capability involves only common cause variation and not special cause variation.\nA batch of data needs to be obtained from the measured output of the process. The more data that is included, the more precise the result; however, an estimate can be achieved with as few as 17 data points. The data should include the normal variety of production conditions, materials, and people in the process. With a manufactured product, it is common to include at least three different production runs, including start-ups.\nThe process mean (average) and standard deviation are calculated. With a normal distribution, the \"tails\" can extend well beyond plus and minus three standard deviations, but this interval should contain about 99.73% of production output. Therefore, for a normal distribution of data the process capability is often described as the relationship between six standard deviations and the required specification.\nCapability study.\nThe output of a process is expected to meet customer requirements, specifications, or engineering tolerances. 
Engineers can conduct a process capability study to determine the extent to which the process can meet these expectations.\nThe ability of a process to meet specifications can be expressed as a single number using a process capability index or it can be assessed using control charts. Either case requires running the process to obtain enough measurable output so that engineering is confident that the process is stable and so that the process mean and variability can be reliably estimated. Statistical process control defines techniques to properly differentiate between stable processes, processes that are drifting (experiencing a long-term change in the mean of the output), and processes that are growing more variable. Process capability indices are only meaningful for processes that are stable (in a state of statistical control).", "Automation-Control": 0.8767216802, "Qwen2": "Yes"} {"id": "3929360", "revid": "27015025", "url": "https://en.wikipedia.org/wiki?curid=3929360", "title": "GraphEdit", "text": "GraphEdit is a utility which is part of the Microsoft DirectShow SDK. It is a visual tool for building and testing filter graphs for DirectShow. Filters are displayed as boxes, with a text caption showing the name of the filter. Pins appear as small squares along the edge of the filter. Input pins are shown on the left side of the filter, and output pins are on the right side of the filter. A pin connection appears as an arrow connecting the output pin to the input pin. Connection mediatypes can be viewed as \"properties\" on pins and connections. 
GraphEdit can automatically build a filter graph that plays a file.", "Automation-Control": 0.9611881375, "Qwen2": "Yes"} {"id": "40655739", "revid": "23646674", "url": "https://en.wikipedia.org/wiki?curid=40655739", "title": "Trailer connectors in Australia", "text": "A number of standards prevail in Australia for trailer connectors, the electrical connectors between vehicles and the trailers they tow. These provide a means of controlling trailer lamps and, in one case, trailer brakes, as well as sometimes manufacturer-specific non-standard functions.\nThe Australian market generally uses its own version of the European connectors, as well as its own unique contacts.\nThe only connector used on the Australian market that is fully ISO standard conformant is the 7-pin ABS / EBS plug.\nSince Australia has vehicles from both the North American market and the European market there is a mixture of 12V and 24V.\n7-pin trailer connector (AS 4735) for heavy duty vehicles.\nThis connector is based on both SAE J560 and ISO 1185 and provides either 12V, 7 x 40A or 24V, 7 x 20A. The voltage varies from vehicle to vehicle.\nRound trailer connectors Type 1.\nThese connectors are based on ISO 1724 in 5-pin and 7-pin versions, but with some differences in the wiring.\nRound 7-pin trailer connector Type 1 (AS 2513).\nIn this pinout for an ISO 1724 connector, the \"position light\" pin is used for electric brakes (Pin 5, 58R), which means that if you connect a trailer with electric brakes to a towing vehicle wired according to ISO 1724 and turn on the position lights, the trailer will brake. In the Australian wiring standard, Pin 2 (54G) is the reversing light, which is a minor problem.\nRound 5-pin trailer connector Type 1.\nThis 5-pin connector has been superseded by the 7-pin (AS 2513), but can be found on older vehicles. Note that pins 1 and 4 are missing. Pin placement is identical to the 7-pin ISO 1724 with the absence of these pins. 
This means that you can connect a trailer with a 5-pin connector to a 7-pin socket or the other way around, but since the pins are wired differently the result may be far from what was expected.\nRectangular trailer connectors Type 3.\nThe images of the 7- and 12-pin flat plugs (and possibly the images of the round connectors too) are from the cable entry view; see reference [2] (VSB1 section 14) for a front image. The Narva wiring diagrams in reference [1] are also from the cable entry view.\nSome trailer manufacturers will wire non-standard features through non-standard pins. Auxiliary power, breakaway sense or a hydraulic brake pump should never be wired to pin 2, even if the trailer does not feature reverse lights.", "Automation-Control": 0.752376914, "Qwen2": "Yes"} {"id": "40663593", "revid": "27199084", "url": "https://en.wikipedia.org/wiki?curid=40663593", "title": "Sensor hub", "text": "A sensor hub is a microcontroller unit/coprocessor/DSP set that helps to integrate data from different sensors and process them. This technology can help off-load these jobs from a product's main central processing unit, thus saving battery consumption and providing a performance improvement.\nIntel has the Intel Integrated Sensor Hub. Starting from Cherrytrail and Haswell, many Intel processors offer an on-package sensor hub. The Samsung Galaxy Note II, launched in 2012, was the first smartphone with a sensor hub.\nExamples.\nSome devices with Snapdragon 800 series chips, including HTC One (M8), Sony Xperia Z1, LG G2, etc., have a sensor hub, the Qualcomm Snapdragon Sensor Core, and all HiSilicon Kirin 920 devices have a sensor hub embedded in the chipset; its successor, the Kirin 925, integrates an i3 chip with the same function. 
Some other devices that do not use these chips but have an integrated sensor hub are listed below:", "Automation-Control": 0.6194858551, "Qwen2": "Yes"} {"id": "15983837", "revid": "7903804", "url": "https://en.wikipedia.org/wiki?curid=15983837", "title": "Pseudospectral optimal control", "text": "Pseudospectral optimal control is a joint theoretical-computational method for solving optimal control problems. It combines pseudospectral (PS) theory with optimal control theory to produce PS optimal control theory. PS optimal control theory has been used in ground and flight systems in military and industrial applications. The techniques have been extensively used to solve a wide range of problems such as those arising in UAV trajectory generation, missile guidance, control of robotic arms, vibration damping, lunar guidance, magnetic control, swing-up and stabilization of an inverted pendulum, orbit transfers, tether libration control, ascent guidance and quantum control.\nOverview.\nThere are a very large number of ideas that fall under the general banner of pseudospectral optimal control. Examples of these are the Legendre pseudospectral method, the Chebyshev pseudospectral method, the Gauss pseudospectral method, the Ross-Fahroo pseudospectral method, the Bellman pseudospectral method, the flat pseudospectral method and many others. Solving an optimal control problem requires the approximation of three types of mathematical objects: the integration in the cost function, the differential equation of the control system, and the state-control constraints. An ideal approximation method should be efficient for all three approximation tasks. A method that is efficient for one of them, for instance an efficient ODE solver, may not be an efficient method for the other two objects. These requirements make PS methods ideal because they are efficient for the approximation of all three mathematical objects. 
In a pseudospectral method, the continuous functions are approximated at a set of carefully selected quadrature nodes. The quadrature nodes are determined by the corresponding orthogonal polynomial basis used for the approximation. In PS optimal control, Legendre and Chebyshev polynomials are commonly used. Mathematically, quadrature nodes are able to achieve high accuracy with a small number of points. For instance, the interpolating polynomial of any smooth function (Cformula_1) at Legendre–Gauss–Lobatto nodes converges in the L2 sense at the so-called spectral rate, faster than any polynomial rate.\nDetails.\nA basic pseudospectral method for optimal control is based on the covector mapping principle. Other pseudospectral optimal control techniques, such as the Bellman pseudospectral method, rely on node-clustering at the initial time to produce optimal controls. The node clusterings occur at all Gaussian points.\nMoreover, their structure can be highly exploited to make them more computationally efficient; ad-hoc scaling and Jacobian computation methods involving dual number theory have been developed.\nIn pseudospectral methods, integration is approximated by quadrature rules, which provide the best numerical integration result. For example, with just N nodes, a Legendre-Gauss quadrature integration achieves zero error for any polynomial integrand of degree less than or equal to formula_2. In the PS discretization of the ODE involved in optimal control problems, a simple but highly accurate differentiation matrix is used for the derivatives. Because a PS method enforces the system at the selected nodes, the state-control constraints can be discretized straightforwardly. 
All these mathematical advantages make pseudospectral methods a straightforward discretization tool for continuous optimal control problems.", "Automation-Control": 0.9817843437, "Qwen2": "Yes"} {"id": "15983843", "revid": "7903804", "url": "https://en.wikipedia.org/wiki?curid=15983843", "title": "DIDO (software)", "text": "DIDO is a MATLAB optimal control toolbox for solving general-purpose optimal control problems. It is widely used in academia, industry, and NASA. Hailed as breakthrough software, DIDO is based on the pseudospectral optimal control theory of Ross and Fahroo. The latest enhancements to DIDO are described in Ross.\nUsage.\nDIDO utilizes trademarked expressions and objects that allow a user to quickly formulate and solve optimal control problems. Rapidity in formulation is achieved through a set of DIDO expressions which are based on variables commonly used in optimal control theory. For example, the state, control and time variables are formatted as:\nThe entire problem is codified using the key words cost, dynamics, events and path:\nA user runs DIDO using the one-line command:\ncodice_1,\nwhere the object defined by codice_2 allows a user to choose various options. In addition to the cost value and the primal solution, DIDO automatically outputs all the dual variables that are necessary to verify and validate a computational solution. The output codice_3 is computed by an application of the covector mapping principle.\nTheory.\nDIDO implements a spectral algorithm based on pseudospectral optimal control theory founded by Ross and his associates. The covector mapping principle of Ross and Fahroo eliminates the curse of sensitivity associated with solving for the costates in optimal control problems. DIDO generates spectrally accurate solutions whose extremality can be verified using Pontryagin's Minimum Principle. 
Because no knowledge of pseudospectral methods is necessary to use it, DIDO is often used as a fundamental mathematical tool for solving optimal control problems. That is, a solution obtained from DIDO is treated as a candidate solution for the application of Pontryagin's minimum principle as a necessary condition for optimality.\nApplications.\nDIDO is used worldwide in academia, industry and government laboratories. Thanks to NASA, DIDO was flight-proven in 2006. On November 5, 2006, NASA used DIDO to maneuver the International Space Station to perform the zero-propellant maneuver.\nSince this flight demonstration, DIDO has been used for the International Space Station and other NASA spacecraft. It is also used in other industries. Most recently, DIDO has been used to solve traveling-salesman-type problems in aerospace engineering.\nMATLAB optimal control toolbox.\nDIDO is primarily available as a stand-alone MATLAB optimal control toolbox. That is, it does not require any third-party software like SNOPT or IPOPT or other nonlinear programming solvers. In fact, it does not even require the MATLAB Optimization Toolbox.\nThe MATLAB/DIDO toolbox does not require a \"guess\" to run the algorithm. This and other distinguishing features have made DIDO a popular tool to solve optimal control problems.\nThe MATLAB optimal control toolbox has been used to solve problems in aerospace, robotics and search theory.\nHistory.\nThe optimal control toolbox is named after Dido, the legendary founder and first queen of Carthage, who is famous in mathematics for her remarkable solution to a constrained optimal control problem even before the invention of calculus. Invented by Ross, DIDO was first produced in 2001. The software is widely cited and has many firsts to its credit:\nVersions.\nThe early versions, widely adopted in academia, have undergone significant changes since 2007. 
The latest version of DIDO, available from Elissar Global, does not require a \"guess\" to start the problem and eliminates much of the minutiae of coding by simplifying the input-output structure. Low-cost student versions and discounted academic versions are also available from Elissar Global.", "Automation-Control": 0.8713361621, "Qwen2": "Yes"} {"id": "22039695", "revid": "7903804", "url": "https://en.wikipedia.org/wiki?curid=22039695", "title": "Shop floor", "text": "The shop floor is the production area of a factory or another working space, where workers produce goods. It excludes the area used or designated for administrative activities.\nShop stewards and Shop Stewards Movement.\nA shop steward is an employee of a company or organization who, as a labor union member and official, represents and defends the interests of their coworkers. During the First World War, the Shop Stewards Movement brought together shop stewards from across the United Kingdom. It began with the Clyde Workers Committee, Britain's first shop stewards committee, which organized in response to the imprisonment of three of their members in 1915.\nShop floor control.\nSystems for managing the various components of the manufacturing process are known as shop floor control (SFC) systems.\nShop floor control is one of the functions of manufacturing control; it is the process of monitoring production activities as they happen, such as when the product is being processed, assembled, inspected, etc. It is also concerned with shop floor inventories: shortages and excess inventories that may cause losses.\nIntegrated shop floor management.\nThe manufacturing industry is significantly impacted by technological advances such as the Internet, the Web, and intelligent agents. 
Changing shop floor environments and customer needs are met by new kinds of web-based shop floor control systems, also called e-shop floor or i-shop floor systems.", "Automation-Control": 0.9985158443, "Qwen2": "Yes"} {"id": "2868248", "revid": "1600672", "url": "https://en.wikipedia.org/wiki?curid=2868248", "title": "SNOPT", "text": "SNOPT, for Sparse Nonlinear OPTimizer, is a software package for solving large-scale nonlinear optimization problems written by Philip Gill, Walter Murray and Michael Saunders. SNOPT is mainly written in Fortran, but interfaces to C, C++, Python and MATLAB are available.\nIt employs a sparse sequential quadratic programming (SQP) algorithm with limited-memory quasi-Newton approximations to the Hessian of the Lagrangian. It is especially effective for nonlinear problems with functions and gradients that are expensive to evaluate. The functions should be smooth but need not be convex.\nSNOPT is used in several trajectory optimization software packages, including Copernicus, AeroSpace Trajectory Optimization and Software (ASTOS), General Mission Analysis Tool, and Optimal Trajectories by Implicit Simulation (OTIS). It is also available in the Astrogator module of Systems Tool Kit.\nSNOPT is supported in the AIMMS, AMPL, APMonitor, General Algebraic Modeling System (GAMS), and TOMLAB modeling systems.", "Automation-Control": 0.9982755184, "Qwen2": "Yes"} {"id": "48448587", "revid": "27015025", "url": "https://en.wikipedia.org/wiki?curid=48448587", "title": "Exact algorithm", "text": "In computer science and operations research, exact algorithms are algorithms that always solve an optimization problem to optimality.\nUnless P = NP, an exact algorithm for an NP-hard optimization problem cannot run in worst-case polynomial time. 
There has been extensive research on finding exact algorithms whose running time is exponential with a low base.", "Automation-Control": 0.9796996713, "Qwen2": "Yes"} {"id": "4643913", "revid": "4246661", "url": "https://en.wikipedia.org/wiki?curid=4643913", "title": "Inferential programming", "text": "In most computer programming, a programmer keeps a program's intended results in mind and painstakingly constructs a program to achieve those results. Inferential programming refers to (still mostly hypothetical) techniques and technologies enabling the inverse. This would allow describing an intended result to a computer, using a metaphor such as a fitness function, a test specification, or a logical specification, and then the computer, on its own, would construct a program needed to meet the supplied criteria.\nDuring the 1980s, approaches to achieve inferential programming mostly involved techniques for logical inference. Today the term is sometimes used in connection with evolutionary computation techniques that enable a computer to evolve a solution in response to a problem posed as a fitness or reward function.\nIn July 2022, GitHub Copilot was released, which is an example of inferential programming.", "Automation-Control": 0.6377564669, "Qwen2": "Yes"} {"id": "24889742", "revid": "10951369", "url": "https://en.wikipedia.org/wiki?curid=24889742", "title": "Superplastic forming and diffusion bonding", "text": "Superplastic forming and diffusion bonding (SPF/DB) is a technique allowing the manufacture of complex-shaped hollow metallic parts. It combines superplastic forming (SPF) with diffusion bonding (DB) to create the completed structures.\nPrinciple.\nTwo metal sheets are welded together at their edges, then heated within the confines of a female mould tool.\nWhen the part is hot, an inert gas is injected between the two sheets; the sheets inflate so that the part takes the hollow form of the mould. 
The sheets may also be welded in areas other than the edges, giving an internal structure as the sheets are blown.", "Automation-Control": 0.999573648, "Qwen2": "Yes"} {"id": "2627106", "revid": "7226930", "url": "https://en.wikipedia.org/wiki?curid=2627106", "title": "Multi-agent planning", "text": "In computer science multi-agent planning involves coordinating the resources and activities of multiple \"agents\".\nNASA says, \"multiagent planning is concerned with planning by (and for) multiple agents. It can involve agents planning for a common goal, an agent coordinating the plans (plan merging) or planning of others, or agents refining their own plans while negotiating over tasks or resources. The topic also involves how agents can do this in real time while executing plans (distributed continual planning). Multiagent scheduling differs from multiagent planning the same way planning and scheduling differ: in scheduling often the tasks that need to be performed are already decided, and in practice, scheduling tends to focus on algorithms for specific problem domains\".", "Automation-Control": 0.9986848235, "Qwen2": "Yes"} {"id": "1876875", "revid": "1004605147", "url": "https://en.wikipedia.org/wiki?curid=1876875", "title": "Manufacturing Message Specification", "text": "Manufacturing Message Specification (MMS) is an international standard (ISO 9506) dealing with messaging systems for transferring real time process data and supervisory control information between networked devices or computer applications. The standard is developed and maintained by the ISO Technical Committee 184 (TC184). MMS defines the following\nMMS original communication stack.\nMMS was standardized in 1990 under two separate standards as\nThis version of MMS used seven layers of OSI network protocols as its communication stack:\nMMS stack over TCP/IP.\nBecause the Open Systems Interconnection protocols are challenging to implement, the original MMS stack never became popular. 
In 1999, Boeing created a new version of MMS using Internet protocols instead of the bottom four layers of the original stack plus RFC 1006 (\"ISO Transport over TCP\") in the transport layer. The top three layers use the same OSI protocols as before.\nIn terms of the seven-layer OSI model, the new MMS stack looks like this:\nWith the new stack, MMS has become a globally accepted standard.", "Automation-Control": 0.9964906573, "Qwen2": "Yes"} {"id": "9409012", "revid": "45333970", "url": "https://en.wikipedia.org/wiki?curid=9409012", "title": "Arcam", "text": "Arcam AB manufactures electron beam melting (EBM) systems for use in additive manufacturing, which create solid parts from metal powders. Arcam also produces metal powder through AP&C and medical implants through DiSanto Technologies.\nArcam AB was founded by innovator Ralf Larson and financier Jarl Assmundson, in 1997.\nArcam AB was a publicly traded company listed on the Stockholm Stock Exchange under ARCM but was also commonly quoted as OTC stock under AMAVF. Arcam AB corporate headquarters are in Mölndal, Sweden. EBM has applications in the medical, aerospace and automotive industries.\nIn September 2016, General Electric announced its plans for acquisition of Arcam AB.", "Automation-Control": 0.6339087486, "Qwen2": "Yes"} {"id": "68369699", "revid": "46194273", "url": "https://en.wikipedia.org/wiki?curid=68369699", "title": "Petri net unfoldings", "text": "Analysis of Petri nets can be performed by means of constructing either reachable state spaces (or reachable markings) or via the process of graph-based unfolding. The prefix of a Petri net unfolding, which is an acyclic Petri net graph, contains the same information about the properties of the Petri net as the reachability graph, plus it contains information about sequence, concurrency and conflict relations between Petri net transitions and Petri net places. 
The practical advantage of unfolding is that the unfolding prefix is typically much more compact than the reachability graph of the Petri net being analysed.\nPetri net unfoldings were originally introduced by Ken McMillan. Later they were studied by several authors, who improved the original criterion for producing the unfolding prefix, making it more compact and hence more efficient to analyse. \nThere are applications of Petri net unfoldings in the analysis and synthesis of concurrent systems and asynchronous circuits. The latter is normally achieved through the use of Signal transition graphs (STGs).", "Automation-Control": 0.9997801185, "Qwen2": "Yes"} {"id": "34421644", "revid": "42522270", "url": "https://en.wikipedia.org/wiki?curid=34421644", "title": "Ross–Fahroo pseudospectral method", "text": "Introduced by I. Michael Ross and F. Fahroo, the Ross–Fahroo pseudospectral methods are a broad collection of pseudospectral methods for optimal control. Examples of the Ross–Fahroo pseudospectral methods are the pseudospectral knotting method, the flat pseudospectral method, the Legendre-Gauss-Radau pseudospectral method and pseudospectral methods for infinite-horizon optimal control.\nOverview.\nThe Ross–Fahroo methods are based on shifted Gaussian pseudospectral node points. The shifts are obtained by means of a linear or nonlinear transformation, while the Gaussian pseudospectral points are chosen from Gauss-Lobatto or Gauss-Radau distributions arising from Legendre or Chebyshev polynomials.
The Gauss-Lobatto pseudospectral points are used for finite-horizon optimal control problems, while the Gauss-Radau pseudospectral points are used for infinite-horizon optimal control problems.\nMathematical applications.\nThe Ross–Fahroo methods are founded on the Ross–Fahroo lemma; they can be applied to optimal control problems governed by differential equations, differential-algebraic equations, differential inclusions, and differentially flat systems. They can also be applied to infinite-horizon optimal control problems by a simple domain transformation technique.\nThe Ross–Fahroo pseudospectral methods also form the foundations for the Bellman pseudospectral method.\nFlight applications and awards.\nThe Ross–Fahroo methods have been implemented in many practical applications and laboratories around the world. In 2006, NASA used the Ross–Fahroo method to implement the \"zero propellant maneuver\" on board the International Space Station.\nIn recognition of these advances, the AIAA presented Ross and Fahroo with the 2010 Mechanics and Control of Flight Award for \"... changing the landscape of flight mechanics.\" Ross was also elected AAS Fellow for \"his pioneering contributions to pseudospectral optimal control.\"\nDistinctive features.\nA remarkable feature of the Ross–Fahroo methods is that they do away with the prior notions of \"direct\" and \"indirect\" methods. That is, through a collection of theorems put forth by Ross and Fahroo, they showed that it was possible to design pseudospectral methods for optimal control that were equivalent in both the direct and indirect forms. This implied that one could use their methods as simply as a \"direct\" method while automatically generating accurate duals as in \"indirect\" methods.
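The text does not spell out the domain transformation for infinite-horizon problems. One standard choice in the pseudospectral literature (an illustrative assumption here, not necessarily the exact Ross–Fahroo form) maps the Radau points τ ∈ [−1, 1) to the infinite horizon t ∈ [0, ∞):

```latex
t = \frac{1+\tau}{1-\tau}, \qquad \frac{dt}{d\tau} = \frac{2}{(1-\tau)^{2}}, \qquad \tau \in [-1, 1).
```

Under such a map the infinite-horizon problem in t becomes a finite-horizon problem in τ, at the cost of the weighting factor dt/dτ appearing in the dynamics and in the cost integral.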
This transformed the solution of optimal control problems, leading to widespread use of the Ross–Fahroo techniques.\nSoftware implementation.\nThe Ross–Fahroo methods are implemented in the MATLAB optimal control solver, DIDO.", "Automation-Control": 0.9962511063, "Qwen2": "Yes"} {"id": "3432530", "revid": "6908984", "url": "https://en.wikipedia.org/wiki?curid=3432530", "title": "Cutting tool (machining)", "text": "In the context of machining, a cutting tool or cutter is typically a hardened metal tool that is used to cut, shape, and remove material from a workpiece by way of shear deformation, whether mounted in a machine tool or used as an abrasive tool. The majority of these tools are designed exclusively for metals. \nThere are several different types of single-edge cutting tools that are made from a variety of hardened metal alloys and are ground to a specific shape in order to perform a specific part of the turning process, resulting in a finished machined part. Single-edge cutting tools are used mainly in the turning operations performed by a lathe, and they vary in size as well as alloy composition depending on the size and the type of material being turned. These cutting tools are held stationary by what is known as a \"tool post\", which manipulates the tool to cut the material into the desired shape. Single-edge cutting tools are also the means of cutting material in shaping machines and planing machines, which remove material by means of one cutting edge. \nMilling and drilling tools are often multipoint tools. Drilling is used exclusively to make holes in a workpiece. All drill bits have two cutting edges that are ground into two equally tapered angles, which cut through the material by applying downward rotational force. Endmills, or milling bits, also cut material by rotational force, although they are not made to put holes in a workpiece.
They cut by horizontal shear deformation, in which the workpiece is brought into the tool as it is rotating. This is known as the tool path, which is determined by the axis of the table that is holding the workpiece in place. This table is designed to accept a variety of vises and clamping tools so that it can move into the cutter at various angles and directions while the workpiece remains still. There are several different types of endmills, each performing a certain type of milling action. \nGrinding stones are tools that contain several different cutting edges, which together encompass the entire stone. Unlike metallic cutting tools, these grinding stones never go dull. In fact, the formation of the cutting edges of metallic cutting tools is achieved by the use of grinding wheels and other hard abrasives. There are several different types of grinding stone wheels that are used to grind several different types of metals. Although these stones are not metal, they need to be harder than the metal that they grind. If the hardness of the metal exceeds that of the stone, the metal will cut the stone, which is not ideal. Each grain of abrasive functions as a microscopic single-point cutting edge (although of high negative rake angle), and shears a tiny chip. \nCutting tool materials must be harder than the material which is to be cut, and the tool must be able to withstand the heat and force generated in the metal-cutting process. Also, the tool must have a specific geometry, with clearance angles designed so that the cutting edge can contact the workpiece without the rest of the tool dragging on the workpiece surface. The angle of the cutting face is also important, as is the flute width, number of flutes or teeth, and margin size.
In order to have a long working life, all of the above must be optimized, together with the speeds and feeds at which the tool is run.\nTypes.\nLinear cutting tools include tool bits (single-point cutting tools) and broaches. Rotary cutting tools include drill bits, countersinks and counterbores, taps and dies, reamers, and cold saw blades. Other cutting tools, such as bandsaw blades, hacksaw blades, and fly cutters, combine aspects of linear and rotary motion. \nCutting tools with inserts (indexable tools).\nCutting tools are often designed with inserts or replaceable tips (tipped tools). In these, the cutting edge consists of a separate piece of material, either brazed, welded or clamped onto the tool body. Common materials for tips include cemented carbide, polycrystalline diamond, and cubic boron nitride. Tools using inserts include milling cutters (endmills, fly cutters), tool bits, and saw blades.\nTool setup.\nThe detailed instructions for combining the tool assembly out of the basic holder, tool and insert can be stored in a tool management solution.\nCutting edge.\nThe cutting edge of a cutting tool is very important for the performance of the cutting process. The main features of the cutting edge are:\nThe measurement of the cutting edge is performed using a tactile instrument or an instrument using focus variation. To quantify a cutting edge the following parameters are used:\nOne of the most important cutting edge parameters is the K factor. It specifies the form of the cutting edge. A value of 1 means a symmetric cutting edge. If the value is smaller than 1 the form is called a waterfall. If the value is larger than 1 it is called a trumpet.
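The K-factor classification above can be stated directly in code. Computing K as the ratio of the rake-side edge segment to the flank-side edge segment is a common convention and is an assumption here, not taken from the text:

```python
def k_factor(s_gamma, s_alpha):
    """K factor of a rounded cutting edge, assuming the common convention
    K = S_gamma / S_alpha (rake-side over flank-side segment length)."""
    return s_gamma / s_alpha

def edge_form(k):
    """Classify the edge form from its K factor:
    K < 1 -> waterfall, K == 1 -> symmetric, K > 1 -> trumpet."""
    if k < 1:
        return "waterfall"
    if k > 1:
        return "trumpet"
    return "symmetric"

assert edge_form(k_factor(20.0, 40.0)) == "waterfall"
assert edge_form(k_factor(30.0, 30.0)) == "symmetric"
assert edge_form(k_factor(45.0, 30.0)) == "trumpet"
```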
Depending on the material being cut, feed rate and other factors, a cutting tool with the optimum K factor should be used.", "Automation-Control": 0.9911745191, "Qwen2": "Yes"} {"id": "29195439", "revid": "19372301", "url": "https://en.wikipedia.org/wiki?curid=29195439", "title": "Control system security", "text": "Control system security, or industrial control system (ICS) cybersecurity, is the prevention of (intentional or unintentional) interference with the proper operation of industrial automation and control systems. These control systems manage essential services including electricity, petroleum production, water, transportation, manufacturing, and communications. They rely on computers, networks, operating systems, applications, and programmable controllers, each of which could contain security vulnerabilities. The 2010 discovery of the Stuxnet worm demonstrated the vulnerability of these systems to cyber incidents. The United States and other governments have passed cyber-security regulations requiring enhanced protection for control systems operating critical infrastructure.\nControl system security is known by several other names such as \"SCADA security\", \"PCN security\", \"Industrial network security\", \"Industrial control system (ICS) cybersecurity\", \"Operational Technology (OT) security\", \"Industrial automation and control systems security\" and \"Control System Cyber Security\".\nRisks.\nInsecurity of, or vulnerabilities inherent in, industrial automation and control systems (IACS) can lead to severe consequences in categories such as safety, loss of life, personal injury, environmental impact, lost production, equipment damage, information theft, and company image.\nGuidance to assess, evaluate and mitigate these potential risks is provided through the application of governmental, regulatory, and industry documents and global standards, addressed below.\nVulnerability of control systems.\nIndustrial Control Systems (ICS) have become far more vulnerable
to security incidents due to the following trends that have occurred over the last 10 to 15 years. \nThe cyber threats and attack strategies on automation systems are changing rapidly. Regulation of industrial control systems for security is rare and slow-moving; the United States, for example, regulates only the nuclear power and chemical industries.\nGovernment efforts.\nThe U.S. Government Computer Emergency Readiness Team (US-CERT) originally instituted a control systems security program (CSSP), now the National Cybersecurity and Communications Integration Center (NCCIC) Industrial Control Systems, which has made available a large set of free National Institute of Standards and Technology (NIST) standards documents regarding control system security. The U.S. Government Joint Capability Technology Demonstration (JCTD) known as MOSAICS (More Situational Awareness for Industrial Control Systems) is the initial demonstration of cybersecurity defensive capability for critical infrastructure control systems. MOSAICS addresses the Department of Defense (DOD) operational need for cyber defense capabilities to defend critical infrastructure control systems from cyber attack, such as power, water and wastewater, and safety controls, which affect the physical environment. The MOSAICS JCTD prototype will be shared with commercial industry through Industry Days for further research and development, an approach intended to lead to innovative, game-changing capabilities for cybersecurity for critical infrastructure control systems.\nIndustrial Cybersecurity Standards.\nThe international standard for cybersecurity in industrial automation is IEC 62443. In addition, multiple national organizations such as NIST and NERC in the USA have released guidelines and requirements for cybersecurity in control systems.\nIEC 62443.\nThe IEC 62443 cybersecurity standard defines processes, techniques and requirements for Industrial Automation and Control Systems (IACS).
Its documents are the result of the IEC standards creation process, where all national committees involved agree upon a common standard. The IEC 62443 was influenced by and is partly based on the ANSI/ISA-99 series of standards and the VDI/VDE 2182 guidelines.\nAll IEC 62443 standards and technical reports are organized into four general categories called \"General\", \"Policies and Procedures\", \"System\" and \"Component\".\nNERC.\nThe latest and most widely recognized NERC security standard is NERC 1300, which is a modification/update of NERC 1200. The latest version of NERC 1300 is called CIP-002-3 through CIP-009-3, with CIP referring to Critical Infrastructure Protection. These standards are used to secure bulk electric systems, although NERC has created standards within other areas. The bulk electric system standards also provide network security administration while still supporting best-practice industry processes.\nNIST.\nThe NIST Cybersecurity Framework (NIST CSF) provides a high-level taxonomy of cybersecurity outcomes and a methodology to assess and manage those outcomes. It is intended to help private sector organizations that provide critical infrastructure with guidance on how to protect it.\nNIST Special Publication 800-82 Rev. 2, \"Guide to Industrial Control System (ICS) Security\", describes how to secure multiple types of industrial control systems against cyber attacks while considering the performance, reliability, and safety requirements specific to ICS.\nControl system security certifications.\nCertifications for control system security have been established by several global certification bodies.
Most of the schemes are based on the IEC 62443 and describe test methods, surveillance audit policy, public documentation policies, and other specific aspects of their program.", "Automation-Control": 0.9065283537, "Qwen2": "Yes"} {"id": "28801798", "revid": "16528233", "url": "https://en.wikipedia.org/wiki?curid=28801798", "title": "Active learning (machine learning)", "text": "Active learning is a special case of machine learning in which a learning algorithm can interactively query a user (or some other information source) to label new data points with the desired outputs. In statistics literature, it is sometimes also called optimal experimental design. The information source is also called \"teacher\" or \"oracle\".\nThere are situations in which unlabeled data is abundant but manual labeling is expensive. In such a scenario, learning algorithms can actively query the user/teacher for labels. This type of iterative supervised learning is called active learning. Since the learner chooses the examples, the number of examples to learn a concept can often be much lower than the number required in normal supervised learning. With this approach, there is a risk that the algorithm is overwhelmed by uninformative examples. Recent developments are dedicated to multi-label active learning, hybrid active learning and active learning in a single-pass (on-line) context, combining concepts from the field of machine learning (e.g. conflict and ignorance) with adaptive, incremental learning policies in the field of online machine learning.\nLarge-scale active learning projects may benefit from crowdsourcing frameworks such as Amazon Mechanical Turk that include many humans in the active learning loop.\nDefinitions.\nLet be the total set of all data under consideration. 
For example, in a protein engineering problem, would include all proteins that are known to have a certain interesting activity and all additional proteins that one might want to test for that activity.\nDuring each iteration, is broken up into three subsets\nMost of the current research in active learning involves the best method to choose the data points for .\nQuery strategies.\nAlgorithms for determining which data points should be labeled can be organized into a number of different categories, based upon their purpose:\nA wide variety of algorithms have been studied that fall into these categories.\nMinimum marginal hyperplane.\nSome active learning algorithms are built upon support-vector machines (SVMs) and exploit the structure of the SVM to determine which data points to label. Such methods usually calculate the margin of each unlabeled datum in and treat it as the distance from that datum to the separating hyperplane.\nMinimum Marginal Hyperplane methods assume that the data with the smallest margins are those that the SVM is most uncertain about and therefore should be placed in to be labeled. Other similar methods, such as Maximum Marginal Hyperplane, choose data with the largest margins. Tradeoff methods choose a mix of the smallest and largest margins.", "Automation-Control": 0.6456766129, "Qwen2": "Yes"} {"id": "73462795", "revid": "28481209", "url": "https://en.wikipedia.org/wiki?curid=73462795", "title": "Fractional job scheduling", "text": "Fractional job scheduling is a variant of optimal job scheduling in which it is allowed to break jobs into parts and process each part separately on the same or a different machine. Breaking jobs into parts may allow for improving the overall performance, for example, decreasing the makespan. Moreover, the computational problem of finding an optimal schedule may become easier, as some of the optimization variables become continuous.
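The Minimum Marginal Hyperplane active-learning strategy described above can be sketched in a few lines: given a trained linear separator, query the unlabeled points closest to the hyperplane. The function name and the explicit weight vector `w` and bias `b` are illustrative assumptions, not part of any particular library:

```python
import numpy as np

def query_min_margin(unlabeled, w, b, k=1):
    """Minimum Marginal Hyperplane: return the indices of the k unlabeled
    points closest to the separating hyperplane w.x + b = 0, i.e. the
    points the linear classifier is most uncertain about."""
    X = np.asarray(unlabeled, dtype=float)
    margins = np.abs(X @ w + b) / np.linalg.norm(w)  # distance to hyperplane
    return np.argsort(margins)[:k].tolist()

# Hyperplane x0 + x1 = 1 (w = [1, 1], b = -1); the middle point lies
# almost on it, so it is the one selected for labeling.
points = [[0.0, 0.0], [0.45, 0.5], [3.0, 3.0]]
assert query_min_margin(points, np.array([1.0, 1.0]), -1.0) == [1]
```

Maximum Marginal Hyperplane is the same computation with the sort order reversed, and tradeoff methods mix the two ends of the sorted list.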
On the other hand, breaking jobs apart might be costly.\nVariants.\nThere are several variants of job scheduling problems in which it is allowed to break jobs apart. They can be broadly classified into preemption and splitting.\nJob scheduling with preemption.\nVarious problems have been studied in job scheduling with preemption. One of them is generalized multiprocessor scheduling (GMS). It has two variants.\nIn both variants, the goal is to find a schedule that minimizes the makespan subject to the preemption constraints.\nFor identical machines, Shchepin and Vakhania prove that with at most formula_3 total preemptions, the problem is NP-hard, whereas McNaughton shows a linear-time algorithm with formula_4 preemptions.\nFor uniform machines, a polynomial algorithm by Gonzalez and Sahni yields at most formula_5 preemptions. Shachnai, Tamir, and Woeginger proved NP-hardness for the case where the number of preemptions is strictly less than formula_5. They also presented a PTAS for GMS with a global preemption bound, and another PTAS for GMS with a job-wise preemption bound when the number of machines is a fixed constant.\nSoper and Strusevitch study the special case in which at most one preemption is allowed. They show that makespan minimization is polynomial for two machines.\nMany papers study other variants of preemptive scheduling. For example, Liu and Cheng consider single-machine scheduling with job release and delivery dates, where there is no firm bound on the number of preemptions, but each preemption requires spending time on \"job setup\".\nSome works, like Blazewicz \"et al.\" or Deng \"et al.\", study preemptive scheduling for jobs with parallelism, where jobs must be processed simultaneously on several processors.\nJob scheduling with splitting.\nVarious objectives have been studied. There are many variants, including different setup times. In machine scheduling, the \"setup time\" refers to the time required to prepare a machine for a specific job or task.
Sequence-dependent setup time is a situation where the setup time required for a job depends on the job that came before it, rather than being constant for all jobs (independent job setup time).\nSerafini assumes unbounded splittings and preemptions and gives polynomial-time algorithms that minimize the maximum tardiness and the maximum weighted tardiness, for uniform and unrelated machines.\nXing and Zhang allow unbounded splittings, and give polynomial algorithms for many optimality criteria (such as makespan, lateness, tardiness, and more), with identical, uniform, and unrelated machines. For the case with independent job setup time, they give a formula_7 approximation algorithm.\nSon et al. study makespan minimization on a single machine with a machine-availability constraint and a lower bound on the length of each part into which a job is split.\nFor identical machines, Shim and Kim suggest a branch and bound algorithm with the objective of minimizing total tardiness with independent job setup time. Yalaoui and Chu propose a heuristic for the same problem, with the objective of minimizing the makespan. Kim et al. suggest a two-phase heuristic algorithm with the objective of minimizing total tardiness. With the objective of minimizing the makespan, Kim studies another variant with family setup time, in which no setup is required when parts from the same job are produced consecutively. Wang et al. include a learning property that improves the processing time of a job according to the learning effect; the learning has to be restarted if one job is split and processed by a different machine.\nFor uniform machines, Kim and Lee study a variant with dedicated machines (there are some dedicated machines for each job), sequence-dependent setup times, and limited setup resources (jobs require setup operators, which are limited), with the objective of minimizing the makespan.
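McNaughton's linear-time algorithm for identical machines, mentioned earlier, can be sketched concretely. In the standard textbook presentation (an assumption here, since the article's notation is masked), the optimal preemptive makespan is max(max_j p_j, Σ_j p_j / m), and jobs are packed greedily, wrapping a job to the next machine whenever the current machine reaches that bound:

```python
def mcnaughton(jobs, m):
    """McNaughton's wrap-around rule: schedule the given processing times
    preemptively on m identical machines with optimal makespan
    C = max(max p_j, sum p_j / m), using at most m - 1 preemptions."""
    c = max(max(jobs), sum(jobs) / m)   # optimal preemptive makespan
    schedule = [[] for _ in range(m)]   # per machine: (job, start, end)
    machine, t = 0, 0.0
    for j, p in enumerate(jobs):
        while p > 1e-12:
            run = min(p, c - t)         # run until job ends or machine fills
            schedule[machine].append((j, t, t + run))
            p -= run
            t += run
            if t >= c - 1e-12:          # machine full: wrap to the next one
                machine, t = machine + 1, 0.0
    return c, schedule

c, sched = mcnaughton([3.0, 3.0, 3.0], 2)
assert c == 4.5                 # sum/m = 9/2 dominates max p_j = 3
assert len(sched[0]) == 2       # the second job is split across machines
```

Each wrap splits at most one job, so at most m − 1 preemptions are introduced, matching the bound quoted above.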
Krysta, Sanders, and Vöcking study makespan minimization in the k-splittable variant, in which each job is allowed to be split among at most formula_8 different machines. They show that this variant, and a more general variant where each job has its own splittability parameter, are NP-hard. They give some approximation algorithms, but their main result is a polynomial-time algorithm for both problems, for a fixed number of machines. They show that allowing a bounded number of splittings reduces the complexity of scheduling.\nIn all these works, there is no global bound on the number of split jobs.", "Automation-Control": 0.769295156, "Qwen2": "Yes"} {"id": "352267", "revid": "15860097", "url": "https://en.wikipedia.org/wiki?curid=352267", "title": "Lyapunov function", "text": "In the theory of ordinary differential equations (ODEs), Lyapunov functions, named after Aleksandr Lyapunov, are scalar functions that may be used to prove the stability of an equilibrium of an ODE. Lyapunov functions (whose use is also called Lyapunov’s second method for stability) are important to the stability theory of dynamical systems and to control theory. A similar concept appears in the theory of general state space Markov chains, usually under the name Foster–Lyapunov functions.\nFor certain classes of ODEs, the existence of Lyapunov functions is a necessary and sufficient condition for stability. There is no general technique for constructing Lyapunov functions for ODEs; however, for ordinary differential equations in their most general autonomous form, a systematic construction method was given by Cem Civelek. In many specific cases, though, the construction of Lyapunov functions is known. For instance, according to many applied mathematicians, a Lyapunov function could not be constructed for a dissipative gyroscopic system.
However, using the method expressed in the publication above, even for such a system a Lyapunov function could be constructed, as shown in a related article by C. Civelek and Ö. Cihanbegendi. In addition, quadratic functions suffice for systems with one state; the solution of a particular linear matrix inequality provides Lyapunov functions for linear systems, and conservation laws can often be used to construct Lyapunov functions for physical systems.\nDefinition.\nA Lyapunov function for an autonomous dynamical system\nwith an equilibrium point at formula_2 is a scalar function formula_3 that is continuous, has continuous first derivatives, is strictly positive for formula_4, and for which the time derivative formula_5 is nonpositive (these conditions are required on some region containing the origin). The (stronger) condition that formula_6 is strictly positive for formula_4 is sometimes stated as formula_6 is \"locally positive definite\", or formula_9 is \"locally negative definite\".\nFurther discussion of the terms arising in the definition.\nLyapunov functions arise in the study of equilibrium points of dynamical systems.
In formula_10 an arbitrary autonomous dynamical system can be written as\nfor some smooth formula_12\nAn equilibrium point is a point formula_13 such that formula_14 Given an equilibrium point, formula_15 there always exists a coordinate transformation formula_16 such that:\nThus, in studying equilibrium points, it is sufficient to assume the equilibrium point occurs at formula_18.\nBy the chain rule, for any function, formula_19 the time derivative of the function evaluated along a solution of the dynamical system is\nA function formula_21 is defined to be a locally positive-definite function (in the sense of dynamical systems) if both formula_22 and there is a neighborhood of the origin, formula_23, such that:\nBasic Lyapunov theorems for autonomous systems.\nLet formula_25 be an equilibrium of the autonomous system\nand use the notation formula_27 to denote the time derivative of the Lyapunov-candidate-function formula_28:\nLocally asymptotically stable equilibrium.\nIf the equilibrium is isolated, the Lyapunov-candidate-function formula_28 is locally positive definite, and the time derivative of the Lyapunov-candidate-function is locally negative definite:\nfor some neighborhood formula_23 of the origin, then the equilibrium is proven to be locally asymptotically stable.\nStable equilibrium.\nIf formula_28 is a Lyapunov function, then the equilibrium is Lyapunov stable. The converse is also true, and was proved by J. L.
Massera.\nGlobally asymptotically stable equilibrium.\nIf the Lyapunov-candidate-function formula_28 is globally positive definite and radially unbounded, the equilibrium is isolated, and the time derivative of the Lyapunov-candidate-function is globally negative definite:\nthen the equilibrium is proven to be globally asymptotically stable.\nThe Lyapunov-candidate function formula_36 is radially unbounded if\nExample.\nConsider the following differential equation on formula_38:\nConsidering that formula_40 is always positive around the origin, it is a natural candidate to be a Lyapunov function to help us study formula_41. So let formula_42 on formula_43. Then,\nThis correctly shows that the above differential equation, formula_45, is asymptotically stable about the origin. Note that using the same Lyapunov candidate one can show that the equilibrium is also globally asymptotically stable.", "Automation-Control": 0.9921241403, "Qwen2": "Yes"} {"id": "1137250", "revid": "935098993", "url": "https://en.wikipedia.org/wiki?curid=1137250", "title": "Shape coding", "text": "Shape coding is a method of control design that allows the control's function to be signified by the shape of the control. It was used successfully by Alphonse Chapanis on airplane controls to improve aviation safety.", "Automation-Control": 0.9182203412, "Qwen2": "Yes"} {"id": "11633530", "revid": "46208898", "url": "https://en.wikipedia.org/wiki?curid=11633530", "title": "Injection molding machine", "text": "An injection molding machine (also spelled injection moulding machine in British English), also known as an injection press, is a machine for manufacturing plastic products by the injection molding process. It consists of two main parts, an \"injection unit\" and a \"clamping unit\".\nOperation.\nInjection molding machine molds can be fastened in either a horizontal or vertical position.
Most machines are horizontally oriented, but vertical machines are used in some niche applications such as insert molding, allowing the machine to take advantage of gravity. Some vertical machines also do not require the mold to be fastened. There are many ways to fasten the tools to the platens; the most common are manual clamps (both halves are bolted to the platens); however, hydraulic clamps (chocks are used to hold the tool in place) and magnetic clamps are also used. The magnetic and hydraulic clamps are used where fast tool changes are required.\nThe person designing the mold chooses whether the mold uses a cold runner system or a hot runner system to carry the plastic and fillers from the injection unit to the cavities.\nA cold runner is a simple channel carved into the mold.\nThe plastic that fills the cold runner cools as the part cools and is then ejected with the part as a sprue.\nA hot runner system is more complicated, often using cartridge heaters to keep the plastic in the runners hot as the part cools.\nAfter the part is ejected, the plastic remaining in a hot runner is injected into the next part.\nTypes of injection molding machines.\nMachines are classified primarily by the type of driving system they use: hydraulic, mechanical, electric, or hybrid.\nHydraulic.\nHydraulic machines were historically the only option available to molders until Nissei Plastic Industrial introduced the first all-electric injection molding machine in 1983. Hydraulic machines, although not nearly as precise, are the predominant type in most of the world, with the exception of Japan.\nMechanical.\nMechanical-type machines use a toggle system for building up tonnage on the clamps of the machine. Tonnage is required on all machines so that the clamps of the machine do not open due to the injection pressure.
If the mold partially opens up, it will create flashing in the plastic product.\nElectric.\nThe electric press, also known as Electric Machine Technology (EMT), reduces operation costs by cutting energy consumption and also addresses some of the environmental concerns surrounding the hydraulic press. Electric presses have been shown to be quieter and faster and to have higher accuracy; however, the machines are more expensive.\nHybrid injection (sometimes referred to as \"Servo-Hydraulic\") molding machines claim to take advantage of the best features of both hydraulic and electric systems, but in actuality use almost the same amount of electricity to operate as an electric injection molding machine, depending on the manufacturer.\nA robotic arm is often used to remove the molded components, either by side or top entry, but it is more common for parts to drop out of the mold, through a chute and into a container.\nMain components of injection molding machine.\nInjection unit.\nConsists of three main components:\nClamping unit.\nConsists of three main components:", "Automation-Control": 0.8798001409, "Qwen2": "Yes"} {"id": "51138368", "revid": "5837138", "url": "https://en.wikipedia.org/wiki?curid=51138368", "title": "Tech Safe Systems", "text": "Tech Safe Systems was a Norfolk-based company that specialized in the design, engineering and manufacturing of launch and recovery systems (LARS), control cabins, workshops for ROVs, and electric and hydraulic winches, most commonly for the deep water industries. It was acquired by Outreach Ltd in 2014.\nHistory.\nTech Safe began in 1996, when a gap in the market was identified for providing a complete package of LARS, winches, cabins and workshops, integrated and tested with the customer’s ROV as a complete system. This ensured all the equipment worked together prior to going offshore.
Outreach Ltd acquired Tech Safe Systems on 31 July 2014.\nSectors.\nTech Safe Systems operated both nationally and, for some of its products, internationally.\nProducts.\nTech Safe Systems operated across the United Kingdom and, for many of its products, across the world, and was a specialist in the design, engineering and manufacturing of: \nmost commonly in the deep water industries. These products were designed using Autodesk Product Design Suite.\nProjects.\nAn offshore wind farm (OWF) project at the Wikinger offshore wind farm in the Baltic Sea: Tech Safe developed a Lloyd's Register Certified solution for large diameter drilling (LDD). This included a package of three winches and two control cabins providing command and control for the pre-piling template and bubble curtain system.", "Automation-Control": 0.9620565176, "Qwen2": "Yes"} {"id": "648007", "revid": "2414730", "url": "https://en.wikipedia.org/wiki?curid=648007", "title": "Die (manufacturing)", "text": "A die is a specialized machine tool used in manufacturing industries to cut and/or form material to a desired shape or profile. Stamping dies are used with a press, as opposed to drawing dies (used in the manufacture of wire) and casting dies (used in molding), which are not. Like molds, dies are generally customized to the item they are used to create.\nProducts made with dies range from simple paper clips to complex pieces used in advanced technology. Continuous-feed laser cutting may displace the analogous die-based process in the automotive industry, among others.\nDie stamping.\nBlanking and piercing are two die cutting operations, and bending is an example of a die forming operation.\nDie forming.\nForming operations work by deforming materials like sheet metal or plastic using force (compression, tension, or both) and rely on the material's mechanical properties.
Forming dies are typically made by tool and die makers and put into production after mounting into a press.\nDifferences between materials.\nFor the vacuum forming of plastic sheet, only a single form is used, typically to form transparent plastic containers (called blister packs) for merchandise. Vacuum forming is considered a simple thermoforming process but uses the same principles as die forming.\nFor the forming of sheet metal, such as automobile body parts, two parts may be used: one, called the \"punch\", performs the stretching, bending, and/or blanking operation, while the other, called the \"die block\", securely clamps the workpiece and provides a similar stretching, bending, and/or blanking operation. The workpiece may pass through several stages using different tools or operations to obtain the final form. In the case of an automotive component, there will usually be a shearing operation after the main forming is done. Additional crimping or rolling operations may be performed to ensure that all sharp edges are hidden and/or to add rigidity to the panel.\nDie components.\nThe main components of a die set (including press mounting) are as follows. Note that because nomenclature varies between sources, alternate names are given in parentheses:\nSteel-rule die.\n\"Steel-rule\" dies, also known as \"cookie cutter\" dies, are used for cutting sheet metal and softer materials, such as plastics, wood, cork, felt, fabrics, and paperboard. The cutting surface of the die is the edge of hardened steel strips, known as \"steel rule\". These steel rules are usually located using saw- or laser-cut grooves in plywood. The mating die can be a flat piece of hardwood or steel, a male shape that matches the workpiece profile, or it can have a matching groove into which the rule can nest.
Rubber strips are wedged in alongside the steel rule to act as the stripper plate; the rubber compresses on the down-stroke, and on the up-stroke it pushes the workpiece out of the die. The main advantage of steel-rule dies is their low cost to make, as compared to solid dies; however, they are not as robust as solid dies, so they are usually only used for short production runs.\nRotary die.\nIn the broadest sense, a \"rotary die\" is a cylindrical die that may be used in any manufacturing field. However, the term most commonly refers to cylindrical dies used to process soft materials, such as paper or cardboard. Two types of rules are used: cutting rules and creasing rules. These dies are used for corrugated boards more than 2 mm thick. Rotary dies are faster than flat dies.\nThe term also refers to dies used in the roll forming process.\nWire pulling.\nWire-making dies have a hole through the middle of them. A wire or rod of steel, copper, other metals, or alloy enters one side and is lubricated and reduced in size. The leading tip of the wire is usually pointed in the process. The tip of the wire is then guided into the die and rolled onto a block on the opposite side. The block provides the power to pull the wire through the die.\nThe die is divided into several different sections. First is an entrance angle that guides the wire into the die. Next is the approach angle, which brings the wire to the nib, which facilitates the reduction. Next are the bearing and the back relief. Lubrication is added at the entrance angle. The lubricant can be in powdered soap form. If the lubricant is soap, the friction of drawing the wire heats the soap to liquid form and coats the wire. The wire should never actually come in contact with the die.
A thin coat of lubricant should prevent metal-to-metal contact.\nFor pulling a substantial rod down to a fine wire, a series of several dies is used to obtain a progressive reduction of diameter in stages.\nStandard wire gauges used to refer to the number of dies through which the wire had been pulled. Thus, a higher-numbered wire gauge meant a thinner wire. Typical telephone wires were 22-gauge, while main power cables might be 3- or 4-gauge.", "Automation-Control": 0.9600558281, "Qwen2": "Yes"} {"id": "8953682", "revid": "160367", "url": "https://en.wikipedia.org/wiki?curid=8953682", "title": "Flatness (systems theory)", "text": "Flatness in systems theory is a system property that extends the notion of controllability from linear systems to nonlinear dynamical systems. A system that has the flatness property is called a \"flat system\". Flat systems have a (fictitious) \"flat output\", which can be used to explicitly express all states and inputs in terms of the flat output and a finite number of its derivatives.\nDefinition.\nA nonlinear system\nformula_1\nis flat, if there exists an output\nformula_2\nthat satisfies the following conditions:\nIf these conditions are satisfied at least locally, then the (possibly fictitious) output is called \"flat output\", and the system is \"flat\".\nRelation to controllability of linear systems.\nA linear system \nformula_14\nwith the same signal dimensions for formula_15 as the nonlinear system is flat, if and only if it is controllable. For linear systems both properties are equivalent, hence interchangeable.\nSignificance.\nThe flatness property is useful for both the analysis of and controller synthesis for nonlinear dynamical systems.
It is particularly advantageous for solving trajectory planning problems and asymptotic setpoint following control.", "Automation-Control": 1.000005126, "Qwen2": "Yes"} {"id": "63005915", "revid": "46135219", "url": "https://en.wikipedia.org/wiki?curid=63005915", "title": "Material removal rate", "text": "Material removal rate (MRR) is the volume of material removed per unit of time (usually per minute) when performing machining operations such as turning on a lathe or milling. It is a single number that serves as a direct indicator of how efficiently a cut is being made and, in turn, of how profitable the operation is: the more aggressive the cutting parameters, the higher the MRR.\nPhrased another way, the MRR is equal to the volume of residue (chips) formed per unit of time as a direct result of removal from the workpiece during a cutting operation.\nThe material removal rate in a work process can be calculated as the depth of cut times the width of cut times the feed rate, and is typically measured in cubic centimeters per minute (cm3/min).", "Automation-Control": 0.7357705235, "Qwen2": "Yes"} {"id": "42247256", "revid": "27823944", "url": "https://en.wikipedia.org/wiki?curid=42247256", "title": "Kernel methods for vector output", "text": "Kernel methods are a well-established tool to analyze the relationship between input data and the corresponding output of a function. Kernels encapsulate the properties of functions in a computationally efficient way and allow algorithms to easily swap functions of varying complexity.\nIn typical machine learning algorithms, these functions produce a scalar output. Recent development of kernel methods for functions with vector-valued output is due, at least in part, to interest in simultaneously solving related problems.
Kernels which capture the relationship between the problems allow them to \"borrow strength\" from each other. Algorithms of this type include multi-task learning (also called multi-output learning or vector-valued learning), transfer learning, and co-kriging. Multi-label classification can be interpreted as mapping inputs to (binary) coding vectors with length equal to the number of classes.\nIn Gaussian processes, kernels are called covariance functions. Multiple-output functions correspond to considering multiple processes. See Bayesian interpretation of regularization for the connection between the two perspectives.\nHistory.\nThe history of learning vector-valued functions is closely linked to transfer learning: storing knowledge gained while solving one problem and applying it to a different but related problem. The fundamental motivation for transfer learning in the field of machine learning was discussed in a NIPS-95 workshop on “Learning to Learn,” which focused on the need for lifelong machine learning methods that retain and reuse previously learned knowledge. Research on transfer learning has attracted much attention since 1995 under different names: learning to learn, lifelong learning, knowledge transfer, inductive transfer, multitask learning, knowledge consolidation, context-sensitive learning, knowledge-based inductive bias, metalearning, and incremental/cumulative learning. Interest in learning vector-valued functions was particularly sparked by multitask learning, a framework which tries to learn multiple, possibly different tasks simultaneously.\nMuch of the initial research in multitask learning in the machine learning community was algorithmic in nature, and applied to methods such as neural networks, decision trees and k-nearest neighbors in the 1990s. The use of probabilistic models and Gaussian processes was pioneered and largely developed in the context of geostatistics, where prediction over vector-valued output data is known as cokriging.
Geostatistical approaches to multivariate modeling are mostly formulated around the linear model of coregionalization (LMC), a generative approach for developing valid covariance functions that has been used for multivariate regression and in statistics for computer emulation of expensive multivariate computer codes. The regularization and kernel theory literature for vector-valued functions followed in the 2000s. While the Bayesian and regularization perspectives were developed independently, they are in fact closely related.\nNotation.\nIn this context, the supervised learning problem is to learn the function formula_1 which best predicts vector-valued outputs formula_2 given inputs (data) formula_3.\nIn general, each component of (formula_2) could have different input data (formula_10) with different cardinality (formula_11) and even different input spaces (formula_12).\nGeostatistics literature calls this case \"heterotopic\", and uses \"isotopic\" to indicate that each component of the output vector has the same set of inputs.\nHere, for simplicity in the notation, we assume the number and sample space of the data for each output are the same.\nRegularization perspective.\nFrom the regularization perspective, the problem is to learn formula_13 belonging to a reproducing kernel Hilbert space of vector-valued functions (formula_14). This is similar to the scalar case of Tikhonov regularization, with some extra care in the notation.\nformula_15\nIt is possible, though non-trivial, to show that a representer theorem also holds for Tikhonov regularization in the vector-valued setting.\nNote, the matrix-valued kernel formula_16 can also be defined by a scalar kernel formula_17 on the space formula_18.
An isometry exists between the Hilbert spaces associated with these two kernels:\nGaussian process perspective.\nThe estimator of the vector-valued regularization framework can also be derived from a Bayesian viewpoint using Gaussian process methods in the case of a finite-dimensional reproducing kernel Hilbert space. The derivation is similar to the scalar-valued case described in Bayesian interpretation of regularization. The vector-valued function formula_20, consisting of formula_21 outputs formula_22, is assumed to follow a Gaussian process:\nwhere formula_24 is now a vector of the mean functions formula_25 for the outputs and formula_26 is a positive definite matrix-valued function with entry formula_27 corresponding to the covariance between the outputs formula_28 and formula_29.\nFor a set of inputs formula_30, the prior distribution over the vector formula_31 is given by formula_32, where formula_33 is a vector that concatenates the mean vectors associated with the outputs and formula_34 is a block-partitioned matrix. The distribution of the outputs is taken to be Gaussian:\nwhere formula_36 is a diagonal matrix with elements formula_37 specifying the noise for each output. Using this form for the likelihood, the predictive distribution for a new vector formula_38 is:\nwhere formula_40 is the training data, and formula_41 is a set of hyperparameters for formula_42 and formula_43.\nEquations for formula_44 and formula_45 can then be obtained:\nwhere formula_48 has entries formula_49 for formula_50 and formula_51. Note that the predictor formula_52 is identical to the predictor derived in the regularization framework.
For non-Gaussian likelihoods, different methods such as Laplace approximation and variational methods are needed to approximate the estimators.\nExample kernels.\nSeparable.\nA simple, but broadly applicable, class of multi-output kernels can be separated into the product of a kernel on the input-space and a kernel representing the correlations among the outputs:\nIn matrix form: formula_58   \nwhere formula_59 is a formula_60 symmetric and positive semi-definite matrix. Note, setting formula_59 to the identity matrix treats the outputs as unrelated and is equivalent to solving the scalar-output problems separately.\nFor a slightly more general form, adding several of these kernels yields sums of separable kernels (SoS kernels).\nFrom regularization literature.\nDerived from regularizer.\nOne way of obtaining formula_56 is to specify a regularizer which limits the complexity of formula_1 in a desirable way, and then derive the corresponding kernel. For certain regularizers, this kernel will turn out to be separable.\n\"Mixed-effect regularizer\"\nwhere:\nwhere formula_69 is the matrix with all entries equal to 1.\nThis regularizer is a combination of limiting the complexity of each component of the estimator (formula_70) and forcing each component of the estimator to be close to the mean of all the components. Setting formula_71 treats all the components as independent and is the same as solving the scalar problems separately. Setting formula_72 assumes all the components are explained by the same function.\n\"Cluster-based regularizer\"\nwhere:\nwhere formula_85\nThis regularizer divides the components into formula_86 clusters and forces the components in each cluster to be similar.\n\"Graph regularizer\"\nwhere formula_88 is a matrix of weights encoding the similarities between the components\nwhere formula_90,   formula_91\nNote, formula_92 is the graph Laplacian.
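The separable kernels introduced earlier in this section can be assembled concretely as a Kronecker product of an input-kernel matrix with an output matrix B. The RBF kernel and the particular B below are assumed illustrative choices.

```python
import numpy as np

# Separable multi-output kernel: K((x,i),(x',j)) = k(x,x') * B[i,j].
# For n inputs and D outputs, the full Gram matrix is kron(B, Kx).
# The RBF kernel and B are assumed illustrative choices.

def rbf(X1, X2, ell=0.5):
    d2 = (X1[:, None] - X2[None, :]) ** 2
    return np.exp(-0.5 * d2 / ell**2)

X = np.linspace(0, 1, 5)
Kx = rbf(X, X)                            # n x n input kernel
B = np.array([[1.0, 0.8], [0.8, 1.0]])    # D x D output correlations
K = np.kron(B, Kx)                        # (n*D) x (n*D) Gram matrix

# Setting B to the identity decouples the outputs into independent
# scalar problems, as noted in the text.
K_indep = np.kron(np.eye(2), Kx)

# Both Gram matrices are symmetric positive semi-definite:
print(np.linalg.eigvalsh(K).min() >= -1e-8,
      np.linalg.eigvalsh(K_indep).min() >= -1e-8)
```

Since B and Kx are each positive semi-definite, their Kronecker product is as well, so the construction always yields a valid kernel.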
See also: graph kernel.\nLearned from data.\nSeveral approaches to learning formula_59 from data have been proposed. These include: performing a preliminary inference step to estimate formula_59 from the training data, a proposal to learn formula_59 and formula_96 together based on the cluster regularizer, and sparsity-based approaches which assume only a few of the features are needed.\nFrom Bayesian literature.\nLinear model of coregionalization (LMC).\nIn LMC, outputs are expressed as linear combinations of independent random functions such that the resulting covariance function (over all inputs and outputs) is a valid positive semidefinite function. Assuming formula_21 outputs formula_98 with formula_99, each formula_100 is expressed as:\nwhere formula_102 are scalar coefficients and the independent functions formula_103 have zero mean and covariance cov formula_104 if formula_105 and 0 otherwise. The cross covariance between any two functions formula_28 and formula_107 can then be written as:\nwhere the functions formula_109, with formula_110 and formula_111 have zero mean and covariance cov formula_112 if formula_113 and formula_105. But formula_115 is given by formula_27. Thus the kernel formula_42 can now be expressed as\nwhere each formula_119 is known as a coregionalization matrix. Therefore, the kernel derived from LMC is a sum of the products of two covariance functions, one that models the dependence between the outputs, independently of the input vector formula_120 (the coregionalization matrix formula_121), and one that models the input dependence, independently of formula_98 (the covariance function formula_123).\nIntrinsic coregionalization model (ICM).\nThe ICM is a simplified version of the LMC, with formula_124. ICM assumes that the elements formula_125 of the coregionalization matrix formula_126 can be written as formula_127, for some suitable coefficients formula_128.
With this form for formula_125:\nwhere\nIn this case, the coefficients\nand the kernel matrix for multiple outputs becomes formula_133. ICM is much more restrictive than the LMC since it assumes that each basic covariance formula_134 contributes equally to the construction of the autocovariances and cross covariances for the outputs. However, the computations required for the inference are greatly simplified.\nSemiparametric latent factor model (SLFM).\nAnother simplified version of the LMC is the semiparametric latent factor model (SLFM), which corresponds to setting formula_135 (instead of formula_136 as in ICM). Thus each latent function formula_137 has its own covariance.\nNon-separable.\nWhile simple, the structure of separable kernels can be too limiting for some problems.\nNotable examples of non-separable kernels in the regularization literature include:\nIn the Bayesian perspective, LMC produces a separable kernel because the output functions evaluated at a point formula_120 only depend on the values of the latent functions at formula_120. A non-trivial way to mix the latent functions is by convolving a base process with a smoothing kernel. If the base process is a Gaussian process, the convolved process is Gaussian as well. We can therefore exploit convolutions to construct covariance functions. This method of producing non-separable kernels is known as process convolution. Process convolutions were introduced for multiple outputs in the machine learning community as \"dependent Gaussian processes\".\nImplementation.\nWhen implementing an algorithm using any of the kernels above, practical considerations of tuning the parameters and ensuring reasonable computation time must be considered.\nRegularization perspective.\nApproached from the regularization perspective, parameter tuning is similar to the scalar-valued case and can generally be accomplished with cross validation. Solving the required linear system is typically expensive in memory and time. 
If the kernel is separable, a coordinate transform can convert formula_140 to a block-diagonal matrix, greatly reducing the computational burden by solving D independent subproblems (plus the eigendecomposition of formula_59). In particular, for a least squares loss function (Tikhonov regularization), there exists a closed form solution for formula_142:\nBayesian perspective.\nThere are many works related to parameter estimation for Gaussian processes. Some methods, such as maximization of the marginal likelihood (also known as evidence approximation, type II maximum likelihood, or empirical Bayes) and least squares, give point estimates of the parameter vector formula_41. There are also works employing a full Bayesian inference by assigning priors to formula_41 and computing the posterior distribution through a sampling procedure. For non-Gaussian likelihoods, there is no closed form solution for the posterior distribution or for the marginal likelihood. However, the marginal likelihood can be approximated under a Laplace, variational Bayes, or expectation propagation (EP) approximation framework for multiple output classification and used to find estimates for the hyperparameters.\nThe main computational problem in the Bayesian viewpoint is the same as the one appearing in regularization theory of inverting the matrix\nThis step is necessary for computing the marginal likelihood and the predictive distribution. For most proposed approximation methods to reduce computation, the computational efficiency gained is independent of the particular method (e.g. LMC, process convolution) used to compute the multi-output covariance matrix.
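The block-diagonalization trick for separable kernels mentioned above can be sketched directly: after rotating the outputs into the eigenbasis of B, the large Tikhonov system splits into D independent n-by-n solves. The kernel, data, and regularization strength below are assumed illustrative choices.

```python
import numpy as np

# For a separable kernel kron(B, Kx), the Tikhonov system
#   (kron(B, Kx) + lam * I) c = y
# splits into D independent n x n systems after eigendecomposing B.
# Kernel, data, and lam are illustrative assumptions.

rng = np.random.default_rng(1)
n, D, lam = 50, 3, 0.1
X = rng.uniform(0, 1, n)
Kx = np.exp(-0.5 * (X[:, None] - X[None, :]) ** 2 / 0.2**2)
B = np.array([[2.0, 0.5, 0.0],
              [0.5, 1.0, 0.3],
              [0.0, 0.3, 1.5]])            # PSD output matrix
Y = rng.normal(size=(D, n))                # stacked outputs

# Direct solve on the full (n*D) x (n*D) system:
K = np.kron(B, Kx)
c_direct = np.linalg.solve(K + lam * np.eye(n * D), Y.ravel())

# Block-diagonal solve: rotate outputs by eigenvectors of B, then each
# rotated output d solves (sigma_d * Kx + lam * I) c_d = y_d.
sigma, U = np.linalg.eigh(B)
Y_rot = U.T @ Y                            # into B's eigenbasis
C_rot = np.stack([np.linalg.solve(s * Kx + lam * np.eye(n), y)
                  for s, y in zip(sigma, Y_rot)])
c_fast = (U @ C_rot).ravel()               # rotate back and stack

print(np.allclose(c_direct, c_fast))
```

The two solutions agree, while the second route replaces one O((nD)^3) solve with D solves of size n plus a small D-by-D eigendecomposition.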
A summary of different methods for reducing computational complexity in multi-output Gaussian processes is presented in.", "Automation-Control": 0.684803009, "Qwen2": "Yes"} {"id": "42253995", "revid": "18872885", "url": "https://en.wikipedia.org/wiki?curid=42253995", "title": "Kernel adaptive filter", "text": "In signal processing, a kernel adaptive filter is a type of nonlinear adaptive filter. An adaptive filter is a filter that adapts its transfer function to changes in signal properties over time by minimizing an error or loss function that characterizes how far the filter deviates from ideal behavior. The adaptation process is based on learning from a sequence of signal samples and is thus an online algorithm. A nonlinear adaptive filter is one in which the transfer function is nonlinear.\nKernel adaptive filters implement a nonlinear transfer function using kernel methods. In these methods, the signal is mapped to a high-dimensional linear feature space and a nonlinear function is approximated as a sum over kernels, whose domain is the feature space. If this is done in a reproducing kernel Hilbert space, a kernel method can be a universal approximator for a nonlinear function. Kernel methods have the advantage of having convex loss functions, with no local minima, and of being only moderately complex to implement.\nBecause high-dimensional feature space is linear, kernel adaptive filters can be thought of as a generalization of linear adaptive filters. As with linear adaptive filters, there are two general approaches to adapting a filter: the least mean squares filter (LMS) and the recursive least squares filter (RLS).\nSelf organising kernel adaptive filters that use iteration to achieve convex LMS error minimisation address some of the statistical and practical issues of non-linear models that do not arise in the linear case. 
Regularisation is a particularly important feature for non-linear models and is also often used in linear adaptive filters to reduce statistical uncertainties. However, because nonlinear filters typically have a much higher potential structural complexity (or higher dimensional feature space) compared to the subspace actually required, regularisation of some kind must deal with the under-determined model. Though some specific forms of parameter regularisation, such as that prescribed by Vapnik's SRM and SVM, address the dimensionality problem statistically to some extent, there remain further statistical and practical issues for truly adaptive non-linear filters. Adaptive filters are often used for tracking the behaviour of a time-varying system, or systems which cannot be fully modelled from the data and structure available; hence the models may need to adapt not only parameters, but structure too. \nWhere structural parameters of kernels are derived directly from data being processed (as in the above \"Support Vector\" approach), there are convenient opportunities for analytically robust methods of self-organisation of the kernels available to the filter. The linearised feature space induced by kernels allows linear projection of new samples onto the current structure of the model, where novelty in new data can be easily differentiated from noise-borne errors which should not result in a change to model structure. Analytical metrics for structure analysis can be used to parsimoniously grow model complexity when required, or optimally prune the existing structure when processor resource limits are reached. Structure updates are also relevant when system variation is detected and the long-term memory of the model should be updated, as for the Kalman filter case in linear filters.
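One of the simplest kernel adaptive filters, kernel least-mean-squares (KLMS), can be sketched in a few lines: the filter's nonlinear transfer function is a growing sum of kernels centered at past inputs, updated LMS-style. The Gaussian kernel width, step size, and toy nonlinear system below are assumed illustrative choices.

```python
import numpy as np

# Minimal kernel LMS (KLMS) sketch: f_t = f_{t-1} + eta * e_t * k(u_t, .)
# Kernel width, step size, and the toy nonlinear system are assumptions.

def gauss(u, v, width=0.5):
    return np.exp(-np.sum((u - v) ** 2) / (2 * width**2))

rng = np.random.default_rng(2)
eta = 0.5                      # LMS step size
centers, weights = [], []      # kernel expansion, grown online

def predict(u):
    return sum(w * gauss(u, c) for w, c in zip(weights, centers))

errors = []
x = rng.uniform(-1, 1, size=500)
for t in range(2, len(x)):
    u = x[t-2:t]                        # input vector: last two samples
    d = np.tanh(x[t-1]) + 0.2 * x[t-2]  # desired output of a toy nonlinear map
    e = d - predict(u)                  # a-priori error
    centers.append(u)                   # allocate a new kernel unit
    weights.append(eta * e)             # LMS-style weight update
    errors.append(e ** 2)

# The squared error shrinks as the expansion learns the mapping:
print(np.mean(errors[:50]) > np.mean(errors[-50:]))
```

Note that the expansion grows with every sample; the sparsification and pruning criteria discussed in the text are exactly what keeps such a filter's structure bounded in practice.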
\nIterative gradient descent that is typically used in adaptive filters has also gained popularity in offline batch-mode support-vector-based machine learning because of its computational efficiency for large data set processing. Implementations are reported to handle over 100,000 training examples, in both time-series and batch data processing, using as little as 10 kB of RAM. Data sizes this large are challenging to the original formulations of support vector machines and other kernel methods, which, for example, relied on constrained optimisation using linear or quadratic programming techniques.", "Automation-Control": 0.8725130558, "Qwen2": "Yes"} {"id": "10174471", "revid": "33467233", "url": "https://en.wikipedia.org/wiki?curid=10174471", "title": "State Threads", "text": "The State Threads library is a small application library which provides a foundation for writing fast and highly scalable Internet applications (such as web servers, proxy servers, mail transfer agents, or any network-data-driven application) on Unix-like platforms.\nThis library combines the simplicity of the multithreaded programming paradigm, in which one thread supports each simultaneous connection, with the performance and scalability of an event-driven state machine architecture. In other words, this library offers a threading API for structuring an Internet application as a state machine.\nThe State Threads library is a derivative of the Netscape Portable Runtime library (NSPR) and therefore is distributed under the Mozilla Public License (MPL) version 1.1 or the GNU General Public License (GPL) version 2 or later.", "Automation-Control": 0.7406221628, "Qwen2": "Yes"} {"id": "29564947", "revid": "1167014007", "url": "https://en.wikipedia.org/wiki?curid=29564947", "title": "Okuma Corporation", "text": "Okuma Corporation is a machine tool builder based in Ōguchi, Aichi Prefecture, Japan.
It has a global market share in CNC machine tools such as CNC lathes, machining centers, and turn-mill machining centers. The company also offers FA (factory automation) products and servomotors.\nIt is listed on the Tokyo Stock Exchange and is a component of the Nikkei 225 stock index.\nHistory.\nThe company was founded in 1898, as the Okuma Noodle Machine Co., to manufacture and sell noodle-making machines. Eiichi Okuma, the founder of the original company, was working on how to make udon more effectively. He used a lathe to make the \"sticks\" that play an important role in cutting udon noodles, but the lathes used in Japan in those days were of poor precision. This was one of the main reasons that convinced Okuma to start making machine tools. In 1918 Eiichi established Okuma Machinery Works Ltd. and started selling the OS lathe.\nOkuma is a machine tool builder with a history of more than 100 years. Lathes were the main product category in the early days of the company. The line now includes many CNC machine tools, including lathes, machining centers (mills), multitasking (turn-mill) machines, and grinding machines. Okuma's Double-Column Machining Center has a large market share in Japan.\nTechnological development.\nMost machine tool builders source their CNC controls from partners such as Fanuc, Mitsubishi Electric, Siemens, and Heidenhain. Several builders have developed their own CNC controls over the years (including Mazak, Okuma, Haas, Dalian Kede and others), but Okuma is unusual among machine tool builders for the degree to which it designs and builds all of its own hardware, software, and machine components. This is the company's \"Single Source\" philosophy.\nOkuma's CNC control is called the \"OSP\" series. It offers closed-loop positioning via its absolute position feedback system.
The \"OSP\" name began as an abbreviation for \"Okuma Sampling Pathcontrol\".\nIn an industry that pushes hard for continual technological innovation, Okuma has often been an innovative leader. For example, it has been among the leaders of development for thermal compensation and collision avoidance. Thermal compensation is designing the machine elements and control to minimize the dimensional distortion that results from the heat generated by machining. This is done both by preventing heat buildup (for example, flowing coolant through machine elements formerly not cooled) and by detecting and compensating for temperature rises when they occur (for example, monitoring temperature with a sensor and using the sensor's output signal as input to the control logic). Collision avoidance is designing the machine to predict and prevent interference, for example, having the machine \"know\" the form and location of all fixturing so that it can foresee a crash and stop its own movement before crashing. Recent innovation includes technology to avoid chatter, both by predicting and preventing it and by early automatic detection and correction (via dynamic changes of speeds and feeds) when it does occur.", "Automation-Control": 0.9882850051, "Qwen2": "Yes"} {"id": "29566154", "revid": "23790359", "url": "https://en.wikipedia.org/wiki?curid=29566154", "title": "Feeder line (manufacturing)", "text": "A feeder line is a secondary assembly line which provides parts for use in a primary assembly line. 
Researchers assert that the traditional level scheduling methodology of assembly line planning is not effective unless feeder lines provide parts to the primary assembly line.", "Automation-Control": 0.9895573854, "Qwen2": "Yes"} {"id": "30474456", "revid": "35465059", "url": "https://en.wikipedia.org/wiki?curid=30474456", "title": "Sum-of-squares optimization", "text": "A sum-of-squares optimization program is an optimization problem with a linear cost function and a particular type of constraint on the decision variables. These constraints are of the form that when the decision variables are used as coefficients in certain polynomials, those polynomials should have the polynomial SOS property. When fixing the maximum degree of the polynomials involved, sum-of-squares optimization is also known as the Lasserre hierarchy of relaxations in semidefinite programming.\nSum-of-squares optimization techniques have been applied across a variety of areas, including control theory (in particular, for searching for polynomial Lyapunov functions for dynamical systems described by polynomial vector fields), statistics, finance and machine learning.\nOptimization problem.\nGiven a vector formula_1 and polynomials formula_2 for formula_3, formula_4, a sum-of-squares optimization problem is written as\nformula_5\nHere \"SOS\" represents the class of sum-of-squares (SOS) polynomials.\nThe quantities formula_6 are the decision variables. SOS programs can be converted to semidefinite programs (SDPs) using the duality of the SOS polynomial program and a relaxation for constrained polynomial optimization using positive-semidefinite matrices, see the following section.\nDual problem: constrained polynomial optimization.\nSuppose we have an formula_7-variate polynomial formula_8 , and suppose that we would like to minimize this polynomial over a subset formula_9. 
Suppose furthermore that the constraints on the subset formula_10 can be encoded using formula_11 polynomial equalities of degree at most formula_12, each of the form formula_13, where formula_14 is a polynomial of degree at most formula_12. A natural, though generally non-convex, program for this optimization problem is the following:
formula_16
subject to:
formula_17
where formula_18 is the formula_19-dimensional vector with one entry for every monomial in formula_20 of degree at most formula_21, so that for each multiset formula_22, formula_23; formula_24 is a matrix of coefficients of the polynomial formula_25 that we want to minimize, and formula_26 is a matrix of coefficients of the polynomial formula_27 encoding the formula_28-th constraint on the subset formula_29. The additional, fixed constant index in our search space, formula_30, is added for the convenience of writing the polynomials formula_25 and formula_32 in a matrix representation.
This program is generally non-convex, because the constraints are not convex. One possible convex relaxation for this minimization problem uses semidefinite programming to replace the rank-one matrix of variables formula_33 with a positive-semidefinite matrix formula_34: we index each monomial of size at most formula_12 by a multiset formula_36 of at most formula_12 indices, formula_38. For each such monomial, we create a variable formula_39 in the program, and we arrange the variables formula_39 to form the matrix formula_41, where formula_42 is the set of real matrices whose rows and columns are identified with multisets of elements from formula_7 of size at most formula_21.
We then write the following semidefinite program in the variables formula_39:
formula_46
subject to:
formula_47
formula_48
formula_49
formula_50
where again formula_24 is the matrix of coefficients of the polynomial formula_25 that we want to minimize, and formula_26 is the matrix of coefficients of the polynomial formula_27 encoding the formula_28-th constraint on the subset formula_29.
The third constraint ensures that the value of a monomial that appears several times within the matrix is equal throughout the matrix, and is added to make formula_34 respect the symmetries present in the quadratic form formula_58.
Duality.
One can take the dual of the above semidefinite program and obtain the following program:
formula_59
subject to:
formula_60
We have a variable formula_61 corresponding to the constraint formula_62 (where formula_63 is the matrix with all entries zero save for the entry indexed by formula_64), a real variable formula_65 for each polynomial constraint formula_66, and, for each group of multisets formula_67, a dual variable formula_68 for the symmetry constraint formula_69. The positive-semidefiniteness constraint ensures that formula_70 is a sum-of-squares of polynomials over formula_29: by a characterization of positive-semidefinite matrices, for any positive-semidefinite matrix formula_72, we can write formula_73 for vectors formula_74. Thus for any formula_75,
formula_76
where we have identified the vectors formula_77 with the coefficients of a polynomial of degree at most formula_21. This gives a sum-of-squares proof that the value formula_79 over formula_80.
The above can also be extended to regions formula_81 defined by polynomial inequalities.
Sum-of-squares hierarchy.
The sum-of-squares hierarchy (SOS hierarchy), also known as the Lasserre hierarchy, is a hierarchy of convex relaxations of increasing power and increasing computational cost.
For each natural number formula_82, the corresponding convex relaxation is known as the "formula_83-th level" or "formula_84-th round of the SOS hierarchy". The formula_85-st round, when formula_86, corresponds to a basic semidefinite program, or to sum-of-squares optimization over polynomials of degree at most formula_87. To augment the basic convex program at the formula_85-st level of the hierarchy to the formula_83-th level, additional variables and constraints are added so that the program considers polynomials of degree at most formula_90.
The SOS hierarchy derives its name from the fact that the value of the objective function at the formula_83-th level is bounded with a sum-of-squares proof using polynomials of degree at most formula_92 via the dual (see "Duality" above). Consequently, any sum-of-squares proof that uses polynomials of degree at most formula_92 can be used to bound the objective value, allowing one to prove guarantees on the tightness of the relaxation.
In conjunction with a theorem of Berg, this further implies that given sufficiently many rounds, the relaxation becomes arbitrarily tight on any fixed interval. Berg's result states that every non-negative real polynomial within a bounded interval can be approximated within accuracy formula_94 on that interval with a sum of squares of real polynomials of sufficiently high degree, and thus if formula_95 is the polynomial objective value as a function of the point formula_96, and if the inequality formula_97 holds for all formula_96 in the region of interest, then there must be a sum-of-squares proof of this fact.
Choosing formula_99 to be the minimum of the objective function over the feasible region, we have the result.
Computational cost.
When optimizing over a function in formula_100 variables, the formula_83-th level of the hierarchy can be written as a semidefinite program over formula_102 variables, and can be solved in time formula_102 using the ellipsoid method.
Sum-of-squares background.
A polynomial formula_104 is a "sum of squares" (SOS) if there exist polynomials formula_105 such that formula_106. For example,
formula_107
is a sum of squares since
formula_108
where
formula_109
Note that if formula_104 is a sum of squares then formula_111 for all formula_112. Detailed descriptions of polynomial SOS are available.
Quadratic forms can be expressed as formula_113, where formula_114 is a symmetric matrix. Similarly, polynomials of degree ≤ 2d can be expressed as
formula_115
where the vector formula_116 contains all monomials of degree formula_117. This is known as the Gram matrix form. An important fact is that formula_104 is SOS if and only if there exists a symmetric and positive-semidefinite matrix formula_114 such that formula_120.
This provides a connection between SOS polynomials and positive-semidefinite matrices.
Structured support vector machine.
The structured support-vector machine is a machine learning algorithm that generalizes the Support-Vector Machine (SVM) classifier. Whereas the SVM classifier supports binary classification, multiclass classification and regression, the structured SVM allows training of a classifier for general structured output labels.
As an example, a sample instance might be a natural language sentence, and the output label is an annotated parse tree.
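Looking back at the sum-of-squares background: the Gram-matrix characterization can be checked numerically for a concrete polynomial. This is only an illustrative sketch (the example polynomial and monomial basis are chosen for demonstration, not taken from the text): p(x) = x^4 + 4x^3 + 6x^2 + 4x + 1 equals (x^2 + 2x + 1)^2, so it is SOS, and a positive-semidefinite Gram matrix witnessing this can be exhibited directly.

```python
import numpy as np

# With monomial vector z = [1, x, x^2], the Gram matrix Q = v v^T with
# v = [1, 2, 1] satisfies p(x) = z^T Q z and is positive semidefinite,
# witnessing that p is a sum of squares.
v = np.array([1.0, 2.0, 1.0])
Q = np.outer(v, v)

# Q is PSD: all eigenvalues are non-negative (here Q has rank one).
eigenvalues = np.linalg.eigvalsh(Q)
assert np.all(eigenvalues >= -1e-9)

# Recover the coefficients of z^T Q z: the coefficient of x^k is the
# sum of Q[i, j] over index pairs with i + j = k (since z[i] = x^i).
coeffs = np.zeros(5)
for i in range(3):
    for j in range(3):
        coeffs[i + j] += Q[i, j]

# Coefficients of 1 + 4x + 6x^2 + 4x^3 + x^4, in ascending order.
assert np.allclose(coeffs, [1, 4, 6, 4, 1])
```

Finding such a Q in general (rather than verifying a known one) is exactly the semidefinite feasibility problem described above.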
Training consists of showing the classifier pairs of correct sample and output label. After training, the structured SVM model allows one to predict the corresponding output label for new sample instances; that is, given a natural language sentence, the classifier can produce the most likely parse tree.
Training.
For a set of formula_1 training instances formula_2, formula_3 from a sample space formula_4 and label space formula_5, the structured SVM minimizes the following regularized risk function.
The function is convex in formula_7 because the maximum of a set of affine functions is convex. The function formula_8 measures a distance in label space and is an arbitrary function (not necessarily a metric) satisfying formula_9 and formula_10. The function formula_11 is a feature function, extracting some feature vector from a given sample and label. The design of this function depends very much on the application.
Because the regularized risk function above is non-differentiable, it is often reformulated as a quadratic program by introducing one slack variable formula_12 for each sample, each representing the value of the maximum. The standard structured SVM primal formulation is given as follows.
Inference.
At test time, only a sample formula_14 is known, and a prediction function formula_15 maps it to a predicted label from the label space formula_5. For structured SVMs, given the vector formula_7 obtained from training, the prediction function is the following.
Therefore, the maximizer over the label space is the predicted label. Solving for this maximizer is the so-called inference problem, and is similar to making a maximum a-posteriori (MAP) prediction in probabilistic models. Depending on the structure of the function formula_19, solving for the maximizer can be a hard problem.
Separation.
The above quadratic program involves a very large, possibly infinite number of linear inequality constraints.
In general, the number of inequalities is too large to be optimized over explicitly. Instead, the problem is solved by delayed constraint generation, in which only a finite and small subset of the constraints is used. Optimizing over a subset of the constraints enlarges the feasible set and yields a solution that provides a lower bound on the objective. To test whether the solution formula_7 violates constraints of the complete set of inequalities, a separation problem needs to be solved. As the inequalities decompose over the samples, for each sample formula_21 the following problem needs to be solved.
The right-hand-side objective to be maximized is composed of the constant formula_23 and a term dependent on the variables optimized over, namely formula_24. If the achieved right-hand-side objective is less than or equal to zero, no violated constraints for this sample exist. If it is strictly larger than zero, the most violated constraint with respect to this sample has been identified. The problem is enlarged by this constraint and re-solved. The process continues until no violated inequalities can be identified.
If the constants are dropped from the above problem, we obtain the following problem to be solved.
This problem looks very similar to the inference problem. The only difference is the addition of the term formula_26. Most often, it is chosen such that it has a natural decomposition in label space.
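The separation loop just described, which alternates between loss-augmented inference and re-solving over the enlarged working set, can be sketched on a toy multiclass problem. Everything below is a hedged illustration: the data, the joint feature map `psi`, the 0/1 label loss `delta`, and the crude subgradient inner solver (used here in place of a proper QP solver) are stand-ins for the general formulation, not part of the original text.

```python
import numpy as np

labels = [0, 1, 2]

def psi(x, y):
    # Joint feature map: place the input vector in the block of class y.
    f = np.zeros(2 * len(labels))
    f[2 * y:2 * y + 2] = x
    return f

def delta(y_true, y):
    # 0/1 label loss; satisfies delta(y, y) == 0.
    return 0.0 if y == y_true else 1.0

def most_violated(w, x, y_true):
    # Separation / loss-augmented inference: argmax over the label space.
    return max(labels, key=lambda y: delta(y_true, y) + w @ psi(x, y))

def train(data, rounds=100, lr=0.1, reg=0.01):
    w = np.zeros(2 * len(labels))
    for _ in range(rounds):
        for x, y_true in data:
            y_hat = most_violated(w, x, y_true)
            margin = delta(y_true, y_hat) + w @ (psi(x, y_hat) - psi(x, y_true))
            if margin > 0:  # violated constraint found: take a subgradient step
                w = (1 - lr * reg) * w - lr * (psi(x, y_hat) - psi(x, y_true))
    return w

data = [(np.array([1.0, 0.0]), 0),
        (np.array([0.0, 1.0]), 1),
        (np.array([-1.0, -1.0]), 2)]
w = train(data)
# Plain (non-loss-augmented) inference, as at test time.
predictions = [max(labels, key=lambda y: w @ psi(x, y)) for x, _ in data]
```

When `margin` is non-positive for every sample, no violated inequality remains and the loop terminates, mirroring the stopping condition in the text.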
In that case, the influence of formula_27 can be encoded into the inference problem, and solving for the most violating constraint is equivalent to solving the inference problem.
Jakob Stoustrup.
Jakob Stoustrup is a Danish researcher employed at Aalborg University, where he serves as professor of control theory at the Department of Electronic Systems.
Education.
Jakob Stoustrup received the M.Sc. degree in Electrical Engineering in 1987, and the Ph.D. degree in Applied Mathematics in 1991, both from the Technical University of Denmark.
Background, career, and scientific contributions.
After a first position as teaching assistant at the Technical University of Denmark, and a stay as visiting researcher at Eindhoven University of Technology, The Netherlands, in 1988, he became Senior Researcher sponsored by the Danish Technical Research Council in 1991. He was Assistant Professor 1991–1995 and Associate Professor 1995–1996, both at the Department of Mathematics, Technical University of Denmark. He was visiting professor at the University of Strathclyde, Glasgow, U.K., in 1996, and later visiting professor at the Mittag-Leffler Institute, Stockholm, Sweden, in 2003. From 1997 to 2013, and again since 2016, he has been (full) Professor at Automation & Control, Aalborg University, and from 2006 to 2013 he acted as Head of Research for the Department of Electronic Systems. From 2014 to 2016 he acted as Chief Scientist with the Pacific Northwest National Laboratory, where he led the Control of Complex Systems Initiative. In 2017, Jakob Stoustrup was appointed as pro-dean for the TECH Faculty at Aalborg University.
Stoustrup has been a member of the Swedish Research Council (Signals and Systems), the Norwegian Research Council, the European Research Council, and the Danish Research Council for Technology and Production Sciences.
He has acted as associate editor, guest editor, and editorial board member of international journals. On several occasions, Jakob Stoustrup has acted as plenary speaker at international conferences, and he has also acted as General Chair for such events. Jakob Stoustrup has been appointed by the Institute of Electrical and Electronics Engineers as Chairman of a Control Systems Society/Robotics & Automation Society Joint Chapter. In 2008, Jakob Stoustrup was elected Chairman of a Technical Committee of the International Federation of Automatic Control, TC6.4. In 2011, he was appointed a member of the Technical Board of the International Federation of Automatic Control. Jakob Stoustrup has extensive industrial cooperation and has been CEO of two technological start-up companies. He has led numerous major research projects supported by a large number of research grants and contracts.
The main contributions of Jakob Stoustrup have been to robust control theory and to the theory of fault-tolerant control systems. In these two areas he has published approximately 300 peer-reviewed scientific papers.
In 2009, Jakob Stoustrup proposed a novel research direction in the area of control theory, called plug-and-play control.
Unusually, his work spans the whole range from the development of new theoretical methods to practical industrial applications.
In the area of robust control theory, Jakob Stoustrup has in particular contributed to the development of loop transfer recovery methods for the design of H∞ controllers and to the development of robust design methods for systems having parametric uncertainty descriptions. Loop transfer recovery methods have been among the most popular model-based design methods used in industry for decades, due to their intuitive relationships between full state feedback designs and observer-based designs.
Loop transfer recovery methods were originally designed as an extension to the LQG design methodology, but Jakob Stoustrup and his co-workers advanced the recovery methods to the area of H∞ control, thereby allowing robustness aspects to be included directly in the design paradigm.
Jakob Stoustrup's contributions to the design of robust controllers for systems with parametric uncertainty descriptions have mainly focused on establishing methods based on convex optimization. Whereas parametric uncertainty descriptions are often natural candidates for systems with first-principles models, because they reflect the variation of physical parameters, the underlying optimization problems often turn out to be non-convex, meaning that they do not readily admit efficient on-line solutions with guaranteed performance. However, in the work of Stoustrup and co-workers, it was described how a class of such problems could be turned into convex optimization problems, and explicit algorithms for efficient solutions were suggested.
One theoretical result from Stoustrup in the area of robust control states that for a fairly general class of systems, the order of a decentralized H∞ controller tends to infinity as the performance approaches its optimal value, and in fact that not even an infinite-dimensional (causal) controller exists in that case.
In the area of fault-tolerant control systems, the main contribution of Jakob Stoustrup has been to introduce a number of optimization-based methods for solving fault diagnosis and fault-tolerant control problems.
The results include explicit methods for time-varying, non-linear, and uncertain systems for the design of fault diagnosis and fault-tolerant control systems.
Inspired by his previous work in the area of robust control theory, Jakob Stoustrup and his co-workers have proposed a general architecture for the modeling and design of fault diagnosis and fault-tolerant control systems, handling the above-mentioned challenges.
One theoretical result by Jakob Stoustrup in the area of fault-tolerant control systems provides a positive answer to a previously open problem. By a constructive proof it is established that, under mild conditions, a feedback controller for a system with two or more sensors always exists such that the system remains stable if the signal from any one of the sensors disappears. It is also shown, however, that the smallest order of such a controller can be unboundedly large.
Beyond the theoretical achievements mentioned above, Jakob Stoustrup has brought a significant number of the theoretical results into actual industrial practice. Jakob Stoustrup and his group have worked with a significant number of industries in a wide range of industrial sectors. Examples of industrial applications from his group include:
These industrial applications have been carried out by Jakob Stoustrup in cooperation with more than 50 industrial companies in several countries.
Unimat.
The Unimat was a series of combination machines sold for light hobbyist engineering, such as model engineering. They were distinctive in that the same major components could be re-arranged into either a lathe or a milling machine.
It covers a range of commercially sold machines intended for machining and metalworking for model making hobbyists, manufactured by the Emco company. The machines enable the user to have a drill press, lathe and milling machine.
Most of the Unimat range is no longer in production, but the smallest Unimat 1 and its variants are now produced by The Cool Tool GmbH.
Models.
"TheCoolTool" company.
This model of the Unimat 1 has many plastic parts. It is capable of working mostly wood and plastics.
This model of the Unimat 1 has many plastic parts and plastic machining cross slides. It is capable of working wood, plastic and soft metals, i.e. aluminum and brass. It is the same machine as the basic model with additional capabilities such as wood turning, sawing, drilling, milling and metal turning.
This is a collection of enhancements for any of the Unimat 1 tools. It includes options for more power, a table saw and a router table.
This model of the Unimat 1 has been upgraded to metal parts and cross slides that give the unit a higher level of accuracy. It is capable of working any of the materials from the Basic or Classic versions plus soft steel.
Industrial civilization.
Industrial civilization refers to the state of civilization following the Industrial Revolution, characterised by widespread use of powered machines. The transition of an individual region from pre-industrial society into an industrial society is referred to as the process of industrialisation, which may occur in different regions of the world at different times. Individual regions may specialise further as the civilisation continues to advance, resulting in some regions transitioning to a service economy, information society, or post-industrial society (these are still dependent on industry, but allow individuals to move out of manufacturing jobs). The present era is sometimes referred to as the Information Age.
De-industrialization of a region may occur for a range of reasons.
Industrial civilization has allowed significant growth both in world population, thanks to mechanised agriculture and advances in modern medicine, and in the standard of living.
Such a civilization is mostly dependent on fossil fuels, with efforts underway to find alternatives for energy production. Some areas have exhibited de-industrialization as certain industries go into decline or are superseded.
Contrast with other terms.
Contrast with industrial society.
"Industrial civilization" refers to the broader state of civilization, which spans multiple societies; "industrial society" refers just to specific segments (within the civilization) dependent on manufacturing jobs, whilst industrial civilisation as a whole involves many interdependent regions (via international trade) specialised in different ways, including information society and service economy. Note that these societies are still dependent on industrial civilization for their goods, and for food imports coming from mechanised agriculture.
Contrast with industrial revolution.
The "industrial revolution" is the historical event that ushered in industrial civilization. The modern world has evolved further following developments in mass production and information technology (allowing the service economy and information society).
Contrast with industrialisation.
"Industrialisation" is the process by which any individual area is transformed. Industrial civilisation as a whole may have regions that still benefit from industrial societies without being industrialised themselves, or that have specialised in other ways (e.g.
service economies).
Visual sensor network.
A visual sensor network, smart camera network, or intelligent camera network is a network of spatially distributed smart camera devices capable of processing, exchanging data, and fusing images of a scene from a variety of viewpoints into some form more useful than the individual images. A visual sensor network may be a type of wireless sensor network, and much of the theory and application of the latter applies to the former. The network generally consists of the cameras themselves, which have some local image processing, communication and storage capabilities, and possibly one or more central computers, where image data from multiple cameras is further processed and fused (this processing may, however, simply take place in a distributed fashion across the cameras and their local controllers). Visual sensor networks also provide some high-level services to the user so that the large amount of data can be distilled into information of interest using specific queries.
The primary difference between visual sensor networks and other types of sensor networks is the nature and volume of information the individual sensors acquire: unlike most sensors, cameras are directional in their field of view, and they capture a large amount of visual information which may be partially processed independently of data from other cameras in the network. Alternatively, one may say that while most sensors measure some value such as temperature or pressure, visual sensors measure "patterns". In light of this, communication in visual sensor networks differs substantially from that in traditional sensor networks.
Applications.
Visual sensor networks are most useful in applications involving area surveillance, tracking, and environmental monitoring.
Of particular use in surveillance applications is the ability to perform a dense 3D reconstruction of a scene and to store data over a period of time, so that operators can view events as they unfold over any period of time (including the current moment) from any arbitrary viewpoint in the covered area, even allowing them to "fly" around the scene in real time. High-level analysis using object recognition and other techniques can intelligently track objects (such as people or cars) through a scene, and even determine what they are doing, so that certain activities can be automatically brought to the operator's attention. Another possibility is the use of visual sensor networks in telecommunications, where the network would automatically select the "best" view (perhaps even an arbitrarily generated one) of a live event.
SmartCAM.
SmartCAM is a suite of Computer-Aided Manufacturing (CAM) and CAD/CAM software applications that uses toolpath modeling to assist CNC machinists in creating computer-numerically controlled (CNC) programs that direct CNC machine tools.
The SmartCAM family of applications includes systems in support of CNC milling, turning, mill/turning, Wire EDM and fabrication.
One of the pioneers of "stand-alone" CAM systems available for the personal computer, SmartCAM was initially developed in 1984 by Point Control Company in Eugene, Oregon; after a series of corporate acquisitions from 1994 to 2001, ownership and development have since 2003 been conducted by SmartCAMcnc in Springfield, Oregon.
Control system.
A control system manages, commands, directs, or regulates the behavior of other devices or systems using control loops. It can range from a single home heating controller using a thermostat controlling a domestic boiler to large industrial control systems which are used for controlling processes or machines. Control systems are designed via the control engineering process.
For continuously modulated control, a feedback controller is used to automatically control a process or operation. The control system compares the value or status of the process variable (PV) being controlled with the desired value or setpoint (SP), and applies the difference as a control signal to bring the process variable output of the plant to the same value as the setpoint.
For sequential and combinational logic, software logic, such as in a programmable logic controller, is used.
Logic control.
Logic control systems for industrial and commercial machinery were historically implemented by interconnected electrical relays and cam timers using ladder logic. Today, most such systems are constructed with microcontrollers or more specialized programmable logic controllers (PLCs). The notation of ladder logic is still in use as a programming method for PLCs.
Logic controllers may respond to switches and sensors and can cause the machinery to start and stop various operations through the use of actuators. Logic controllers are used to sequence mechanical operations in many applications. Examples include elevators, washing machines and other systems with interrelated operations. An automatic sequential control system may trigger a series of mechanical actuators in the correct sequence to perform a task. For example, various electric and pneumatic transducers may fold and glue a cardboard box, fill it with the product and then seal it in an automatic packaging machine.
PLC software can be written in many different ways – ladder diagrams, SFC (sequential function charts) or statement lists.
On–off control.
On–off control uses a feedback controller that switches abruptly between two states.
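A minimal sketch of such a two-state controller, with a small hysteresis band added around the setpoint (the function name and the band parameter are illustrative assumptions, not from the text):

```python
def on_off_control(pv, sp, output_on, band=0.5):
    """Two-state (on-off) controller for a heating process.

    pv: process variable (e.g. room temperature)
    sp: setpoint; output_on: current actuator state
    band: hysteresis half-width, preventing rapid cycling near the setpoint
    """
    if pv < sp - band:
        return True    # well below setpoint: switch the heater on
    if pv > sp + band:
        return False   # well above setpoint: switch the heater off
    return output_on   # inside the band: keep the current state
```

With a setpoint of 20 °C and a 0.5 °C band, the heater turns on below 19.5 °C and off above 20.5 °C; between those values it holds its last state, which is the role that mechanical hysteresis plays in a real bi-metallic thermostat.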
A simple bi-metallic domestic thermostat can be described as an on–off controller. When the temperature in the room (PV) goes below the user setting (SP), the heater is switched on. Another example is a pressure switch on an air compressor: when the pressure (PV) drops below the setpoint (SP), the compressor is powered. Refrigerators and vacuum pumps contain similar mechanisms. Simple on–off control systems like these can be cheap and effective.
Fuzzy logic.
Fuzzy logic is an attempt to apply the easy design of logic controllers to the control of complex, continuously varying systems. Basically, a measurement in a fuzzy logic system can be partly true.
The rules of the system are written in natural language and translated into fuzzy logic. For example, the design for a furnace would start with: "If the temperature is too high, reduce the fuel to the furnace. If the temperature is too low, increase the fuel to the furnace."
Measurements from the real world (such as the temperature of a furnace) are "fuzzified", the logic is calculated arithmetically (as opposed to with Boolean logic), and the outputs are "de-fuzzified" to control equipment.
When a robust fuzzy design is reduced to a single, quick calculation, it begins to resemble a conventional feedback loop solution, and it might appear that the fuzzy design was unnecessary.
However, the fuzzy logic paradigm may provide scalability for large control systems where conventional methods become unwieldy or costly to derive.
Fuzzy electronics is an electronic technology that uses fuzzy logic instead of the two-value logic more commonly used in digital electronics.
Physical implementation.
Control system implementations range from compact controllers, often with dedicated software for a particular machine or device, to distributed control systems for industrial process control of a large physical plant.
Logic systems and feedback controllers are usually implemented with programmable logic controllers.
Polygonal turning.
Polygonal turning (or polygon turning) is a machining process which allows non-circular forms (polygons) to be machine-turned without interrupting the rotation of the raw material.
Technical details.
Polygonally turned parts may have several points, teeth, or other forms at the ends or along their circumference. The technique requires synchronisation of the movement of the polygonal turning mill and the part being machined. Polygonal turning allows rapid production and clean machining of advanced geometries.
The polygon turning unit has a multitude of inserts and is synchronized so that when an insert cuts the turning bar stock, it cuts the bar at the same radial position each time the workpiece rotates.
This enables geometries such as hexes, squares and flats to be machined at faster speeds than by milling.
Historical notes.
The Polygonal Turning Corporation of Marquette, Michigan manufactured shaped joinery products for domestic use during the 1890s.
Spring Back Compensation.
Due to the plastic-elastic characteristics of metals, any deformation of sheet metal at room temperature typically involves both elastic and plastic deformation. After the metal workpiece is removed from the tool or deformation implement, the elastic deformation is released and only the plastic deformation remains. When a metal forming tool is planned and designed to deform a workpiece, the shape imparted by the tool will be a combination of elastic and plastic deformation. The release of the elastic deformation is the spring back often observed at the end of a metal forming process.
The spring back has to be compensated to achieve an accurate result.
Usually, that is realized by overbending the material by an amount corresponding to the magnitude of the spring back. For the practical side of the bending process, this means the bending former enters deeper into the bending prism.
For other sheet metal forming operations, like drawing, it entails deforming the sheet metal past the planned net shape of the part, so that when the spring back is released from the part, the plastic deformation in that part delivers the desired shape. In the case of complex tools, the spring back has to be considered already in the engineering and construction phases. Therefore, complex software simulations are used. Frequently this is not enough to deliver the desired results. In such cases practical experiments are done, using a trial-and-error-plus-experience method to correct the tool.
However, the results (workpieces) are only stable if all influencing factors remain the same.\nThis mainly includes:\nThe list of factors may be continued. Spring back assessment of final formed products is a difficult problem and is affected by the complexity of the formed shape. The NUMISHEET 93 conference benchmark problem involves the draw bending of a U-channel using three measured parameters. Parameterless approaches have been proposed for more complex geometries but need validation.\nPractical example: electronic bending tools with spring-back compensation.\nManufacturers of electrical assemblies produce components that are flat, using copper and aluminum. The mechanical properties of copper and aluminum are very different and require different programmable inputs in order to achieve the same dimensional characteristics. This variation in inputs is due to spring-back compensation.\nBending technology for flat material is required that measures each bend angle and provides spring-back compensation; this yields truly accurate bend angles on flat material. It is attained by using bending prisms with electronic angular measurement technology. While bending, two flat bolts supporting the material turn with it. The bolts are directly connected to the angular sensors, and the machine control then calculates the required final stroke. The spring back of every bend is thus compensated regardless of material type.\nIf the measuring accuracy is 0.1°, a high angle accuracy of ±0.2° is achieved instantly with the first workpiece without any rework. Because no adjustments are required, material waste and setup times drop considerably.
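The measure-and-correct cycle described above can be sketched in code. This is a minimal sketch assuming a linear springback model (the 5% springback factor, the target angle and the helper names are illustrative assumptions, not values from the article):

```python
def compensated_bend_angle(target_angle: float, springback_factor: float) -> float:
    """Overbend angle needed so the released part relaxes to the target.

    Assumes a simple linear model (an illustrative assumption):
    unloaded_angle = springback_factor * loaded_angle, with factor < 1.
    """
    return target_angle / springback_factor

def bend_with_measurement(target, bend_to, measure_released, tolerance=0.2):
    """Measure-and-correct cycle like the electronic bending tools described:
    bend, release, measure the remaining angle, deepen the stroke as needed."""
    angle = target
    actual = None
    for _ in range(10):                 # cap the number of correction passes
        bend_to(angle)
        actual = measure_released()
        error = target - actual
        if abs(error) <= tolerance:
            break
        angle += error                  # overbend by the remaining springback
    return actual

# Simulated press with 5% linear springback (illustrative numbers):
_state = {"loaded": 0.0}
bend_to = lambda a: _state.__setitem__("loaded", a)
measure_released = lambda: 0.95 * _state["loaded"]
result = bend_with_measurement(90.0, bend_to, measure_released)
```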
Even inconsistencies within a single piece of material are automatically adjusted.", "Automation-Control": 0.8157468438, "Qwen2": "Yes"} {"id": "39414235", "revid": "1461430", "url": "https://en.wikipedia.org/wiki?curid=39414235", "title": "Relaxed intersection", "text": "The relaxed intersection of \"m\" sets corresponds to the classical\nintersection between sets except that a small number of sets may be relaxed in order to avoid an empty intersection.\nThis notion can be used to solve constraint satisfaction problems\nthat are inconsistent by relaxing a small number of constraints.\nWhen a bounded-error approach is considered for parameter estimation,\nthe relaxed intersection makes it possible to be robust with respect\nto some outliers.\nDefinition.\nThe \"q\"-relaxed intersection of the \"m\" subsets\nformula_1\nof formula_2,\ndenoted by\nformula_3\nis the set of all\nformula_4\nwhich belong to all\nformula_5\n's, except\nformula_6\nat most.\nThis definition is illustrated by Figure 1.\nDefine\nformula_7\nWe have\nformula_8\nCharacterizing the q-relaxed intersection is thus a set inversion problem.\nExample.\nConsider 8 intervals:\nformula_9\nformula_10\nformula_11\nformula_12\nformula_13\nformula_14\nWe have\nformula_15\nformula_16\nformula_17\nformula_18\nformula_19\nformula_20\nformula_21\nRelaxed intersection of intervals.\nThe relaxed intersection of intervals is not necessarily an interval. We thus take\nthe interval hull of the result. If the formula_22's are intervals, the relaxed\nintersection can be computed with a complexity of \"m\" log(\"m\") by using\nMarzullo's algorithm. It suffices to\nsort all lower and upper bounds of the \"m\" intervals to represent the\nfunction formula_23.
Then, we easily get the set\nformula_24\nwhich corresponds to a union of intervals.\nWe then return the\nsmallest interval which contains this union.\nFigure 2 shows the function\nformula_25\nassociated with the previous example.\nRelaxed intersection of boxes.\nTo compute the \"q\"-relaxed intersection of \"m\" boxes of\nformula_26, we project all \"m\" boxes with respect to the \"n\" axes.\nFor each of the \"n\" groups of \"m\" intervals, we compute the \"q\"-relaxed intersection.\nWe return the Cartesian product of the \"n\" resulting intervals.\nFigure 3 provides an\nillustration of the 4-relaxed intersection of 6 boxes. Each point of the\nred box belongs to 4 of the 6 boxes.\nRelaxed union.\nThe \"q\"-relaxed union of formula_27 is defined by\nformula_28\nNote that when \"q\"=0, the relaxed union/intersection corresponds to\nthe classical union/intersection. More precisely, we have\nformula_29\nand\nformula_30\nDe Morgan's law.\nIf formula_31 denotes the complementary set of formula_22, we have\nformula_33\nformula_34\nAs a consequence\nformula_35\nRelaxation of contractors.\nLet formula_36 be \"m\" contractors for the sets formula_37,\nthen\nformula_38\nis a contractor for formula_39\nand\nformula_40\nis a contractor for formula_41, where\nformula_42\nare contractors for\nformula_43\nCombined with a branch-and-bound algorithm such as SIVIA (Set Inversion Via Interval Analysis), the \"q\"-relaxed\nintersection of \"m\" subsets of formula_2 can be computed.\nApplication to bounded-error estimation.\nThe \"q\"-relaxed intersection can be used for robust localization\nor for tracking.\nRobust observers can also be implemented using the relaxed intersection to be robust with respect to outliers.\nWe propose here a simple example\nto illustrate the method.\nConsider a model whose \"i\"th output is given by\nformula_45\nwhere formula_46.
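The interval computation described above (sorting the lower and upper bounds of the intervals to build the coverage function, in the spirit of Marzullo's algorithm) can be sketched as follows; the function name and the input intervals are illustrative, not from the article:

```python
def q_relaxed_intersection_hull(intervals, q):
    """Interval hull of the points covered by at least m - q of the m input
    intervals: a sketch of the endpoint-sorting approach described above.
    Intervals are closed (lower, upper) pairs; returns None if empty."""
    m = len(intervals)
    events = []
    for lo, hi in intervals:
        events.append((lo, +1))   # coverage rises at a lower bound
        events.append((hi, -1))   # coverage falls at an upper bound
    events.sort(key=lambda e: (e[0], -e[1]))  # openings before closings at ties
    coverage, hull_lo, hull_hi = 0, None, None
    for x, step in events:
        previous = coverage
        coverage += step
        if hull_lo is None and coverage >= m - q:
            hull_lo = x           # first point reaching the threshold
        if previous >= m - q and coverage < m - q:
            hull_hi = x           # last point before dropping below it
    return None if hull_lo is None else (hull_lo, hull_hi)

# Three intervals, of which one may be relaxed (q = 1):
hull = q_relaxed_intersection_hull([(0, 2), (1, 3), (10, 11)], q=1)
```

With q = 1 the outlier interval (10, 11) is discarded and the hull is (1, 2); with q = 0 the classical intersection is empty.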
Assume that we have\nformula_47\nwhere formula_48 and formula_49 are given by the following list\nformula_50\nThe sets formula_51 for different formula_6 are depicted in\nFigure 4.", "Automation-Control": 0.6244611144, "Qwen2": "Yes"} {"id": "31075291", "revid": "42144160", "url": "https://en.wikipedia.org/wiki?curid=31075291", "title": "Linear octree", "text": "A linear octree is an octree that is represented by a linear array instead of a tree data structure. \nTo simplify implementation, a linear octree is usually complete (that is, every internal node has exactly 8 child nodes), with the maximum permissible depth fixed a priori (making it sufficient to store the complete list of leaf nodes). That is, all the nodes of the octree can be generated from the list of its leaf nodes. Space filling curves are often used to represent linear octrees.", "Automation-Control": 0.6054285765, "Qwen2": "Yes"} {"id": "13579251", "revid": "32798152", "url": "https://en.wikipedia.org/wiki?curid=13579251", "title": "EnergyCS", "text": "EnergyCS is a Monrovia, California-based company specializing in integration and controls for high-energy, large format batteries. The company provides battery management systems for lithium-ion batteries and other advanced energy storage technologies and is active in the electric vehicle and stationary energy storage space.\nEnergyCS is also a pioneer in the area of PHEV (plug-in hybrid electric vehicles).
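The space-filling-curve representation mentioned in the linear octree description above can be sketched with Morton (Z-order) keys; the fixed depth and grid size are illustrative choices, not from the article:

```python
def morton_key(x: int, y: int, z: int, depth: int) -> int:
    """Interleave the bits of integer cell coordinates into a Morton (Z-order)
    key. A complete linear octree of fixed depth can then be stored as the
    sorted array of its leaf keys instead of a pointer-based tree."""
    key = 0
    for bit in range(depth):
        key |= ((x >> bit) & 1) << (3 * bit)      # x bits at positions 0, 3, ...
        key |= ((y >> bit) & 1) << (3 * bit + 1)  # y bits at positions 1, 4, ...
        key |= ((z >> bit) & 1) << (3 * bit + 2)  # z bits at positions 2, 5, ...
    return key

# Leaves of a complete depth-2 octree (a 4x4x4 grid), in curve order:
leaves = sorted(morton_key(x, y, z, 2)
                for x in range(4) for y in range(4) for z in range(4))
```

The sorted keys enumerate every leaf exactly once, which is what makes the flat array a faithful stand-in for the tree.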
The company produced the first lithium-ion powered plug-in Prius.", "Automation-Control": 0.8164476156, "Qwen2": "Yes"} {"id": "13580135", "revid": "32452", "url": "https://en.wikipedia.org/wiki?curid=13580135", "title": "Conditional short-circuit current", "text": "Conditional short-circuit current is the value of the alternating current component of a prospective current, which a switch without integral short-circuit protection, but protected by a suitable short circuit protective device (SCPD) in series, can withstand for the operating time of the current under specified test conditions. It may be understood to be the RMS value of the maximum permissible current over a specified time interval (t0,t1) and operating conditions.\nThe IEC definition has been criticized as being open to interpretation.\nformula_1", "Automation-Control": 0.863268435, "Qwen2": "Yes"} {"id": "13595409", "revid": "31561140", "url": "https://en.wikipedia.org/wiki?curid=13595409", "title": "Loop performance", "text": "Loop performance in control engineering indicates the performance of control loops, such as a regulatory PID loop. Performance refers to how accurately the control system tracks its desired signals and regulates the plant process variables, ideally without delay or overshoot.\nImportance.\nRegulatory control loops are critical in automated manufacturing and utility industries like refining, paper and chemicals manufacturing, power generation, among others. They are used to control a particular parameter within a process. The parameter that is being controlled could be temperature, pressure, flow or level of some process. For example, temperature controllers are used in boilers used in the production of gasoline.\nSoftware.\nThere are many software applications that help in measuring and analysing the performance of control loops in industrial plants.
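Tracking performance of the kind described above can be quantified from a step response. A minimal sketch, assuming a first-order plant under proportional-only control (the plant parameters and gain are illustrative assumptions, not from the article):

```python
def step_response_metrics(kp, setpoint=1.0, a=1.0, b=1.0, dt=0.001, steps=10000):
    """Simulate dy/dt = -a*y + b*u with u = kp*(setpoint - y) and return
    (overshoot, steady_state_error), two common loop-performance measures."""
    y = 0.0
    peak = 0.0
    for _ in range(steps):
        u = kp * (setpoint - y)           # proportional control action
        y += dt * (-a * y + b * u)        # forward-Euler plant update
        peak = max(peak, y)
    return max(0.0, peak - setpoint), abs(setpoint - y)

# Proportional-only control of this first-order plant cannot overshoot, but it
# leaves a steady-state offset of setpoint * a / (a + b*kp):
overshoot, offset = step_response_metrics(kp=5.0)
```

The nonzero offset is the classic reason integral action is added in a regulatory PID loop.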
Benchmarking the loop performance and identifying opportunities for improvement are key drivers for improving plant reliability, production throughput and safe operation.", "Automation-Control": 0.9978440404, "Qwen2": "Yes"} {"id": "23628757", "revid": "10951369", "url": "https://en.wikipedia.org/wiki?curid=23628757", "title": "Torque screwdriver", "text": "A torque screwdriver is a screwdriver with components that limit tightening to a specified torque, ensuring tightening that is sufficient but not excessive. An insufficiently tightened screw connection may loosen in operation, and excessive tightening can damage parts; for example, if the nuts holding the wheel of a car in place are too loose, or damaged by overtightening, a wheel may come off at speed. Torque screwdrivers are used in mechanical production, manufacturing, and maintenance; their use is part of quality assurance.\nMost torque screwdrivers allow the torque to be set to any value within a range. All have a torque-limiting clutch that disengages once the preset torque has been reached.\nTorque screwdrivers can exert torques from 0.04 N⋅m to at least 27 N⋅m. Although no single tool covers the entire range, low-, mid-, and high-torque ranges are available.\nTorque screwdrivers and torque wrenches have similar purposes and mechanisms.\nTorque-limiting clutch.\nThe clutch is the component that defines a torque screwdriver. This is typically achieved with steel balls rolling between indented plates, compressed by a spring at one end, with the other side driving the screw or fastener. The torque-limiting clutch is the part of the tool that limits the amount of torque being applied to the fastener at the receiving end of the tool.
On simpler tools the clutch settings may be marked with arbitrary numbers (e.g., from 1 for the lowest available torque to 20 for the highest, without necessarily having a linear relationship with actual torque) rather than torque values.\nTorque screwdrivers are available with several types of clutch, including “cam-over”, “cushion clutch”, and “auto shutoff”. Most of these clutch types are used in electric screwdrivers, air screwdrivers, impulse screwdrivers, manual torque screwdrivers, and cordless torque screwdrivers. Each type has the ability to preset a specified torque value. In some cases a tool may need to be certified in a calibration lab to verify its torque output; a certificate may be issued by an organisation such as NIST in the United States.\nCam over.\nA cam-over clutch is usually found in a manual torque screwdriver; the clutch simply slips, or “cams over”, signalling that the maximum torque has been reached.\nCushion clutch.\nCushion clutch or “slip clutch” styles are found in both electric screwdrivers and air screwdrivers. This clutch style is similar to the cam-over type: once the final torque is reached, the clutch continues to cam over and slip, and the tool keeps running until the operator releases the throttle.\nAuto shutoff.\nAn auto-shutoff clutch switches off the tool once the maximum torque is reached. Auto shutoff tools are designed for critical applications. They provide precision torque control and reduce energy consumption by eliminating idling.\nDrive source.\nThe torque may be provided manually (by the operator's wrist), by an electric motor, or by a pneumatic drive.\nManual torque screwdriver.\nManual torque screwdrivers are made in straight and pistol-grip models. Manual torque screwdrivers can have a range of 0.04 N⋅m (6 in oz) to 20 N⋅m (170 in lb).\nElectric torque screwdriver.\nScrewdriving requires torque to be applied by a rotary motion.
Drilling fits the same description; general-purpose power tools (\"drill/drivers\") are designed for both screwdriving and drilling, with a slipping clutch and low speed added to the drilling functionality. In an industrial environment dedicated tools optimised for their particular function are more often used.\nCorded.\nCorded electric torque screwdrivers are commonly made in three different designs: pistol grip, angle and inline. This type is the one most commonly used for industrial assembly applications such as electronic assembly and small parts assembly. Brushed electric motors and more efficient brushless motors are used. The torque ranges typically from 0.02 N⋅m to at least 27 N⋅m, with speeds of up to 2,000 revolutions per minute (rpm).\nElectric screwdrivers with transducers can be categorized into three groups according to their physical features:\nCordless.\nCordless torque screwdrivers are powered by batteries, usually rechargeable batteries with voltages from 3.6 to 18 volts. Dedicated screwdrivers for domestic use tend to operate off 3.6 to 4.8 volts and have relatively low maximum torque; drill/drivers operate off higher voltages and can deliver higher maximum torque. Cordless torque screwdrivers are used for the same applications as cordless screwdrivers without torque control.\nPneumatic torque screwdriver.\nThe pneumatic torque screwdriver is widely used for assembly requiring higher levels of torque. These tools are commonly used in automotive, aerospace and marine manufacturing. Pneumatic tools require a constant pressurized air source. Torque for this type of torque screwdriver ranges from 0.17 N⋅m (1.5 in lb) to 30 N⋅m (265 in lb), and speeds range from 800 to 2600 rpm. These tools must be near their compressed air source, which is not a problem in manufacturing but makes them less suitable for general maintenance.
Torque may not be controlled as accurately as by electrically powered tools.", "Automation-Control": 0.9167160392, "Qwen2": "Yes"} {"id": "1332422", "revid": "552474", "url": "https://en.wikipedia.org/wiki?curid=1332422", "title": "Mod parrot", "text": "mod_parrot is an optional module for the Apache web server. It embeds a Parrot virtual machine interpreter into the Apache server and provides access to the Apache API to allow handlers to be written in Parrot assembly language, or any high-level language targeted to Parrot.", "Automation-Control": 0.8850095272, "Qwen2": "Yes"} {"id": "1560090", "revid": "30620742", "url": "https://en.wikipedia.org/wiki?curid=1560090", "title": "Active-set method", "text": "In mathematical optimization, the active-set method is an algorithm used to identify the active constraints in a set of inequality constraints. The active constraints are then expressed as equality constraints, thereby transforming an inequality-constrained problem into a simpler equality-constrained subproblem.\nAn optimization problem is defined using an objective function to minimize or maximize, and a set of constraints\nthat define the feasible region, that is, the set of all \"x\" to search for the optimal solution. Given a point formula_2 in the feasible region, a constraint \nis called active at formula_4 if formula_5, and inactive at formula_2 if formula_7 Equality constraints are always active. The active set at formula_4 is made up of those constraints formula_9 that are active at the current point .\nThe active set is particularly important in optimization theory, as it determines which constraints will influence the final result of optimization. For example, in solving the linear programming problem, the active set gives the hyperplanes that intersect at the solution point. 
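The notion of an active constraint defined above can be sketched directly in code; the tolerance and the example constraints are illustrative, not from the article:

```python
def active_set(constraints, x, tol=1e-9):
    """Indices of the constraints active at x, for inequality constraints
    written as g_i(x) <= 0: those with g_i(x) == 0 up to the tolerance."""
    return [i for i, g in enumerate(constraints) if abs(g(x)) <= tol]

# Feasible region 0 <= x <= 1 expressed as two inequality constraints:
constraints = [lambda x: -x,       # active on the boundary x = 0
               lambda x: x - 1.0]  # active on the boundary x = 1
on_lower = active_set(constraints, 0.0)   # lower bound binds
interior = active_set(constraints, 0.5)   # no constraint binds inside
```

Treating the identified active constraints as equalities is exactly the simplification the active-set method exploits.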
In quadratic programming, as the solution is not necessarily on one of the edges of the bounding polygon, an estimation of the active set gives us a subset of inequalities to watch while searching the solution, which reduces the complexity of the search.\nActive-set methods.\nIn general an active-set algorithm has the following structure:\nMethods that can be described as active-set methods include:", "Automation-Control": 0.803368032, "Qwen2": "Yes"} {"id": "71545208", "revid": "1153862156", "url": "https://en.wikipedia.org/wiki?curid=71545208", "title": "Angeliki Pantazi", "text": "Angeliki Pantazi is a Greek researcher in neuromorphic computing and in applications of control theory to computer data storage systems, for IBM Research in Zurich.\nEducation and career.\nPantazi studied electrical engineering and computer technology at the University of Patras, where she earned a diploma in 1996 and a Ph.D. in 2005. She has been affiliated with IBM Research in Zurich since 2002, and became a permanent member of the research staff in 2006.\nRecognition.\nPantazi is a Fellow of the International Federation of Automatic Control. She was named as an IBM Master Inventor in 2014.\nShe was part of a group of IBM researchers who in 2009 won both the Control Systems Technology Award and the Transactions on Control Systems Technology Outstanding Paper Award of the IEEE Control Systems Society, for their work on nanopositioning in microelectromechanical systems. She was the 2017 winner of the Control Systems Society Transition to Practice Award, for \"the development of advanced control technologies for magnetic tape data storage and nanopositioning applications\".", "Automation-Control": 0.9997689724, "Qwen2": "Yes"} {"id": "624144", "revid": "4851336", "url": "https://en.wikipedia.org/wiki?curid=624144", "title": "Control logic", "text": "Control logic is a key part of a software program that controls the operations of the program. 
The control logic responds to commands from the user, and it also acts on its own to perform automated tasks that have been structured into the program.\nControl logic can be modeled using a state diagram, which is a form of hierarchical state machine. These state diagrams can also be combined with flow charts to provide a set of computational semantics for describing complex control logic. This mix of state diagrams and flow charts is illustrated in the figure on the right, which shows the control logic for a simple stopwatch. The control logic takes in commands from the user, as represented by the event named “START”, but also has automatic recurring sample time events, as represented by the event named “TIC”.", "Automation-Control": 0.9990975261, "Qwen2": "Yes"} {"id": "23980156", "revid": "21417351", "url": "https://en.wikipedia.org/wiki?curid=23980156", "title": "Servo Robot Group", "text": "SERVO-ROBOT Group is a company that develops intelligent sensing and digital vision systems to simplify manufacturing process automation, such as welding. Its main activity is to build intelligent sensing systems based on precision measurement with laser beams and other intelligent sensing devices applicable to various industries such as automotive, railroad, pipe and tube, aerospace, shipbuilding, fabricated structures and windmill tower manufacturing.\nFounded in 1983, SERVO-ROBOT has established its world headquarters, production plant and research and development center in the St-Bruno Industrial Park, south of Montreal, Quebec, Canada.
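The stopwatch control logic described above, with a user "START" command and recurring "TIC" sample-time events, can be sketched as a small state machine; the toggle behaviour and tick size are illustrative assumptions:

```python
class Stopwatch:
    """Minimal control logic: a user 'START' command toggles running,
    and an automatic 'TIC' sample-time event advances the elapsed time."""
    def __init__(self, tick=0.1):
        self.running = False
        self.elapsed = 0.0
        self.tick = tick

    def handle(self, event: str) -> None:
        if event == "START":
            self.running = not self.running  # start/stop toggle
        elif event == "TIC" and self.running:
            self.elapsed += self.tick        # count time only while running

sw = Stopwatch()
for ev in ["TIC", "START", "TIC", "TIC", "START", "TIC"]:
    sw.handle(ev)
# Only the two TICs between starting and stopping are counted.
```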
More than 95% of SERVO-ROBOT's products are exported outside of Canada every year.\nApplications.\nInnovations developed in patents mentioned in the above section resulted in concrete solutions easily applicable to markets ranging from automotive to aerospace, which has helped many companies and factories become more productive and reach their Six Sigma continuous-improvement goals.\nExternal links.\nRobotic Industry Association\nManufacturing Talk", "Automation-Control": 0.6533908844, "Qwen2": "Yes"} {"id": "225192", "revid": "28481209", "url": "https://en.wikipedia.org/wiki?curid=225192", "title": "Petri net", "text": "A Petri net, also known as a place/transition (PT) net, is one of several mathematical modeling languages for the description of distributed systems. It is a class of discrete event dynamic system. A Petri net is a directed bipartite graph that has two types of elements: places and transitions. Place elements are depicted as white circles and transition elements are depicted as rectangles.\nA place can contain any number of tokens, depicted as black circles. A transition is enabled if all places connected to it as inputs contain at least one token. Some sources state that Petri nets were invented in August 1939 by Carl Adam Petri—at the age of 13—for the purpose of describing chemical processes.\nLike industry standards such as UML activity diagrams, Business Process Model and Notation, and event-driven process chains, Petri nets offer a graphical notation for stepwise processes that include choice, iteration, and concurrent execution.
Unlike these standards, Petri nets have an exact mathematical definition of their execution semantics, with a well-developed mathematical theory for process analysis.\nHistorical background.\nThe German computer scientist Carl Adam Petri, after whom such structures are named, analyzed Petri nets extensively in his 1962 dissertation .\nPetri net basics.\nA Petri net consists of \"places\", \"transitions\", and \"arcs\". Arcs run from a place to a transition or vice versa, never between places or between transitions. The places from which an arc runs to a transition are called the \"input places\" of the transition; the places to which arcs run from a transition are called the \"output places\" of the transition.\nGraphically, places in a Petri net may contain a discrete number of marks called \"tokens\". Any distribution of tokens over the places will represent a configuration of the net called a \"marking\". In an abstract sense relating to a Petri net diagram, a transition of a Petri net may \"fire\" if it is \"enabled\", i.e. there are sufficient tokens in all of its input places; when the transition fires, it consumes the required input tokens, and creates tokens in its output places. A firing is atomic, i.e. a single non-interruptible step.\nUnless an \"execution policy\" (e.g. a strict ordering of transitions, describing precedence) is defined, the execution of Petri nets is nondeterministic: when multiple transitions are enabled at the same time, they will fire in any order.\nSince firing is nondeterministic, and multiple tokens may be present anywhere in the net (even in the same place), Petri nets are well suited for modeling the concurrent behavior of distributed systems.\nFormal definition and basic terminology.\nPetri nets are state-transition systems that extend a class of nets called elementary nets.\nDefinition 1. A \"net\" is a tuple formula_1 where\nDefinition 2. 
Given a net \"N\" = (\"P\", \"T\", \"F\"), a \"configuration\" is a set \"C\" so that \"C\" ⊆ \"P\".\nDefinition 3. An \"elementary net\" is a net of the form \"EN\" = (\"N\", \"C\") where\nDefinition 4. A \"Petri net\" is a net of the form \"PN\" = (\"N\", \"M\", \"W\"), which extends the elementary net so that\nIf a Petri net is equivalent to an elementary net, then \"Z\" can be the countable set {0,1} and those elements in \"P\" that map to 1 under \"M\" form a configuration. Similarly, if a Petri net is not an elementary net, then the multiset \"M\" can be interpreted as representing a non-singleton set of configurations. In this respect, \"M\" extends the concept of configuration for elementary nets to Petri nets.\nIn the diagram of a Petri net (see top figure right), places are conventionally depicted with circles, transitions with long narrow rectangles and arcs as one-way arrows that show connections of places to transitions or transitions to places. If the diagram were of an elementary net, then those places in a configuration would be conventionally depicted as circles, where each circle encompasses a single dot called a \"token\". In the given diagram of a Petri net (see right), the place circles may encompass more than one token to show the number of times a place appears in a configuration. The configuration of tokens distributed over an entire Petri net diagram is called a \"marking\".\nIn the top figure (see right), the place \"p\"1 is an input place of transition \"t\"; whereas, the place \"p\"2 is an output place to the same transition. Let \"PN\"0 (top figure) be a Petri net with a marking configured \"M\"0, and \"PN\"1 (bottom figure) be a Petri net with a marking configured \"M\"1. The configuration of \"PN\"0 \"enables\" transition \"t\" through the property that all input places have sufficient number of tokens (shown in the figures as dots) \"equal to or greater\" than the multiplicities on their respective arcs to \"t\". 
A transition may fire only when it is enabled. In this example, the \"firing\" of transition \"t\" generates a map that has the marking configured \"M\"1 in the image of \"M\"0 and results in Petri net \"PN\"1, seen in the bottom figure. In the diagram, the firing rule for a transition can be characterised by subtracting a number of tokens from its input places equal to the multiplicity of the respective input arcs and accumulating a new number of tokens at the output places equal to the multiplicity of the respective output arcs.\nRemark 1. The precise meaning of \"equal to or greater\" will depend on the precise algebraic properties of addition being applied on \"Z\" in the firing rule, where subtle variations on the algebraic properties can lead to other classes of Petri nets; for example, algebraic Petri nets.\nThe following formal definition is loosely based on . Many alternative definitions exist.\nSyntax.\nA Petri net graph (called \"Petri net\" by some, but see below) is a 3-tuple formula_5, where\nThe \"flow relation\" is the set of arcs: formula_7. In many textbooks, arcs can only have multiplicity 1. These texts often define Petri nets using \"F\" instead of \"W\". When using this convention, a Petri net graph is a bipartite multigraph formula_8 with node partitions \"S\" and \"T\".\nThe \"preset\" of a transition \"t\" is the set of its \"input places\": formula_9;\nits \"postset\" is the set of its \"output places\": formula_10. Definitions of pre- and postsets of places are analogous.\nA \"marking\" of a Petri net (graph) is a multiset of its places, i.e., a mapping formula_11.
We say the marking assigns to each place a number of \"tokens\".\nA Petri net (called \"marked Petri net\" by some, see above) is a 4-tuple formula_12, where\nExecution semantics.\nIn words\nWe are generally interested in what may happen when transitions may continually fire in arbitrary order.\nWe say that a marking \"is reachable from\" a marking \"in one step\" if formula_18; we say that it \"is reachable from \" if formula_19, where formula_20 is the reflexive transitive closure of formula_21; that is, if it is reachable in 0 or more steps.\nFor a (marked) Petri net formula_22, we are interested in the firings that can be performed starting with the initial marking formula_14. Its set of \"reachable markings\" is the set\nformula_24\nThe \"reachability graph\" of is the transition relation formula_21 restricted to its reachable markings formula_26. It is the state space of the net.\nA \"firing sequence\" for a Petri net with graph and initial marking formula_14 is a sequence of transitions formula_28 such that formula_29. The set of firing sequences is denoted as formula_30.\nVariations on the definition.\nA common variation is to disallow arc multiplicities and replace the bag of arcs \"W\" with a simple set, called the \"flow relation\", formula_31.\nThis does not limit expressive power as both can represent each other.\nAnother common variation, e.g. in Desel and Juhás (2001), is to allow \"capacities\" to be defined on places. 
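The token game defined above (a marking as a multiset of places, a transition enabled when every input place holds at least the arc multiplicity, and firing consuming and producing tokens accordingly) can be sketched in code; the small example net is illustrative and not tied to any particular tool:

```python
from collections import Counter

class PetriNet:
    """Sketch of the execution semantics above: arcs maps (place, transition)
    and (transition, place) pairs to multiplicities; a marking is a multiset
    of places, represented as a Counter. Place and transition names must be
    distinct."""
    def __init__(self, arcs):           # arcs: dict {(src, dst): weight}
        self.arcs = arcs

    def enabled(self, marking, t):
        """t is enabled if every input place holds at least the arc weight."""
        return all(marking[p] >= w
                   for (p, dst), w in self.arcs.items() if dst == t)

    def fire(self, marking, t):
        """Atomically consume input tokens and produce output tokens."""
        if not self.enabled(marking, t):
            raise ValueError(f"transition {t!r} is not enabled")
        new = Counter(marking)
        for (src, dst), w in self.arcs.items():
            if dst == t:                # consume from input places
                new[src] -= w
            elif src == t:              # produce in output places
                new[dst] += w
        return new

# p1 --2--> t --1--> p2: t needs two tokens in p1 and puts one in p2
net = PetriNet({("p1", "t"): 2, ("t", "p2"): 1})
m0 = Counter({"p1": 2})
m1 = net.fire(m0, "t")
```

After the firing, p1 is empty and p2 holds one token, so t is no longer enabled: the reachability relation of the previous paragraphs is exactly repeated application of `fire`.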
This is discussed under \"extensions\" below.\nFormulation in terms of vectors and matrices.\nThe markings of a Petri net formula_12 can be regarded as vectors of non-negative integers of length formula_33.\nIts transition relation can be described as a pair of formula_33 by formula_35 matrices:\nThen their difference\ncan be used to describe the reachable markings in terms of matrix multiplication, as follows.\nFor any sequence of transitions , write formula_41 for the vector that maps every transition to its number of occurrences in . Then, we have\nIt must be required that is a firing sequence; allowing arbitrary sequences of transitions will generally produce a larger set.\nformula_43\nformula_44\nCategory-theoretic formulation.\nMeseguer and Montanari considered a kind of symmetric monoidal category known as Petri categories.\nMathematical properties of Petri nets.\nOne thing that makes Petri nets interesting is that they provide a balance between modeling power and analyzability: many things one would like to know about concurrent systems can be automatically determined for Petri nets, although some of those things are very expensive to determine in the general case. Several subclasses of Petri nets have been studied that can still model interesting classes of concurrent systems, while making these problems easier.\nAn overview of such decision problems, with decidability and complexity results for Petri nets and some subclasses, can be found in Esparza and Nielsen (1995).\nReachability.\nThe reachability problem for Petri nets is to decide, given a Petri net \"N\" and a marking \"M\", whether formula_45.\nIt is a matter of walking the reachability graph defined above until either the requested marking is reached or it becomes clear that it cannot be.
This is harder than it may seem at first: the reachability graph is generally infinite, and it isn't easy to determine when it is safe to stop.\nIn fact, this problem was shown to be EXPSPACE-hard years before it was shown to be decidable at all (Mayr, 1981). Papers continue to be published on how to do it efficiently. In 2018, Czerwiński et al. improved the lower bound and showed that the problem is not ELEMENTARY. In 2021, this problem was shown to be non-primitive recursive, independently by Jerome Leroux \nand by Wojciech Czerwiński and Łukasz Orlikowski. These results thus close the long-standing complexity gap.\nWhile reachability seems to be a good tool to find erroneous states, for practical problems the constructed graph usually has far too many states to calculate. To alleviate this problem, linear temporal logic is usually used in conjunction with the tableau method to prove that such states cannot be reached. Linear temporal logic uses the semi-decision technique to find if indeed a state can be reached, by finding a set of necessary conditions for the state to be reached then proving that those conditions cannot be satisfied.\nLiveness.\nPetri nets can be described as having different degrees of liveness formula_46. 
A Petri net formula_47 is called formula_48-live if and only if all of its transitions are formula_48-live, where a transition is\nNote that these are increasingly stringent requirements: formula_60-liveness implies formula_61-liveness, for formula_62.\nThese definitions are in accordance with Murata's overview, which additionally uses formula_63\"-live\" as a term for \"dead\".\nBoundedness.\nA place in a Petri net is called \"k-bound\" if it does not contain more than \"k\" tokens in all reachable markings, including the initial marking; it is said to be \"safe\" if it is 1-bounded; it is \"bounded\" if it is \"k-bounded\" for some \"k\".\nA (marked) Petri net is called \"k\"-bounded, \"safe\", or \"bounded\" when all of its places are.\nA Petri net (graph) is called \"(structurally) bounded\" if it is bounded for every possible initial marking.\nA Petri net is bounded if and only if its reachability graph is finite.\nBoundedness is decidable by looking at covering, by constructing the Karp–Miller Tree.\nIt can be useful to explicitly impose a bound on places in a given net.\nThis can be used to model limited system resources.\nSome definitions of Petri nets explicitly allow this as a syntactic feature.\nFormally, \"Petri nets with place capacities\" can be defined as tuples formula_64, where formula_12 is a Petri net, formula_66 an assignment of capacities to (some or all) places, and the transition relation is the usual one restricted to the markings in which each place with a capacity has at most that many tokens.\nFor example, if in the net \"N\", both places are assigned capacity 2, we obtain a Petri net with place capacities, say \"N2\"; its reachability graph is displayed on the right.\nAlternatively, places can be made bounded by extending the net. 
To be exact, a place can be made \"k\"-bounded by adding a \"counter-place\" with flow opposite to that of the place, and adding tokens to make the total in both places \"k\".\nDiscrete, continuous, and hybrid Petri nets.\nAs well as for discrete events, there are Petri nets for continuous and hybrid discrete-continuous processes that are useful in discrete, continuous and hybrid control theory, and related to discrete, continuous and hybrid automata.\nExtensions.\nThere are many extensions to Petri nets. Some of them are completely backwards-compatible with the original Petri net (e.g. coloured Petri nets), while some add properties that cannot be modelled in the original Petri net formalism (e.g. timed Petri nets). Although backwards-compatible models do not extend the computational power of Petri nets, they may have more succinct representations and may be more convenient for modeling. Extensions that cannot be transformed into Petri nets are sometimes very powerful, but usually lack the full range of mathematical tools available to analyse ordinary Petri nets.\nThe term high-level Petri net is used for many Petri net formalisms that extend the basic P/T net formalism; this includes coloured Petri nets, hierarchical Petri nets such as Nets within Nets, and all other extensions sketched in this section. The term is also used specifically for the type of coloured nets supported by CPN Tools.\nA short list of possible extensions follows:\nThere are many more extensions to Petri nets; however, it is important to keep in mind that the more complex a net becomes in terms of extended properties, the harder it is to use standard tools to evaluate certain properties of the net.
For this reason, it is a good idea to use the simplest net type possible for a given modelling task.\nRestrictions.\nInstead of extending the Petri net formalism, we can also look at restricting it, and look at particular types of Petri nets, obtained by restricting the syntax in a particular way. Ordinary Petri nets are the nets where all arc weights are 1. Restricting further, the following types of ordinary Petri nets are commonly used and studied:\nWorkflow nets.\nWorkflow nets (WF-nets) are a subclass of Petri nets intended to model the workflow of process activities. \nThe WF-net transitions are assigned to tasks or activities, and places are assigned to the pre/post conditions.\nThe WF-nets have additional structural and operational requirements, mainly the addition of a single input (source) place with no previous transitions, and a single output (sink) place with no following transitions. Accordingly, start and termination markings can be defined that represent the process status.\nA WF-net has the soundness property if a process with a start marking of \"k\" tokens in its source place can reach the termination marking with \"k\" tokens in its sink place (defined as a \"k\"-sound WF-net) and, additionally, every transition in the process can fire (i.e., for each transition there is a reachable state in which the transition is enabled). \nA general sound (G-sound) WF-net is defined as being \"k\"-sound for every \"k\" > 0.\nA directed path in the Petri net is defined as a sequence of nodes (places and transitions) linked by the directed arcs.
An elementary path includes every node in the sequence only once.\nA well-handled Petri net is a net in which no two fully distinct elementary paths exist between a place and a transition (or a transition and a place); i.e., if there are two paths between the pair of nodes, then these paths share a node.\nAn acyclic well-handled WF-net is sound (G-sound).\nAn extended WF-net is a Petri net composed of a WF-net with an additional transition t (the feedback transition). The sink place is connected as the input place of transition t and the source place as its output place. Firing of the transition causes iteration of the process (note that the extended WF-net is itself not a WF-net). \nA WRI (well-handled with regular iteration) WF-net is an extended acyclic well-handled WF-net. \nA WRI-WF-net can be built by composition of nets, i.e., by replacing a transition within a WRI-WF-net with a subnet that is itself a WRI-WF-net; the result is also a WRI-WF-net. WRI-WF-nets are G-sound; therefore, by using only WRI-WF-net building blocks, one can get WF-nets that are G-sound by construction.\nThe design structure matrix (DSM) can model process relations, and be utilized for process planning. DSM-nets are realizations of DSM-based plans as workflow processes in Petri nets, and are equivalent to WRI-WF-nets. The DSM-net construction process ensures the soundness property of the resulting net.\nOther models of concurrency.\nOther ways of modelling concurrent computation have been proposed, including vector addition systems, communicating finite-state machines, Kahn process networks, process algebra, the actor model, and trace theory.
Different models provide tradeoffs of concepts such as compositionality, modularity, and locality.\nAn approach to relating some of these models of concurrency is proposed in the chapter by Winskel and Nielsen.", "Automation-Control": 0.9468713999, "Qwen2": "Yes"} {"id": "21527187", "revid": "910180", "url": "https://en.wikipedia.org/wiki?curid=21527187", "title": "Telecom network protocol analyzer", "text": "A telecom network protocol analyzer is a protocol analyzer used to analyze switching and signaling telecommunication protocols between different nodes in PSTN or mobile telephone networks, such as 2G or 3G GSM networks, CDMA networks, WiMAX and so on. \nIn a mobile telecommunication network it can analyze the traffic between MSC and BSC, BSC and BTS, MSC and HLR, MSC and VLR, VLR and HLR, and so on.\nProtocol analyzers are mainly used for performance measurement and troubleshooting. These devices connect to the network to calculate key performance indicators to monitor the network and speed up troubleshooting activities.", "Automation-Control": 0.6255507469, "Qwen2": "Yes"} {"id": "12251992", "revid": "39191556", "url": "https://en.wikipedia.org/wiki?curid=12251992", "title": "RunBot", "text": "RunBot is a miniature bipedal robot which belongs to the class of limit cycle walkers. Instead of using a central pattern generator it uses reflexes which generate the gait. The reflexes are triggered by ground contact sensors in the feet which then activate the motors. The generation of the walking gait is straightforward: when a foot touches the ground the \"other\" leg is lifted upwards so that the robot falls forward. This then causes this leg to touch the ground and so forth. The walking speed can be improved by means of reinforcement learning because there are only a few parameters in this scheme.
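The reflex scheme described above can be sketched as a toy event loop (an illustration only, not RunBot's actual controller; the function name and event encoding are assumptions):

```python
def reflex_controller(contact_events):
    """Map a stream of ground-contact events to motor commands.

    Each contact of one foot triggers a 'lift' command for the opposite
    leg, so the gait emerges from the contact events themselves rather
    than from a central pattern generator.
    """
    commands = []
    for foot in contact_events:
        other = "right" if foot == "left" else "left"
        # lifting the other leg makes the robot fall forward onto it,
        # which produces the next contact event, and so on
        commands.append((other, "lift"))
    return commands
```

For example, `reflex_controller(["left", "right", "left"])` yields lift commands for the right, left, and right legs in turn.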
RunBot was built in 2005 by Tao Geng as part of his PhD under the supervision of Prof. Woergoetter, after an idea by Dr Porr to use a walking robot to benchmark reflex based reinforcement learning rules. Its movements and adaptability are based on the work of neurophysiologist Nikolai Bernstein.\nSince its inception, RunBot has undergone numerous design iterations; for example, a movable upper-body mass on the robot keeps the walking pattern stable even on uneven terrain.\nDesign.\nThe locomotion system is kept simple with four motors: one on each of two knees, one on each of two hips. The sensory system is of similar simplicity, with the ability to detect ground contact and the angles of the hip/knee motors. The motors are controlled by force and not by angle.\nExternal links.\nRunbot's creators recorded demonstrations of RunBot:", "Automation-Control": 0.8118691444, "Qwen2": "Yes"} {"id": "26684508", "revid": "7903804", "url": "https://en.wikipedia.org/wiki?curid=26684508", "title": "Tisean", "text": "TISEAN (acronym for Time Series Analysis) is a software package for the analysis of time series with methods based on the theory of nonlinear dynamical systems. It was developed by Rainer Hegger, Holger Kantz and Thomas Schreiber and is distributed under the GPL licence. Two highly cited scientific publications serve as an introduction to the methods addressed in the package: the article \"Practical implementation of nonlinear time series methods: The TISEAN package\" and the book \"Nonlinear time series analysis\".", "Automation-Control": 0.6260534525, "Qwen2": "Yes"} {"id": "26019038", "revid": "7903804", "url": "https://en.wikipedia.org/wiki?curid=26019038", "title": "Succinct game", "text": "In algorithmic game theory, a succinct game or a succinctly representable game is a game which may be represented in a size much smaller than its normal form representation.
Without placing constraints on player utilities, describing a game of formula_1 players, each facing formula_2 strategies, requires listing formula_3 utility values. Even trivial algorithms are capable of finding a Nash equilibrium in a time polynomial in the length of such a large input. A succinct game is of \"polynomial type\" if in a game represented by a string of length \"n\" the number of players, as well as the number of strategies of each player, is bounded by a polynomial in \"n\" (a formal definition, describing succinct games as a computational problem, is given by Papadimitriou & Roughgarden 2008).\nTypes of succinct games.\nGraphical games.\nGraphical games are games in which the utilities of each player depend on the actions of very few other players. If formula_4 is the greatest number of players by whose actions any single player is affected (that is, it is the indegree of the game graph), the number of utility values needed to describe the game is formula_5, which, for small formula_4, is a considerable improvement.\nIt has been shown that any normal form game is reducible to a graphical game with all degrees bounded by three and with two strategies for each player. Unlike normal form games, the problem of finding a pure Nash equilibrium in graphical games (if one exists) is NP-complete. The problem of finding a (possibly mixed) Nash equilibrium in a graphical game is PPAD-complete. Finding a correlated equilibrium of a graphical game can be done in polynomial time, and for a graph with a bounded treewidth, this is also true for finding an \"optimal\" correlated equilibrium.\nSparse games.\nSparse games are those where most of the utilities are zero. Graphical games may be seen as a special case of sparse games.\nFor a two player game, a sparse game may be defined as a game in which each row and column of the two payoff (utility) matrices has at most a constant number of non-zero entries.
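The utility counts above can be made concrete with two helper functions (the function names and worked numbers are ours, for illustration): a normal-form game needs one value per player per strategy profile, while a graphical game with indegree at most d needs only tables over a player's own action and its neighbours' actions.

```python
def normal_form_size(n, s):
    # one utility value per player per strategy profile: n * s**n values
    return n * s ** n

def graphical_game_size(n, s, d):
    # each player's utility depends on its own action and on the actions
    # of at most d neighbours: n tables of s**(d + 1) values each
    return n * s ** (d + 1)
```

For 20 players with 2 strategies each and indegree 3, the normal form lists 20 * 2**20 values (about 21 million), the graphical form only 20 * 2**4 = 320.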
It has been shown that finding a Nash equilibrium in such a sparse game is PPAD-hard, and that there does not exist a fully polynomial-time approximation scheme unless PPAD is in P.\nSymmetric games.\nIn symmetric games all players are identical, so in evaluating the utility of a combination of strategies, all that matters is how many of the formula_1 players play each of the formula_2 strategies. Thus, describing such a game requires giving only formula_9 utility values.\nIn a symmetric game with 2 strategies there always exists a pure Nash equilibrium – although a \"symmetric\" pure Nash equilibrium may not exist. The problem of finding a pure Nash equilibrium in a symmetric game (with possibly more than two players) with a constant number of actions is in AC0; however, when the number of actions grows with the number of players (even linearly) the problem is NP-complete. In any symmetric game there exists a symmetric equilibrium. Given a symmetric game of \"n\" players facing \"k\" strategies, a symmetric equilibrium may be found in polynomial time if k=formula_10. Finding a correlated equilibrium in symmetric games may be done in polynomial time.\nAnonymous games.\nIn anonymous games, players have different utilities but do not distinguish between other players (for instance, having to choose between \"go to cinema\" and \"go to bar\" while caring only about how crowded each place will be, not whom they will meet there). In such a game a player's utility again depends on his own strategy and on how many of his peers choose each strategy, so formula_11 utility values are required.\nIf the number of actions grows with the number of players, finding a pure Nash equilibrium in an anonymous game is NP-hard. An optimal correlated equilibrium of an anonymous game may be found in polynomial time.
When the number of strategies is 2, there is a known PTAS for finding an ε-approximate Nash equilibrium.\nPolymatrix games.\nIn a polymatrix game (also known as a \"multimatrix game\"), there is a utility matrix for every pair of players \"(i,j)\", denoting a component of player i's utility. Player i's final utility is the sum of all such components. The number of utility values required to represent such a game is formula_12.\nPolymatrix games always have at least one mixed Nash equilibrium. The problem of finding a Nash equilibrium in a polymatrix game is PPAD-complete. Moreover, the problem of finding a constant approximate Nash equilibrium in a polymatrix game is also PPAD-complete. Finding a correlated equilibrium of a polymatrix game can be done in polynomial time. Note that even if pairwise games played between players have pure Nash equilibria, the global interaction does not necessarily admit a pure Nash equilibrium (although a mixed Nash equilibrium must exist). Checking if a pure Nash equilibrium exists is a strongly NP-complete problem.\nCompetitive polymatrix games with only zero-sum interactions between players are a generalization of two-player zero-sum games. The Minimax theorem originally formulated for two-player games by von Neumann generalizes to zero-sum polymatrix games. \nAs in two-player zero-sum games, polymatrix zero-sum games have mixed Nash equilibria that can be computed in polynomial time, and those equilibria coincide with correlated equilibria. But some other properties of two-player zero-sum games do not generalize. Notably, players need not have a unique value of the game, and equilibrium strategies are not max-min strategies in the sense that the worst-case payoffs of players are not maximized when using an equilibrium strategy.
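The pairwise-sum utility structure described above can be sketched directly (a minimal illustration; the dictionary layout and names are assumptions): player i's utility at a strategy profile is the sum of one matrix entry per opponent.

```python
def polymatrix_utility(i, profile, M):
    """Utility of player i in a polymatrix game.

    M[(i, j)] is the matrix giving the component of player i's utility
    contributed by the pairwise interaction with player j; the total
    utility is the sum of these components over all opponents.
    """
    return sum(M[(i, j)][profile[i]][profile[j]]
               for j in range(len(profile)) if j != i)

# three players, two strategies, the same coordination game on every edge
C = [[1, 0], [0, 1]]
M = {(i, j): C for i in range(3) for j in range(3) if i != j}
```

At the profile (0, 0, 0) each player coordinates with both opponents and earns 2; at (0, 1, 0) player 1 miscoordinates with both and earns 0.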
There exists an open source Python library for simulating competitive polymatrix games.\nPolymatrix games which have coordination games on their edges are potential games and can be solved using a potential function method.\nCircuit games.\nThe most flexible way of representing a succinct game is by representing each player by a polynomial-time bounded Turing machine, which takes as its input the actions of all players and outputs the player's utility. Such a Turing machine is equivalent to a Boolean circuit, and it is this representation, known as circuit games, that we will consider.\nComputing the value of a 2-player zero-sum circuit game is an EXP-complete problem, and approximating the value of such a game up to a multiplicative factor is known to be in PSPACE. Determining whether a pure Nash equilibrium exists is a formula_13-complete problem (see Polynomial hierarchy).\nOther representations.\nMany other types of succinct game exist (many having to do with allocation of resources). Examples include congestion games, network congestion games, scheduling games, local effect games, facility location games, action-graph games, hypergraphical games and more.\nSummary of complexities of finding equilibria.\nBelow is a table of some known complexity results for finding certain classes of equilibria in several game representations. \"NE\" stands for \"Nash equilibrium\", and \"CE\" for \"correlated equilibrium\". \"n\" is the number of players and \"s\" is the number of strategies each player faces (we're assuming all players face the same number of strategies). In graphical games, \"d\" is the maximum indegree of the game graph. For references, see main article text.", "Automation-Control": 0.7652766109, "Qwen2": "Yes"} {"id": "5708736", "revid": "76", "url": "https://en.wikipedia.org/wiki?curid=5708736", "title": "Feedback linearization", "text": "Feedback linearization is a common strategy employed in nonlinear control to control nonlinear systems.
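As a minimal numerical illustration of the idea (the pendulum-like example system and all gains are assumptions, not taken from the text): for x1' = x2, x2' = -sin(x1) + u with output y = x1, the control law u = sin(x1) + v cancels the nonlinearity, leaving the linear double integrator y'' = v, which a standard outer-loop law v = -k1*y - k2*y' stabilizes.

```python
import math

def simulate(x1=1.0, x2=0.0, k1=4.0, k2=4.0, dt=1e-3, steps=20000):
    """Forward-Euler simulation of x1' = x2, x2' = -sin(x1) + u."""
    for _ in range(steps):
        v = -k1 * x1 - k2 * x2   # outer-loop control for the double integrator
        u = math.sin(x1) + v     # feedback linearization: cancels -sin(x1)
        x1, x2 = x1 + dt * x2, x2 + dt * (-math.sin(x1) + u)
    return x1, x2
```

With these gains the linearized closed loop has both poles at -2, so after 20 seconds of simulated time both states have decayed essentially to zero.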
Feedback linearization techniques may be applied to nonlinear control systems of the form\nwhere formula_1 is the state, formula_2 are the inputs. The approach involves transforming a nonlinear control system into an equivalent linear control system through a change of variables and a suitable control input. In particular, one seeks a change of coordinates formula_3 and control input formula_4 so that the dynamics of formula_5 in the coordinates formula_6 take the form of a linear, controllable control system,\nAn outer-loop control strategy for the resulting linear control system can then be applied to achieve the control objective.\nFeedback linearization of SISO systems.\nHere, consider the case of feedback linearization of a single-input single-output (SISO) system. Similar results can be extended to multiple-input multiple-output (MIMO) systems. In this case, formula_7 and formula_8. The objective is to find a coordinate transformation formula_9 that transforms the system (1) into the so-called normal form which will reveal a feedback law of the form\nthat will render a linear input–output map from the new input formula_10 to the output formula_11. To ensure that the transformed system is an equivalent representation of the original system, the transformation must be a diffeomorphism. That is, the transformation must not only be invertible (i.e., bijective), but both the transformation and its inverse must be smooth so that differentiability in the original coordinate system is preserved in the new coordinate system. In practice, the transformation can be only locally diffeomorphic and the linearization results only hold in this smaller region.\nSeveral tools are required to solve this problem.\nLie derivative.\nThe goal of feedback linearization is to produce a transformed system whose states are the output formula_11 and its first formula_13 derivatives. To understand the structure of this target system, we use the Lie derivative. 
Consider the time derivative of (2), which can be computed using the chain rule,\nNow we can define the Lie derivative of formula_15 along formula_16 as,\nand similarly, the Lie derivative of formula_15 along formula_19 as,\nWith this new notation, we may express formula_21 as,\nNote that the notation of Lie derivatives is convenient when we take multiple derivatives with respect to either the same vector field, or a different one. For example,\nand\nRelative degree.\nIn our feedback linearized system made up of a state vector of the output formula_11 and its first formula_13 derivatives, we must understand how the input formula_27 enters the system. To do this, we introduce the notion of relative degree. Our system given by (1) and (2) is said to have relative degree formula_28 at a point formula_29 if,\nConsidering this definition of relative degree in light of the expression of the time derivative of the output formula_11, we can consider the relative degree of our system (1) and (2) to be the number of times we have to differentiate the output formula_11 before the input formula_27 appears explicitly. In an LTI system, the relative degree is the difference between the degree of the transfer function's denominator polynomial (i.e., number of poles) and the degree of its numerator polynomial (i.e., number of zeros).\nLinearization by feedback.\nFor the discussion that follows, we will assume that the relative degree of the system is formula_37. In this case, after differentiating the output formula_37 times we have,\nwhere the notation formula_40 indicates the formula_37th derivative of formula_11. Because we assumed the relative degree of the system is formula_37, the Lie derivatives of the form formula_44 for formula_45 are all zero. That is, the input formula_27 has no direct contribution to any of the first formula_13 derivatives.\nThe coordinate transformation formula_48 that puts the system into normal form comes from the first formula_13 derivatives.
In particular,\ntransforms trajectories from the original formula_51 coordinate system into the new formula_52 coordinate system. So long as this transformation is a diffeomorphism, smooth trajectories in the original coordinate system will have unique counterparts in the formula_52 coordinate system that are also smooth. Those formula_52 trajectories will be described by the new system,\nHence, the feedback control law\nrenders a linear input–output map from formula_57 to formula_58. The resulting linearized system\nis a cascade of formula_37 integrators, and an outer-loop control formula_57 may be chosen using standard linear system methodology. In particular, a state-feedback control law of\nwhere the state vector formula_52 is the output formula_11 and its first formula_13 derivatives, results in the LTI system\nwith,\nSo, with the appropriate choice of formula_68, we can arbitrarily place the closed-loop poles of the linearized system.\nUnstable zero dynamics.\nFeedback linearization can be accomplished with systems that have relative degree less than formula_37. However, the normal form of the system will include zero dynamics (i.e., states that are not observable from the output of the system) that may be unstable. In practice, unstable dynamics may have deleterious effects on the system (e.g., it may be dangerous for internal states of the system to grow unbounded). These unobservable states may be controllable or at least stable, and so measures can be taken to ensure these states do not cause problems in practice. Minimum phase systems provide some insight on zero dynamics.", "Automation-Control": 0.9985007048, "Qwen2": "Yes"} {"id": "50993227", "revid": "7611264", "url": "https://en.wikipedia.org/wiki?curid=50993227", "title": "ULMA Handling Systems", "text": "ULMA Handling Systems is a material handling and logistics automation company, supplier of automated storage and retrieval systems, based in Oñati, Spain. 
The company engineers design, produce, and install material handling systems in installations, from small warehouses to complex systems.\nCorporate information.\nThe firm has operational subsidiaries in several countries.\nHistory.\nULMA Handling Systems is one of the 8 companies which make up ULMA Group, which dates back to 1957 when six young mechanics set up a small workshop in Oñati (Guipúzcoa). ULMA Handling Systems was founded in 1988, after a technology transfer agreement was signed with the Japanese company Daifuku for the sale and introduction of automatic material handling.\nIn 1997, the company broke into overseas markets, installing warehouses in Brazil, France and Italy, and it now has subsidiaries in Spain, France, the Netherlands, Brazil, Chile and Peru.\nProducts and services.\nThe company designs material handling systems involving automatic movements of the products, improving the productivity rates and the efficiency of the warehouses.\nThe company develops order picking solutions, automated storage and retrieval systems (AS/RS), conveyor and automated guided vehicles, automated sorting solutions and end of line solutions. ULMA Handling Systems offers everything from logistics consulting, planning and design to after-sales service.\nIn addition, the company provides warehouse management software which optimizes and controls all the movements of the goods located in the warehouse.
\nIt also offers baggage handling solutions, designing and developing integral solutions, as well as health logistics solutions such as the storage and dispensing of medication.", "Automation-Control": 0.8836647272, "Qwen2": "Yes"} {"id": "12499952", "revid": "21171569", "url": "https://en.wikipedia.org/wiki?curid=12499952", "title": "Equal channel angular extrusion", "text": "Equal channel angular extrusion (ECAE), also called equal channel angular pressing (ECAP), is one technique from the Severe Plastic Deformation (SPD) group, aimed at producing Ultra Fine Grained (UFG) material. It was developed in the Soviet Union in 1973 by Segal, although the reported dates are not always consistent. In industrial metalworking, it is an extrusion process. The technique is able to refine the microstructure of metals and alloys, thereby improving their strength according to the Hall-Petch relationship. This process improves not only the strength but also other properties such as corrosion and wear resistance of alloys and compounds. \nECAE is unique because significant cold work can be accomplished without reduction in the cross sectional area of the deformed workpiece. In conventional deformation processes like rolling, forging, extrusion, and drawing, strain is introduced by reduction in the cross sectional area. ECAE produces significant deformation strain without reducing the cross sectional area. This is accomplished by extruding the work piece around a corner. For example, a square cross section bar of metal is forced through a channel with a 90° angle. The cross section of the channel is equal on entry and exit. The complex deformation of the metal as it flows around the corner produces very high strain. Because the cross section remains the same, a work piece can be extruded multiple times with each pass introducing additional strain. \nDie design is critical because of the large forces required.
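The Hall-Petch strengthening mentioned above can be sketched numerically (a generic illustration; the constants below are made up, not material data from the text): yield strength rises as the grain size d shrinks, following sigma_y = sigma_0 + k / sqrt(d).

```python
def hall_petch(sigma0_mpa, k_mpa_sqrt_m, d_m):
    # sigma_y = sigma_0 + k / sqrt(d): halving the grain size multiplies
    # the strengthening term by sqrt(2)
    return sigma0_mpa + k_mpa_sqrt_m / d_m ** 0.5

# refining grains from 10 um down to 0.5 um (a typical SPD-scale change)
# with assumed constants sigma_0 = 50 MPa and k = 0.1 MPa*sqrt(m)
coarse = hall_petch(50.0, 0.1, 10e-6)   # roughly 82 MPa
fine = hall_petch(50.0, 0.1, 0.5e-6)    # roughly 191 MPa
```

The point of the sketch is only the scaling: grain refinement by SPD techniques such as ECAE raises the second, grain-size-dependent term.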
\nTo reduce friction, the pushed sample is lubricated with grease, for example a mixture of graphite and oil; to reduce the forces, the process is sometimes carried out at elevated temperatures, but then recrystallization can occur, which can also lead to excessive grain growth. \nThere are some modifications of the process, e.g. incremental ECAP (I-ECAP), for the production of continuous products.\nProcess routes.\nThe process can be carried out in multiple passes. According to the rotation angle and direction between successive passes, there can be four fundamental process routes named A, Ba, Bc, and C:\nFinite element method in the ECAE process.\nThe behaviour of the material during deformation and flow is analyzed by scientists, and there are many articles on computer simulation; the finite element method is one of the important approaches to understanding the deformation occurring in the ECAE process.\nSee also.\nSevere plastic deformation\nStrengthening mechanisms of materials", "Automation-Control": 0.9151530266, "Qwen2": "Yes"} {"id": "9481277", "revid": "40742885", "url": "https://en.wikipedia.org/wiki?curid=9481277", "title": "Control of chaos", "text": "In lab experiments that study chaos theory, approaches designed to control chaos are based on certain observed system behaviors. Any chaotic attractor contains an infinite number of unstable, periodic orbits. Chaotic dynamics, then, consists of a motion where the system state moves in the neighborhood of one of these orbits for a while, then falls close to a different unstable, periodic orbit where it remains for a limited time and so forth. This results in a complicated and unpredictable wandering over longer periods of time.\nControl of chaos is the stabilization, by means of small system perturbations, of one of these unstable periodic orbits. The result is to render an otherwise chaotic motion more stable and predictable, which is often an advantage.
The perturbation must be tiny compared to the overall size of the attractor of the system to avoid significant modification of the system's natural dynamics.\nSeveral techniques have been devised for chaos control, but most are developments of two basic approaches: the OGY (Ott, Grebogi and Yorke) method and Pyragas continuous control. Both methods require a previous determination of the unstable periodic orbits of the chaotic system before the controlling algorithm can be designed.\nOGY method.\nE. Ott, C. Grebogi and J. A. Yorke were the first to make the key observation that the infinite number of unstable periodic orbits typically embedded in a chaotic attractor could be taken advantage of for the purpose of achieving control by means of applying only very small perturbations. After making this general point, they illustrated it with a specific method, since called the OGY method (Ott, Grebogi and Yorke) of achieving stabilization of a chosen unstable periodic orbit. In the OGY method, small, wisely chosen, kicks are applied to the system once per cycle, to maintain it near the desired unstable periodic orbit.\nTo start, one obtains information about the chaotic system by analyzing a slice of the chaotic attractor. This slice is a Poincaré section. After the information about the section has been gathered, one allows the system to run and waits until it comes near a desired periodic orbit in the section. Next, the system is encouraged to remain on that orbit by perturbing the appropriate parameter. When the control parameter is actually changed, the chaotic attractor is shifted and distorted somewhat. If all goes according to plan, the new attractor encourages the system to continue on the desired trajectory. One strength of this method is that it does not require a detailed model of the chaotic system but only some information about the Poincaré section. 
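A toy numerical illustration of the OGY idea (the logistic map is an assumed textbook example, not a system from this article): the chaotic orbit of x_{n+1} = r*x_n*(1 - x_n) is left to wander until it comes near the unstable fixed point x* = 1 - 1/r, and then a tiny parameter kick dr, computed from the local linearization alone, cancels the unstable direction and pins the orbit there.

```python
def ogy_logistic(r=3.9, x=0.3, max_kick=0.05, steps=50000):
    xstar = 1.0 - 1.0 / r       # unstable fixed point of the logistic map
    lam = 2.0 - r               # f'(xstar); |lam| > 1, so the point is unstable
    g = xstar * (1.0 - xstar)   # sensitivity of the map to r at the fixed point
    for _ in range(steps):
        dr = -lam * (x - xstar) / g   # kick that cancels the linear error term
        if abs(dr) > max_kick:
            dr = 0.0                  # act only once the orbit wanders close
        x = (r + dr) * x * (1.0 - x)
    return x, xstar
```

Note that only local linearization data near the orbit (here lam and g) are needed, not a global model of the dynamics.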
It is for this reason that the method has been so successful in controlling a wide variety of chaotic systems.\nThe weaknesses of this method are in isolating the Poincaré section and in calculating the precise perturbations necessary to attain stability.\nPyragas method.\nIn the Pyragas method of stabilizing a periodic orbit, an appropriate continuous controlling signal is injected into the system, whose intensity is practically zero as the system evolves close to the desired periodic orbit but increases when it drifts away from the desired orbit. Both the Pyragas and OGY methods are part of a general class of methods called \"closed loop\" or \"feedback\" methods which can be applied based on knowledge of the system obtained solely through observing the behavior of the system as a whole over a suitable period of time.\nApplications.\nExperimental control of chaos by one or both of these methods has been achieved in a variety of systems, including turbulent fluids, oscillating chemical reactions, magneto-mechanical oscillators and cardiac tissues. The control of chaotic bubbling has also been attempted with the OGY method, using electrostatic potential as the primary control variable.\nForcing two systems into the same state is not the only way to achieve synchronization of chaos. Both control of chaos and synchronization constitute parts of cybernetical physics, a research area on the border between physics and control theory.", "Automation-Control": 0.8424516916, "Qwen2": "Yes"} {"id": "1835200", "revid": "25046916", "url": "https://en.wikipedia.org/wiki?curid=1835200", "title": "Automation surprise", "text": "An automation surprise is an action that is performed by an automation system and is unexpected by the user. A mode error can be a common cause of an automation surprise.
Automation surprise can be dangerous when it upsets the situational awareness of a control operator.", "Automation-Control": 0.9897094369, "Qwen2": "Yes"} {"id": "1839766", "revid": "43266521", "url": "https://en.wikipedia.org/wiki?curid=1839766", "title": "Tool and cutter grinder", "text": "A Tool and Cutter Grinder is used to sharpen milling cutters and tool bits along with a host of other cutting tools.\nIt is an extremely versatile machine used to perform a variety of grinding operations: surface, cylindrical, or complex shapes. The image shows a manually operated setup; however, highly automated Computer Numerical Control (CNC) machines are becoming increasingly common due to the complexities involved in the process.\nThe operation of this machine (in particular, the manually operated variety) requires a high level of skill. The two main skills needed are understanding of the relationship between the grinding wheel and the metal being cut and knowledge of tool geometry. The illustrated set-up is only one of many combinations available. The huge variety in shapes and types of machining cutters requires flexibility in usage. A variety of dedicated fixtures are included that allow cylindrical grinding operations or complex angles to be ground. The vise shown can swivel in three planes.\nThe table moves longitudinally and laterally, and the head can swivel as well as being adjustable in the horizontal plane, as visible in the first image. This flexibility in the head allows the critical clearance angles required by the various cutters to be achieved.\nCNC tool and cutter grinder.\nToday's tool and cutter grinder is typically a CNC machine tool, usually with 5 axes, which produces endmills, drills, step tools, etc., which are widely used in the metal cutting and woodworking industries.\nModern CNC tool and cutter grinders enhance productivity by typically offering features such as automatic tool loading as well as the ability to support multiple grinding wheels.
High levels of automation, as well as automatic in-machine tool measurement and compensation, allow extended periods of unmanned production. With careful process configuration and appropriate tool support, tolerances less than 5 micrometres (0.0002\") can be consistently achieved even on the most complex parts.\nApart from manufacturing, in-machine tool measurement using touch-probe or laser technology allows cutting tools to be reconditioned. During normal use, cutting edges wear and/or chip. The geometric features of cutting tools can be automatically measured within the CNC tool grinder and the tool ground to return cutting surfaces to optimal condition. \nSignificant software advancements have allowed CNC tool and cutter grinders to be utilized in a wide range of industries. Advanced CNC grinders feature sophisticated software that allows geometrically complex parts to be designed either parametrically or by using third party CAD/CAM software. 3D simulation of the entire grinding process and the finished part is possible, as well as detection of any potential mechanical collisions and calculation of production time. Such features allow parts to be designed and verified, as well as the production process optimized, entirely within the software environment. \nTool and cutter grinders can be adapted to manufacturing precision machine components. The machine, when used for these purposes, would more likely be called a CNC Grinding System.\nCNC Grinding Systems are widely used to produce parts for aerospace, medical, automotive, and other industries. Extremely hard and exotic materials are generally no problem for today's grinding systems, and the multi-axis machines are capable of generating quite complex geometries.\nRadius grinder.\nA radius grinder (or radius tool grinder) is a special grinder used for grinding the most complex tool forms, and is the historical predecessor to the CNC tool and cutter grinder.
Like the CNC grinder, it may be used for other tasks where grinding spherical surfaces is necessary. The tool itself consists of three parts: the grinder head, work table, and holding fixture. The grinder head has three degrees of freedom: vertical movement, movement into the workpiece, and tilt. These are generally set statically and left fixed throughout operations. The work table is a T-slotted X-axis table mounted on top of a radial fixture. Mounting the X axis on top of the radius table, as opposed to the other way around, allows for complex and accurate radius grinds. The holding fixtures can be anything one can mount on a slotted table, but most commonly used is a collet or chuck fixture that indexes and has a separate Y movement to allow accurate depth setting and endmill sharpening. The dressers used on these grinders are usually quite expensive, and can dress the grinding wheel itself with a particular radius.\nD-bit grinder.\nThe D-bit (after Deckel, the brand of the original manufacturer) grinder is a tool bit grinder designed to produce single-lip cutters for pantograph milling machines. Pantographs are a variety of milling machine used to create cavities for the dies used in the molding process; they are largely obsolete, having been replaced by CNC machining centers in modern industry.\nWith the addition of accessory holders, the single-lip grinding capability may also be applied to grinding lathe cutting bits, and simple faceted profiles on tips of drill bits or end mills. The machine is sometimes advertised as a \"universal cutter-grinder\", but the \"universal\" term refers only to the range of compound angles available, not that the machine is capable of sharpening the universe of tools.
The machine is not capable of sharpening drill bits in the standard profiles, or generating any convex or spiral profiles.", "Automation-Control": 0.8763913512, "Qwen2": "Yes"} {"id": "1841168", "revid": "33011235", "url": "https://en.wikipedia.org/wiki?curid=1841168", "title": "Punch press", "text": "A punch press is a type of machine press used to cut holes in material. It can be small and manually operated and hold one simple die set, or be very large, CNC operated, with a multi-station turret and hold a much larger and complex die set.\nDescription.\nPunch presses are large machines with either a 'C' type frame, or a 'portal' (bridge) type frame. The \"C\" type has the hydraulic ram at the top foremost part, whereas the portal frame is much akin to a complete circle with the ram being centered within the frame to stop frame deflection or distortion.\nC type presses have a bed plate which is used to lock the die's bottom bolster. T-bolts are used to lock the die, so this plate contains T-slots into which the T-bolts slide. The general practice is to place these slots diagonally, with one slot horizontal to the longer side of the plate. These slots run up to a central hole made in the plate, large enough to accommodate a bush whose bore is used for dropping the punched part to the bottom of the press. The top of the tool butts against a vertical sliding ram with a clamping system which accommodates only a particular diameter of a threaded cylindrical member called the \"shank\" of the tool. The bottom portion of the tool is locked to the bottom bed plate and the top portion of the tool is locked to the sliding ram. Top and bottom portions of the tool are generally guided by suitable pillar and bush assemblies, which protect the punching elements of the tool.\nGenerally the tool is placed slightly above the bottom bed plate by providing two parallel blocks accurately ground to the same size.
This is necessary because, in many tools, scrap (waste cut pieces) is discharged through the bottom element of the tool, not necessarily at the centre of the tool. The scrap or the blank (the required portion) comes out from the die at different places. These have to be taken out horizontally from between the parallel blocks; otherwise they accumulate inside the tool itself and cause severe damage to it.\nIn very heavy presses with higher tonnage, the sliding ram also has a thick plate with T-slots for locking the top plate of the tool (called the top bolster). In such cases the threaded cylinder called the shank is not attached to the tool. The clamps are either mechanical (manually operated using spanners) or air operated varieties.\nTurret type punch press machines have a table or bed with brushes or rollers to allow the sheet metal workpiece to traverse with low friction. Brushes are used where scratches on the workpiece must be minimized, as with brushed aluminium or highly polished materials.\nThe punch press is characterized by parameters such as:\nPunch presses are usually referred to by their tonnage and table size. In a production environment, a 30-ton press is the machine most commonly used today. The tonnage needed to cut and form the material is well known, so sizing tooling for a specific job is a fairly straightforward task. Depending on the requirement, the tonnage may even go up to 2000- to 2500-ton presses.\nDie set.\nA die set consists of a set of punches (male) and dies (female) which, when pressed together, form a hole in a workpiece (and may also deform the workpiece in some desired manner). The punches and dies are removable, with the punch being attached to the ram during the punching process.
The ram moves up and down in a vertically linear motion, forcing the punch through the material into the die.\nAxis.\nThe main bed of most machines is called the 'X' axis, with the 'Y' axis being at right angles to that and allowed to traverse under CNC control. Dependent on the size of the machine, the beds, and the sheet metal workpiece weight, the motors required to move these axes will vary in size and power. Older styles of machines used DC motors; however, with advances in technology, today's machines mostly use AC brushless motors for drives.\nCNC-controlled operation.\nTo start a cycle, the CNC controller commands the drives to move the table along the X and the Y axis to a desired position. Once in position, the control initiates the punching sequence and pushes the ram from top dead center (TDC) to bottom dead center (BDC) through the material plane. (The terms BDC and TDC go back to older presses with pneumatic or hydraulic clutches. On today's machines BDC/TDC do not actually exist but are still used for the bottom and top of a stroke.)\nOn its stroke from TDC to BDC, the punch enters the material, pushing it through the die, obtaining the shape determined by the design of the punch and die set. The piece of material (slug) cut from the workpiece is ejected through the die and bolster plate and collected in a scrap container. The return to TDC signals the control to begin the next cycle.\nThe punch press is used for high volume production. Cycle times are often measured in milliseconds. Material yield is measured as the percentage of parts to waste per sheet processed. CAD/CAM programs maximize yield by nesting parts in the layout of the sheet.\nDrive type.\nFlywheel drive.\nMost punch presses today are hydraulically powered. Older machines, however, have mechanically driven rams, meaning the power to the ram is provided by a heavy, constantly rotating flywheel. The flywheel drives the ram using a Pitman arm.
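The tonnage sizing mentioned earlier can be sketched as a quick calculation, since punching force is commonly estimated as cut perimeter × material thickness × shear strength. The hole size and the ~345 MPa shear strength for mild steel below are illustrative assumptions, not values taken from this article:

```python
import math

def punch_force_tons(perimeter_mm, thickness_mm, shear_MPa):
    """Rough punching-force estimate: F = perimeter * thickness * shear strength.
    mm * mm * N/mm^2 gives newtons; divide by 9806.65 N per metric ton-force."""
    force_N = perimeter_mm * thickness_mm * shear_MPa
    return force_N / 9806.65

# Illustrative example: a 50 mm round hole in 3 mm mild steel (assumed ~345 MPa shear)
tons = punch_force_tons(math.pi * 50.0, 3.0, 345.0)  # roughly 16.6 tons-force
```

On this estimate, a single 50 mm hole in 3 mm steel sits comfortably within the capacity of the 30-ton press described above.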
In the 19th century, the flywheels were powered by leather drive belts attached to line shafting, which in turn ran to a steam plant. In the modern workplace, the flywheel is powered by an electric motor.\nMechanical punch press.\nMechanical punch presses fall into two distinct types, depending on the type of clutch or braking system with which they are equipped. Generally, older presses are \"full revolution\" presses that require a full revolution of the crankshaft for them to come to a stop. Full revolution clutch presses are known to be dangerous and are outlawed in many countries unless the pinch point is fully guarded. This is because the braking mechanism depends on a set of raised keys or \"dogs\" falling into matching slots to stop the ram. A full revolution clutch can only bring the ram to a stop at the same location - top dead center. Newer presses are often \"part revolution\" presses equipped with braking systems identical to the brakes on commercial trucks. When air is applied, a band-type brake expands and allows the crankshaft to revolve. When the stopping mechanism is applied, the air is bled, causing the clutch to open and the braking system to close, stopping the ram in any part of its rotation. Modern part revolution clutch and brake units are normally combined units that operate in a fail-safe mode: a dual air safety valve engages the clutch and starts slide motion, and the brake is applied by springs.\nHydraulic punch press.\nHydraulic punch presses power the ram with a hydraulic cylinder rather than a flywheel, and are either valve controlled or valve and feedback controlled. Valve controlled machines usually allow a one-stroke operation, allowing the ram to stroke up and down when commanded. Feedback-controlled systems allow the ram to be proportionally controlled to within fixed points as commanded.
\nThis allows greater control over the stroke of the ram, and increases punching rates as the ram no longer has to complete the traditional full stroke up and down but can operate within a very short window of stroke.\nServo drive turret punch press.\nA servo drive turret punch press uses twin AC servo drives directly coupled to the drive shaft. This drive system combines the simplicity of the original clutch and brake technology with the speed of a hydraulic ram driven system. This results in high performance, reliability, and lower operating costs. A servo drive press system has no complex hydraulics or oil-cooling chillers, thus reducing maintenance and repair costs. A turret press can be equipped with advanced technology that stores and reuses energy generated during ram deceleration, providing extended electrical power savings.", "Automation-Control": 0.9690651894, "Qwen2": "Yes"} {"id": "1541115", "revid": "7903804", "url": "https://en.wikipedia.org/wiki?curid=1541115", "title": "Pistonless rotary engine", "text": "A pistonless rotary engine is an internal combustion engine that does not use pistons in the way a reciprocating engine does. Designs vary widely but typically involve one or more rotors, sometimes called rotary pistons. Although many different designs have been constructed, only the Wankel engine has achieved widespread adoption.\nThe term rotary combustion engine has been used as a name for these engines to distinguish them from early (generally up to the early 1920s) aircraft engines and motorcycle engines also known as \"rotary engines\". 
However, both continue to be called \"rotary engines\" and only the context determines which type is meant, whereas the \"pistonless\" prefix is less ambiguous.\nPistonless rotary engines.\nA pistonless rotary engine replaces the linear reciprocating motion of a piston with more complex compression/expansion motions with the objective of improving some aspect of the engine's operation, such as: higher efficiency thermodynamic cycles, lower mechanical stress, lower vibration, higher compression, or less mechanical complexity. The Wankel engine is the only successful pistonless rotary engine, but many similar concepts have been proposed and are at various stages of development. Examples of rotary engines include:", "Automation-Control": 0.73222363, "Qwen2": "Yes"} {"id": "40545374", "revid": "8766034", "url": "https://en.wikipedia.org/wiki?curid=40545374", "title": "Load rejection", "text": "Load rejection in an electric power system is the condition in which there is a sudden load loss in the system which causes the generating equipment to go over-frequency.\nA load rejection test is part of commissioning for power systems to confirm that the system can withstand a sudden loss of load and return to normal operating conditions using its governor. Load banks are normally used for these tests.", "Automation-Control": 0.8870129585, "Qwen2": "Yes"} {"id": "36555728", "revid": "1937176", "url": "https://en.wikipedia.org/wiki?curid=36555728", "title": "ION LMD", "text": "The ION LMD system is a laser microdissection system, and the name of a device, that follows the Gravity-Assisted Microdissection (GAM) method. This non-contact laser microdissection system makes cell isolation for further genetic analysis possible.
It was the first laser microdissection system developed in Asia.\nHistory.\nA prototype of the ION LMD system was developed in 2004.\nThe first generation of ION LMD followed in 2005, the second generation (so-called G2) in 2008, and the third generation (so-called ION LMD Pro) in 2012.\nManufacturer.\nJungWoo F&B was founded in 1994, and offers various factory automation products for clients in the semiconductor, consumer electronics, LCD, automotive manufacturing and ship-building industries. In 2003, the company entered the bio-mechanics business for the medical laboratory market and developed the ION LMD system, which is utilized in cancer research.\nAwards.\nThe ION LMD system has received several awards.", "Automation-Control": 0.9481340647, "Qwen2": "Yes"} {"id": "36960047", "revid": "42522270", "url": "https://en.wikipedia.org/wiki?curid=36960047", "title": "Thin-wall injection molding", "text": "Thin wall injection molding is a specialized form of conventional injection molding that focuses on mass-producing plastic parts that are thin and light, so that material cost savings can be made and cycle times can be as short as possible. Shorter cycle times mean higher productivity and lower costs per part.\nThe definition of thin wall is really about the size of the part compared to its wall thickness. For any particular plastic part, the thinner the wall, the harder it is to manufacture using the injection molding process. The size of a part puts a limit on how thin the wall thickness can be.
For packaging containers, thin wall means wall thicknesses that are less than 0.025 inch (0.64 mm) with a flow length to wall thickness ratio greater than 200.\nMarkets.\nThe trend towards thin wall molding continues to increase in many plastic industries as plastic material and energy costs continue to rise and delivery lead times are squeezed.\nThe following industries make use of thin wall molding: \nExamples.\nPlastic resins suitable for thin-wall molding should have high-flow properties, particularly low melt viscosity. In addition, they need to be robust enough to avoid degradation from the heat generated by high shear rates (high injection speeds).\nSome plastic manufacturers make plastics specifically for thin wall applications that have excellent flow properties inside the mold cavity. For example, plastic manufacturer Sabic has a polypropylene food contact grade plastic which is specifically designed for thin wall margarine containers and lids.\nAnother plastic manufacturer, Bayer, makes a blend of Polycarbonate (PC) and Acrylonitrile butadiene styrene (ABS) specifically designed to make thin wall mobile housings.\nEquipment.\nPlastic injection molding machine.\nCompared to conventional injection molding, thin wall molding requires molding machines that are designed and built to withstand higher stresses and injection pressures. The molding machine's computer control should also be precise in order to make quality parts. For this reason these molding machines are more expensive than general purpose machines. \nThin-wall-capable machines usually also have accumulator-assisted clamps to accommodate fast cycle times.\nRegular maintenance schedules must be completed so that the machine and part quality do not suffer. These machines usually work 24/7, so they need to be well maintained.\nInjection mold design.\nAs with the injection molding machines, injection molds need to be robust enough to withstand high stresses and pressures.
Heavy mold construction with through-hardened tool steels will ensure a long-lasting mold.\nThe mold must also have a well designed cooling system so that heat can be quickly extracted from the hot plastic part, allowing fast cycle times. To achieve this, cooling channels need to be designed close to the molding surface. \nCleaning the mold on a daily basis is also a critical requirement to maintain part quality.\nRobotics.\nIn countries where manual labour is expensive, robots are commonly used to remove the plastic parts from the mold and order them into equal stacks. These robots are fixed to the molding machine and need to be fast and reliable.\nPlastic injection molding process.\nThe range of process parameters employed for thin wall molded parts is considerably narrower than that of conventional injection molding, because thin parts are more difficult for the injection unit of the machine to fill than thicker parts. Even with optimally designed parts and molds, it is still more difficult to produce parts with thin walls.\nConsistent injection speeds and pressures are required to maintain the quality of the parts produced from thin wall molding.
A properly trained molding technician, who understands and operates the machine within the confines of the narrow processing window at which the molded product's production cost-effectiveness is optimized, will ensure that the process produces quality parts throughout production.", "Automation-Control": 0.9877184629, "Qwen2": "Yes"} {"id": "36964367", "revid": "26906394", "url": "https://en.wikipedia.org/wiki?curid=36964367", "title": "Disjunctive graph", "text": "In the mathematical modeling of job shop scheduling problems, disjunctive graphs are a way of modeling a system of tasks to be scheduled and timing constraints that must be respected by the schedule.\nThey are mixed graphs, in which vertices (representing tasks to be performed) may be connected by both directed and undirected edges (representing timing constraints between tasks). The two types of edges represent constraints of two different types:\nPairs of tasks that have no constraint on their ordering – they can be performed in either order or even simultaneously – are disconnected from each other in the graph.\nA valid schedule for the disjunctive graph may be obtained by finding an acyclic orientation of the undirected edges – that is, deciding for each pair of non-simultaneous tasks which is to be first, without introducing any circular dependencies – and then ordering the resulting directed acyclic graph. In particular, suppose that all tasks have equal length and the goal is to find a schedule that minimizes the makespan, the total time until all tasks have been completed. In this case, the makespan can be computed from the longest path in the oriented graph, which can be found in polynomial time for directed acyclic graphs. However, the orientation stage of the solution is much more difficult: it is NP-hard to find the acyclic orientation that minimizes the length of the longest path.
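The easy direction, evaluating the makespan of a chosen acyclic orientation as a longest path in topological order, can be sketched as follows (unit-length tasks; the small example instance is illustrative):

```python
from collections import deque

def makespan(n, edges):
    """Makespan of an oriented disjunctive graph on tasks 0..n-1 with
    unit-length tasks: the number of vertices on the longest path,
    computed in topological (Kahn) order."""
    adj = [[] for _ in range(n)]
    indeg = [0] * n
    for u, v in edges:
        adj[u].append(v)
        indeg[v] += 1
    dist = [1] * n                      # each task occupies one time slot
    queue = deque(i for i in range(n) if indeg[i] == 0)
    seen = 0
    while queue:
        u = queue.popleft()
        seen += 1
        for v in adj[u]:
            dist[v] = max(dist[v], dist[u] + 1)
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    if seen != n:
        raise ValueError("orientation is not acyclic")
    return max(dist)

# Hypothetical 4-task instance: 0 before 1 and 3, 1 before 2
# makespan(4, [(0, 1), (1, 2), (0, 3)]) -> 3 time slots
```

The error branch corresponds to orientations that introduce circular dependencies; only acyclic orientations yield a valid schedule.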
In particular, by the Gallai–Hasse–Roy–Vitaver theorem, if all edges are initially undirected, then orienting them to minimize the longest path is equivalent to finding an optimal graph coloring of the initial undirected graph.", "Automation-Control": 0.8931853175, "Qwen2": "Yes"} {"id": "71285020", "revid": "23646674", "url": "https://en.wikipedia.org/wiki?curid=71285020", "title": "Redmi 10 5G", "text": "The Redmi 10 5G is a 5G Android-based smartphone developed by Redmi, a sub-brand of Xiaomi Inc. It was introduced on 29 March 2022 together with the Redmi Note 11S 5G and the global version of the Redmi Note 11 Pro+ 5G. Earlier, in China, the Redmi 10 5G had been released alongside the Redmi Note 11E Pro as the Redmi Note 11E, and it was later released in India, alongside the Redmi 11 Prime and Redmi A1, as the Redmi 11 Prime 5G with a different front camera.\nThe Redmi 10 5G was also launched under the POCO brand as the Poco M4 5G (stylized and marketed as POCO M4 5G) with a different design and primary camera. The Indian variant of the Poco M4 5G has more advanced cameras.\nIn China, the global Poco M4 5G is sold as the Redmi Note 11R with an Ice Crystal Galaxy (silver) color option instead of POCO Yellow and a bigger memory configuration.\nDesign.\nThe front is made of Gorilla Glass 3. The back is made of plastic with a wavy texture.\nThe design of the back of the Redmi 10 5G/Note 11E and Redmi 11 Prime 5G is similar to Oppo smartphones, while on the Poco M4 5G/Note 11R it is similar to the Pixel 6. All models have IP53 dust and splash protection.\nOn the bottom side there are a USB-C port, speaker and microphone. On the top side there are an additional microphone, IR blaster and 3.5mm audio jack. On the left side there is a dual SIM tray with microSD slot.
On the right side are a volume rocker and power button with a mounted fingerprint scanner.\nRedmi 10 5G sells in 3 colors: Graphite Gray, Chrome Silver and Aurora Green.\nRedmi Note 11E sells in 3 colors: Mysterious Darkness (gray), Ice Crystal Galaxy (silver) and Microbrewed Mint (green).\nRedmi 11 Prime 5G sells in 3 colors: Thunder Black, Chrome Silver and Meadow Green.\nPoco M4 5G sells in 3 colors: Power Black, Cool Blue and POCO Yellow.\nRedmi Note 11R sells in 3 colors: Polar Blue Ocean, Mysterious Darkness (gray) and Ice Crystal Galaxy (silver).\nSpecifications.\nHardware.\nPlatform.\nThe smartphones have, like the Redmi Note 10 5G, the MediaTek Dimensity 700 with Mali-G57 MC2.\nBattery.\nThese devices have a non-removable battery with a capacity of 5000 mAh and 18 W fast charging.\nCamera.\nAll models have a dual rear camera: a 50 MP wide camera on the Redmi 10 5G/Note 11E, Redmi 11 Prime 5G and Indian Poco M4 5G, or 13 MP on the global Poco M4 5G/Note 11R, plus a 2 MP depth sensor. The Redmi 10 5G/Note 11E and Poco M4 5G have a 5 MP front camera, while the Indian Poco M4 5G and the Redmi 11 Prime 5G have an 8 MP front camera. Both the rear and front cameras can record video in 1080p@30fps.\nDisplay.\nThe phones have a 6.58\" IPS LCD display with Full HD+ (2408 × 1080; ~401 ppi) image resolution, 90 Hz refresh rate and a waterdrop notch.\nMemory.\nThe Redmi 10 5G is sold in 4/64, 4/128 and 6/128 GB configurations, the Redmi Note 11E ― 4/128 and 6/128 GB, the Redmi 11 Prime 5G and Poco M4 5G ― 4/64 and 6/128 GB, and the Redmi Note 11R ― 4/128, 6/128 and 8/128 GB.\nAll models have LPDDR4X RAM and UFS 2.2 storage, which can be expanded by microSD up to 1 TB in the global Poco M4 5G/Redmi Note 11R and up to 512 GB in the other models.\nSoftware.\nInitially, the Redmi 10 5G/Note 11E, Redmi 11 Prime 5G, and Redmi Note 11R were released with the MIUI 13 custom skin, and the Poco M4 5G was released with MIUI 13 for POCO, both based on Android 12. The Redmi devices were updated to MIUI 14 and the Poco M4 5G to MIUI 14 for POCO.
Both ROMs are based on Android 13.", "Automation-Control": 0.997522831, "Qwen2": "Yes"} {"id": "1291319", "revid": "1137780513", "url": "https://en.wikipedia.org/wiki?curid=1291319", "title": "Time-invariant system", "text": "In control theory, a time-invariant (TI) system has a time-dependent system function that is not a direct function of time. Such systems are regarded as a class of systems in the field of system analysis. The time-dependent system function is a function of the time-dependent input function. If this function depends \"only\" indirectly on the time-domain (via the input function, for example), then that is a system that would be considered time-invariant. Conversely, any direct dependence on the time-domain of the system function could be considered as a \"time-varying system\".\nMathematically speaking, \"time-invariance\" of a system is the following property:\nIn the language of signal processing, this property can be satisfied if the transfer function of the system is not a direct function of time except as expressed by the input and output.\nIn the context of a system schematic, this property can also be stated as follows, as shown in the figure to the right:\nIf a time-invariant system is also linear, it is the subject of linear time-invariant (LTI) theory, with direct applications in NMR spectroscopy, seismology, circuits, signal processing, control theory, and other technical areas. Nonlinear time-invariant systems lack a comprehensive, governing theory. Discrete time-invariant systems are known as shift-invariant systems.
Systems which lack the time-invariant property are studied as time-variant systems.\nSimple example.\nTo demonstrate how to determine if a system is time-invariant, consider the two systems:\nSince the System Function formula_4 for system A explicitly depends on \"t\" outside of formula_5, it is not time-invariant: its time-dependence is not expressed purely through the input function.\nIn contrast, system B's time-dependence is only a function of the time-varying input formula_5. This makes system B time-invariant.\nThe Formal Example below shows in more detail that while System B is a Shift-Invariant System as a function of time, \"t\", System A is not.\nFormal example.\nA more formal proof of why systems A and B above differ is now presented. To perform this proof, the second definition will be used.\nMore generally, the relationship between the input and output is \nand its variation with time is\nFor time-invariant systems, the system properties remain constant with time, \nApplied to Systems A and B above:\nAbstract example.\nWe can denote the shift operator by formula_26 where formula_27 is the amount by which a vector's index set should be shifted. For example, the \"advance-by-1\" system\ncan be represented in this abstract notation by\nwhere formula_30 is a function given by\nwith the system yielding the shifted output\nSo formula_33 is an operator that advances the input vector by 1.\nSuppose we represent a system by an operator formula_34.
This system is time-invariant if it commutes with the shift operator, i.e.,\nIf our system equation is given by\nthen it is time-invariant if we can apply the system operator formula_34 on formula_30 followed by the shift operator formula_26, or we can apply the shift operator formula_26 followed by the system operator formula_34, with the two computations yielding equivalent results.\nApplying the system operator first gives\nApplying the shift operator first gives\nIf the system is time-invariant, then", "Automation-Control": 0.6569590569, "Qwen2": "Yes"} {"id": "1291342", "revid": "11521989", "url": "https://en.wikipedia.org/wiki?curid=1291342", "title": "Time-variant system", "text": "A time-variant system is a system whose output response depends on moment of observation as well as moment of input signal application. In other words, a time delay or time advance of input not only shifts the output signal in time but also changes other parameters and behavior. Time variant systems respond differently to the same input at different times. The opposite is true for time invariant systems (TIV).\nOverview.\nThere are many well developed techniques for dealing with the response of linear time invariant systems, such as Laplace and Fourier transforms. However, these techniques are not strictly valid for time-varying systems. A system undergoing slow time variation in comparison to its time constants can usually be considered to be time invariant: they are close to time invariant on a small scale. An example of this is the aging and wear of electronic components, which happens on a scale of years, and thus does not result in any behaviour qualitatively different from that observed in a time invariant system: day-to-day, they are effectively time invariant, though year to year, the parameters may change. 
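The defining shift test, that delaying the input of a time-invariant system merely delays its output, can be checked numerically. The two toy discrete systems below are hypothetical stand-ins for the usual textbook pair: one scales each sample by its time index (time-variant), the other by a constant (time-invariant):

```python
def shift(x, k):
    """Delay a finite discrete signal by k samples, padding with zeros."""
    return [0] * k + x[:len(x) - k]

def is_time_invariant(system, x, k=3):
    """True when 'delay then apply' equals 'apply then delay' on signal x."""
    return system(shift(x, k)) == shift(system(x), k)

sys_a = lambda x: [t * v for t, v in enumerate(x)]  # y[t] = t * x[t], time-variant
sys_b = lambda x: [10 * v for v in x]               # y[t] = 10 * x[t], time-invariant

signal = [1, 2, 3, 4, 5, 6]
# is_time_invariant(sys_a, signal) -> False
# is_time_invariant(sys_b, signal) -> True
```

A single passing check does not prove invariance (it must hold for every input and shift), but one failing check, as for sys_a here, is enough to show a system is time-variant.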
Other linear time variant systems may behave more like nonlinear systems, if the system changes quickly – significantly differing between measurements.\nThe following things can be said about a time-variant system:\nLinear time-variant systems.\nLinear time-variant (LTV) systems are those whose parameters vary with time according to previously specified laws. Mathematically, there is a well defined dependence of the system over time and over the input parameters that change over time.\nIn order to solve time-variant systems, the algebraic methods must take the initial conditions of the system into account, i.e. whether the system is a zero-input or a non-zero-input system.\nExamples of time-variant systems.\nThe following time varying systems cannot be modelled by assuming that they are time invariant:", "Automation-Control": 0.7937988639, "Qwen2": "Yes"} {"id": "5181161", "revid": "688249028", "url": "https://en.wikipedia.org/wiki?curid=5181161", "title": "RP-570", "text": "RP-570 is a communications protocol used in industrial environments to communicate between a front-end computer and the substation to be controlled.\nIt is a legacy SCADA protocol and is based on the low-level protocol IEC TC57, format class 1.2.\nRP-570 stands for: \n\"RTU Protocol based on IEC 57 part 5-1 (present IEC 870) version 0 or 1\"", "Automation-Control": 0.9910166264, "Qwen2": "Yes"} {"id": "70177408", "revid": "1461430", "url": "https://en.wikipedia.org/wiki?curid=70177408", "title": "Stochastic gradient descent", "text": "(Stochastic) variance reduction is an algorithmic approach to minimizing functions that can be decomposed into finite sums.
By exploiting the finite sum structure, variance reduction techniques are able to achieve convergence rates that are impossible to achieve with methods that treat the objective as an infinite sum, as in the classical stochastic approximation setting.\nVariance reduction approaches are widely used for training machine learning models such as logistic regression and support vector machines, as these problems have finite-sum structure and uniform conditioning that make them ideal candidates for variance reduction.\nFinite sum objectives.\nA function formula_1 is considered to have finite sum structure if it can be decomposed into a summation or average:\nwhere the function value and derivative of each formula_3 can be queried independently. Although variance reduction methods can be applied for any positive formula_4 and any formula_3 structure, their favorable theoretical and practical properties arise when formula_4 is large compared to the condition number of each formula_3, and when the formula_3 have similar (but not necessarily identical) Lipschitz smoothness and strong convexity constants.\nThe finite sum structure should be contrasted with the stochastic approximation setting, which deals with functions of the form formula_9\nwhich is the expected value of a function depending on a random variable formula_10. Any finite sum problem can be optimized using a stochastic approximation algorithm by using formula_11.\nRapid convergence.\nStochastic variance reduced methods without acceleration are able to find a minimum of formula_1 within accuracy formula_13, i.e.
formula_14 in a number of steps of the order:\nThe number of steps depends only logarithmically on the level of accuracy required, in contrast to the stochastic approximation framework, where the number of steps formula_16 required grows proportionally to the accuracy required.\nStochastic variance reduction methods converge almost as fast as the gradient descent method's formula_17 rate, despite using only a stochastic gradient, at a formula_18 lower cost than gradient descent.\nAccelerated methods in the stochastic variance reduction framework achieve even faster convergence rates, requiring only\nsteps to reach formula_20 accuracy, potentially formula_21 faster than non-accelerated methods. Lower complexity bounds for the finite sum class establish that this rate is the fastest possible for smooth, strongly convex problems.\nApproaches.\nVariance reduction approaches fall within three main categories: table averaging methods, full-gradient snapshot methods, and dual methods. Each category contains methods designed for dealing with convex, non-smooth, and non-convex problems, each differing in hyper-parameter settings and other algorithmic details. \nSAGA.\nIn the SAGA method, the prototypical table averaging approach, a table of size formula_4 is maintained that contains the last gradient witnessed for each formula_3 term, which we denote formula_24. At each step, an index formula_25 is sampled, and a new gradient formula_26 is computed. The iterate formula_27 is updated with:\nand afterwards table entry formula_25 is updated with formula_30.\nSAGA is among the most popular of the variance reduction methods due to its simplicity, easily adaptable theory, and excellent performance. It is the successor of the SAG method, improving on its flexibility and performance.
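The SAGA update just described can be sketched in a few lines of NumPy. The least-squares objective, random data, step size, and iteration budget below are illustrative assumptions used only to exercise the update rule:

```python
import numpy as np

# Toy finite-sum problem (an assumption for illustration): least squares
# f(x) = (1/n) * sum_i 0.5 * (a_i . x - b_i)^2, whose minimizer is x_true.
rng = np.random.default_rng(0)
n, d = 50, 5
A = rng.normal(size=(n, d))
x_true = rng.normal(size=d)
b = A @ x_true

def grad_i(x, i):
    """Gradient of the i-th summand f_i."""
    return (A[i] @ x - b[i]) * A[i]

def saga(steps=5000):
    L = np.max(np.sum(A**2, axis=1))      # per-term smoothness constant
    gamma = 1.0 / (3.0 * L)               # a standard SAGA step size
    x = np.zeros(d)
    table = np.array([grad_i(x, i) for i in range(n)])  # last gradient per term
    avg = table.mean(axis=0)              # running table average
    for _ in range(steps):
        j = rng.integers(n)
        g = grad_i(x, j)
        # Variance-reduced update: new gradient minus stored one, plus average
        x = x - gamma * (g - table[j] + avg)
        avg += (g - table[j]) / n         # keep the average consistent
        table[j] = g                      # store the freshest gradient
    return x

x_hat = saga()
```

Because the table stores the last gradient seen for each term, the extra memory cost is one gradient per summand, and incrementally maintaining the table average keeps each step O(d).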
\nSVRG.\nThe stochastic variance reduced gradient method (SVRG), the prototypical snapshot method, uses a similar update, except that instead of the average of a table it uses a full gradient that is reevaluated at a snapshot point formula_31 at regular intervals of formula_32 iterations. The update becomes:\nThis approach requires two stochastic gradient evaluations per step, one to compute formula_26 and one to compute formula_35, whereas table averaging approaches need only one. \nDespite the high computational cost, SVRG is popular as its simple convergence theory is highly adaptable to new optimization settings. It also has lower storage requirements than tabular averaging approaches, which makes it applicable in many settings where tabular methods cannot be used.\nSDCA.\nExploiting the dual representation of the objective leads to another variance reduction approach that is particularly suited to finite sums where each term has a structure that makes computing the convex conjugate formula_36 or its proximal operator tractable. The standard SDCA method considers finite sums that have additional structure compared to the generic finite sum setting:\nwhere each formula_3 is one-dimensional and each formula_39 is a data point associated with formula_3.\nSDCA solves the dual problem:\nby a stochastic coordinate ascent procedure, where at each step the objective is optimized with respect to a randomly chosen coordinate formula_42, leaving all other coordinates the same. An approximate primal solution formula_43 can be recovered from the formula_44 values:\nThis method obtains theoretical rates of convergence similar to other stochastic variance reduced methods, while avoiding the need to specify a step-size parameter. It is fast in practice when formula_46 is large, but significantly slower than the other approaches when formula_46 is small.\nAccelerated approaches.\nAccelerated variance reduction methods are built upon the standard methods above.
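Before turning to accelerated variants, the SVRG snapshot update described above can be sketched as follows. The toy least-squares problem, step size, and epoch length are illustrative assumptions, not part of the method's definition:

```python
import numpy as np

# Toy finite-sum least-squares problem (illustrative assumption).
rng = np.random.default_rng(1)
n, d = 50, 5
A = rng.normal(size=(n, d))
x_true = rng.normal(size=d)
b = A @ x_true

def grad_i(x, i):
    """Gradient of the i-th summand."""
    return (A[i] @ x - b[i]) * A[i]

def full_grad(x):
    """Full gradient over all n terms."""
    return A.T @ (A @ x - b) / n

def svrg(epochs=30, m=2000):
    L = np.max(np.sum(A**2, axis=1))   # per-term smoothness constant
    gamma = 0.1 / L                    # conservative illustrative step size
    x = np.zeros(d)
    for _ in range(epochs):
        snap = x.copy()                # snapshot point
        mu = full_grad(snap)           # full gradient, reevaluated per epoch
        for _ in range(m):
            j = rng.integers(n)
            # Two stochastic gradients per step: at x and at the snapshot
            v = grad_i(x, j) - grad_i(snap, j) + mu
            x = x - gamma * v
    return x

x_hat = svrg()
```

Each inner step costs two stochastic gradient evaluations, and each epoch adds one full-gradient pass at the snapshot, matching the trade-off noted above; only the current iterate and the snapshot need to be stored.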
The earliest approaches make use of proximal operators to accelerate convergence, either approximately or exactly. Direct acceleration approaches have also been developed.\nCatalyst acceleration.\nThe catalyst framework uses any of the standard methods above as an inner optimizer to approximately solve a proximal operator:\nafter which it uses an extrapolation step to determine the next formula_49:\nThe catalyst method's flexibility and simplicity make it a popular baseline approach. It does not achieve the optimal rate of convergence among accelerated methods, being potentially slower by up to a log factor in the hyper-parameters.\nPoint-SAGA.\nProximal operations may also be applied directly to the formula_3 terms to yield an accelerated method. The Point-SAGA method replaces the gradient operations in SAGA with proximal operator evaluations, resulting in a simple, direct acceleration method:\nwith the table update formula_53 performed after each step. Here formula_54 is defined as the proximal operator for the formula_55th term:\nUnlike other known accelerated methods, Point-SAGA requires only a single iterate sequence formula_43 to be maintained between steps, and it has the advantage of having only a single tunable parameter formula_58. It obtains the optimal accelerated rate of convergence for strongly convex finite-sum minimization without additional log factors.", "Automation-Control": 0.7741318941, "Qwen2": "Yes"} {"id": "13675124", "revid": "1125920115", "url": "https://en.wikipedia.org/wiki?curid=13675124", "title": "Plant (control theory)", "text": "A plant in control theory is the combination of process and actuator. A plant is often referred to with a transfer function\n(commonly in the s-domain) which indicates the relation between an input signal and the output signal of a system without feedback, commonly determined by physical properties of the system.
An example would be an actuator whose transfer function relates the actuator's input signal to its physical displacement. In a system with feedback, the plant still has the same transfer function, but a control unit and a feedback loop (with their respective transfer functions) are added to the system.", "Automation-Control": 0.9967011213, "Qwen2": "Yes"} {"id": "13676033", "revid": "1138071573", "url": "https://en.wikipedia.org/wiki?curid=13676033", "title": "Common Industrial Protocol", "text": "The Common Industrial Protocol (CIP) is an industrial protocol for industrial automation applications. It is supported by ODVA.\nPreviously known as the Control and Information Protocol, CIP encompasses a comprehensive suite of messages and services for the collection of manufacturing automation applications – control, safety, synchronization, motion, configuration and information. It allows users to integrate these manufacturing applications with enterprise-level Ethernet networks and the Internet. It is supported by hundreds of vendors around the world, and is media-independent. CIP provides a unified communication architecture throughout the manufacturing enterprise. It is used in EtherNet/IP, DeviceNet, CompoNet and ControlNet.\nODVA is the organization that supports network technologies built on the Common Industrial Protocol (CIP). These also currently include application extensions to CIP: CIP Safety, CIP Motion and CIP Sync.", "Automation-Control": 0.9999938011, "Qwen2": "Yes"} {"id": "31422551", "revid": "1060827576", "url": "https://en.wikipedia.org/wiki?curid=31422551", "title": "Youla–Kucera parametrization", "text": "In control theory, the Youla–Kučera parametrization (also simply known as the Youla parametrization) is a formula that describes all possible stabilizing feedback controllers for a given plant \"P\", as a function of a single parameter \"Q\". \nDetails.\nThe YK parametrization is a general result.
It is a fundamental result of control theory that launched an entirely new area of research and found application in, among other areas, optimal and robust control. The engineering significance of the YK formula is that if one wants to find a stabilizing controller that meets some additional criterion, one can adjust the parameter \"Q\" such that the desired criterion is met.\nFor ease of understanding, and as suggested by Kučera, it is best described for three increasingly general kinds of plant.\nStable SISO plant.\nLet formula_1 be the transfer function of a stable single-input single-output (SISO) system. Further, let formula_2 be a set of stable and proper functions of \"formula_3\". Then, the set of all proper stabilizing controllers for the plant formula_1 can be defined as\nwhere formula_6 is an arbitrary proper and stable function of \"s\". It can be said that formula_6 parametrizes all stabilizing controllers for the plant formula_1.\nGeneral SISO plant.\nConsider a general plant with a transfer function formula_1. Further, the transfer function can be factorized as\nNow, solve the Bézout identity of the form\nwhere the variables to be found formula_14 must also be proper and stable.\nAfter proper and stable formula_15 are found, we can define one stabilizing controller of the form formula_16. Once we have one stabilizing controller at hand, we can define all stabilizing controllers using a parameter formula_6 that is proper and stable. The set of all stabilizing controllers is defined as\nGeneral MIMO plant.\nIn a multiple-input multiple-output (MIMO) system, consider a transfer matrix formula_19. It can be factorized using right coprime factors formula_20 or left factors formula_21. The factors must be proper, stable and doubly coprime, which ensures that the system formula_19 is controllable and observable.
This can be written as a Bézout identity of the form:\nAfter finding formula_24 that are stable and proper, we can define the set of all stabilizing controllers formula_25 using the left or right factors, provided negative feedback is used.\nwhere formula_27 is an arbitrary stable and proper parameter. \nLet formula_1 be the transfer function of the plant and let formula_29 be a stabilizing controller. Let their right coprime factorizations be:\nthen all stabilizing controllers can be written as\nwhere formula_33 is stable and proper.", "Automation-Control": 0.967281878, "Qwen2": "Yes"} {"id": "35154374", "revid": "1101750", "url": "https://en.wikipedia.org/wiki?curid=35154374", "title": "Buick Century Cruiser", "text": "The Buick Century Cruiser was a dream car (concept car) created by Buick in 1969. It was conceived as being designed for automated highways, where steering wheels would be unnecessary. The vehicle offered swivel contour seats, a refrigerator, and a TV set. The computerized car would be programmed by punch cards, with predetermined routes based on information provided by electric highway centers. The vehicle would be monitored by a radar-like device. The vehicle would also have a device for steering it manually as well as for controlling speed. The canopy would slide open for easy cockpit access. It was related to the Firebird IV concept car and shares an appearance with the GM-X Stiletto.", "Automation-Control": 0.7254530191, "Qwen2": "Yes"} {"id": "6622091", "revid": "57939", "url": "https://en.wikipedia.org/wiki?curid=6622091", "title": "OPEX (corporation)", "text": "OPEX Corporation is a manufacturing company based in Moorestown, New Jersey. It primarily manufactures warehouse automation equipment, high-volume mailroom automation equipment, document scanners, and remittance processors.
Their warehouse automation products have been implemented at retail and e-commerce companies such as HBC, BOXED, and iHERB.\nOPEX employs approximately 1,600 people worldwide, with locations in Moorestown Township, New Jersey, USA; Plano, Texas; Bolton, England; Villebon-sur-Yvette, France; and Wiesbaden, Germany.", "Automation-Control": 0.8303197622, "Qwen2": "Yes"} {"id": "25208477", "revid": "12396222", "url": "https://en.wikipedia.org/wiki?curid=25208477", "title": "VCDIFF", "text": "VCDIFF is a format and an algorithm for delta encoding, described in IETF's RFC 3284. The algorithm is based on Jon Bentley and Douglas McIlroy's paper \"Data Compression Using Long Common Strings\", written in 1999. VCDIFF is used as one of the delta encoding algorithms in \"Delta encoding in HTTP\" (RFC 3229) and was employed in Google's Shared Dictionary Compression Over HTTP technology, formerly used in their Chrome browser.\nDelta instructions.\nVCDIFF has three delta instructions: ADD, COPY, and RUN. ADD adds a new sequence, COPY copies from an old sequence, and RUN adds repeated data.\nImplementations.\nFree software implementations include xdelta (version 3) and open-vcdiff.", "Automation-Control": 0.8790172338, "Qwen2": "Yes"} {"id": "43013627", "revid": "10289486", "url": "https://en.wikipedia.org/wiki?curid=43013627", "title": "Conformal cooling channel", "text": "A conformal cooling channel is a cooling passageway that follows the shape or profile of the mould core or cavity to provide rapid, uniform cooling in injection moulding or blow moulding processes.", "Automation-Control": 0.8003006577, "Qwen2": "Yes"} {"id": "22293052", "revid": "28903366", "url": "https://en.wikipedia.org/wiki?curid=22293052", "title": "Four-slide", "text": "A four-slide, also known as a multislide, multi-slide, or four-way, is a metalworking machine tool used in the high-volume manufacture of small stamped components from bar or wire stock.
The press is most simply described as a horizontal stamping press that uses cams to control the tools. The machine is used for progressive or transfer stamping operations.\nDesign.\nA four-slide is quite different from most other presses. The key feature of the machine is its moving slides with tools attached, which strike the workpiece together or in sequence to form it. These slides are driven by four shafts that outline the machine. The shafts are connected by bevel gears so that one shaft is driven by an electric motor, and that shaft's motion drives the other three shafts. Each shaft then has cams, usually of a split type, which drive the slides. This shafting arrangement allows the workpiece to be worked from four sides, which makes this machine extremely versatile. A hole near the center of the machine is provided to expel the completed workpiece.\nAdvantages and disadvantages.\nThe greatest advantage of the four-slide machine is its ability to complete all of the operations required to form the workpiece from start to finish. Moreover, it can handle certain parts that transfer or progressive dies cannot, because it can manipulate from four axes. Due to this flexibility, it reduces the cost of the finished part because it requires fewer machines, fewer setups, and less handling. Also, because only one machine is required, less space is needed for any given workpiece. Compared to standard stamping presses, the tooling is usually inexpensive, due to the simplicity of the tools. A four-slide can usually produce 20,000 to 70,000 finished parts per 16-hour shift, depending on the number of operations per part; this speed usually results in a lower cost per part.\nThe biggest disadvantage is its size constraints. The largest machines can handle stock up to wide, long, and thick. For wires the limit is . Other limits are the travel on the slides, which maxes out at , and the throw of the forming cams, which is between .
The machine is also limited to only shearing and bending operations. Extrusion and upsetting operations are impractical because they hinder the movement of the workpiece to the next station. Drawing and stretching require too much tonnage, and the mechanisms required for the operations are space prohibitive. Finally, this machine is feasible only for high-volume parts because of the long lead time required to set up the tooling.\nMaterials.\nThe material stock used in four-slides is usually limited by its formability and not the machine capabilities. Usually the forming characteristics and bending radii are the most limiting factors. The most commonly used materials are:\nUse.\nItems that are commonly produced on this machine include automotive stampings, hinges, links, clips, and razor blades.", "Automation-Control": 0.8785692453, "Qwen2": "Yes"} {"id": "22323510", "revid": "5230605", "url": "https://en.wikipedia.org/wiki?curid=22323510", "title": "Sanjoy K. Mitter", "text": "Sanjoy Kumar Mitter (December 9, 1933 – June 26, 2023) was a Professor in the Department of Electrical Engineering and Computer Science at MIT who was a noted control theorist.\nLife and career.\nMitter was born in 1933 in Calcutta, India. He received a B.Sc. in mathematics from the University of Calcutta, and a B.Sc. in Engineering at City and Guilds of London Institute. He continued his studies in the United Kingdom and received a Ph.D. from Imperial College of Science and Technology, London. After graduation, he worked at Brown, Boveri & Cie, the Battelle Memorial Institute, and the Central Electricity Generating Board before joining Case Western Reserve University (CWRU) in 1965 as an assistant professor. Mitter became an associate professor at CWRU in 1967 and moved to MIT in 1969. He became a professor of electrical engineering at MIT in 1973.
At MIT, he was director of both the Center for Intelligent Control and the Laboratory for Information and Decision Systems.\nMitter's research was concerned with systems, control and communication. He furnished proofs in nonlinear filtering and optimal control theory, as well as carrying out more applied work in image analysis, computation of optimal controls and reliability of electrical power systems.\nMitter lived in Cambridge, Massachusetts. He died in June 2023.\nHonors and awards.\nMitter received both the Richard E. Bellman Control Heritage Award from the American Automatic Control Council (in 2007) and the IEEE Control Systems Award (in 2000). In 1988 he was elected a member of the National Academy of Engineering \"for outstanding contributions to the theory and applications of automatic control and nonlinear filtering\".", "Automation-Control": 0.6336325407, "Qwen2": "Yes"} {"id": "10234884", "revid": "40524794", "url": "https://en.wikipedia.org/wiki?curid=10234884", "title": "Deep drawing", "text": "Deep drawing is a sheet metal forming process in which a sheet metal blank is radially drawn into a forming die by the mechanical action of a punch. It is thus a shape transformation process with material retention. The process is considered \"deep\" drawing when the depth of the drawn part exceeds its diameter. This is achieved by redrawing the part through a series of dies. \nThe flange region (sheet metal in the die shoulder area) experiences a radial drawing stress and a tangential compressive stress due to the material retention property. These compressive stresses (hoop stresses) result in flange wrinkles (wrinkles of the first order). Wrinkles can be prevented by using a blank holder, the function of which is to facilitate controlled material flow into the die radius. Deep drawing presses, especially in the aerospace and medical industries, require very high accuracy and precision. Sheet hydroforming presses do complex draw work.
Bed size, tonnage, stroke, and speed can be tailored to the specific draw forming application.\nProcess.\nThe total drawing load consists of the ideal forming load and an additional component to compensate for friction in the contacting areas of the flange region and for bending and unbending forces at the die radius. The forming load is transferred from the punch radius through the drawn part wall into the deformation region (sheet metal flange). In the drawn part wall, which is in contact with the punch, the hoop strain is zero, whereby the plane strain condition is reached. In practice, the strain condition is usually only approximately plane. Due to tensile forces acting in the part wall, wall thinning is prominent and results in an uneven part wall thickness, such that the part wall thickness is lowest at the point where the part wall loses contact with the punch, i.e., at the punch radius.\nThe thinnest part thickness determines the maximum stress that can be transferred to the deformation zone. Due to material volume constancy, the flange thickens and results in blank holder contact at the outer boundary rather than on the entire surface. The maximum stress that can be safely transferred from the punch to the blank sets a limit on the maximum blank size (initial blank diameter in the case of rotationally symmetrical blanks). An indicator of material formability is the limiting drawing ratio (LDR), defined as the ratio of the maximum blank diameter that can be safely drawn into a cup without a flange to the punch diameter. Determination of the LDR for complex components is difficult, and hence the part is inspected for critical areas for which an approximation is possible.
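Since the LDR caps how much diameter reduction a single draw can achieve, deep parts are redrawn through a series of dies. A minimal sketch of that staging arithmetic, assuming illustrative limits (a first-draw ratio of 1.9 and a redraw ratio of 1.25; real values depend on material, thickness, tooling and lubrication):

```python
def redraw_stages(blank_dia, punch_dia, first_ratio=1.9, redraw_ratio=1.25):
    """Estimate how many draw stages are needed to turn a flat blank of
    diameter blank_dia into a cup of diameter punch_dia.

    first_ratio and redraw_ratio are hypothetical limiting drawing
    ratios for the first draw and each subsequent redraw.
    """
    if blank_dia / punch_dia <= first_ratio:
        return 1                          # a single draw suffices
    stages, dia = 1, blank_dia / first_ratio
    while dia > punch_dia:                # keep redrawing until at size
        dia /= redraw_ratio
        stages += 1
    return stages

# e.g. a 100 mm blank drawn down to a 30 mm cup needs several stages
stages = redraw_stages(100.0, 30.0)
```

With the assumed ratios, the overall drawing ratio of 100/30 ≈ 3.3 far exceeds the single-draw limit, so the part would pass through several successively smaller dies.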
During severe deep drawing, the material work hardens, and it may be necessary to anneal the parts in controlled-atmosphere ovens to restore the original ductility of the material.\nCommercial applications of this metal shaping process often involve complex geometries with straight sides and radii. In such a case, the term stamping is used in order to distinguish between the deep drawing (radial tension-tangential compression) and stretch-and-bend (along the straight sides) components. Deep drawing is always accompanied by other forming techniques within the press. These other forming methods include:\nOften components are partially deep drawn in order to create a series of diameters throughout the component (as in the image of the deep draw line). It is common to consider this process as a cost-saving alternative to turned parts, which require much more raw material. \nThe sequence of deep drawn components is referred to as a \"deep draw line\". The number of components that form the deep draw line is given by the quantity of \"stations\" available in the press. In the case of mechanical presses, this is determined by the number of cams on the top shaft.\nFor high-precision mass production, it is advisable to use a transfer press, also known as an eyelet press. The advantage of this type of press, compared to conventional progressive presses, is that the parts are transferred from one die to the next by means of so-called \"fingers\". Not only do the fingers transfer the parts, but they also guide the component during the process. This allows parts to be drawn to the deepest depths with the tightest tolerances.\nOther types of presses: \nVariations.\nDeep drawing has been classified into \"conventional\" and \"unconventional\" deep drawing. The main aim of any unconventional deep drawing process is to extend the formability limits of the process.
Some of the unconventional processes include hydromechanical deep drawing, the Hydroform process, the Aquadraw process, the Guerin process, the Marform process and the hydraulic deep drawing process.\nThe Marform process, for example, operates using the principle of rubber pad forming techniques. Deep-recessed parts with either vertical or sloped walls can be formed. In this type of forming, the die rig employs a rubber pad as one tool half and a solid tool half, similar to the die in a conventional die set, to form a component into its final shape. Dies are made of cast light alloys, and the rubber pad is 1.5-2 times thicker than the component to be formed. For Marforming, single-action presses are equipped with die cushions and blank holders. The blank is held against the rubber pad by a blank holder, through which a punch acts as in conventional deep drawing. It is a double-acting apparatus: first the ram slides down, then the blank holder moves; this feature allows it to perform deep drawings (30-40% transverse dimension) with no wrinkles.\nIndustrial uses of deep drawing processes include automotive body and structural parts, aircraft components, utensils and white goods. Complex parts are normally formed using progressive dies in a single forming press or by using a press line.\nWorkpiece materials and power requirements.\nSofter materials are much easier to deform and therefore require less force to draw. The following table shows draw force versus percent reduction for commonly used materials.\nTool materials.\nPunches and dies are typically made of tool steel; however, cheaper (but softer) carbon steel is sometimes used in less severe applications.
It is also common to see cemented carbides used where high wear and abrasion resistance is required.\nAlloy steels are normally used for the ejector system, which kicks the part out, and for durable, heat-resistant blank holders.\nLubrication and cooling.\nLubricants are used to reduce friction between the working material and the punch and die. They also aid in removing the part from the punch. Some examples of lubricants used in drawing operations are heavy-duty emulsions, phosphates, white lead, and wax films. Plastic films covering both sides of the part, when used with a lubricant, will leave the part with a fine surface.", "Automation-Control": 0.9986720085, "Qwen2": "Yes"} {"id": "10238320", "revid": "27015025", "url": "https://en.wikipedia.org/wiki?curid=10238320", "title": "Incremental sheet forming", "text": "Incremental sheet forming (or ISF, also known as Single Point Forming) is a sheet metal forming technique where a sheet is formed into the final workpiece by a series of small incremental deformations. However, studies have shown that it can be applied to polymer and composite sheets too. Generally, the sheet is formed by a round-tipped tool, typically 5 to 20 mm in diameter. The tool, which can be attached to a CNC machine, a robot arm or similar, indents into the sheet by about 1 mm and follows a contour for the desired part. It then indents further and draws the next contour for the part into the sheet and continues to do this until the full part is formed. ISF can be divided into variants depending on the number of contact points between tool, sheet and die (if there is any). The term Single Point Incremental Forming (SPIF) is used when the opposite side of the sheet is supported by a faceplate, and Two Point Incremental Forming (TPIF) when a full or partial die supports the sheet.\nTypes.\nSingle-point incremental forming (SPIF) and double-sided incremental forming (DSIF) are the two variants of the IF process.
In the DSIF process, two tools are used to form the sheet on either side, while the SPIF process uses a tool on only one side of the sheet. Thus, a component having features on either side of the sheet, e.g., an inverted cone, can be effectively formed by the DSIF process. \nAdvantages over conventional sheet metal forming.\nBecause the process can be controlled entirely by CNC, no die is required, unlike in traditional sheet metal forming. The elimination of the die reduces the cost per piece and decreases turnaround time for low production runs because the need to manufacture a die is eliminated. However, for high production runs, the time and cost to produce a die is absorbed by the higher per-piece speed and lower per-piece cost.\nSeveral authors recognize that the formability of metal materials under the localized deformation imposed by incremental forming is better than in conventional deep drawing. In contrast, there is a loss of accuracy with the ISF process.\nImplementation.\nThe ISF process is generally implemented by clamping a sheet in the XY plane, which is free to move along the Z axis. The tool moves in the XY plane and is coordinated with movements in the Z axis to create the desired part. It is often convenient to retrofit a CNC milling machine to accommodate the process. Spherical, flat-bottomed, and parabolic tool profiles can be used to achieve differing surface finishes and forming limits.\nThe machine employs a combination of stretch forming, by drawing the sheet incrementally down over a die, with the CNC tool approach described above. This is said to produce a more even distribution of thickness of the material.
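The contour-by-contour motion described earlier (indent about 1 mm, trace a contour, indent again) can be sketched as a toolpath generator for a simple cone; the function name and parameter values are illustrative, and a real toolpath would come from CAM software:

```python
import math

def cone_contours(top_radius, wall_angle_deg, depth, step_down=1.0, points=90):
    """Generate (x, y, z) tool positions, one circular contour per pass.

    Each pass steps the tool down by step_down (about 1 mm, as in
    typical ISF practice); the contour radius shrinks with depth
    according to the wall angle. All parameter values are illustrative.
    """
    path, z = [], 0.0
    while z < depth:
        z = min(z + step_down, depth)
        r = top_radius - z * math.tan(math.radians(wall_angle_deg))
        if r <= 0:
            break                                # cone has closed up
        for k in range(points):                  # one closed contour
            theta = 2 * math.pi * k / points
            path.append((r * math.cos(theta), r * math.sin(theta), -z))
    return path

path = cone_contours(top_radius=40.0, wall_angle_deg=45.0, depth=20.0)
```

Coordinating the XY contour with the Z step-down in this way mirrors how a retrofitted CNC milling machine would execute the process.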
The process is well suited to one-off manufacture, though difficulties in simulating the process mean that toolpaths are complex and time-consuming to determine.\nFord Motor Company has recently released Ford Freeform Fabrication Technology, a two-point incremental sheet-forming technique being implemented in the rapid prototyping of automotive parts. Complex shapes such as the human face and cranial implants have been manufactured successfully using this process. Advances in the technology are expected to increase adoption in the near future by other sheet-metal-reliant manufacturers.\nApplications.\nIncremental forming (IF) is a recent manufacturing process with a wide range of applications in the following areas.\nList of process parameters.\nThe mechanics of the process are influenced by many parameters, including:\nCurrent research.\nResearch is underway at several universities. The most common implementation is to outfit a traditional milling machine with the spherical tool used in the ISF process. Key research areas include", "Automation-Control": 0.9998662472, "Qwen2": "Yes"} {"id": "22931116", "revid": "20483999", "url": "https://en.wikipedia.org/wiki?curid=22931116", "title": "Hidden Markov random field", "text": "In statistics, a hidden Markov random field is a generalization of a hidden Markov model. Instead of having an underlying Markov chain, hidden Markov random fields have an underlying Markov random field.\nSuppose that we observe a random variable formula_1, where formula_2. Hidden Markov random fields assume that the probabilistic nature of formula_1 is determined by the unobservable Markov random field formula_4, formula_2.\nThat is, given the neighbors formula_6 of formula_7, formula_7 is independent of all other formula_8 (Markov property).\nThe main difference with a hidden Markov model is that the neighborhood is not defined in one dimension but within a network, i.e.
formula_4 is allowed to have more than the two neighbors that it would have in a Markov chain. The model is formulated in such a way that given formula_4, formula_1 are independent (conditional independence of the observable variables given the Markov random field).\nIn the vast majority of the related literature, the number of possible latent states is considered a user-defined constant. However, ideas from nonparametric Bayesian statistics, which allow for data-driven inference of the number of states, have also recently been investigated with success, e.g.", "Automation-Control": 0.9364722967, "Qwen2": "Yes"} {"id": "15723416", "revid": "22619", "url": "https://en.wikipedia.org/wiki?curid=15723416", "title": "IEC 62264", "text": " IEC 62264 is an international standard for enterprise control system integration. This standard is based upon ANSI/ISA-95.\nCurrent parts of IEC 62264.\nIEC 62264 consists of the following parts detailed in separate IEC 62264 standard documents:", "Automation-Control": 0.9836254716, "Qwen2": "Yes"} {"id": "65247253", "revid": "15104030", "url": "https://en.wikipedia.org/wiki?curid=65247253", "title": "October 1865 West Sydney colonial by-election", "text": "A by-election was held for the New South Wales Legislative Assembly electorate of West Sydney on 18 October 1865 because of the resignation of John Robertson due to financial difficulties.\nResult.\n\nJohn Robertson resigned due to financial difficulties.", "Automation-Control": 0.9993670583, "Qwen2": "Yes"} {"id": "65248818", "revid": "15104030", "url": "https://en.wikipedia.org/wiki?curid=65248818", "title": "1867 East Sydney colonial by-election", "text": "A by-election was held for the New South Wales Legislative Assembly electorate of East Sydney on 20 March 1867 because Charles Cowper resigned due to financial difficulties.\nResult.\n\nCharles Cowper resigned due to financial difficulties.", "Automation-Control": 0.8503637314, "Qwen2": "Yes"} {"id": "15754228", "revid":
"5042921", "url": "https://en.wikipedia.org/wiki?curid=15754228", "title": "Probe positioning system", "text": "A probe positioning system is a tool for the positioning of a (hand-held) measuring device, such as an ultrasound transducer in a fixed, predetermined place to the object, such as a patient. The operation of these systems varies from completely manual, to completely automated.\nIn (semi-) automated probe positioning systems, a control system corrects for the movement of the object or disturbances in the environment. These systems can use a tilt, pressure or other sensor carried by the probe to collect positional data. The positioner, such as a robotic arm is coupled to the probe. The positioner can provide roll and pitch control as well as translating the probe in lateral and longitudinal directions. A processor receives signals from the sensors corresponding to the actual orientation of the probe and controls the positioner to adjust the orientation of the probe until the desired position is achieved.", "Automation-Control": 0.9010002613, "Qwen2": "Yes"} {"id": "5298349", "revid": "18872885", "url": "https://en.wikipedia.org/wiki?curid=5298349", "title": "International Federation of Automatic Control", "text": "The International Federation of Automatic Control (IFAC), founded in September 1957 in France, is a multinational federation of 49 national member organizations (NMO), each one representing the engineering and scientific societies concerned with automatic control in its own country.\nThe aim of the Federation is to promote the science and technology of control in the broadest sense in all systems, whether, for example, engineering, physical, biological, social or economic, in both theory and application. 
IFAC is also concerned with the impact of control technology on society.\nIFAC pursues its purpose by organizing technical meetings, by publications, and by any other means consistent with its constitution which will enhance the interchange and circulation of information on automatic control activities.\nInternational World Congresses are held every three years. Between congresses, IFAC sponsors many symposia, conferences and workshops covering particular aspects of automatic control.\nThe official journals of IFAC are \"Automatica\", \"Control Engineering Practice\", \"Annual Reviews in Control\", \"Journal of Process Control\", \"Engineering Applications of Artificial Intelligence\", \"Mechatronics\", \"Nonlinear Analysis: Hybrid Systems\", and the \"IFAC Journal of Systems and Control\".\nAwards.\nIFAC Fellows\nMajor Medals\nHigh Impact Paper Award\nOutstanding Service Award", "Automation-Control": 0.9959392548, "Qwen2": "Yes"} {"id": "38391021", "revid": "925940984", "url": "https://en.wikipedia.org/wiki?curid=38391021", "title": "Almen round", "text": "An Almen round is a thin round disk used to quantify the intensity of a shot peening process. Developed in 1994 by Rudolf Bosshard in Switzerland, it is a modification of the Almen strip method, which is used worldwide as a surface treatment testing method in the field of shot peening. The basic principle is the same, but due to the simple shape and minimized size, the Almen round is more suitable for automated processing and installation on dummy rigs. Also, instead of the Almen block according to SAE J442, a matching device is used, and when connected to an electronic processing unit, the Almen value according to AMS-S-13165 (predecessor MIL-S-13165 Rev. C) can be evaluated in one run.\nTest specimen.\nThe Almen round is a circular cutting from an original Almen strip, normally in SAE 1070 material. It can be of either \"A\", \"C\" or \"N\" type, offered in various quality grades. 
The standard strip can be split into four rounds, each with a diameter of 18.7 mm, by either waterjet or laser cutting; the material and thickness of the original strip are therefore maintained. A simple flatness test follows, by which imprecise pieces can be eliminated.\nSpecimen holder.\nIn the endeavor to standardize the parameters, the clamping of the Almen rounds follows a strict geometry. The clamping head fixes the Almen round in the correct manner, which is important for all related procedures. When working with the monitoring sensor, the specimen is preloaded to increase overall accuracy.\nMeasuring technique.\nUnlike the Almen strip method, with its two steps (1. processing with the holder and 2. measuring with the gage), the Almen round technique combines those activities. The specimen holder is complemented with a measuring system, e.g. a distance sensor of various kinds, to form the monitoring sensor. While bombarded by shot, the captured Almen round is bent in the direction of the attack. The linear displacement is converted into an electric signal sent to a processing unit. With calibration discs, errors are minimized. For the offline measuring of an individual round, the online monitoring sensor or an appropriate device is also used.\nSignal processing.\nSince the equipment is designed for direct exposure to the shot stream, the deformation of the round can be monitored online and directly converted into the Almen standard arc height definition (SAE J443), either in mm or inches. The process time ranges from under 5 to 40 seconds. On screen, the graph shows the basis for the calculation algorithm, as well as the essential value in mm or inches that is the output information equivalent to the original Almen round definition.\nAdvanced application.\nMany so-called \"critical parts\" require a test run with dummy parts. 
For such tests, the Almen round is captured in a special mount that can be screwed or glued onto the dummy or even onto a real part. In such a case, online monitoring is not possible; the specimens must be removed and then measured separately with the monitoring sensor. In this case, only the arc height can be traced, so this application should be combined with a parallel-running online process.\nField of application.\nThe Almen round can be utilized in the aircraft and automotive industries, in research, and in subcontracting peening enterprises.\nApplication restriction.\nThe Almen strip principle was established in 1942, and its international standardization has reached a high level; the strip work routine is a fixed procedure in practical peening technology. The Almen round principle is more accurate and offers a considerable reduction in working time. But as there is no internationally approved standardization, this technique is used for special applications only.", "Automation-Control": 0.610743165, "Qwen2": "Yes"} {"id": "61293808", "revid": "1461430", "url": "https://en.wikipedia.org/wiki?curid=61293808", "title": "Interval predictor model", "text": "In regression analysis, an interval predictor model (IPM) is an approach to regression where bounds on the function to be approximated are obtained.\nThis differs from other techniques in machine learning, where usually one wishes to estimate point values or an entire probability distribution.\nInterval Predictor Models are sometimes referred to as a nonparametric regression technique, because a potentially infinite set of functions are contained by the IPM, and no specific distribution is implied for the regressed variables.\nMultiple-input multiple-output IPMs for the multi-point data commonly used to represent functions have recently been developed. These IPMs prescribe the parameters of the model as a path-connected, semi-algebraic set using sliced-normal or sliced-exponential distributions. 
A key advantage of this approach is its ability to characterize complex parameter dependencies to varying fidelity levels. This practice enables the analyst to adjust the desired level of conservatism in the prediction. \nAs a consequence of the theory of scenario optimization, in many cases rigorous predictions can be made regarding the performance of the model at test time. \nHence an interval predictor model can be seen as a guaranteed bound on quantile regression.\nInterval predictor models can also be seen as a way to prescribe the support of random predictor models, of which a Gaussian process is a specific case.\nConvex interval predictor models.\nTypically the interval predictor model is created by specifying a parametric function, which is usually chosen to be the product of a parameter vector and a basis.\nThe basis is usually made up of polynomial features, although a radial basis is sometimes used instead.\nThen a convex set is assigned to the parameter vector, and the size of the convex set is minimized such that every possible data point can be predicted by one possible value of the parameters.\nEllipsoidal parameter sets were used by Campi (2009), which yield a convex optimization program to train the IPM.\nCrespo (2016) proposed the use of a hyperrectangular parameter set, which results in a convenient, linear form for the bounds of the IPM.\nHence the IPM can be trained with a linear optimization program:\nwhere the training data examples are formula_2 and formula_3, and the Interval Predictor Model bounds formula_4 and formula_5 are parameterised by the parameter vector formula_6.\nThe reliability of such an IPM is obtained by noting that for a convex IPM the number of support constraints is less than the dimensionality of the trainable parameters, and hence the scenario approach can be applied.\nLacerda (2017) demonstrated that this approach can be extended to situations where the training data is interval valued rather than point valued.\nNon-convex interval 
predictor models.\nIn Campi (2015) a non-convex theory of scenario optimization was proposed.\nThis involves measuring the number of support constraints, formula_7, for the Interval Predictor Model after training and hence making predictions about the reliability of the model.\nThis enables non-convex IPMs to be created, such as a single layer neural network.\nCampi (2015) demonstrates an algorithm in which the scenario optimization program is solved only formula_7 times, which can determine the reliability of the model at test time without prior evaluation on a validation set.\nThis is achieved by solving the optimisation program\nwhere the interval predictor model center line formula_10, and the model width formula_11. This results in an IPM which makes predictions with homoscedastic uncertainty.\nSadeghi (2019) demonstrates that the non-convex scenario approach from Campi (2015) can be extended to train deeper neural networks which predict intervals with heteroscedastic uncertainty on datasets with imprecision.\nThis is achieved by proposing generalizations to the max-error loss function given by\nwhich is equivalent to solving the optimisation program proposed by Campi (2015).\nApplications.\nInitially, scenario optimization was applied to robust control problems.\nCrespo (2015) and (2021) applied Interval Predictor Models to the design of space radiation shielding and to system identification. \nIn Patelli (2017), Faes (2019), and Crespo (2018), Interval Predictor Models were applied to the structural reliability analysis problem.\nBrandt (2017) applies interval predictor models to fatigue damage estimation of offshore wind turbine jacket substructures.\nGaratti (2019) proved that Chebyshev layers (i.e., the minimax layers around functions fitted by linear formula_13-regression) belong to a particular class of Interval Predictor Models, for which the reliability is invariant with respect to the distribution of the data. 
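The linear-program training of a convex IPM described in this article can be illustrated with a short sketch. This is a simplified variant, not Crespo's exact hyperrectangular parameterization: it fits independent lower- and upper-bound coefficients over a polynomial basis, minimizing the average interval width subject to point-wise containment of every observation. The data, basis, and solver choice are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(-1, 1, 60))
y = x**2 + 0.1 * rng.standard_normal(60)   # toy data (illustrative)

# Polynomial basis phi(x) = [1, x, x^2]
Phi = np.vander(x, 3, increasing=True)     # shape (60, 3)
n, k = Phi.shape

# Decision variables z = [p_lo (k), p_up (k)]: coefficients of the lower
# and upper bounding polynomials.  Objective: minimize the average width
# mean(Phi @ p_up - Phi @ p_lo), i.e. c = [-mean(Phi), +mean(Phi)].
c = np.concatenate([-Phi.mean(axis=0), Phi.mean(axis=0)])

# Containment of every data point, written as A_ub z <= b_ub:
#   Phi p_lo <= y   and   -Phi p_up <= -y
A_ub = np.block([[Phi, np.zeros((n, k))],
                 [np.zeros((n, k)), -Phi]])
b_ub = np.concatenate([y, -y])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * (2 * k))
p_lo, p_up = res.x[:k], res.x[k:]
width = Phi @ (p_up - p_lo)                # interval width at each sample
```

The containment constraints guarantee that every training point lies between the two fitted polynomials, which is the defining property the scenario-optimization reliability argument builds on.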
\nSoftware implementations.\nPyIPM provides an open-source Python implementation of the work of Crespo (2015).\nOpenCOSSAN provides a Matlab implementation of the work of Crespo (2015).", "Automation-Control": 0.9573068023, "Qwen2": "Yes"} {"id": "371255", "revid": "7903804", "url": "https://en.wikipedia.org/wiki?curid=371255", "title": "Sliding mode control", "text": "In control systems, sliding mode control (SMC) is a nonlinear control method that alters the dynamics of a nonlinear system by applying a discontinuous control signal (or more rigorously, a set-valued control signal) that forces the system to \"slide\" along a cross-section of the system's normal behavior. The state-feedback control law is not a continuous function of time. Instead, it can switch from one continuous structure to another based on the current position in the state space. Hence, sliding mode control is a variable structure control method. The multiple control structures are designed so that trajectories always move toward an adjacent region with a different control structure, and so the ultimate trajectory will not exist entirely within one control structure. Instead, it will \"slide\" along the boundaries of the control structures. The motion of the system as it slides along these boundaries is called a \"sliding mode\" and the geometrical locus consisting of the boundaries is called the \"sliding (hyper)surface\". In the context of modern control theory, any variable structure system, like a system under SMC, may be viewed as a special case of a hybrid dynamical system as the system both flows through a continuous state space but also moves through different discrete control modes.\nIntroduction.\nFigure 1 shows an example trajectory of a system under sliding mode control. The sliding surface is described by formula_1, and the sliding mode along the surface commences after the finite time when system trajectories have reached the surface. 
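The finite-time reaching and subsequent sliding described here can be reproduced with a minimal simulation. The double-integrator plant, sliding surface, gain, and step size below are illustrative assumptions, not taken from the article: the relay term drives the surface variable to zero in finite time, after which the state slides toward the origin while chattering in a band set by the discretization.

```python
import numpy as np

# Double integrator x1' = x2, x2' = u with sliding surface s = x1 + x2.
# The control u = -x2 - k*sign(s) gives s' = -k*sign(s): s reaches 0 in
# finite time |s(0)|/k, after which x1' = -x1 is the reduced-order
# sliding dynamics (exponentially stable origin).
k, dt = 2.0, 1e-3
x = np.array([1.0, 0.5])
s_hist = []
for _ in range(5000):                  # 5 s of simulated time
    s = x[0] + x[1]
    u = -x[1] - k * np.sign(s)         # discontinuous (relay) control
    x = x + dt * np.array([x[1], u])   # forward Euler step
    s_hist.append(s)

# s(0) = 1.5, so the surface is reached by t = 1.5/k = 0.75 s; afterwards
# s chatters in an O(k*dt) band around zero while x decays to the origin.
```

In a plot of `s_hist` the reaching phase appears as a straight descent to zero followed by high-frequency chattering, the behavior the article attributes to real (discretized or delayed) implementations.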
In the theoretical description of sliding modes, the system stays confined to the sliding surface and need only be viewed as sliding along the surface. However, real implementations of sliding mode control approximate this theoretical behavior with a high-frequency and generally non-deterministic switching control signal that causes the system to \"chatter\" in a tight neighborhood of the sliding surface. Chattering can be reduced through the use of deadbands or boundary layers around the sliding surface, or other compensatory methods. Although the system is nonlinear in general, the idealized (i.e., non-chattering) behavior of the system in Figure 1 when confined to the formula_1 surface is an LTI system with an exponentially stable origin.\nOne of the compensatory methods is the adaptive sliding mode control method proposed in the literature, which uses estimated uncertainty to construct a continuous control law. In this method, chattering is eliminated while preserving accuracy (for more details see references [2] and [3]). The three distinguishing features of the proposed adaptive sliding mode controller are as follows: (i) The structured (or parametric) uncertainties and unstructured uncertainties (un-modeled dynamics, unknown external disturbances) are synthesized into a single uncertainty term called the lumped uncertainty. Therefore, a linearly parameterized dynamic model of the system is not required, and the simple structure and computationally efficient properties of this approach make it suitable for real-time control applications. (ii) The adaptive sliding mode control scheme design relies on the online estimated uncertainty vector rather than on the worst-case scenario (i.e., bounds of uncertainties). Therefore, a priori knowledge of the bounds of uncertainties is not required, and at each time instant, the control input compensates for the uncertainty that exists. 
(iii) The continuous control law, developed using fundamentals of sliding mode control theory, eliminates the chattering phenomenon without the trade-off between performance and robustness that is prevalent in the boundary-layer approach.\nIntuitively, sliding mode control uses practically infinite gain to force the trajectories of a dynamic system to slide along the restricted sliding mode subspace. Trajectories from this reduced-order sliding mode have desirable properties (e.g., the system naturally slides along it until it comes to rest at a desired equilibrium). The main strength of sliding mode control is its robustness. Because the control can be as simple as a switching between two states (e.g., \"on\"/\"off\" or \"forward\"/\"reverse\"), it need not be precise and will not be sensitive to parameter variations that enter into the control channel. Additionally, because the control law is not a continuous function, the sliding mode can be reached in \"finite\" time (i.e., better than asymptotic behavior). Under certain common conditions, optimality requires the use of bang–bang control; hence, sliding mode control describes the optimal controller for a broad set of dynamic systems.\nOne application of sliding mode control is the control of electric drives operated by switching power converters. Because of the discontinuous operating mode of those converters, a discontinuous sliding mode controller is a natural implementation choice over continuous controllers that may need to be applied by means of pulse-width modulation or a similar technique of applying a continuous signal to an output that can only take discrete states. Sliding mode control has many applications in robotics. In particular, this control algorithm has been used for tracking control of unmanned surface vessels in simulated rough seas with a high degree of success.\nSliding mode control must be applied with more care than other forms of nonlinear control that have more moderate control action. 
In particular, because actuators have delays and other imperfections, the hard sliding-mode-control action can lead to chatter, energy loss, plant damage, and excitation of unmodeled dynamics. Continuous control design methods are not as susceptible to these problems and can be made to mimic sliding-mode controllers.\nControl scheme.\nConsider a nonlinear dynamical system described by\nwhere\nis an -dimensional state vector and\nis an -dimensional input vector that will be used for state feedback. The functions formula_5 and formula_6 are assumed to be continuous and sufficiently smooth so that the Picard–Lindelöf theorem can be used to guarantee that the solution formula_7 to Equation  exists and is unique.\nA common task is to design a state-feedback control law formula_8 (i.e., a mapping from current state formula_7 at time to the input formula_10) to stabilize the dynamical system in Equation  around the origin formula_11. That is, under the control law, whenever the system is started away from the origin, it will return to it. For example, the component formula_12 of the state vector formula_13 may represent the difference between some output and a known signal (e.g., a desirable sinusoidal signal); if the control formula_10 can ensure that formula_12 quickly returns to formula_16, then the output will track the desired sinusoid. In sliding-mode control, the designer knows that the system behaves desirably (e.g., it has a stable equilibrium) provided that it is constrained to a subspace of its configuration space. Sliding mode control forces the system trajectories into this subspace and then holds them there so that they slide along it. This reduced-order subspace is referred to as a \"sliding (hyper)surface\", and when closed-loop feedback forces trajectories to slide along it, it is referred to as a \"sliding mode\" of the closed-loop system. 
Trajectories along this subspace can be likened to trajectories along eigenvectors (i.e., modes) of LTI systems; however, the sliding mode is enforced by creasing the vector field with high-gain feedback. Like a marble rolling along a crack, trajectories are confined to the sliding mode.\nThe sliding-mode control scheme involves\nBecause sliding mode control laws are not continuous, they are able to drive trajectories to the sliding mode in finite time (i.e., stability of the sliding surface is better than asymptotic). However, once the trajectories reach the sliding surface, the system takes on the character of the sliding mode (e.g., the origin formula_17 may only have asymptotic stability on this surface).\nThe sliding-mode designer picks a \"switching function\" formula_18 that represents a kind of \"distance\" that the states formula_13 are away from a sliding surface.\nThe sliding-mode-control law switches from one state to another based on the \"sign\" of this distance. So the sliding-mode control acts like a stiff pressure always pushing in the direction of the sliding mode where formula_22.\nDesirable formula_7 trajectories will approach the sliding surface, and because the control law is not continuous (i.e., it switches from one state to another as trajectories move across this surface), the surface is reached in finite time. Once a trajectory reaches the surface, it will slide along it and may, for example, move toward the formula_25 origin. So the switching function is like a topographic map with a contour of constant height along which trajectories are forced to move.\nThe sliding (hyper)surface/manifold is typically of dimension formula_26 where is the number of states in formula_13 and is the number of input signals (i.e., control signals) in formula_10. 
For each control index formula_29, there is an formula_30-dimensional sliding surface given by\nThe vital part of SMC design is to choose a control law so that the sliding mode (i.e., this surface given by formula_31) exists and is reachable along system trajectories. The principle of sliding mode control is to forcibly constrain the system, by a suitable control strategy, to stay on the sliding surface on which the system will exhibit desirable features. When the system is constrained by the sliding control to stay on the sliding surface, the system dynamics are governed by a reduced-order system obtained from Equation .\nTo force the system states formula_13 to satisfy formula_33, one must:\nExistence of closed-loop solutions.\nNote that because the control law is not continuous, it is certainly not locally Lipschitz continuous, and so existence and uniqueness of solutions to the closed-loop system is \"not\" guaranteed by the Picard–Lindelöf theorem. Thus the solutions are to be understood in the Filippov sense. Roughly speaking, the resulting closed-loop system moving along formula_33 is approximated by the smooth dynamics formula_38; however, this smooth behavior may not be truly realizable. Similarly, high-speed pulse-width modulation or delta-sigma modulation produces outputs that only assume two states, but the effective output swings through a continuous range of motion. These complications can be avoided by using a different nonlinear control design method that produces a continuous controller. In some cases, sliding-mode control designs can be approximated by other continuous control designs.\nTheoretical foundation.\nThe following theorems form the foundation of variable structure control.\nTheorem 1: Existence of sliding mode.\nConsider a Lyapunov function candidate\nwhere formula_39 is the Euclidean norm (i.e., formula_40 is the distance away from the sliding manifold where formula_31). 
For the system given by Equation  and the sliding surface given by Equation , a sufficient condition for the existence of a sliding mode is that\nin a neighborhood of the surface given by formula_43.\nRoughly speaking (i.e., for the scalar control case when formula_44), to achieve formula_45, the feedback control law formula_46 is picked so that formula_47 and formula_48 have opposite signs. That is,\nNote that\nand so the feedback control law formula_56 has a direct impact on formula_48.\nReachability: Attaining sliding manifold in finite time.\nTo ensure that the sliding mode formula_31 is attained in finite time, formula_59 must be more strongly bounded away from zero. That is, if it vanishes too quickly, the attraction to the sliding mode will only be asymptotic. To ensure that the sliding mode is entered in finite time,\nwhere formula_61 and formula_62 are constants.\nExplanation by comparison lemma.\nThis condition ensures that for the neighborhood of the sliding mode formula_63,\nSo, for formula_65,\nwhich, by the chain rule (i.e., formula_67 with formula_68), means\nwhere formula_70 is the upper right-hand derivative of formula_71 and the symbol formula_72 denotes proportionality. So, by comparison to the curve formula_73 which is represented by differential equation formula_74 with initial condition formula_75, it must be the case that formula_76 for all . Moreover, because formula_77, formula_78 must reach formula_79 in finite time, which means that must reach formula_80 (i.e., the system enters the sliding mode) in finite time. Because formula_78 is proportional to the Euclidean norm formula_82 of the switching function formula_47, this result implies that the rate of approach to the sliding mode must be firmly bounded away from zero.\nConsequences for sliding mode control.\nIn the context of sliding mode control, this condition means that\nwhere formula_39 is the Euclidean norm. 
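The finite reaching time implied by the comparison-lemma argument above can be checked numerically. In this sketch a constant-rate reaching law ṡ = −μ·sign(s) is integrated with forward Euler; the comparison bound says the surface is reached by t = |s(0)|/μ. The values of μ, the initial condition, and the step size are illustrative assumptions.

```python
import numpy as np

# Reaching law s' = -mu * sign(s): by the comparison lemma, the sliding
# surface s = 0 is reached no later than t = |s(0)| / mu.
mu, dt = 0.5, 1e-4
s, t = 2.0, 0.0
while abs(s) > mu * dt:        # stop once inside the one-step band
    s -= dt * mu * np.sign(s)
    t += dt

bound = 2.0 / mu               # predicted reaching time: |s(0)|/mu = 4.0 s
# the simulated reaching time t lands within one step of the bound
```

The same check applied to a rate that vanishes near the surface (e.g. ṡ = −μ·s) would show only asymptotic approach, which is why the text requires the rate to be firmly bounded away from zero.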
For the case when switching function formula_47 is scalar valued, the sufficient condition becomes\nTaking formula_88, the scalar sufficient condition becomes\nwhich is equivalent to the condition that\nThat is, the system should always be moving toward the switching surface formula_91, and its speed formula_92 toward the switching surface should have a non-zero lower bound. So, even though formula_47 may become vanishingly small as formula_13 approaches the formula_31 surface, formula_48 must always be bounded firmly away from zero. To ensure this condition, sliding mode controllers are discontinuous across the formula_91 manifold; they \"switch\" from one non-zero value to another as trajectories cross the manifold.\nTheorem 2: Region of attraction.\nFor the system given by Equation  and sliding surface given by Equation , the subspace for which the formula_98 surface is reachable is given by\nThat is, when initial conditions come entirely from this space, the Lyapunov function candidate formula_100 is a Lyapunov function and formula_13 trajectories are sure to move toward the sliding mode surface where formula_102. Moreover, if the reachability conditions from Theorem 1 are satisfied, the sliding mode will enter the region where formula_103 is more strongly bounded away from zero in finite time. Hence, the sliding mode formula_91 will be attained in finite time.\nTheorem 3: Sliding motion.\nLet\nbe nonsingular. That is, the system has a kind of controllability that ensures that there is always a control that can move a trajectory to move closer to the sliding mode. Then, once the sliding mode where formula_106 is achieved, the system will stay on that sliding mode. 
Along sliding mode trajectories, formula_51 is constant, and so sliding mode trajectories are described by the differential equation\nIf an formula_13-equilibrium is stable with respect to this differential equation, then the system will slide along the sliding mode surface toward the equilibrium.\nThe \"equivalent control law\" on the sliding mode can be found by solving\nfor the equivalent control law formula_56. That is,\nand so the equivalent control\nThat is, even though the actual control formula_10 is not continuous, the rapid switching across the sliding mode where formula_31 forces the system to \"act\" as if it were driven by this continuous control.\nLikewise, the system trajectories on the sliding mode behave as if\nThe resulting system matches the sliding mode differential equation\n, the sliding mode surface formula_31, and the trajectory conditions from the reaching phase now reduce to the above derived simpler condition. Hence, the system can be assumed to follow the simpler formula_119 condition after some initial transient during the period while the system finds the sliding mode. The same motion is approximately maintained when the equality formula_106 only approximately holds.\nIt follows from these theorems that the sliding motion is invariant (i.e., insensitive) to sufficiently small disturbances entering the system through the control channel. That is, as long as the control is large enough to ensure that formula_45 and formula_48 is uniformly bounded away from zero, the sliding mode will be maintained as if there was no disturbance. The invariance property of sliding mode control to certain disturbances and model uncertainties is its most attractive feature; it is strongly robust.\nAs discussed in an example below, a sliding mode control law can keep the constraint\nin order to asymptotically stabilize any system of the form\nwhen formula_125 has a finite upper bound. In this case, the sliding mode is where\n(i.e., where formula_127). 
That is, when the system is constrained this way, it behaves like a simple stable linear system, and so it has a globally exponentially stable equilibrium at the formula_128 origin.\nControl design examples.\nAutomated design solutions.\nAlthough various theories exist for sliding mode control system design, there is a lack of a highly effective design methodology due to practical difficulties encountered in analytical and numerical methods. A reusable computing paradigm such as a genetic algorithm can, however, be utilized to transform an 'unsolvable problem' of optimal design into a practically solvable 'non-deterministic polynomial problem'. This results in computer-automated designs for sliding mode control.\nSliding mode observer.\nSliding mode control can be used in the design of state observers. These non-linear high-gain observers have the ability to bring coordinates of the estimator error dynamics to zero in finite time. Additionally, switched-mode observers have attractive measurement noise resilience that is similar to a Kalman filter. For simplicity, the example here uses a traditional sliding mode modification of a Luenberger observer for an LTI system. In these sliding mode observers, the order of the observer dynamics is reduced by one when the system enters the sliding mode. In this particular example, the estimator error for a single estimated state is brought to zero in finite time, and after that time the other estimator errors decay exponentially to zero. However, as first described by Drakunov, a sliding mode observer can be built that brings the estimation error for all estimated states to zero in a finite (and arbitrarily small) time.\nHere, consider the LTI system\nwhere the state vector is formula_180, formula_181 is a vector of inputs, and the output is a scalar equal to the first state of the formula_13 state vector. 
Let\nwhere\nThe goal is to design a high-gain state observer that estimates the state vector formula_13 using only information from the measurement formula_190. Hence, let the vector formula_191 be the estimates of the states. The observer takes the form\nwhere formula_193 is a nonlinear function of the error between estimated state formula_194 and the output formula_190, and formula_196 is an observer gain vector that serves a similar purpose as in the typical linear Luenberger observer. Likewise, let\nwhere formula_198 is a column vector. Additionally, let formula_199 be the state estimator error. That is, formula_200. The error dynamics are then\nwhere formula_202 is the estimator error for the first state estimate. The nonlinear control law can be designed to enforce the sliding manifold\nso that estimate formula_194 tracks the real state formula_12 after some finite time (i.e., formula_206). Hence, the sliding mode control switching function\nTo attain the sliding manifold, formula_48 and formula_47 must always have opposite signs (i.e., formula_142 for essentially all formula_13). However,\nwhere formula_213 is the collection of the estimator errors for all of the unmeasured states. To ensure that formula_142, let\nwhere\nThat is, the positive constant must be greater than a scaled version of the maximum possible estimator errors for the system (i.e., the initial errors, which are assumed to be bounded so that can be picked large enough). If is sufficiently large, it can be assumed that the system achieves formula_217 (i.e., formula_206). Because formula_219 is constant (i.e., 0) along this manifold, formula_220 as well. Hence, the discontinuous control formula_221 may be replaced with the equivalent continuous control formula_222 where\nSo\nThis equivalent control formula_222 represents the contribution from the other formula_30 states to the trajectory of the output state formula_12. 
In particular, the row formula_228 acts like an output vector for the error subsystem\nSo, to ensure the estimator error formula_230 for the unmeasured states converges to zero, the formula_231 vector formula_232 must be chosen so that the formula_233 matrix formula_234 is Hurwitz (i.e., the real part of each of its eigenvalues must be negative). Hence, provided that it is observable, this formula_230 system can be stabilized in exactly the same way as a typical linear state observer when formula_228 is viewed as the output matrix (i.e., \"\"). That is, the formula_222 equivalent control provides measurement information about the unmeasured states that can continually move their estimates asymptotically closer to them. Meanwhile, the discontinuous control formula_238 forces the estimate of the measured state to have zero error in finite time. Additionally, white zero-mean symmetric measurement noise (e.g., Gaussian noise) only affects the switching frequency of the control , and hence the noise will have little effect on the equivalent sliding mode control formula_222. Hence, the sliding mode observer has Kalman filter–like features.\nThe final version of the observer is thus\nwhere\nThat is, by augmenting the control vector formula_10 with the switching function formula_245, the sliding mode observer can be implemented as an LTI system. That is, the discontinuous signal formula_245 is viewed as a control \"input\" to the 2-input LTI system.\nFor simplicity, this example assumes that the sliding mode observer has access to a measurement of a single state (i.e., output formula_190). However, a similar procedure can be used to design a sliding mode observer for a vector of weighted combinations of states (i.e., when output formula_248 uses a generic matrix ). 
In each case, the sliding mode will be the manifold where the estimated output formula_249 follows the measured output formula_250 with zero error (i.e., the manifold where formula_251).", "Automation-Control": 0.9937363267, "Qwen2": "Yes"} {"id": "60099319", "revid": "45708962", "url": "https://en.wikipedia.org/wiki?curid=60099319", "title": "List of printing protocols", "text": "A printing protocol is a protocol for communication between client devices (computers, mobile phones, tablets, etc.) and printers (or print servers). It allows clients to submit one or more print jobs to the printer or print server, and to perform tasks such as querying the status of a printer, obtaining the status of print jobs, or cancelling individual print jobs.\nDedicated protocols.\nProtocols listed here are specific to printing.\nGeneric protocols.\nThese protocols treat the printer as a device of the same class as remote disks, scanners, and multimedia devices. This is especially true for multi-function printers, which also produce image files (scans and faxes) and send them back over the network.\nWireless protocols.\nWireless protocols are designed for wireless devices. A protocol of this kind combines one of the printing protocols with Zero-configuration networking (zeroconf) mechanisms, so that printers can be used by wireless devices seamlessly. Note that the printer itself does not need to be wireless.\nInternet protocols.\nWith all of the above protocols, the computer and the printer usually must be located on the same local area network (LAN). Internet printing protocols are designed for printing over the Internet.\nThe service ended on December 31, 2020.", "Automation-Control": 0.7915270329, "Qwen2": "Yes"} {"id": "60105148", "revid": "21112944", "url": "https://en.wikipedia.org/wiki?curid=60105148", "title": "Deep reinforcement learning", "text": "Deep reinforcement learning (deep RL) is a subfield of machine learning that combines reinforcement learning (RL) and deep learning.
RL considers the problem of a computational agent learning to make decisions by trial and error. Deep RL incorporates deep learning into the solution, allowing agents to make decisions from unstructured input data without manual engineering of the state space. Deep RL algorithms are able to take in very large inputs (e.g. every pixel rendered to the screen in a video game) and decide what actions to perform to optimize an objective (e.g. maximizing the game score). Deep reinforcement learning has been used for a diverse set of applications including, but not limited to, robotics, video games, natural language processing, computer vision, education, transportation, finance and healthcare.\nOverview.\nDeep learning.\nDeep learning is a form of machine learning that uses an artificial neural network to transform a set of inputs into a set of outputs. Deep learning methods, often using supervised learning with labeled datasets, have been shown to solve tasks that involve handling complex, high-dimensional raw input data such as images, with less manual feature engineering than prior methods, enabling significant progress in several fields including computer vision and natural language processing. In the past decade, deep RL has achieved remarkable results on a range of problems, from single- and multiplayer games, such as Go, Atari games, and Dota 2, to robotics.\nReinforcement learning.\nReinforcement learning is a process in which an agent learns to make decisions through trial and error. This problem is often modeled mathematically as a Markov decision process (MDP), where an agent at every timestep is in a state formula_1, takes action formula_2, receives a scalar reward and transitions to the next state formula_3 according to environment dynamics formula_4. The agent attempts to learn a policy formula_5, a map from observations to actions, in order to maximize its returns (expected sum of rewards).
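The agent–environment loop just described can be made concrete with a toy example. This is an illustrative sketch only: the two-state MDP, its rewards, and the discount factor are invented for demonstration and are not taken from the article. Note that the agent observes only sampled transitions, never the transition table itself:

```python
import random

# A hypothetical two-state MDP: in each state the agent picks "stay" or "move".
# P[s][a] is a list of (probability, next_state, reward) triples.
P = {
    0: {"stay": [(1.0, 0, 0.0)], "move": [(0.9, 1, 1.0), (0.1, 0, 0.0)]},
    1: {"stay": [(1.0, 1, 2.0)], "move": [(1.0, 0, 0.0)]},
}

def step(s, a, rng):
    """Sample one transition: the agent only sees (s', r), not P itself."""
    r = rng.random()
    acc = 0.0
    for p, s2, rew in P[s][a]:
        acc += p
        if r <= acc:
            return s2, rew
    return s2, rew  # numerical fallback

def episode_return(policy, gamma=0.9, horizon=50, seed=0):
    """Run one episode and accumulate the discounted sum of rewards."""
    rng = random.Random(seed)
    s, g, discount = 0, 0.0, 1.0
    for _ in range(horizon):
        a = policy(s)
        s, rew = step(s, a, rng)
        g += discount * rew
        discount *= gamma
    return g

# A policy is just a map from states to actions.
always_move = lambda s: "move"
move_then_stay = lambda s: "move" if s == 0 else "stay"
```

Under these invented rewards, the policy that moves to state 1 and stays there collects the recurring reward of 2, so `episode_return(move_then_stay)` exceeds `episode_return(always_move)`; the discounted sum being compared is exactly the return the agent tries to maximize.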
In reinforcement learning (as opposed to optimal control) the algorithm only has access to the dynamics formula_4 through sampling.\nDeep reinforcement learning.\nIn many practical decision-making problems, the states formula_1 of the MDP are high-dimensional (e.g., images from a camera or the raw sensor stream from a robot), and such problems cannot be solved by traditional RL algorithms. Deep reinforcement learning algorithms incorporate deep learning to solve such MDPs, often representing the policy formula_5 or other learned functions as a neural network and developing specialized algorithms that perform well in this setting.\nHistory.\nAlong with rising interest in neural networks beginning in the mid-1980s, interest grew in deep reinforcement learning, where a neural network is used in reinforcement learning to represent policies or value functions. Because in such a system the entire decision-making process from sensors to motors in a robot or agent involves a single neural network, it is also sometimes called end-to-end reinforcement learning. One of the first successful applications of reinforcement learning with neural networks was TD-Gammon, a computer program developed in 1992 for playing backgammon. Four inputs were used for the number of pieces of a given color at a given location on the board, totaling 198 input signals.
With zero knowledge built in, the network learned to play the game at an intermediate level by self-play and TD(formula_9).\nSeminal textbooks by Sutton and Barto on reinforcement learning, Bertsekas and Tsitsiklis on neuro-dynamic programming, and others advanced knowledge and interest in the field.\nKatsunari Shibata's group showed that various functions emerge in this framework, including image recognition, color constancy, sensor motion (active recognition), hand-eye coordination and hand reaching movement, explanation of brain activities, knowledge transfer, memory, selective attention, prediction, and exploration.\nStarting around 2012, the so-called deep learning revolution led to increased interest in using deep neural networks as function approximators across a variety of domains. This led to renewed interest among researchers in using deep neural networks to learn the policy, value, and/or Q functions present in existing reinforcement learning algorithms.\nBeginning around 2013, DeepMind showed impressive learning results using deep RL to play Atari video games. The computer player was a neural network trained using a deep RL algorithm, a deep version of Q-learning they termed deep Q-networks (DQN), with the game score as the reward. They used a deep convolutional neural network to process 4 frames of RGB pixels (84×84) as input.
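One ingredient of the DQN line of work is experience replay: storing transitions and training on random minibatches to decorrelate consecutive updates. A minimal buffer can be sketched in plain Python; the capacity and batch size here are arbitrary illustrative choices, not values from the original system:

```python
import random
from collections import deque

class ReplayBuffer:
    """Fixed-capacity store of (state, action, reward, next_state, done)
    transitions; old experience is evicted FIFO once capacity is reached."""
    def __init__(self, capacity=10_000):
        self.buffer = deque(maxlen=capacity)

    def push(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        # Uniform random minibatch, which breaks the temporal correlation
        # between consecutive transitions.
        return random.sample(self.buffer, batch_size)

    def __len__(self):
        return len(self.buffer)

buf = ReplayBuffer(capacity=100)
for t in range(150):                  # overfill to exercise FIFO eviction
    buf.push(t, 0, 1.0, t + 1, False)
batch = buf.sample(32)
```

In a full DQN agent, each sampled minibatch would be used to regress the Q-network toward bootstrapped targets; the buffer itself is independent of the network architecture.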
All 49 games were learned using the same network architecture and with minimal prior knowledge, outperforming competing methods on almost all the games and performing at a level comparable or superior to a professional human game tester.\nDeep reinforcement learning reached another milestone in 2015 when AlphaGo, a computer program trained with deep RL to play Go, became the first computer Go program to beat a human professional Go player without handicap on a full-sized 19×19 board.\nIn a subsequent project in 2017, AlphaZero improved performance on Go while also demonstrating they could use the same algorithm to learn to play chess and shogi at a level competitive or superior to existing computer programs for those games, and again improved in 2019 with MuZero. Separately, another milestone was achieved by researchers from Carnegie Mellon University in 2019 developing Pluribus, a computer program to play poker that was the first to beat professionals at multiplayer games of no-limit Texas hold 'em. OpenAI Five, a program for playing five-on-five Dota 2 beat the previous world champions in a demonstration match in 2019.\nDeep reinforcement learning has also been applied to many domains beyond games. In robotics, it has been used to let robots perform simple household tasks and solve a Rubik's cube with a robot hand. Deep RL has also found sustainability applications, used to reduce energy consumption at data centers. Deep RL for autonomous driving is an active area of research in academia and industry. Loon explored deep RL for autonomously navigating their high-altitude balloons.\nAlgorithms.\nVarious techniques exist to train policies to solve tasks with deep reinforcement learning algorithms, each having their own benefits. 
At the highest level, there is a distinction between model-based and model-free reinforcement learning, which refers to whether the algorithm attempts to learn a forward model of the environment dynamics.\nIn model-based deep reinforcement learning algorithms, a forward model of the environment dynamics is estimated, usually by supervised learning using a neural network. Then, actions are obtained by using model predictive control with the learned model. Since the true environment dynamics will usually diverge from the learned dynamics, the agent re-plans often when carrying out actions in the environment. The actions selected may be optimized using Monte Carlo methods such as the cross-entropy method, or a combination of model-learning with model-free methods.\nIn model-free deep reinforcement learning algorithms, a policy formula_5 is learned without explicitly modeling the forward dynamics. A policy can be optimized to maximize returns by directly estimating the policy gradient, but this estimate suffers from high variance, making it impractical for use with function approximation in deep RL. Subsequent algorithms have been developed for more stable learning and are widely applied. Another class of model-free deep reinforcement learning algorithms relies on dynamic programming, inspired by temporal difference learning and Q-learning. In discrete action spaces, these algorithms usually learn a neural network Q-function formula_11 that estimates the future returns of taking action formula_2 from state formula_1. In continuous spaces, these algorithms often learn both a value estimate and a policy.\nResearch.\nDeep reinforcement learning is an active area of research, with several lines of inquiry.\nExploration.\nAn RL agent must balance the exploration/exploitation tradeoff: the problem of deciding whether to pursue actions that are already known to yield high rewards or to explore other actions in order to discover higher rewards.
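The value-based, model-free family described above can be illustrated in its tabular form. The sketch below runs Q-learning with epsilon-greedy exploration on a small invented chain environment (five states, reward only at the right end). Everything here, from the environment to the hyperparameters, is a hypothetical example rather than a deep RL implementation; a deep Q-network replaces this table with a neural network over high-dimensional states:

```python
import random

N = 5                      # states 0..4; the episode ends at state 4 (reward 1)
ACTIONS = (1, -1)          # move right / move left

def env_step(s, a):
    s2 = min(max(s + a, 0), N - 1)
    reward = 1.0 if s2 == N - 1 else 0.0
    return s2, reward, s2 == N - 1

def q_learning(episodes=500, alpha=0.5, gamma=0.9, eps=0.2, seed=1):
    rng = random.Random(seed)
    Q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}
    for _ in range(episodes):
        s = 0
        for _ in range(50):
            # epsilon-greedy: explore with probability eps, else act greedily
            if rng.random() < eps:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda act: Q[(s, act)])
            s2, r, done = env_step(s, a)
            # temporal-difference update toward the bootstrapped target
            target = r + (0.0 if done else gamma * max(Q[(s2, act)] for act in ACTIONS))
            Q[(s, a)] += alpha * (target - Q[(s, a)])
            s = s2
            if done:
                break
    return Q

Q = q_learning()
```

After training, the greedy policy moves right from every non-terminal state, i.e. `Q[(s, 1)] > Q[(s, -1)]` for s = 0..3, and the learned values approach the discounted optimum gamma**(3 - s).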
RL agents usually collect data with some type of stochastic policy, such as a Boltzmann distribution in discrete action spaces or a Gaussian distribution in continuous action spaces, inducing basic exploration behavior. The idea behind novelty-based, or curiosity-driven, exploration is to give the agent a motive to explore unknown outcomes in order to find the best solutions. This is done by \"modify[ing] the loss function (or even the network architecture) by adding terms to incentivize exploration\". An agent may also be aided in exploration by utilizing demonstrations of successful trajectories, or by reward shaping, giving the agent intermediate rewards customized to fit the task it is attempting to complete.\nOff-policy reinforcement learning.\nAn important distinction in RL is the difference between on-policy algorithms, which require evaluating or improving the policy that collects data, and off-policy algorithms, which can learn a policy from data generated by an arbitrary policy. Generally, value-function-based methods such as Q-learning are better suited for off-policy learning and have better sample efficiency: the amount of data required to learn a task is reduced because data is re-used for learning. At the extreme, offline (or \"batch\") RL considers learning a policy from a fixed dataset without additional interaction with the environment.\nInverse reinforcement learning.\nInverse RL refers to inferring the reward function of an agent given the agent's behavior. Inverse reinforcement learning can be used for learning from demonstrations (or apprenticeship learning) by inferring the demonstrator's reward and then optimizing a policy to maximize returns with RL.
Deep learning approaches have been used for various forms of imitation learning and inverse RL.\nGoal-conditioned reinforcement learning.\nAnother active area of research is learning goal-conditioned policies, also called contextual or universal policies formula_14, that take in an additional goal formula_15 as input to communicate a desired aim to the agent. Hindsight experience replay is a method for goal-conditioned RL that involves storing and learning from previous failed attempts to complete a task. While a failed attempt may not have reached the intended goal, it can serve as a lesson for how to achieve the unintended result through hindsight relabeling.\nMulti-agent reinforcement learning.\nMany applications of reinforcement learning do not involve just a single agent, but rather a collection of agents that learn together and co-adapt. These agents may be competitive, as in many games, or cooperative, as in many real-world multi-agent systems. Multi-agent reinforcement learning studies the problems introduced in this setting.\nGeneralization.\nThe promise of using deep learning tools in reinforcement learning is generalization: the ability to operate correctly on previously unseen inputs. For instance, neural networks trained for image recognition can recognize that a picture contains a bird even if they have never seen that particular image or even that particular bird. Since deep RL allows raw data (e.g. pixels) as input, there is a reduced need to predefine the environment, allowing the model to be generalized to multiple applications. With this layer of abstraction, deep reinforcement learning algorithms can be designed in a way that allows them to be general, and the same model can be used for different tasks.
One method of increasing the ability of policies trained with deep RL to generalize is to incorporate representation learning.", "Automation-Control": 0.706273675, "Qwen2": "Yes"} {"id": "15361627", "revid": "169132", "url": "https://en.wikipedia.org/wiki?curid=15361627", "title": "Wireless DNC", "text": "Wireless DNC is a form of wireless data transfer, known as Direct Numerical Control, performed between a computer numerical control (CNC) machine and the computer controlling it. Such systems are widely used in the automobile, engineering, sheet metal and aeronautic industries. These machines are capable of producing many different parts. For each type of part, a sequence of instructions is needed. This list of instructions is stored in a computer script, a computer file written in a programming language such as G-code. This script is commonly referred to as a part program. When a part is to be produced, the part program is uploaded to the CNC machine over an RS-232 link.\nThis RS-232 link between a PC and a CNC machine, together with the controlling software, is called a DNC system. On a typical machine shop floor, it is difficult to maintain the data cable, which is why wireless data transfer has come into use. There are mainly two types of wireless hardware units available on the market: one uses Bluetooth technology, while the other uses Wi-Fi technology.\nIn the case of Bluetooth, one pair of Bluetooth devices is generally used. One is plugged into a COM port of a PC or laptop, and the other is connected to an RS-232 port of the CNC machine. The wireless link is established with the required driver software. Once this is established, the user can run their DNC software for data transfer. In most cases, the file is sent from a remote PC to a selected CNC machine.\nIn the Wi-Fi case, a wireless link is established between a device called a wireless access point (generally near the PC) and a device called a wireless node, which is interfaced to the CNC machine.
There is one access point and multiple nodes. Each wireless access point and wireless node has one IP address, which must be on the same network as the PC. Thus a wireless Ethernet link is created. The wireless node has an RS-232 port, which is connected to an RS-232 port of the CNC machine. Driver software on the PC maps the RS-232 port of the wireless node as a virtual COM port of the PC. Once this is done, the DNC software takes care of the two-way data transfer.", "Automation-Control": 0.6674149036, "Qwen2": "Yes"} {"id": "4221385", "revid": "41814252", "url": "https://en.wikipedia.org/wiki?curid=4221385", "title": "Java Optimized Processor", "text": "Java Optimized Processor (JOP)\nis a Java processor, an implementation of the Java virtual machine (JVM) in hardware.\nJOP is free hardware under the GNU General Public License, version 3.\nThe intention of JOP is to provide a small hardware JVM for embedded real-time systems. The main feature is the predictability of the execution time of Java bytecodes. JOP is implemented on an FPGA.", "Automation-Control": 0.7801775336, "Qwen2": "Yes"} {"id": "36647223", "revid": "43073944", "url": "https://en.wikipedia.org/wiki?curid=36647223", "title": "Signature recognition", "text": "Signature recognition is an example of behavioral biometrics that identifies a person based on their handwriting. It can be operated in two different ways:\nStatic: In this mode, users write their signature on paper, and after the writing is complete, it is digitized through an optical scanner or a camera to turn the signature image into bits. The biometric system then recognizes the signature by analyzing its shape. This group is also known as \"off-line\".\nDynamic: In this mode, users write their signature on a digitizing tablet, which acquires the signature in real time. Another possibility is acquisition by means of stylus-operated PDAs.
Some systems also operate on smartphones or tablets with a capacitive screen, where users can sign using a finger or an appropriate pen. Dynamic recognition is also known as \"on-line\". Dynamic information usually consists of the following:\nThe state of the art in signature recognition is reflected in the most recent major international competition.\nThe most popular pattern recognition techniques applied to signature recognition are dynamic time warping, hidden Markov models and vector quantization. Combinations of different techniques also exist.\nRelated techniques.\nRecently, a handwritten biometric approach has also been proposed. In this case, the user is recognized by analyzing their handwritten text (see also Handwritten biometric recognition).\nDatabases.\nSeveral public databases exist, the most popular being SVC and MCYT.", "Automation-Control": 0.8617450595, "Qwen2": "Yes"} {"id": "617980", "revid": "33011235", "url": "https://en.wikipedia.org/wiki?curid=617980", "title": "GNU toolchain", "text": "The GNU toolchain is a broad collection of programming tools produced by the GNU Project. These tools form a toolchain (a suite of tools used in a serial manner) used for developing software applications and operating systems.\nThe GNU toolchain plays a vital role in the development of Linux, some BSD systems, and software for embedded systems.
Parts of the GNU toolchain are also directly used with or ported to other platforms such as Solaris, macOS, Microsoft Windows (via Cygwin and MinGW/MSYS), Sony PlayStation Portable (used by the PSP modding scene) and Sony PlayStation 3.\nComponents.\nProjects included in the GNU toolchain are:", "Automation-Control": 0.7990989089, "Qwen2": "Yes"} {"id": "17627465", "revid": "203434", "url": "https://en.wikipedia.org/wiki?curid=17627465", "title": "LogitBoost", "text": "In machine learning and computational learning theory, LogitBoost is a boosting algorithm formulated by Jerome Friedman, Trevor Hastie, and Robert Tibshirani. The original paper casts the AdaBoost algorithm into a statistical framework. Specifically, if one considers AdaBoost as a generalized additive model and then applies the cost function of logistic regression, one can derive the LogitBoost algorithm.\nMinimizing the LogitBoost cost function.\nLogitBoost can be seen as a convex optimization. Specifically, given that we seek an additive model of the form\nthe LogitBoost algorithm minimizes the logistic loss:", "Automation-Control": 0.9379246831, "Qwen2": "Yes"} {"id": "56543723", "revid": "23646674", "url": "https://en.wikipedia.org/wiki?curid=56543723", "title": "3D makeR Technologies", "text": "3D makeR Technologies (makeR) is a 3D printer manufacturer. The company started out as an open-source printer company. It was founded between Barcelona and Santa Marta by Carlos Camargo, who currently acts as the CEO of the company. Following the traditional RepRap model, makeR's first products were do-it-yourself kits, with an alternative version, called the Prusa Tairona, based on the open-source FDM 3D printer Prusa i3. Current makeR 3D printers are designed with a closed frame and selected build sizes.\nProducts.\nTheir product line includes the PEGASUS series of 3D printers.
The makeR 3D printers are compatible with polylactic acid (PLA), acrylonitrile butadiene styrene (ABS), thermoplastic polyurethane (TPU), high-impact polystyrene (HIPS), polyvinyl alcohol (PVA) and some special materials for industrial needs, such as PLA filament mixed with metal particles which, after sanding, give 3D-printed parts an appearance similar to metal (steel, copper, and aluminium). makeR printers can also print with nylon and carbon fiber.", "Automation-Control": 0.9974737167, "Qwen2": "Yes"} {"id": "39701014", "revid": "7583140", "url": "https://en.wikipedia.org/wiki?curid=39701014", "title": "Cast urethanes", "text": "Cast urethane molding is similar to injection molding. In injection molding, a hard tool is created. The hard tool, made of an A side and a B side, forms an internal void, and that void is injected with plastics that range in material properties, durability, and consistency. Plastic cups, dishware, and toys are most commonly made using injection molding because they are common consumer items that need to be produced on a mass scale, and injection molding (once the hard tool has been created) is designed for mass production.\nCasting urethanes is similar in that polyurethanes are injected into a tool. But with cast urethanes, the tool is a soft tool, typically a type of silicone mold. The mold is created from a master pattern. Master patterns for cast urethanes can be created with CNC machining (a common process for injection molding as well), but they are often created with additive manufacturing (3D printing), for reasons that vary.\nCreating a cast urethane master pattern differs from the steps involved in creating hard tooling for injection molding. Hard tools for injection molding are subjected to a lot of stress and heat during the injection process. They will see runs of thousands of parts per day.
The care that goes into a hard tool involves intense machine programming, which alone costs thousands of dollars. The price of hard tooling is balanced by the mass production the tooling enables, which is where cast urethanes begin to differ. Cast urethanes are suited for smaller runs of parts and prototyping. Because the cost of soft tooling is lower, down in the hundreds rather than hundreds of thousands, cast urethanes are excellent resources for creators still testing product design, for one-off products, or for testing market and consumer response to a new product.\nMaster Patterns.\nCast urethane master patterns can be produced using machining, additive manufacturing, or even an already existing product. The master pattern is used to create an A side and a B side for a mold. The pattern is used to form a void within the mold. The mold material is one that easily picks up surface detail (such as silicone) because the mold will be responsible for reproducing the surface of the product.\nApplications.\nThere are many types of cast urethane applications, including:\nProcess.\nCast urethane starts as a liquid that can be dispensed into a mold and post-cured in ovens; where required, secondary machining operations can be added. Cast thermoset urethanes have better physical properties than most injection-molded or extruded thermoplastics. Dispensing liquid urethane into open molds or compression tools makes it possible to cast just about any configuration from affordable tooling.\nSteps include first printing a master pattern for an accurate silicone mold, which is then encased in liquid silicone. After the mold cures, it is cut into distinct sections and the pattern is removed. The cavity formed is used for casting the end product.
The cavity or void is filled with a material, which will cure and be removed from the tool.\nIndustries.\nThe types of industries that utilize cast urethane include:", "Automation-Control": 0.9722672105, "Qwen2": "Yes"} {"id": "39711468", "revid": "18872885", "url": "https://en.wikipedia.org/wiki?curid=39711468", "title": "Input-to-state stability", "text": "Input-to-state stability (ISS) is a stability notion widely used to study the stability of nonlinear control systems with external inputs. Roughly speaking, a control system is ISS if it is globally asymptotically stable in the absence of external inputs and if its trajectories are bounded by a function of the size of the input for all sufficiently large times.\nThe importance of ISS is due to the fact that the concept has bridged the gap between input–output and state-space methods, widely used within the control systems community.\nISS unified the Lyapunov and input-output stability theories and revolutionized our view on the stabilization of nonlinear systems, the design of robust nonlinear observers, the stability of nonlinear interconnected control systems, nonlinear detectability theory, and supervisory adaptive control.\nThis made ISS the dominant stability paradigm in nonlinear control theory, with such diverse applications as robotics, mechatronics, systems biology, and electrical and aerospace engineering, to name a few.\nThe notion of ISS was introduced for systems described by ordinary differential equations by Eduardo Sontag in 1989.\nSince then, the concept has been successfully used for many other classes of control systems, including systems governed by partial differential equations, retarded systems, hybrid systems, etc.\nDefinition.\nConsider a time-invariant system of ordinary differential equations of the form\nwhere formula_1 is a Lebesgue measurable essentially bounded external input and formula_2 is a Lipschitz continuous function w.r.t. the first argument uniformly w.r.t. the second one.
This ensures that there exists a unique absolutely continuous solution of the system .\nTo define ISS and related properties, we exploit the following classes of comparison functions. We denote by formula_3 the set of continuous increasing functions formula_4 with formula_5 and by formula_6 the set of continuous strictly decreasing functions formula_4 with formula_8. Then we can denote formula_9 as functions where formula_10 for all formula_11 and formula_12 for all formula_13.\nSystem is called globally asymptotically stable at zero (0-GAS) if the corresponding system with zero input\nis globally asymptotically stable, that is, there exist\nformula_9 so that for all initial values\nformula_15 and all times formula_11 the following estimate is valid for solutions of\nSystem is called input-to-state stable (ISS) if there exist functions\nformula_17 and formula_9 so that for all initial values formula_15, all admissible inputs formula_20 and all times formula_11 the following inequality holds\nThe function formula_22 in the above inequality is called the gain.\nClearly, an ISS system is 0-GAS as well as BIBO stable (if we put the output equal to the state of the system). The converse implication is in general not true.\nIt can also be proved that if formula_23, then formula_24.\nCharacterizations of the input-to-state stability property.\nFor an understanding of ISS, its restatements in terms of other stability properties are of great importance.\nSystem is called globally stable (GS) if there exist\nformula_25 such that formula_26, formula_27 and formula_28 it holds that\nSystem satisfies the asymptotic gain (AG) property if there exists\nformula_29: formula_26, formula_27 it holds that\nThe following statements are equivalent for a sufficiently regular right-hand side formula_2\n1. is ISS\n2. is GS and has the AG property\n3. is 0-GAS and has the AG property\nThe proof of this result, as well as many other characterizations of ISS, can be found in the papers\n and.
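The ISS estimate can be checked numerically on the classic scalar example x' = -x + u, for which the bound |x(t)| <= |x(0)| exp(-t) + sup|u| holds, i.e. one may take beta(r, t) = r exp(-t) and gain gamma(r) = r. The simulation below (forward Euler, with a small tolerance for discretization error) is an illustrative sketch added here, not part of the article; the input u(t) = sin(t) and all numerical parameters are invented for demonstration:

```python
import math

def violates_iss_bound(x0=2.0, T=10.0, dt=1e-3):
    """Simulate x' = -x + u with u(t) = sin(t) (so sup|u| = 1) and check the
    ISS estimate |x(t)| <= |x0|*exp(-t) + sup|u| at every time step."""
    sup_u = 1.0
    tol = 1e-2          # slack for the explicit-Euler discretization error
    x, t = x0, 0.0
    worst = -float("inf")
    for _ in range(int(T / dt)):
        u = math.sin(t)
        x += (-x + u) * dt   # forward Euler step of x' = -x + u
        t += dt
        bound = abs(x0) * math.exp(-t) + sup_u
        worst = max(worst, abs(x) - bound)
    return worst > tol

bound_holds = not violates_iss_bound()
```

The transient term |x0| exp(-t) dominates early on and vanishes, after which the trajectory stays inside the gain term sup|u|, which is exactly the qualitative picture the ISS definition formalizes.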
\nOther characterizations of ISS that are valid under very mild restrictions on the regularity of the right-hand side formula_2, and that are applicable to more general infinite-dimensional systems, have been shown in.\nISS-Lyapunov functions.\nAn important tool for the verification of ISS is the ISS-Lyapunov function.\nA smooth function formula_34 is called an ISS-Lyapunov function for , if formula_35, formula_36 and positive-definite function formula_37, such that:\nand\nformula_39 it holds:\nThe function formula_41 is called the Lyapunov gain.\nIf a system is without inputs (i.e. formula_42), then the last implication reduces to the condition\nwhich tells us that formula_44 is a \"classic\" Lyapunov function.\nAn important result due to E. Sontag and Y. Wang is that a system is ISS if and only if there exists a smooth ISS-Lyapunov function for it.\nExamples.\nConsider a system\nDefine a candidate ISS-Lyapunov function formula_46 by\nformula_47\nformula_48\nChoose a Lyapunov gain formula_49 by\nThen we obtain that for formula_51 it holds\nThis shows that formula_44 is an ISS-Lyapunov function for the considered system with the Lyapunov gain formula_49.\nInterconnections of ISS systems.\nOne of the main features of the ISS framework is the possibility of studying the stability properties of interconnections of input-to-state stable systems.\nConsider the system given by\nHere formula_55, formula_56 and formula_57 are Lipschitz continuous in formula_58 uniformly with respect to the inputs from the formula_59-th subsystem.\nFor the formula_59-th subsystem of , the definition of an ISS-Lyapunov function can be written as follows.\nA smooth function formula_61 is an ISS-Lyapunov function (ISS-LF)\nfor the formula_59-th subsystem of , if there exist\nfunctions formula_63, formula_64,\nformula_65, formula_66, formula_67 and a positive-definite function formula_68, such that:\nand formula_70 it holds\nCascade interconnections.\nCascade interconnections are a special type of interconnection, where the
dynamics of the formula_59-th subsystem does not depend on the states of the subsystems formula_73. Formally, the cascade interconnection can be written as\nIf all subsystems of the above system are ISS, then the whole cascade interconnection is also ISS.\nIn contrast to cascades of ISS systems, the cascade interconnection of 0-GAS systems is in general not 0-GAS. The following example illustrates this fact. Consider a system given by\nBoth subsystems of this system are 0-GAS, but for sufficiently large initial states formula_75 and for a certain finite time formula_76 it holds that formula_77 for formula_78, i.e. the system exhibits finite escape time, and thus is not 0-GAS.\nFeedback interconnections.\nThe interconnection structure of subsystems is characterized by the internal Lyapunov gains formula_79. The question of whether the interconnection is ISS depends on the properties of the gain operator formula_80 defined by\nThe following small-gain theorem establishes a sufficient condition for ISS of the interconnection of ISS systems. Let formula_82 be an ISS-Lyapunov function for the formula_59-th subsystem of with corresponding gains formula_79, formula_85. If the nonlinear small-gain condition\nholds, then the whole interconnection is ISS.\nThe small-gain condition holds iff for each cycle in formula_86 (that is, for all formula_87, where formula_88) and for all formula_89 it holds that\nThe small-gain condition in this form is also called the cyclic small-gain condition.\nRelated stability concepts.\nIntegral ISS (iISS).\nSystem is called integral input-to-state stable (iISS) if there exist functions formula_91 and formula_9 so that for all initial values formula_15, all admissible inputs formula_20 and all times formula_11 the following inequality holds\nIn contrast to ISS systems, the trajectories of an integral ISS system may be unbounded even for bounded inputs. To see this, put formula_96 for all formula_97 and take formula_98.
Then the estimate takes the form\nand the right-hand side grows to infinity as formula_100.\nAs in the ISS framework, Lyapunov methods play a central role in iISS theory.\nA smooth function formula_34 is called an iISS-Lyapunov function for , if formula_35, formula_36 and positive-definite function formula_37, such that:\nand \nformula_39 it holds:\nAn important result due to D. Angeli, E. Sontag and Y. Wang is that a system is integral ISS if and only if there exists an iISS-Lyapunov function for it.\nNote that in the formula above formula_37 is assumed to be only positive definite. \nIt can easily be proved that if formula_109 is an iISS-Lyapunov function with formula_110, then formula_109 is actually an ISS-Lyapunov function for the system.\nThis shows, in particular, that every ISS system is integral ISS. The converse implication is not true, as the following example shows. Consider the system\nThis system is not ISS, since for large enough inputs the trajectories are unbounded. However, it is integral ISS with an iISS-Lyapunov function formula_109 defined by \nLocal ISS (LISS).\nLocal versions of the ISS property also play an important role. 
A system is called locally ISS (LISS) if there exist a constant formula_115 and functions\nformula_17 and formula_9 so that for all formula_118, all admissible inputs formula_119 and all times formula_11 it holds that\nAn interesting observation is that 0-GAS implies LISS.\nOther stability notions.\nMany other stability notions related to ISS have been introduced: incremental ISS, input-to-state dynamical stability (ISDS), input-to-state practical stability (ISpS), input-to-output stability (IOS), etc.\nISS of time-delay systems.\nConsider the time-invariant time-delay system\nHere formula_121 is the state of the system at time formula_122, formula_123 and formula_124 satisfies certain assumptions to guarantee existence and uniqueness of solutions of the system .\nSystem is ISS if and only if there exist functions formula_125 and formula_126 such that for every formula_127, every admissible input formula_128 and for all formula_129, it holds that\nIn the ISS theory for time-delay systems, two different Lyapunov-type sufficient conditions have been proposed: via ISS Lyapunov-Razumikhin functions and via ISS Lyapunov-Krasovskii functionals. Converse Lyapunov theorems for time-delay systems have also been established.\nISS of other classes of systems.\nInput-to-state stability of systems based on time-invariant ordinary differential equations is a well-developed theory, covered in a recent monograph. However, ISS theory is also being developed for other classes of systems, such as time-variant ODE systems and hybrid systems. More recently, generalizations of ISS concepts to infinite-dimensional systems have also been proposed.\nSeminars and online resources on ISS.\n1. Online Seminar: Input-to-State Stability and its Applications\n2. 
YouTube Channel on ISS", "Automation-Control": 1.0000047684, "Qwen2": "Yes"} {"id": "39712136", "revid": "9021902", "url": "https://en.wikipedia.org/wiki?curid=39712136", "title": "Prioritised Petri net", "text": "A Prioritised Petri net is a structure (PN, Π) where PN is a Petri net and Π is a priority function that maps transitions into non-negative natural numbers representing their priority level \nThe enabled transitions with a given priority k always fire before any other enabled transition with priority j<k.", "Automation-Control": 0.7845620513, "Qwen2": "Yes"} {"id": "56887767", "revid": "27015025", "url": "https://en.wikipedia.org/wiki?curid=56887767", "title": "Stanford arm", "text": "The Stanford arm is an industrial robot with six degrees of freedom, designed at Stanford University by Victor Scheinman in 1969. \nThe Stanford arm is a serial manipulator whose kinematic chain consists of two revolute joints at the base, a prismatic joint, and a spherical joint. Because it includes several kinematic pairs, it is often used as an educational example in robot kinematics.", "Automation-Control": 0.9592165947, "Qwen2": "Yes"} {"id": "59597756", "revid": "7611264", "url": "https://en.wikipedia.org/wiki?curid=59597756", "title": "Minimum relevant variables in linear system", "text": "MINimum Relevant Variables in Linear System (Min-RVLS) is a problem in mathematical optimization. Given a linear program, it is required to find a feasible solution in which the number of non-zero variables is as small as possible.\nThe problem is known to be NP-hard and even hard to approximate.\nDefinition.\nA Min-RVLS problem is defined by:\nThe linear system is given by: \"A x\" \"R\" \"b.\" It is assumed to be feasible (i.e., satisfied by at least one \"x\"). 
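The definition can be made concrete for the equality variant Min-RVLS[=] with a brute-force sketch: enumerate candidate supports in order of increasing size and return the first feasible solution, which then has the fewest nonzeros. The function name and the tiny instance below are invented for illustration; the search is exponential in the number of variables, consistent with the problem's hardness.

```python
from itertools import combinations

import numpy as np

def min_rvls_eq(A, b, tol=1e-9):
    """Brute-force Min-RVLS[=]: return x with A x = b and fewest nonzeros.

    Enumerates candidate supports in order of increasing size and solves
    the least-squares problem restricted to each support. Exponential in
    the number of variables -- an illustration, not a practical algorithm.
    """
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    n = A.shape[1]
    for k in range(n + 1):
        for support in combinations(range(n), k):
            cols = list(support)
            x = np.zeros(n)
            if cols:
                sol, *_ = np.linalg.lstsq(A[:, cols], b, rcond=None)
                x[cols] = sol
            if np.allclose(A @ x, b, atol=tol):
                return x  # first hit has minimum support size
    return None  # unreachable when the system is feasible

# Hypothetical instance: x = (0, 1, 0) solves it with a single nonzero.
A = [[1.0, 2.0, 0.0],
     [0.0, 2.0, 1.0]]
b = [2.0, 2.0]
x = min_rvls_eq(A, b)
```

Because supports are visited in order of size, the first feasible solution found is guaranteed to be sparsest.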
Depending on R, there are four different variants of this system: "A x = b, A x ≥ b, A x > b, A x ≠ b".\nThe goal is to find an "n"-by-1 vector "x" that satisfies the system "A x" "R" "b" and, subject to that, contains as few nonzero elements as possible.\nSpecial case.\nThe problem Min-RVLS[=] was presented by Garey and Johnson, who called it "minimum weight solution to linear equations". They proved it was NP-hard, but did not consider approximations.\nApplications.\nThe Min-RVLS problem is important in machine learning and linear discriminant analysis. Given a set of positive and negative examples, the goal is to minimize the number of features required to classify them correctly. This is known as the minimum feature set problem. An algorithm that approximates Min-RVLS within a factor of formula_1 could substantially reduce the number of training samples required to attain a given accuracy level.\nThe shortest codeword problem in coding theory is the same problem as Min-RVLS[=] when the coefficients are in GF(2).\nRelated problems.\nIn MINimum Unsatisfied Linear Relations (Min-ULR), we are given a binary relation "R" and a linear system "A x" "R" "b", which is now assumed to be "infeasible". The goal is to find a vector "x" that violates as few relations as possible, while satisfying all the others.\nMin-ULR[≠] is trivially solvable, since any system with real variables and a finite number of inequality constraints is feasible. As for the other three variants:\nIn the complementary problem MAXimum Feasible Linear Subsystem (Max-FLS), the goal is to find a maximum subset of the constraints that can be satisfied simultaneously. \nHardness of approximation.\nAll four variants of Min-RVLS are hard to approximate. In particular, none of the four variants can be approximated within a factor of formula_3, for any formula_4, unless NP is contained in DTIME(formula_5). 
The hardness is proved by reductions:\nOn the other hand, there is a reduction from Min-RVLS[=] to Min-ULR[=]. It also applies to Min-ULR[≥] and Min-ULR[>], since each equation can be replaced by two complementary inequalities.\nTherefore, when R is in {=,>,≥}, Min-ULR and Min-RVLS are equivalent in terms of approximation hardness.", "Automation-Control": 0.738014102, "Qwen2": "Yes"} {"id": "53267005", "revid": "1893804", "url": "https://en.wikipedia.org/wiki?curid=53267005", "title": "Design for verification", "text": "Design for verification (DfV) is a set of engineering guidelines to aid designers in ensuring right first time manufacturing and assembly of large-scale components. The guidelines were developed as a tool to inform and direct designers during early stage design phases to trade off estimated measurement uncertainty against tolerance, cost, assembly, measurability and product requirements.\nBackground.\nIncreased competition in the aerospace market has placed additional demands on aerospace manufacturers to reduce costs, increase product flexibility and improve manufacturing efficiency. There is a knowledge gap within the sphere of digital to physical dimensional verification and on how to successfully achieve dimensional specifications within real-world assembly factories that are subject to varying environmental conditions. \nThe DfV framework is an engineering principle to be used within low rate and high value and complexity manufacturing industries to aid in achieving high productivity in assembly via the effective dimensional verification of large volume structures, during final assembly. 
The DfV framework has been developed to enable engineers to design and plan the effective dimensional verification of large volume, complex structures in order to reduce failure rates and end-product costs, improve process integrity and efficiency, optimise metrology processes, decrease tooling redundancy and increase product quality and conformance to specification. The theoretical elements of the DfV methods were published in 2016, together with their testing using industrial case studies of representative complexity. The industrial tests published on ScienceDirect proved that by using the new design for verification methods alongside the traditional ‘design for X’ toolbox, the resultant process achieved improved tolerance analysis and synthesis, optimized large volume metrology and assembly processes and more cost-effective tool and jig design.", "Automation-Control": 0.9391713142, "Qwen2": "Yes"} {"id": "42961059", "revid": "44592611", "url": "https://en.wikipedia.org/wiki?curid=42961059", "title": "ZF 5HP transmission", "text": "5HP is ZF Friedrichshafen AG's trademark name for its five-speed automatic transmission models (5-speed transmission with Hydraulic converter and Planetary gearsets) for longitudinal engine applications, designed and built by ZF's subsidiary in Saarbrücken.\nSpecifications.\nFinal Conventionally Designed Gearbox.\nThe 5HP is the last transmission family that utilized a conventional design. To meet the requirements for a greater number of gear ratios, the only viable option entailed adding more components. That made these gearboxes bigger, heavier and even more expensive to build. As the presence of up to ten main components (two Ravigneaux gearsets in series, along with brakes and clutches) showed, this marked the end of conventional gearbox design. The successor, the 6HP-family, employed an all-new Lepelletier gear mechanism design, which needed only eight main components in order to achieve six gear ratios. 
This demonstrated a new paradigm in gearbox design.\nRavigneaux Planetary Gearset Types.\n5HP 18.\nApplications\n5HP 19.\nApplications\nBMW — longitudinal engine, rear wheel drive\n5HP 19FL.\nApplications\nVolkswagen Group — longitudinal engine transaxle, front-wheel drive\n5HP 19FLA.\nApplications\nVolkswagen Group — longitudinal engine, transaxle permanent four-wheel drive\n1999 (DRN/EKX) transmissions used Induction speed sensors and 2000+ (FAS) transmissions used Hall Effect sensors. These transmissions are mechanically the same, but are not interchangeable.\n5HP 19HL.\nApplications\nPorsche — longitudinal engine rear engine transaxle\n5HP 19HLA.\nApplications\nPorsche — longitudinal engine rear engine transaxle\nPorsche — mid-engine design flat-six engine, 5-speed tiptronic #1060, rear-wheel drive A87.01-xxx, A87.02-xxx, A87.21-xxx, [5HP19FL Valve Body, Solenoids, and Speed Sensor. Different Wiring Harness.] [Speed Sensor/Pulser part # ZF 0501314432]\nSimpson Planetary Gearset Types.\n5HP 24.\nApplications\n5HP 24A.\nApplications\n5HP 30.\nApplications", "Automation-Control": 0.7174073458, "Qwen2": "Yes"} {"id": "42966106", "revid": "575347", "url": "https://en.wikipedia.org/wiki?curid=42966106", "title": "Instrumentation and control engineering", "text": "Instrumentation and control engineering (ICE) is a branch of engineering that studies the measurement and control of process variables, and the design and implementation of systems that incorporate them. Process variables include pressure, temperature, humidity, flow, pH, force and speed. \nICE combines two branches of engineering. Instrumentation engineering is the science of the measurement and control of process variables within a production or manufacturing area. Meanwhile, control engineering, also called control systems engineering, is the engineering discipline that applies control theory to design systems with desired behaviors. 
\nControl engineers are responsible for the research, design, and development of control devices and systems, typically in manufacturing facilities and process plants. Control methods employ sensors to measure the output variable of the device and provide feedback to the controller so that it can make corrections toward desired performance. Automatic control manages a device without the need for human input for correction, such as cruise control for regulating a car's speed. \nControl systems engineering activities are multi-disciplinary in nature. They focus on the implementation of control systems, mainly derived from mathematical modeling. Because instrumentation and control play a significant role in gathering information from a system and changing its parameters, they are a key part of control loops.\nAs profession.\nHigh demand for engineering professionals is found in fields associated with process automation. Specializations include industrial instrumentation, system dynamics, process control, and control systems. Additionally, technological knowledge, particularly in computer systems, is essential to the job of an instrumentation and control engineer; important technology-related topics include human–computer interaction, programmable logic controllers, and SCADA. The tasks center on designing, developing, maintaining and managing control systems.\nThe goals of the work of an instrumentation and control engineer are to maximize:\nAs academic discipline.\nMany universities teach instrumentation and control engineering as an academic course at the graduate and postgraduate levels. It is possible to approach this field from many standard engineering backgrounds, the most common being electrical and mechanical engineering, since these branches cover strong foundational subjects in control systems, system dynamics, electro-mechanical machines and devices, as well as electric circuits. 
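The sensor-controller-plant loop described above, with cruise control as the example, can be sketched in a few lines. Everything below (the first-order vehicle model, the gain, the drag coefficient) is a hypothetical toy chosen only to show feedback correction in code, not a real automotive controller.

```python
def simulate_cruise_control(target=25.0, kp=0.8, dt=0.1, steps=600):
    """Minimal proportional feedback loop for speed regulation.

    The 'plant' is a toy first-order vehicle model with linear drag;
    the controller corrects the throttle based on the measured error.
    """
    speed = 0.0           # m/s, read by an idealized speed sensor
    drag = 0.05           # 1/s, linear drag coefficient of the toy model
    for _ in range(steps):
        error = target - speed           # feedback: desired minus measured
        throttle = kp * error            # proportional correction
        accel = throttle - drag * speed  # toy vehicle dynamics
        speed += accel * dt              # Euler integration step
    return speed

final_speed = simulate_cruise_control()
```

Note that pure proportional control settles at kp*target/(kp + drag), slightly below the target; this steady-state offset is the classic motivation for adding an integral term.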
", "Automation-Control": 1.0000063181, "Qwen2": "Yes"} {"id": "5656649", "revid": "14965160", "url": "https://en.wikipedia.org/wiki?curid=5656649", "title": "Spillage", "text": "In industrial production, spillage is the loss of production output due to production of a series of defective or unacceptable products which must be rejected. Spillage is an often costly event which occurs in manufacturing when a process degradation or failure occurs that is not immediately detected and corrected, and in which defective or reject product therefore continues to be produced for some extended period of time.\nSpillage results in costs due to lost production volume, excessive scrap, delayed delivery of product, and wastage of human and capital equipment resources. Minimization of the occurrence and duration of manufacturing spillage requires that closed-loop control and associated process monitoring and metrology functions be integrated into critical steps of the overall manufacturing process. The extent to which process control is complete and metrology is high resolution so as to be comprehensive determines the extent to which spillages will be prevented.", "Automation-Control": 0.7311636806, "Qwen2": "Yes"} {"id": "24482634", "revid": "38448542", "url": "https://en.wikipedia.org/wiki?curid=24482634", "title": "MazaCAM", "text": "MazaCAM is a CNC programming system for the Mazak CNC (Numerical control) machine-tools (see Yamazaki Mazak Corporation), sold and supported by SolutionWare Corporation.\nMazaCAM differs from most other CNC programming systems in that it can generate CNC programs in both Mazatrol and G-code .\nReferences.\nOther Sources of Notability\n\n\n\n\n", "Automation-Control": 0.7917863727, "Qwen2": "Yes"} {"id": "39284847", "revid": "7903804", "url": "https://en.wikipedia.org/wiki?curid=39284847", "title": "Set estimation", "text": "In statistics, a random vector \"x\" is classically represented by a probability density function. 
\nIn a set-membership approach or set estimation, "x" is represented by a set "X" to which "x" is assumed to belong. This means that the support of the probability distribution function of "x" is included inside "X". On the one hand, representing random vectors by sets makes it possible to impose fewer assumptions on the random variables (such as independence), and nonlinearities are easier to deal with. On the other hand, a probability distribution function provides more accurate information than a set enclosing its support.\nSet-membership estimation.\nSet membership estimation (or "set estimation" for short) is an estimation approach which considers that measurements are represented by a set "Y" (most of the time a box of R"m", where "m"\nis the number of measurements) of the measurement space. If "p" is the parameter vector and "f" is the model function, then the set of all feasible parameter vectors is\nwhere "P"0 is the prior set for the parameters. Characterizing "P" corresponds to a set-inversion problem.\nResolution.\nWhen "f" is linear, the feasible set "P" can be described by linear inequalities and can be approximated using linear programming techniques.\nWhen "f" is nonlinear, the resolution can be performed using interval analysis. The feasible set "P" is then approximated by inner and outer subpavings. The main limitation of the method is its exponential complexity with respect to the number of parameters.\nExample.\nConsider the following model\nwhere "p"1 and "p"2 are the two parameters\nto be estimated.\nAssume that at times "t"1=−1, "t"2=1, "t"3=2,\nthe following interval measurements have been collected:\nas illustrated by Figure 1. 
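Since the example's model and measurement formulas are not reproduced here, the following sketch assumes, purely for illustration, an exponential model y(t) = p1*exp(p2*t) and invented interval bounds. It classifies a grid of parameter pairs as feasible or not, a crude point-sampled stand-in for the set-inversion step; a genuine implementation would use interval arithmetic to classify whole boxes into inner and outer subpavings.

```python
import math

# Assumed model for illustration (not the article's exact formulas):
# y(t) = p1 * exp(p2 * t).
def model(p1, p2, t):
    return p1 * math.exp(p2 * t)

# Hypothetical interval measurements [y_lo, y_hi] at times t1, t2, t3.
measurements = [(-1.0, (0.3, 1.2)),
                (1.0, (1.5, 3.5)),
                (2.0, (3.0, 8.0))]

def consistent(p1, p2):
    """A parameter pair is feasible if the model output lies inside
    every measurement interval."""
    return all(lo <= model(p1, p2, t) <= hi for t, (lo, hi) in measurements)

# Point-sampled approximation of the feasible set P over a prior box P0.
grid = 60
feasible = []
for i in range(grid):
    p1 = 0.5 + 1.5 * i / (grid - 1)        # p1 in [0.5, 2.0]
    for j in range(grid):
        p2 = 1.5 * j / (grid - 1)          # p2 in [0.0, 1.5]
        if consistent(p1, p2):
            feasible.append((p1, p2))
```

The retained grid points play the role of the red (inner) boxes; everything else falls outside or near the boundary of P.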
The corresponding measurement set (here a box) is\nThe model function is defined by\nThe components of "f" are obtained using the model for each time measurement.\nAfter solving the set inversion problem, we get the approximation depicted in Figure 2.\nRed boxes are inside the feasible set "P" and blue boxes are outside "P".\nRecursive case.\nSet estimation can be used to estimate the state of a system described by state equations using a recursive implementation.\nWhen the system is linear, the corresponding feasible set for the state vector can be described by polytopes or by ellipsoids.\nWhen the system is nonlinear, the set can be enclosed by subpavings.\nRobust case.\nWhen outliers occur, the set estimation method generally returns an empty set. This is\ndue to the fact that the intersection of the sets of parameter vectors that are consistent\nwith each "i"th data bar is empty. To be robust with respect to outliers,\nwe generally characterize the set of parameter vectors that are consistent with\nall data bars except "q" of them. This is possible using the notion of "q"-relaxed intersection.", "Automation-Control": 0.8698977828, "Qwen2": "Yes"} {"id": "14755952", "revid": "16944068", "url": "https://en.wikipedia.org/wiki?curid=14755952", "title": "Machine Sazi Tabriz", "text": "Machine Sazi Tabriz Co. (Tabriz Machinery Manufacturing Co.), also known by its abbreviation MST, is a machine tool manufacturing factory in Tabriz, Iran. The major products of the factory are machine tools such as turning machines, milling machines, drilling machines and grinding machines. A large variety of MST's products are CNC-controlled machines. The MST manufacturing complex was established in 1969 with technological assistance from Eastern European countries. MST serves as a nationwide base for the design and manufacturing of machine tools. 
MST has owned the Machine Sazi football club since 1969.", "Automation-Control": 0.9997486472, "Qwen2": "Yes"} {"id": "6343916", "revid": "1153681755", "url": "https://en.wikipedia.org/wiki?curid=6343916", "title": "Soft sensor", "text": "Soft sensor or virtual sensor is a common name for software in which several measurements are processed together. Soft sensors are commonly based on control theory and are also known as state observers. There may be dozens or even hundreds of measurements. The interaction of the signals can be used for calculating new quantities that need not be measured. Soft sensors are especially useful in data fusion, where measurements of different characteristics and dynamics are combined. They can be used for fault diagnosis as well as for control applications.\nWell-known software algorithms that can be seen as soft sensors include Kalman filters. More recent implementations of soft sensors use neural networks or fuzzy computing.\nExamples of soft sensor applications:", "Automation-Control": 0.9992749095, "Qwen2": "Yes"} {"id": "45486879", "revid": "12396222", "url": "https://en.wikipedia.org/wiki?curid=45486879", "title": "Rapid Heat Cycle Molding", "text": "Rapid Heat Cycle Molding (RHCM) is also known as steam injection molding. Dr. Chao-Tsai Huang has written an extensive 68-page paper outlining, among other things, a case study on RHCM. His paper is entitled In-depth Study of RHCM and IHM Technologies and Industrial Applications.\nIn general, ABS is used as the raw material. The primary advantage of steam injection is that it eliminates weld lines on molded parts, which allows companies to eliminate downstream processes such as painting. In non-steam molding, water heats the tool to a constant temperature, and plastic is injected into the warm tool.\nIn steam molding, steam is injected at 160 degrees to heat the tool. When the tool reaches a predetermined temperature (about 140 degrees), the plastic is injected. 
Cold water is immediately added to the process to cool the plastic down to around 40 degrees.\nBecause the mold is so hot when the plastic is injected, there are no weld lines, and a \"perfect\" product results. Steam injection molding is now being extensively used to produce the front covers of LCD TVs.", "Automation-Control": 0.9587019682, "Qwen2": "Yes"} {"id": "66501526", "revid": "7611264", "url": "https://en.wikipedia.org/wiki?curid=66501526", "title": "Time metrology", "text": "Time metrology or time and frequency metrology is the application of metrology for time keeping, including frequency stability.\nIts main tasks are the realization of the second as the SI unit of measurement for time and the establishment of time standards and frequency standards as well as their dissemination.", "Automation-Control": 0.993149519, "Qwen2": "Yes"} {"id": "18373732", "revid": "42727488", "url": "https://en.wikipedia.org/wiki?curid=18373732", "title": "Castability", "text": "Castability is the ease of forming a quality casting. A very castable part design is easily developed, incurs minimal tooling costs, requires minimal energy, and has few rejections. Castability can refer to a part design or a material property.\nPart design.\nPart design and geometry directly affect the castability, with volume, surface area and the number of features being the most important attributes. \nIf the design has undercuts or interior cavities it decreases castability due to tooling complexity. Long thin sections in a design are hard to fill. Sudden changes in wall thickness reduce castability because it induces turbulence during filling; fillets should be added to avoid this. Annulars in the path of flow should be avoided because they can cause cold shuts or misruns. A design that causes isolated hot spots decreases castability. 
An ideal design would have progressive directional solidification from the thinnest section to the thickest.\nLocation of the mold's parting line also affects castability, because a non-planar parting line also increases tooling complexity.\nIf a design requires a high degree of accuracy, fine surface finish or defect free surface it reduces the castability of the part. However, the casting process can be very economical for part designs that require intricate contoured surfaces, thickness variations, and internal features.\nQuantitative analysis.\nThe castability of a design can be partially quantitatively determined by the following three equations. Better castability is denoted by a larger number.\nWhere Vc is the volume of the casting and Vb is the volume of the smallest box that the casting could fit in.\nWhere Vc is the volume of the casting and Ac is the surface area of the casting\nWhere nf is the number of features (holes, pockets, slots, bosses, ribs, etc.)\nMaterial properties.\nMaterial properties that influence their castability include their pouring temperature, fluidity, solidification shrinkage, and slag/dross formation tendencies.", "Automation-Control": 0.6721929908, "Qwen2": "Yes"} {"id": "35867897", "revid": "28481209", "url": "https://en.wikipedia.org/wiki?curid=35867897", "title": "Bayesian interpretation of kernel regularization", "text": "Within bayesian statistics for machine learning, kernel methods arise from the assumption of an inner product space or similarity structure on inputs. For some such methods, such as support vector machines (SVMs), the original formulation and its regularization were not Bayesian in nature. It is helpful to understand them from a Bayesian perspective. Because the kernels are not necessarily positive semidefinite, the underlying structure may not be inner product spaces, but instead more general reproducing kernel Hilbert spaces. 
In Bayesian probability kernel methods are a key component of Gaussian processes, where the kernel function is known as the covariance function. Kernel methods have traditionally been used in supervised learning problems where the \"input space\" is usually a \"space of vectors\" while the \"output space\" is a \"space of scalars\". More recently these methods have been extended to problems that deal with multiple outputs such as in multi-task learning.\nA mathematical equivalence between the regularization and the Bayesian point of view is easily proved in cases where the reproducing kernel Hilbert space is \"finite-dimensional\". The infinite-dimensional case raises subtle mathematical issues; we will consider here the finite-dimensional case. We start with a brief review of the main ideas underlying kernel methods for scalar learning, and briefly introduce the concepts of regularization and Gaussian processes. We then show how both points of view arrive at essentially equivalent estimators, and show the connection that ties them together.\nThe supervised learning problem.\nThe classical supervised learning problem requires estimating the output for some new input point formula_1 by learning a scalar-valued estimator formula_2 on the basis of a training set formula_3 consisting of formula_4 input-output pairs, formula_5. Given a symmetric and positive bivariate function formula_6 called a \"kernel\", one of the most popular estimators in machine learning is given by\nwhere formula_7 is the kernel matrix with entries formula_8, formula_9, and formula_10. 
We will see how this estimator can be derived both from a regularization and a Bayesian perspective.\nA regularization perspective.\nThe main assumption in the regularization perspective is that the set of functions formula_11 belongs to a reproducing kernel Hilbert space formula_12.\nReproducing kernel Hilbert space.\nA reproducing kernel Hilbert space (RKHS) formula_12 is a Hilbert space of functions defined by a symmetric, positive-definite function formula_14 called the "reproducing kernel" such that the function formula_15 belongs to formula_12 for all formula_17. Three main properties make an RKHS appealing:\n1. The "reproducing property", which gives the space its name,\nwhere formula_19 is the inner product in formula_12.\n2. Functions in an RKHS are in the closure of the linear combination of the kernel at given points,\nThis allows the construction in a unified framework of both linear and generalized linear models.\n3. The squared norm in an RKHS can be written as\nand can be viewed as measuring the "complexity" of the function.\nThe regularized functional.\nThe estimator is derived as the minimizer of the regularized functional\nwhere formula_23 and formula_24 is the norm in formula_12. The first term in this functional, which measures the average of the squares of the errors between the formula_26 and the formula_27, is called the "empirical risk" and represents the cost we pay by predicting formula_26 for the true value formula_27. The second term in the functional is the squared norm in an RKHS multiplied by a weight formula_30; it stabilizes the problem and introduces a trade-off between fitting and the complexity of the estimator. 
The weight formula_30, called the \"regularizer\", determines the degree to which instability and complexity of the estimator should be penalized (higher penalty for increasing value of formula_30).\nDerivation of the estimator.\nThe explicit form of the estimator in equation is derived in two steps. First, the representer theorem states that the minimizer of the functional can always be written as a linear combination of the kernels centered at the training-set points,\nfor some formula_33. The explicit form of the coefficients formula_34 can be found by substituting for formula_35 in the functional . For a function of the form in equation , we have that\nWe can rewrite the functional as\nThis functional is convex in formula_38 and therefore we can find its minimum by setting the gradient with respect to formula_38 to zero,\nSubstituting this expression for the coefficients in equation , we obtain the estimator stated previously in equation ,\nA Bayesian perspective.\nThe notion of a kernel plays a crucial role in Bayesian probability as the covariance function of a stochastic process called the \"Gaussian process\".\nA review of Bayesian probability.\nAs part of the Bayesian framework, the Gaussian process specifies the \"prior distribution\" that describes the prior beliefs about the properties of the function being modeled. These beliefs are updated after taking into account observational data by means of a \"likelihood function\" that relates the prior beliefs to the observations. Taken together, the prior and likelihood lead to an updated distribution called the \"posterior distribution\" that is customarily used for predicting test cases.\nThe Gaussian process.\nA Gaussian process (GP) is a stochastic process in which any finite number of random variables that are sampled follow a joint Normal distribution. The mean vector and covariance matrix of the Gaussian distribution completely specify the GP. 
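The defining property stated above, that any finite set of sampled values is jointly Gaussian with mean vector and covariance matrix induced by the mean and covariance functions, can be illustrated by drawing sample paths from a zero-mean GP prior. The squared-exponential kernel and all parameters below are illustrative assumptions.

```python
import numpy as np

def se_kernel(x1, x2, length=1.0):
    # squared-exponential covariance function (an illustrative choice)
    return np.exp(-(x1[:, None] - x2[None, :]) ** 2 / (2 * length ** 2))

def sample_gp_prior(x, n_samples=3, jitter=1e-6, seed=0):
    """Draw sample paths from GP(0, k): the values at the points x are
    jointly Gaussian with mean vector 0 and covariance matrix K = k(x, x)."""
    K = se_kernel(x, x) + jitter * np.eye(len(x))  # jitter for numerical stability
    L = np.linalg.cholesky(K)                      # K = L L^T
    rng = np.random.default_rng(seed)
    # L @ z with z ~ N(0, I) has covariance L L^T = K
    return L @ rng.standard_normal((len(x), n_samples))

x = np.linspace(0.0, 5.0, 50)
samples = sample_gp_prior(x)  # each column is one function draw
```

The Cholesky factor turns independent standard normals into draws with the prescribed covariance, which is all a GP prior is at a finite set of points.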
GPs are usually used as a priori distribution for functions, and as such the mean vector and covariance matrix can be viewed as functions, where the covariance function is also called the \"kernel\" of the GP. Let a function formula_42 follow a Gaussian process with mean function formula_43 and kernel function formula_44,\nIn terms of the underlying Gaussian distribution, we have that for any finite set formula_46 if we let formula_47 then\nwhere formula_49 is the mean vector and formula_50 is the covariance matrix of the multivariate Gaussian distribution.\nDerivation of the estimator.\nIn a regression context, the likelihood function is usually assumed to be a Gaussian distribution and the observations to be independent and identically distributed (iid),\nThis assumption corresponds to the observations being corrupted with zero-mean Gaussian noise with variance formula_52. The iid assumption makes it possible to factorize the likelihood function over the data points given the set of inputs formula_53 and the variance of the noise formula_52, and thus the posterior distribution can be computed analytically. For a test input vector formula_1, given the training data formula_56, the posterior distribution is given by\nwhere formula_58 denotes the set of parameters which include the variance of the noise formula_52 and any parameters from the covariance function formula_44 and where\nThe connection between regularization and Bayes.\nA connection between regularization theory and Bayesian theory can only be achieved in the case of \"finite dimensional RKHS\". 
Under this assumption, regularization theory and Bayesian theory are connected through Gaussian process prediction.\nIn the finite dimensional case, every RKHS can be described in terms of a feature map formula_62 such that\nFunctions in the RKHS with kernel formula_64 can then be written as\nand we also have that\nWe can now build a Gaussian process by assuming formula_67 to be distributed according to a multivariate Gaussian distribution with zero mean and identity covariance matrix,\nIf we assume a Gaussian likelihood we have\nwhere formula_70. The resulting posterior distribution is then given by\nWe can see that a \"maximum a posteriori (MAP)\" estimate is equivalent to the minimization problem defining Tikhonov regularization, where in the Bayesian case the regularization parameter is related to the noise variance.\nFrom a philosophical perspective, the loss function in a regularization setting plays a different role than the likelihood function in the Bayesian setting. Whereas the loss function measures the error that is incurred when predicting formula_72 in place of formula_73, the likelihood function measures how likely the observations are under the model that was assumed to be true in the generative process.
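This equivalence can be checked numerically on toy data. The sketch below (assumed RBF kernel, illustrative inputs and noise variance) computes the GP posterior mean at the training points and the Tikhonov-regularized fit obtained from the stationarity condition of the regularized least-squares functional; the two agree:

```python
import numpy as np

def rbf(a, b, ell=1.0):
    # Squared-exponential kernel, an assumed choice for this sketch.
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ell) ** 2)

rng = np.random.default_rng(1)
x = np.linspace(0.0, 10.0, 8)                 # toy training inputs
y = np.sin(x) + 0.1 * rng.standard_normal(8)  # toy noisy labels
sigma2 = 0.01                                 # assumed noise variance

K = rbf(x, x)

# GP posterior mean at the training inputs: K (K + sigma^2 I)^{-1} y
gp_mean = K @ np.linalg.solve(K + sigma2 * np.eye(8), y)

# Tikhonov / kernel-ridge estimate: minimize ||y - K c||^2 + sigma2 * c' K c.
# Setting the gradient to zero gives (K K + sigma2 K) c = K y.
c = np.linalg.solve(K @ K + sigma2 * K, K @ y)
ridge_fit = K @ c

print(np.allclose(gp_mean, ridge_fit))  # True: MAP estimate = regularized fit
```

Here the regularization weight equals the noise variance, which is exactly the correspondence stated in the text.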
From a mathematical perspective, however, the formulations of the regularization and Bayesian frameworks give the loss function and the likelihood function the same mathematical role: promoting the inference of functions formula_42 that approximate the labels formula_73 as closely as possible.\nRepresenter theorem.\nIn statistical learning theory, a representer theorem is any of several related results stating that a minimizer formula_1 of a regularized empirical risk functional defined over a reproducing kernel Hilbert space can be represented as a finite linear combination of kernel products evaluated on the input points of the training set.\nFormal statement.\nThe following Representer Theorem and its proof are due to Schölkopf, Herbrich, and Smola:\nTheorem: Consider a positive-definite real-valued kernel formula_2 on a non-empty set formula_3 with a corresponding reproducing kernel Hilbert space formula_4. Let there be given\nwhich together define the following regularized empirical risk functional on formula_4:\nThen, any minimizer of the empirical risk\nadmits a representation of the form:\nwhere formula_12 for all formula_13.\nProof:\nDefine a mapping\n(so that formula_15 is itself a map formula_16). Since formula_17 is a reproducing kernel, then\nwhere formula_19 is the inner product on formula_4.\nGiven any formula_21, one can use orthogonal projection to decompose any formula_22 into a sum of two functions, one lying in formula_23, and the other lying in the orthogonal complement:\nwhere formula_25 for all formula_26.\nThe above orthogonal decomposition and the reproducing property together show that applying formula_27 to any training point formula_28 produces\nwhich we observe is independent of formula_30.
Consequently, the value of the error function formula_31 in (*) is likewise independent of formula_30. For the second term (the regularization term), since formula_30 is orthogonal to formula_34 and formula_35 is strictly monotonic, we have\nTherefore setting formula_37 does not affect the first term of (*), while it strictly decreases the second term. Consequently, any minimizer formula_1 in (*) must have formula_37, i.e., it must be of the form\nwhich is the desired result.\nGeneralizations.\nThe Theorem stated above is a particular example of a family of results that are collectively referred to as \"representer theorems\"; here we describe several such.\nThe first statement of a representer theorem was due to Kimeldorf and Wahba for the special case in which\nfor formula_42. Schölkopf, Herbrich, and Smola generalized this result by relaxing the assumption of the squared-loss cost and allowing the regularizer to be any strictly monotonically increasing function formula_43 of the Hilbert space norm.\nIt is possible to generalize further by augmenting the regularized empirical risk functional through the addition of unpenalized offset terms. For example, Schölkopf, Herbrich, and Smola also consider the minimization\ni.e., we consider functions of the form formula_45, where formula_22 and formula_47 is an unpenalized function lying in the span of a finite set of real-valued functions formula_48. 
Under the assumption that the formula_49 matrix formula_50 has rank formula_51, they show that the minimizer formula_52 in formula_53\nadmits a representation of the form\nwhere formula_55 and the formula_56 are all uniquely determined.\nThe conditions under which a representer theorem exists were investigated by Argyriou, Micchelli, and Pontil, who proved the following:\nTheorem: Let formula_3 be a nonempty set, formula_17 a positive-definite real-valued kernel on formula_59 with corresponding reproducing kernel Hilbert space formula_4, and let formula_61 be a differentiable regularization function. Then given a training sample formula_62 and an arbitrary error function formula_63, a minimizer\nof the regularized empirical risk admits a representation of the form\nwhere formula_12 for all formula_13, if and only if there exists a nondecreasing function formula_68 for which\nEffectively, this result provides a necessary and sufficient condition on a differentiable regularizer formula_70 under which the corresponding regularized empirical risk minimization formula_71 will have a representer theorem. In particular, this shows that a broad class of regularized risk minimizations (much broader than those originally considered by Kimeldorf and Wahba) have representer theorems.\nApplications.\nRepresenter theorems are useful from a practical standpoint because they dramatically simplify the regularized empirical risk minimization problem formula_71. In most interesting applications, the search domain formula_4 for the minimization will be an infinite-dimensional subspace of formula_74, and therefore the search (as written) does not admit implementation on finite-memory and finite-precision computers. 
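For the common special case of a least-squares error with a Tikhonov regularizer, the finite-dimensional search reduces to a single linear system in the coefficients. The following sketch uses an assumed RBF kernel and toy data; `lam` plays the role of the regularization weight:

```python
import numpy as np

def kernel(a, b, ell=1.5):
    # Positive-definite RBF kernel (an assumed choice for this sketch).
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ell) ** 2)

rng = np.random.default_rng(2)
n = 30
x = np.sort(rng.uniform(0.0, 6.0, n))          # training inputs
y = np.cos(x) + 0.1 * rng.standard_normal(n)   # training labels
lam = 0.1                                      # regularization weight

K = kernel(x, x)                               # n x n Gram matrix
# Representer theorem: f*(.) = sum_i alpha_i k(., x_i); for squared loss
# the optimal coefficients solve (K + lam * n * I) alpha = y.
alpha = np.linalg.solve(K + lam * n * np.eye(n), y)

def f_star(t):
    # Evaluate the minimizer at new points t via the kernel expansion.
    return kernel(np.atleast_1d(t), x) @ alpha

print(f_star(np.array([0.0, 3.0])))
```

The infinite-dimensional search over the RKHS has collapsed to solving one n-by-n linear system, which is the reduction described next in the text.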
In contrast, the representation of formula_75 afforded by a representer theorem reduces the original (infinite-dimensional) minimization problem to a search for the optimal formula_76-dimensional vector of coefficients formula_77; formula_78 can then be obtained by applying any standard function minimization algorithm. Consequently, representer theorems provide the theoretical basis for the reduction of the general machine learning problem to algorithms that can actually be implemented on computers in practice.\nThe following provides an example of how to solve for the minimizer whose existence is guaranteed by the representer theorem. This method works for any positive definite kernel formula_79, and allows us to transform a complicated (possibly infinite dimensional) optimization problem into a simple linear system that can be solved numerically.\nAssume that we are using a least squares error function\nformula_80\nand a regularization function formula_81\nfor some formula_42. By the representer theorem, the minimizer\nformula_83\nhas the form\nformula_84\nfor some formula_85. Noting that\nformula_86\nwe see that formula_87 has the form\nformula_88\nwhere formula_89 and formula_90. This can be factored out and simplified to\nformula_91\nSince formula_92 is positive definite, there is indeed a single global minimum for this expression. Let formula_93 and note that formula_94 is convex. Then formula_87, the global minimum, can be found by setting formula_96. Recalling that all positive definite matrices are invertible, we see that\nformula_97\nso the minimizer may be found via a linear solve.\nCostate equation.\nThe costate equation is related to the state equation used in optimal control. It is also referred to as the auxiliary, adjoint, influence, or multiplier equation.
It is stated as a vector of first-order differential equations\nwhere the right-hand side is the vector of partial derivatives of the negative of the Hamiltonian with respect to the state variables.\nInterpretation.\nThe costate variables formula_2 can be interpreted as Lagrange multipliers associated with the state equations. The state equations represent constraints of the minimization problem, and the costate variables represent the marginal cost of violating those constraints; in economic terms the costate variables are the shadow prices.\nSolution.\nThe state equation is subject to an initial condition and is solved forwards in time. The costate equation must satisfy a transversality condition and is solved backwards in time, from the final time towards the beginning. For more details see Pontryagin's maximum principle.\nLaser-hybrid welding.\nLaser-hybrid welding is a welding process that combines the principles of laser beam welding and arc welding.\nThe combination of laser light and an electrical arc into an amalgamated welding process has existed since the 1970s, but has only recently been used in industrial applications. There are three main types of hybrid welding process, depending on the arc used: TIG-, plasma-arc- or MIG-augmented laser welding. While TIG-augmented laser welding was the first to be researched, MIG was the first to go into industry and is commonly known as hybrid laser welding.\nWhereas in the early days laser sources still had to prove their suitability for industrial use, today they are standard equipment in many manufacturing enterprises.\nThe combination of laser welding with another weld process is called a \"hybrid welding process\".
This means that a laser beam and an electrical arc act simultaneously in one welding zone, influencing and supporting each other.\nLaser.\nLaser welding requires not only high laser power but also a high-quality beam to obtain the desired \"deep-weld effect\". The higher beam quality can be exploited either to obtain a smaller focus diameter or a larger focal distance. A variety of laser types are used for this process, in particular those whose light can be transmitted via a water-cooled glass fiber. The beam is projected onto the workpiece by collimating and focusing optics. Carbon dioxide lasers can also be used, with the beam transmitted via lenses or mirrors.\nLaser-hybrid process.\nFor welding metallic objects, the laser beam is focused to obtain intensities of more than 1 MW/cm2. When the laser beam hits the surface of the material, this spot is heated up to vaporization temperature, and a vapor cavity is formed in the weld metal by the escaping metal vapor. This is known as a keyhole. The extraordinary feature of the weld seam is its high depth-to-width ratio. The energy-flow density of the freely burning arc is slightly more than 100 kW/cm2. Unlike a dual process, in which two separate weld processes act in succession, hybrid welding may be viewed as a combination of both weld processes acting simultaneously in one and the same process zone. Depending on the kind of arc or laser process used, and depending on the process parameters, the two systems will influence each other in different ways.\nThe combination of the laser process and the arc process results in an increase in both weld penetration depth and welding speed (as compared to each process alone). The metal vapor escaping from the vapor cavity acts upon the arc plasma. Absorption of the laser radiation in the processing plasma remains negligible.
Depending on the ratio of the two power inputs, the character of the overall process may be mainly determined either by the laser or by the arc.\nAbsorption of the laser radiation is substantially influenced by the temperature of the workpiece surface. Before the laser welding process can start, the initial reflectance must be overcome, especially on aluminum surfaces. This can be achieved by preheating the material. In the hybrid process, the arc heats the metal, helping the laser beam to couple in. After the vaporisation temperature has been reached, the vapor cavity is formed, and nearly all radiation energy can be put into the workpiece. The energy required for this is thus determined by the temperature-dependent absorption and by the amount of energy lost by conduction into the rest of the workpiece. In laser-hybrid welding using MIG, vaporisation takes place not only from the surface of the workpiece but also from the filler wire, so that more metal vapor is available to facilitate the absorption of the laser radiation.\nFatigue behavior.\nOver the years a great deal of research has been done to understand fatigue behavior, particularly for new techniques like laser-hybrid welding, but knowledge is still limited. Laser-hybrid welding is an advanced welding technology that creates narrow deep welds and offers greater freedom to control the weld surface geometry. Therefore, fatigue analysis and life prediction of hybrid weld joints has become more important and is the subject of ongoing research.\nMinimal realization.\nIn control theory, given any transfer function, any state-space model that is both controllable and observable and has the same input-output behaviour as the transfer function is said to be a minimal realization of the transfer function.
The realization is called \"minimal\" because it describes the system with the minimum number of states.\nThe minimum number of state variables required to describe a system equals the order of the differential equation; more state variables than the minimum can be defined. For example, a second-order system can be defined by two or more state variables, with two being the minimal realization.\nGilbert's realization.\nGiven a matrix transfer function, it is possible to directly construct a minimal state-space realization by using Gilbert's method (also known as Gilbert's realization).\nStep detection.\nIn statistics and signal processing, step detection (also known as step smoothing, step filtering, shift detection, jump detection or edge detection) is the process of finding abrupt changes (steps, jumps, shifts) in the mean level of a time series or signal. It is usually considered a special case of the statistical method known as change detection or change point detection. Often, the step is small and the time series is corrupted by some kind of noise, and this makes the problem challenging because the step may be hidden by the noise. Therefore, statistical and/or signal processing algorithms are often required.\nThe step detection problem occurs in multiple scientific and engineering contexts, for example in statistical process control (the control chart being the most directly related method), in exploration geophysics (where the problem is to segment a well-log recording into stratigraphic zones), in genetics (the problem of separating microarray data into similar copy-number regimes), and in biophysics (detecting state transitions in a molecular machine as recorded in time-position traces).
For 2D signals, the related problem of edge detection has been studied intensively for image processing.\nAlgorithms.\nWhen step detection must be performed as the data arrives, online algorithms are usually used, and it becomes a special case of sequential analysis. Such algorithms include the classical CUSUM method applied to changes in mean.\nBy contrast, \"offline\" algorithms are applied to the data potentially long after it has been received. Most offline algorithms for step detection in digital data can be categorised as \"top-down\", \"bottom-up\", \"sliding window\", or \"global\" methods.\nTop-down.\nThese algorithms start with the assumption that there are no steps and introduce possible candidate steps one at a time, testing each candidate to find the one that minimizes some criterion (such as the least-squares fit of the estimated, underlying piecewise constant signal). An example is the \"stepwise jump placement\" algorithm, first studied in geophysical problems, which has found recent uses in modern biophysics.\nBottom-up.\nBottom-up algorithms take the \"opposite\" approach to top-down methods, first assuming that there is a step in between every sample in the digital signal, and then successively merging steps based on some criterion tested for every candidate merge.\nSliding window.\nBy considering a small \"window\" of the signal, these algorithms look for evidence of a step occurring within the window. The window \"slides\" across the time series, one time step at a time. The evidence for a step is tested by statistical procedures, for example, by use of the two-sample Student's t-test. Alternatively, a nonlinear filter such as the median filter is applied to the signal. Filters such as these attempt to remove the noise whilst preserving the abrupt steps.\nGlobal.\nGlobal algorithms consider the entire signal in one go, and attempt to find the steps in the signal by some kind of optimization procedure.
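The online CUSUM idea mentioned above can be sketched in a few lines. This is a minimal one-sided version for upward mean shifts; the drift and threshold values are illustrative, not prescriptive:

```python
import numpy as np

def cusum_upward(x, target_mean, drift=0.5, threshold=4.0):
    # One-sided CUSUM: accumulate evidence of an upward mean shift and
    # raise an alarm when the statistic exceeds the threshold.
    s = 0.0
    for i, xi in enumerate(x):
        s = max(0.0, s + (xi - target_mean - drift))
        if s > threshold:
            return i  # index at which the alarm is raised
    return None       # no step detected

# Noise-free toy signal: mean 0 for 50 samples, then a step up to 3.
signal = np.concatenate([np.zeros(50), 3.0 * np.ones(50)])
alarm = cusum_upward(signal, target_mean=0.0)
print(alarm)  # alarm at index 51, just after the step at index 50
```

In practice a second, mirrored statistic is run for downward shifts, and drift and threshold are tuned against the noise level and the acceptable false-alarm rate.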
Algorithms include wavelet methods, and total variation denoising which uses methods from convex optimization. When the steps can be modelled as a Markov chain, hidden Markov models are also often used (a popular approach in the biophysics community). When there are only a few unique values of the mean, k-means clustering can also be used.\nLinear versus nonlinear signal processing methods for step detection.\nBecause steps and (independent) noise have theoretically infinite bandwidth and so overlap in the Fourier basis, signal processing approaches to step detection generally do not use classical smoothing techniques such as the low pass filter. Instead, most algorithms are explicitly nonlinear or time-varying.\nStep detection and piecewise constant signals.\nBecause the aim of step detection is to find a series of instantaneous jumps in the mean of a signal, the wanted, underlying, mean signal is piecewise constant. For this reason, step detection can be profitably viewed as the problem of recovering a piecewise constant signal corrupted by noise. There are two complementary models for piecewise constant signals: as 0-degree splines with a few knots, or as level sets with a few unique levels. Many algorithms for step detection are therefore best understood as either 0-degree spline fitting, or level set recovery, methods.\nStep detection as level set recovery.\nWhen there are only a few unique values of the mean, clustering techniques such as k-means clustering or mean-shift are appropriate.
These techniques are best understood as methods for finding a level set description of the underlying piecewise constant signal.\nStep detection as 0-degree spline fitting.\nMany algorithms explicitly fit 0-degree splines to the noisy signal in order to detect steps (including stepwise jump placement methods), but there are other popular algorithms that can also be seen to be spline fitting methods after some transformation, for example total variation denoising.\nGeneralized step detection by piecewise constant denoising.\nAll the algorithms mentioned above have certain advantages and disadvantages in particular circumstances, yet a surprisingly large number of these step detection algorithms are special cases of a more general algorithm. This algorithm involves the minimization of a global functional:\nHere, \"x\"\"i\" for \"i\" = 1, ..., \"N\" is the discrete-time input signal of length \"N\", and \"m\"\"i\" is the signal output from the algorithm. The goal is to minimize \"H\"[\"m\"] with respect to the output signal \"m\". The form of the function formula_1 determines the particular algorithm. For example, choosing:\nwhere \"I\"(\"S\") = 0 if the condition \"S\" is false, and 1 otherwise, obtains the total variation denoising algorithm with regularization parameter formula_3. Similarly:\nleads to the mean shift algorithm, when using an adaptive step size Euler integrator initialized with the input signal \"x\". Here \"W\" > 0 is a parameter that determines the support of the mean shift kernel. Another example is:\nleading to the bilateral filter, where formula_6 is the tonal kernel parameter, and \"W\" is the spatial kernel support. Yet another special case is:\nspecifying a group of algorithms that attempt to greedily fit 0-degree splines to the signal.
Here, formula_8 is defined as zero if \"x\" = 0, and one otherwise.\nMany of the functionals in equation defined by the particular choice of formula_1 are convex: they can be minimized using methods from convex optimization. Still others are non-convex, but a range of algorithms for minimizing these functionals have been devised.\nStep detection using the Potts model.\nA classical variational method for step detection is the Potts model. It is given by the non-convex optimization problem\nThe term formula_11 penalizes the number of jumps and the term formula_12 measures fidelity to the data \"x\". The parameter γ > 0 controls the tradeoff between regularity and data fidelity. Since the minimizer formula_13 is piecewise constant, the steps are given by the non-zero locations of the gradient formula_14.\nFor formula_15 and formula_16 there are fast algorithms which give an exact solution of the Potts problem in formula_17.\nGloMoSim.\nGlobal Mobile Information System Simulator (GloMoSim) is network protocol simulation software that simulates wireless and wired network systems. GloMoSim is designed using the parallel discrete-event simulation capability provided by \"Parsec\", a parallel programming language, and uses the Parsec compiler to compile its simulation protocols. GloMoSim currently supports protocols for purely wireless networks.\nParsec.\nParsec is a C-based simulation language, developed by the Parallel Computing Laboratory at UCLA, for sequential and parallel execution of discrete-event simulation models.\nDevelopment.\nGloMoSim is no longer under active development.\nIntegral sliding mode.\nIn 1996, V. Utkin and J.
Shi proposed an improved sliding control method named integral sliding mode control (ISMC). In contrast with conventional sliding mode control, the system motion under integral sliding mode has a dimension equal to that of the state space. In ISMC, the system trajectory always starts from the sliding surface.\nAccordingly, the reaching phase is eliminated, and robustness over the whole state space is guaranteed.\nControl scheme.\nFor a system formula_1 with bounded uncertainty formula_2, Matthews and DeCarlo [1] suggested selecting an integral sliding surface as\nformula_3\nIn this case there exists a unit or discontinuous sliding mode controller compensating the uncertainty formula_2.\nUtkin and Shi [2] remarked that, if formula_5 is guaranteed, the reaching phase is eliminated.\nIn the case when unmatched uncertainties occur, formula_6 should be selected as formula_7\nwhere formula_8 is a pseudo-inverse matrix [3-5].\nReferences.\n1. G.P. Matthews, R.A. DeCarlo, Decentralized tracking for a class of interconnected nonlinear systems using variable structure control. Automatica 24, 187–193 (1988)\n2. V.I. Utkin, J. Shi, Integral sliding mode in systems operating under uncertainty conditions, in Proceedings of the 35th IEEE-CDC, Kobe, Japan, 1996\n3. Y. Shtessel, C. Edwards, L. Fridman, A. Levant. Sliding Mode Control and Observation, Series: Control Engineering, Birkhäuser: Basel, 2014, ISBN 978-0-81764-8923.\n4. L. Fridman, A. Poznyak, F.J. Bejarano. Robust Output LQ Optimal Control via Integral Sliding Modes. Birkhäuser Basel, 2014, ISBN 978-0-8176-4961-6.\n5. Rubagotti, M.; Estrada, A.; Castaños, F.; Ferrara, A.; Fridman, L. Integral Sliding Mode Control for Nonlinear Systems With Matched and Unmatched Perturbations, IEEE Transactions on Automatic Control, 2011, Vol. 56, 11, pp.
2699-2704.\nTerminal sliding mode.\nIn the early 1990s, a new type of sliding mode control, named terminal sliding mode (TSM), was invented at the Jet Propulsion Laboratory (JPL) by Venkataraman and Gulati. TSM is a robust non-linear control approach.\nThe main idea of terminal sliding mode control evolved out of seminal work on terminal attractors done by Zak at JPL and is based on the concept of terminal attractors, which guarantee finite-time convergence of the states. In conventional sliding mode, only asymptotic stability is guaranteed: the states converge to the origin, but this convergence may take infinite time. In TSM, a nonlinear term is introduced in the sliding surface design so that the manifold is formulated as an attractor. After the sliding surface is intercepted, the trajectory is attracted within the manifold and converges to the origin following a power rule.\nThere are some variations of TSM, including non-singular TSM and fast TSM.\nTerminal sliding mode has also been widely applied to nonlinear process control, for example rigid robot control. Several open questions remain on the mathematical treatment of the system's behavior at the origin, since it is non-Lipschitz.\nControl Scheme.\nConsider a continuous nonlinear system in canonical form\nformula_1 ...\nformula_2\nformula_3\nwhere formula_4 is the state vector, formula_5 is the control input, and formula_6 and formula_7 are nonlinear functions in formula_8.\nThen a sequence of terminal sliding surfaces can be designed as follows:\nformula_9\nformula_10 ...\nformula_11 where formula_12 and formula_13. formula_14 are positive odd numbers and formula_15.\nReferences.\nVenkataraman, S., Gulati, S., Control of Nonlinear Systems Using Terminal Sliding Modes, J. Dyn.
Sys., Meas., Control, Sept 1993, Volume 115, Issue 3.\nWitsenhausen's counterexample.\nWitsenhausen's counterexample, shown in the figure below, is a deceptively simple toy problem in decentralized stochastic control. It was formulated by Hans Witsenhausen in 1968. It is a counterexample to a natural conjecture that one can generalize a key result of centralized linear–quadratic–Gaussian control systems—that in a system with linear dynamics, Gaussian disturbance, and quadratic cost, affine (linear) control laws are optimal—to decentralized systems. Witsenhausen constructed a two-stage linear quadratic Gaussian system where two decisions are made by decision makers with decentralized information and showed that for this system, there exist nonlinear control laws that outperform all linear laws. The problem of finding the optimal control law remains unsolved.\nStatement of the counterexample.\nThe statement of the counterexample is simple: two controllers attempt to control the system by attempting to bring the state close to zero in exactly two time steps. The first controller observes the initial state formula_1 There is a cost on the input formula_2 of the first controller, and a cost on the state formula_3 after the input of the second controller. The input formula_4 of the second controller is free, but it is based on noisy observations formula_5 of the state formula_6 after the first controller's input. The second controller cannot communicate with the first controller and thus cannot observe either the original state formula_7 or the input formula_2 of the first controller.
Thus the system dynamics are\nwith the second controller's observation equation\nThe objective is to minimize an expected cost function,\nwhere the expectation is taken over the randomness in the initial state formula_7 and the observation noise formula_14, which are distributed independently. The observation noise formula_14 is assumed to be distributed in a Gaussian manner, while the distribution of the initial state value formula_7 differs depending on the particular version of the problem.\nThe problem is to find control functions\nthat give at least as good a value of the objective function as do any other pair of control functions. Witsenhausen showed that the optimal functions formula_18 and formula_19 cannot be linear.\nSpecific results of Witsenhausen.\nWitsenhausen obtained the following results:\nThe significance of the problem.\nThe counterexample lies at the intersection of control theory and information theory. Due to its hardness, the problem of finding the optimal control law has also received attention from the theoretical computer science community. The importance of the problem was reflected upon in the 47th IEEE Conference on Decision and Control (CDC) 2008, Cancun, Mexico, where an entire session was dedicated to understanding the counterexample 40 years after it was first formulated.\nThe problem is of conceptual significance in decentralized control because it shows that it is important for the controllers to communicate with each other implicitly in order to minimize the cost. This suggests that control actions in decentralized control may have a dual role: those of control and communication.\nThe hardness of the problem.\nThe hardness of the problem is attributed to the fact that information of the second controller depends on the decisions of the first controller. Variations considered by Tamer Basar show that the hardness is also because of the structure of the performance index and the coupling of different decision variables. 
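The advantage of nonlinear control laws can be illustrated with a Monte Carlo sketch. The parameter choice below (initial state standard deviation 5, input-cost weight k² = 0.04) is one commonly studied in the literature; the two strategies compared, an affine strategy with u1 = 0 and a two-point "signaling" strategy, are illustrative, not optimal:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200_000
sigma0, k2 = 5.0, 0.04
x0 = sigma0 * rng.standard_normal(n)   # initial state x0 ~ N(0, sigma0^2)
z = rng.standard_normal(n)             # unit-variance observation noise

# Affine benchmark: u1 = 0, second controller uses the linear MMSE estimate.
x1_lin = x0
y_lin = x1_lin + z
u2_lin = (sigma0**2 / (sigma0**2 + 1.0)) * y_lin
cost_lin = np.mean((x1_lin - u2_lin) ** 2)     # u1 = 0 incurs no input cost

# Nonlinear signaling: force x1 onto the two points +/- sigma0, which the
# second controller can decode almost perfectly through the noise.
x1_nl = sigma0 * np.sign(x0)
u1_nl = x1_nl - x0
y_nl = x1_nl + z
u2_nl = sigma0 * np.tanh(sigma0 * y_nl)        # MMSE estimate for the 2-point prior
cost_nl = np.mean(k2 * u1_nl**2 + (x1_nl - u2_nl) ** 2)

print(cost_lin, cost_nl)  # the nonlinear strategy achieves a lower cost
```

The first stage of the nonlinear strategy spends input cost to quantize the state onto two well-separated points, making the second controller's estimation problem nearly noiseless; this is the implicit communication discussed above.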
It has also been shown that problems in the spirit of Witsenhausen's counterexample become simpler if the transmission delay along an external channel that connects the controllers is smaller than the propagation delay in the problem. However, this result requires the channels to be perfect and instantaneous, and hence is of limited applicability. In practical situations, the channel is always imperfect, and thus one cannot assume that decentralized control problems are simple in the presence of external channels.\nA justification of the failure of attempts that discretize the problem came from the computer science literature: Christos Papadimitriou and John Tsitsiklis showed that the discrete version of the counterexample is NP-complete.\nAttempts at obtaining a solution.\nA number of numerical attempts have been made to solve the counterexample. Focusing on a particular choice of problem parameters formula_25, researchers have obtained strategies by discretization and using neural networks. Further research (notably, the work of Yu-Chi Ho, and the work of Li, Marden and Shamma) has obtained slightly improved costs for the same parameter choice. The best known numerical results for a variety of parameters, including the one mentioned previously, are obtained by a local search algorithm proposed by S.-H. Tseng and A. Tang in 2017. The first provably approximately optimal strategies appeared in 2010 (Grover, Park, Sahai), where information theory is used to understand the communication in the counterexample.
The optimal solution of the counterexample remains an open problem.", "Automation-Control": 0.8807951212, "Qwen2": "Yes"} {"id": "2033005", "revid": "28979433", "url": "https://en.wikipedia.org/wiki?curid=2033005", "title": "Quantitative feedback theory", "text": "In control theory, quantitative feedback theory (QFT), developed by Isaac Horowitz (Horowitz, 1963; Horowitz and Sidi, 1972), is a frequency domain technique utilising the Nichols chart (NC) in order to achieve a desired robust design over a specified region of plant uncertainty. Desired time-domain responses are translated into frequency domain tolerances, which lead to bounds (or constraints) on the loop transmission function. The design process is highly transparent, allowing a designer to see what trade-offs are necessary to achieve a desired performance level.\nPlant templates.\nOnce a model of the system has been obtained, the system can usually be represented by its transfer function (the Laplace transform in the continuous-time domain).\nAs a result of experimental measurement, the values of the coefficients in the transfer function carry a range of uncertainty. Therefore, in QFT every parameter of this function is assigned an interval of possible values, and the system may be represented by a family of plants rather than by a standalone expression.\nformula_1\nA frequency analysis is performed for a finite number of representative frequencies, and a set of \"templates\" is obtained on the NC diagram, each enclosing the behaviour of the open-loop system at one frequency.\nFrequency bounds.\nUsually, system performance is described in terms of robustness against instability (phase and gain margins), rejection of input and output disturbances, and reference tracking.
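The template construction described above can be sketched numerically. The uncertain plant P(s) = K/(s(s+a)), its parameter intervals, and the representative frequency below are all illustrative assumptions, not taken from the article.

```python
import numpy as np

# A minimal sketch of computing a QFT plant "template": for a hypothetical
# uncertain plant P(s) = K / (s (s + a)) with K in [1, 10] and a in [1, 5],
# evaluate the frequency response at one representative frequency and
# collect the (phase, magnitude-in-dB) points that would be plotted on
# the Nichols chart. All numbers here are illustrative assumptions.

w = 2.0  # representative frequency, rad/s
Ks = np.linspace(1.0, 10.0, 25)
As = np.linspace(1.0, 5.0, 25)

points = []
for K in Ks:
    for a in As:
        p = K / (1j * w * (1j * w + a))      # P(jw)
        mag_db = 20.0 * np.log10(abs(p))     # Nichols chart y-axis
        phase_deg = np.degrees(np.angle(p))  # Nichols chart x-axis
        points.append((phase_deg, mag_db))

phases, mags = zip(*points)
print(f"template at w={w}: phase in [{min(phases):.1f}, {max(phases):.1f}] deg, "
      f"magnitude in [{min(mags):.1f}, {max(mags):.1f}] dB")
```

Repeating this for each design frequency gives one template per frequency, enclosing the plant family's possible responses.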
In the QFT design methodology, these requirements on the system are represented as frequency constraints: conditions that the compensated system loop (controller and plant) must not violate.\nWith these considerations, and using the same set of frequencies selected for the templates, the frequency constraints on the behaviour of the system loop are computed and represented on the Nichols Chart (NC) as curves.\nTo meet the problem requirements, a set of constraints on the open-loop transfer function of the nominal plant formula_2 may be derived. That means the nominal loop is not allowed to fall below the constraint curve at the same frequency, and at high frequencies the loop should not cross the \"Ultra High Frequency Boundary\" (UHFB), which has an oval shape in the center of the NC.\nLoop shaping.\nThe controller design is undertaken on the NC, considering the frequency constraints and the \"nominal loop\" formula_3 of the system. At this point, the designer begins to introduce controller functions (formula_4) and tune their parameters, a process called loop shaping, until the best possible controller is reached without violating the frequency constraints.\nThe experience of the designer is an important factor in finding a satisfactory controller that not only complies with the frequency restrictions but is also realizable with acceptable complexity and quality.\nFor this stage there currently exist several CAD (\"Computer Aided Design\") packages that make the controller tuning easier.\nPrefilter design.\nFinally, the QFT design may be completed with a pre-filter (formula_5) design when it is required. For tracking specifications, shaping on the Bode diagram may be used.
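The bound-checking step of loop shaping discussed above can be sketched numerically. The plant, controller, design frequencies, and bound values below are all hypothetical placeholders, not from any actual QFT design.

```python
import numpy as np

# A toy sketch of the bound-checking step in loop shaping: a candidate
# nominal loop L(s) = G(s) C(s) is tested against hypothetical lower
# bounds on |L(jw)| at the design frequencies. The plant G, controller C,
# and bound values are illustrative assumptions.

def L(s):  # nominal loop: G(s) = 1/(s(s+1)) with C(s) = 8(s+2)/(s+20)
    return (1.0 / (s * (s + 1.0))) * (8.0 * (s + 2.0) / (s + 20.0))

design_freqs = [0.5, 1.0, 2.0]   # rad/s, same grid as the templates
bounds_db = [0.0, -6.0, -15.0]   # hypothetical lower bounds from the specs

results = []
for w, b in zip(design_freqs, bounds_db):
    mag_db = 20.0 * np.log10(abs(L(1j * w)))
    ok = mag_db >= b
    results.append((w, mag_db, ok))
    print(f"w = {w:>4} rad/s: |L| = {mag_db:6.1f} dB, "
          f"bound = {b:6.1f} dB -> {'ok' if ok else 'VIOLATED'}")
```

A designer (or a CAD package) iterates on the controller parameters until every design frequency reports "ok".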
Post-design analysis is then performed to ensure that the system response satisfies the problem requirements.\nThe QFT design methodology was originally developed for \"single-input single-output\" (SISO), \"linear time-invariant\" (LTI) systems, with the design process being as described above. However, it has since been extended to weakly nonlinear systems, time-varying systems, distributed-parameter systems, multi-input multi-output (MIMO) systems (Horowitz, 1991), discrete systems (these using the Z-transform as the transfer function), and non-minimum-phase systems. An important more recent advance has been the development of CAD tools, which simplify and automate much of the design procedure (Borghesani et al., 1994).\nTraditionally, the pre-filter is designed using only the Bode-diagram magnitude information. The use of both phase and magnitude information for the design of the pre-filter was first discussed in (Boje, 2003) for SISO systems; the method was then extended to MIMO problems in (Alavi et al., 2007).
\nPurpose.\nThe objective of \"dressing the wheel\" is to:\nTypes.\nThere is also an abrasive-wheel type of dresser, which holds a small \"grinding wheel\" in a holder that is pressed against the spinning grinding wheel to dress and clean its face.\nGrinding complex shapes.\nFour types of dressers are used to dress the wheels of CNC grinders used for grinding complex shapes. This type of dresser is mainly used on CNC grinding machine tools to automatically dress the grinding wheel under computer control, in specialist areas requiring complex shapes such as grinding bearing raceways.\nWheel conditioning.\nProper wheel conditioning is a critical part of any grinding process. The condition of the wheel determines its ability to meet the part-finish requirements and metal-removal-rate capabilities. By choosing suitable dressing parameters, the grains in the wheel can be sheared to create a smooth condition or fractured to create a coarse, open condition.\nSkate sharpening.\nGrinders used for sharpening skate blades typically have one or more thin grinding wheels mounted on vertical spindles, with a single-diamond dresser mounted on a gimbal with a horizontal axis level with the centerline of the wheel, so it can swing above and below the plane of the wheel, producing a convex grinding surface of a predetermined radius. This allows the blade to be sharpened to a specified hollow, typically very deep for hockey skates, very shallow for skates used for school figures, and moderate for skates used for freestyle skating.", "Automation-Control": 0.8613858223, "Qwen2": "Yes"} {"id": "52584796", "revid": "45370187", "url": "https://en.wikipedia.org/wiki?curid=52584796", "title": "Rapid casting", "text": "Rapid casting is an integration of investment casting with rapid prototyping/3D printing.
In this technique, the disposable patterns used to form the molds are created by a 3D printing technique such as fused deposition modeling or stereolithography.", "Automation-Control": 0.9998919368, "Qwen2": "Yes"} {"id": "42200192", "revid": "45179011", "url": "https://en.wikipedia.org/wiki?curid=42200192", "title": "System Information (Windows)", "text": "System Information (codice_1) is a system profiler included with Microsoft Windows that displays diagnostic and troubleshooting information related to the operating system, hardware and software. It has been bundled with Windows since Windows NT 4.0.\nIt compiles technical information on the overall system, hardware resources (including memory, I/O, etc.), physical hardware components (CD-ROM, sound, network, etc.), and the Windows environment (drivers, environment variables, services, etc.). It can export this information in plain text format or in files with a .nfo extension, which can be used to diagnose problems. In addition, System Information can be used to gather technical information on a remote computer on the same network.", "Automation-Control": 0.7532180548, "Qwen2": "Yes"} {"id": "3367341", "revid": "5229428", "url": "https://en.wikipedia.org/wiki?curid=3367341", "title": "Magnetorheological finishing", "text": "Magnetorheological finishing (MRF) is a precision surface finishing technology. Optical surfaces are polished in a computer-controlled magnetorheological (MR) finishing slurry. Unlike conventional rigid lap polishing, the MR fluid's shape and stiffness can be magnetically manipulated and controlled in real time. The optic's final surface form and finishing results are predicted through the use of computer algorithms.\nLiterature.\n W.I. Kordonski (2014). \"Magnetorheological Fluid-Based High Precision Finishing Technology.\" Magnetorheology: Advances and Applications, Norman M.
Wereley, Ed., RSC Smart Materials, Cambridge, UK, Chapter 11, 261–277. \nDOI:10.1039/9781849737548-00261\n S.D. Jacobs, W.I. Kordonski, I.V. Prokhorov, D. Golini, G.R. Gorodkin, T.D. Strafford (2002). \"Deterministic Magnetorheological Finishing.\" US Patent US5449313A\n Shorey et al. \"Experiments and Observations Regarding the Mechanisms of Glass Removal in Magnetorheological Finishing\"\n Chunlin Miao, et al., \"Shear stress in magnetorheological finishing for glasses,\" Applied Optics 48, 2585–2594 (2009)\n Chunlin Miao, et al., \"Process parameter effects on material removal in magnetorheological finishing of borosilicate glass,\" Applied Optics 49, 1951–1963 (2010)", "Automation-Control": 0.7581306696, "Qwen2": "Yes"} {"id": "38623159", "revid": "831534151", "url": "https://en.wikipedia.org/wiki?curid=38623159", "title": "Thread control block", "text": "Thread Control Block (TCB) is a data structure in the operating system kernel which contains the thread-specific information needed to manage a thread. The TCB is \"the manifestation of a thread in an operating system.\"\nAn example of information contained within a TCB is:\nThe Thread Control Block acts as a library of information about the threads in a system, storing important information about each thread.", "Automation-Control": 0.9969672561, "Qwen2": "Yes"} {"id": "38635315", "revid": "10547048", "url": "https://en.wikipedia.org/wiki?curid=38635315", "title": "Network Virtualization using Generic Routing Encapsulation", "text": "Network Virtualization using Generic Routing Encapsulation (NVGRE) is a network virtualization technology that attempts to alleviate the scalability problems associated with large cloud computing deployments.
It uses Generic Routing Encapsulation (GRE) to tunnel layer 2 packets over layer 3 networks.\nIts principal backer is Microsoft.", "Automation-Control": 0.68019557, "Qwen2": "Yes"} {"id": "10571853", "revid": "33594889", "url": "https://en.wikipedia.org/wiki?curid=10571853", "title": "Centerless grinding", "text": "Centerless grinding is a machining process that uses abrasive cutting to remove material from a workpiece. Centerless grinding differs from centered grinding operations in that no spindle or fixture is used to locate and secure the workpiece; the workpiece is secured between two rotary grinding wheels, and the speed of their rotation relative to each other determines the rate at which material is removed from the workpiece.\nCenterless grinding is typically used in preference to other grinding processes for operations where many parts must be processed in a short time.\nWorking principle.\nIn centerless grinding, the workpiece is held between two wheels, rotating in the same direction at different speeds, and a work-holding platform. One wheel, known as the grinding wheel (stationary wheel in the diagram), is on a fixed axis and rotates such that the force applied to the workpiece is directed downward, against the work-holding platform. This wheel usually performs the grinding action by having a higher tangential speed than the workpiece at the point of contact. The other wheel, known as the regulating wheel (moving wheel in the diagram), is movable. This wheel is positioned to apply lateral pressure to the workpiece, and usually has either a very rough or rubber-bonded abrasive to trap the workpiece.\nThe speed of the two wheels relative to each other provides the grinding action and determines the rate at which material is removed from the workpiece. During operation the workpiece turns with the regulating wheel, with the same linear velocity at the point of contact and (ideally) no slipping. 
The grinding wheel turns faster, slipping past the surface of the workpiece at the point of contact and removing chips of material as it passes.\nTypes.\nThere are three forms of centerless grinding, differentiated primarily by the method used to feed the workpiece through the machine.\nThrough-feed.\nIn through-feed centerless grinding, the workpiece is fed through the grinding wheels completely, entering on one side and exiting on the opposite. The regulating wheel in through-feed grinding is canted away from the plane of the grinding wheel in such a way as to provide an axial force component, feeding the workpiece through between the two wheels. Through-feed grinding can be very efficient because it does not require a separate feed mechanism; however, it can only be used for parts with a simple cylindrical shape.\nEnd-feed.\nIn end-feed centerless grinding, the workpiece is fed axially into the machine on one side and comes to rest against an end stop; the grinding operation is performed, and then the workpiece is fed in the opposite direction to exit the machine. End-feed grinding is best for tapered workpieces.\nIn-feed.\nIn-feed centerless grinding is used to grind workpieces with relatively complex shapes, such as an hourglass shape. Before the process begins, the workpiece is loaded manually into the grinding machine and the regulating wheel is moved into place. The complexity of the part shapes and grinding wheel shapes required to grind them accurately prevent the workpiece from being fed axially through the machine.\nEquipment.\nCenterless grinding uses purpose-built centerless grinding machines. Such a machine will always include the grinding wheel, regulating wheel, and some means of supporting a workpiece. Modern machines may involve computer numerical control to allow automation and improve precision. Grinding wheels are interchangeable, to allow for different grits and shapes. 
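The through-feed action produced by the canted regulating wheel, described above, is commonly summarized in machining texts by the relation f = π·D·N·sin(α). The numbers below are illustrative assumptions, not values from the article.

```python
import math

# The axial through-feed rate produced by the canted regulating wheel is
# commonly approximated in machining texts as f = pi * D * N * sin(alpha),
# where D is the regulating-wheel diameter, N its rotational speed, and
# alpha its inclination angle. The values below are illustrative only.

D = 250.0        # regulating wheel diameter, mm
N = 30.0         # regulating wheel speed, rev/min
alpha_deg = 3.0  # inclination (cant) angle, degrees

feed = math.pi * D * N * math.sin(math.radians(alpha_deg))  # mm/min
print(f"through-feed rate ~ {feed:.0f} mm/min")
```

Increasing the cant angle or the regulating-wheel speed increases the feed rate, at the cost of workpiece control.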
Machines designed to accommodate through-feed grinding operations will allow the angle of the regulating wheel to be adjusted, to accommodate parts of different sizes.", "Automation-Control": 0.9990544319, "Qwen2": "Yes"} {"id": "18305300", "revid": "16809467", "url": "https://en.wikipedia.org/wiki?curid=18305300", "title": "Circle criterion", "text": "In nonlinear control and stability theory, the circle criterion is a stability criterion for nonlinear time-varying systems. It can be viewed as a generalization of the Nyquist stability criterion for linear time-invariant (LTI) systems.\nOverview.\nConsider a linear system subject to nonlinear feedback, i.e. a nonlinear element formula_1 is present in the feedback loop. Assume that the element satisfies a sector condition formula_2, and (to keep things simple) that the open-loop system is stable. Then the closed-loop system is globally asymptotically stable if the Nyquist locus does not penetrate the circle having as its diameter the segment formula_3 located on the \"x\"-axis.\nGeneral description.\nConsider the nonlinear system\nSuppose that\nThen formula_10 such that for any solution of the system the following relation holds:\nCondition 3 is also known as the \"frequency condition\"; Condition 1 as the \"sector condition\".
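The circle criterion's graphical test can be sketched numerically. The sketch assumes the standard statement of the criterion for a stable plant with a sector nonlinearity in [k1, k2], k1 > 0, where the critical disk has the real-axis segment [−1/k1, −1/k2] as diameter; the plant and sector values are illustrative assumptions.

```python
import numpy as np

# A numerical sketch of the circle criterion's graphical test, under the
# standard statement: for a stable plant G(s) with a feedback nonlinearity
# confined to the sector [k1, k2] (k1 > 0), the closed loop is globally
# asymptotically stable if the Nyquist locus of G avoids the disk whose
# diameter is the real-axis segment [-1/k1, -1/k2]. The plant and sector
# below are illustrative assumptions.

def G(s):  # example stable plant G(s) = 1/(s^2 + s + 1)
    return 1.0 / (s**2 + s + 1.0)

k1, k2 = 0.5, 2.0
center = (-1.0 / k1 - 1.0 / k2) / 2.0     # disk center on the real axis
radius = abs(-1.0 / k1 + 1.0 / k2) / 2.0  # disk radius

w = np.logspace(-2, 2, 4000)
locus = G(1j * w)                # Nyquist locus samples
dist = np.abs(locus - center)    # distance of each point from the center

if dist.min() > radius:
    print("Nyquist locus avoids the critical disk -> sufficient condition met")
else:
    print("locus enters the disk -> criterion inconclusive")
```

Because the criterion is only sufficient, a locus that enters the disk proves nothing about instability; it merely leaves the test inconclusive.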
The nature of this description makes it suitable not only for neutral file exchange (free of proprietary format constraints), but also as a basis for implementing and sharing product databases and archives of cutting-tool data.\nTypically, ISO 13399 can be used to exchange data between computer-aided design (CAD), computer-aided manufacturing (CAM), computer-aided engineering (CAE), tool management software, product data management (PDM/EDM), manufacturing resource planning (MRP) or enterprise resource planning (ERP), and other computer-aided technologies (CAx) and systems.\nUse of the ISO 13399 standard simplifies the exchange of data for cutting tools. Expected results are lower costs for managing tool information and more accurate and efficient use of manufacturing resources. ISO 13399 has been developed with contributions from AB Sandvik Coromant, the Royal Institute of Technology in Stockholm, Kennametal Inc, and Ferroday Ltd.\nISO 13399 is developed and maintained by the ISO technical committee TC 29, Small tools, working group WG34. Like other ISO and IEC standards, ISO 13399 is copyrighted by ISO and is not freely available. Other standards developed and maintained by ISO TC29/WG34 are:\nStructure.\nISO 13399 is divided into several parts:\nISO 13399 defines a data model for cutting tool information using the EXPRESS modelling language. Application data conforming to this data model can be exchanged either as a STEP-File, as STEP-XML, or via shared database access using SDAI.
\nThe dictionary (reference data library) of ISO 13399 currently uses PLIB (ISO 13584, IEC 61360).\nSee also.\nList of ISO standards 12000–13999#ISO_13000_–_ISO_13999", "Automation-Control": 0.9615912437, "Qwen2": "Yes"} {"id": "21114537", "revid": "1127293777", "url": "https://en.wikipedia.org/wiki?curid=21114537", "title": "Electrochemical grinding", "text": "Electrochemical grinding is a process that removes electrically conductive material by grinding with a negatively charged abrasive grinding wheel, an electrolyte fluid, and a positively charged workpiece. Materials removed from the workpiece stay in the electrolyte fluid. Electrochemical grinding is similar to electrochemical machining but uses a wheel instead of a tool shaped like the contour of the workpiece.\nProcess.\nThe electrochemical grinding process combines traditional electrochemical machining and grinding processes to remove material from a workpiece. A grinding wheel is used as a cutting tool as a cathode and the workpiece is an anode. During the process, electrolytic fluid, typically sodium nitrate, is pumped into the space between the workpiece and the grinding wheel. Other electrolytes used include sodium hydroxide, sodium carbonate, and sodium chloride. This electrolytic fluid will cause electrochemical reactions to occur at the workpiece surface which oxidize the surface, thereby removing material. As a consequence of the oxidation which occurs, layers of oxide films will form on the workpiece surface, and these need to be removed by the grinding wheel. A couple schematics of the process are provided below.\nAbrasive materials, either diamond or aluminum oxide, are bonded to the grinding wheel, which allows the wheel to remove the oxide layers on the workpiece surface by abrasive action. 
Appropriate materials used for the electrolyte fluid and the grinding wheel abrasives are summarized in the table below.\nMost material removal is by the electrochemical reactions which occur at the workpiece surface. Five percent or less of the material removal is carried out by the abrasive action of the grinding wheel. The fact that most material is not removed by abrasive action helps increase the life of the grinding wheel; that is, the tool will take a long time to wear down. The electrolytic fluid serves another useful purpose: it flushes out leftover material from between the grinding wheel and workpiece. The abrasive particles bonded to the grinding wheel also help to electrically insulate the space between the grinding wheel and workpiece. An equation giving the material removal rate for an electrochemical grinding process is:\nMRR = GI/(ρF)\nwhere ρ is the workpiece density, G is the gram equivalent weight of the workpiece material (atomic weight divided by valence), I is the current supplied, MRR is the material removal rate, and F is Faraday's constant.\nSome of the main factors which govern the performance of an electrochemical grinding process include the current supplied, the rotation speed of the grinding wheel, the workpiece feed rate, the type of electrolyte used, the electrolyte feed rate, and the workpiece's chemical properties. By changing these parameters, one can alter the material removal rate. Increasing the supplied current, the rotation speed of the wheel, the electrolyte feed rate, or the workpiece feed rate will increase the material removal rate (MRR), while decreasing them will do the opposite. If the workpiece is more reactive to the electrolyte used, the material removal rate will increase. The grinding wheel is usually rotated with a surface speed of 1200–2000 m/min, and supplied currents are around 1000 A.\nThe accuracy of parts made by electrochemical grinding is strongly dictated by the chemical properties of the workpiece and the electrolytic fluid used.
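The removal-rate relation can be sketched numerically. The sketch reads G in the Faraday's-law sense as the gram equivalent weight of the workpiece material; the material data are standard handbook values for iron, and the current is an illustrative figure of the same order as the one quoted in the text.

```python
# A hedged numerical sketch of the removal-rate relation MRR = G*I/(rho*F),
# using Faraday's-law quantities for an iron workpiece. Material data are
# standard handbook values; the current is illustrative.

F = 96485.0      # Faraday's constant, C per gram-equivalent
M, z = 55.85, 2  # iron: atomic weight (g/mol) and valence
G = M / z        # gram equivalent weight, g/equiv
rho = 7.87       # iron density, g/cm^3
I = 1000.0       # supplied current, A

mrr = G * I / (rho * F)  # volumetric removal rate, cm^3/s
print(f"MRR ~ {mrr:.4f} cm^3/s  (~{mrr * 60:.1f} cm^3/min)")
```

The units check out: (g/equiv)(C/s) divided by (g/cm³)(C/equiv) gives cm³/s, a purely electrochemical removal rate independent of wheel speed.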
If the workpiece is very reactive to the electrolyte, and if too much electrolyte is pumped into the space between the grinding wheel and workpiece, it may be difficult to control the material removal, which can lead to loss of accuracy. Also, accuracy may be reduced if the workpiece feed rate is too high.\nThe wheels are metal disks with abrasive particles embedded. Copper, brass, and nickel are the most commonly used materials; aluminum oxide is typically used as an abrasive when grinding steel. A thin layer of diamond particles is used when grinding carbides or steels harder than 65 HRC.\nAn electrolytic spindle with carbon brushes, acting as a commutator, holds the wheel. The spindle receives a negative charge from the DC power supply, which gives the workpiece a positive charge. The electrolytic fluid is applied where the work contacts the tool by a nozzle similar to that which supplies coolant in conventional grinding. The fluid works with the wheel to form electrochemical cells that oxidize the surface of the workpiece. As the wheel carries away the oxide, fresh metal is exposed. Removing the oxide film may require a pressure of only 20 psi or less, causing much less distortion than mechanical grinding. The wheel is subject to little wear, reducing the need for truing and dressing.\nApplications.\nElectrochemical grinding is often used for hard materials where conventional machining is difficult and time-consuming, such as stainless steel and some exotic metals. For materials with hardness greater than 65 HRC, ECG can have a material removal rate 10 times that of conventional machining. Because ECG involves little abrasion, it is often used for processes where the surface of the part needs to be free of burrs, scratches, and residual stresses.
Because of these properties, electrochemical grinding has a number of useful applications.\nAdvantages and disadvantages.\nOne of the key advantages of electrochemical grinding is the minimal wear that the grinding wheel tool experiences. This is because the majority of the material is removed by the electrochemical reaction that occurs between the cathode and anode. The only abrasive grinding that actually occurs is in removing the film that develops on the surface of the workpiece. Another advantage of electrochemical grinding is that it can be used to machine hard materials, which pose a difficulty for other types of machining because of the tool wear they cause; electrochemical grinding can nevertheless remove material from a hard surface while experiencing minimal wear. Because most material is removed through electrochemical reactions, the workpiece also does not experience the heat damage it would in a conventional grinding process.\nElectrochemical grinding also has some disadvantages. The system consists of the anode workpiece and the cathode grinding wheel, so both the workpiece and the grinding wheel must be conductive. This limits the types of workpiece materials that are suitable for electrochemical grinding. Another disadvantage is that it is only applicable to surface grinding: it is not possible to apply electrochemical grinding to workpieces that have cavities, due to the grinding wheel's inability to remove the film deposit within the cavity. One other disadvantage is that the electrolytic fluid can cause corrosion at the workpiece and grinding wheel surfaces. Lastly, electrochemical grinding is more complicated than traditional machining methods, requiring more experienced personnel to operate the machinery, which leads to higher production costs.
A further disadvantage is that the chemicals used during the grinding process need to be disposed of properly, in accordance with environmental regulations.", "Automation-Control": 0.9999314547, "Qwen2": "Yes"} {"id": "3282143", "revid": "7903804", "url": "https://en.wikipedia.org/wiki?curid=3282143", "title": "Robust control", "text": "In control theory, robust control is an approach to controller design that explicitly deals with uncertainty. Robust control methods are designed to function properly provided that uncertain parameters or disturbances are found within some (typically compact) set. Robust methods aim to achieve robust performance and/or stability in the presence of bounded modelling errors.\nThe early methods of Bode and others were fairly robust; the state-space methods invented in the 1960s and 1970s were sometimes found to lack robustness, prompting research to improve them. This was the start of the theory of robust control, which took shape in the 1980s and 1990s and is still active today.\nIn contrast with an adaptive control policy, a robust control policy is static: rather than adapting to measurements of variations, the controller is designed to work assuming that certain variables will be unknown but bounded.\nCriteria for robustness.\nInformally, a controller designed for a particular set of parameters is said to be robust if it also works well under a different set of assumptions. High-gain feedback is a simple example of a robust control method; with sufficiently high gain, the effect of any parameter variations will be negligible. From the closed-loop transfer function perspective, high open-loop gain leads to substantial disturbance rejection in the face of system parameter uncertainty. Other examples of robust control include sliding mode and terminal sliding mode control.\nThe major obstacle to achieving high loop gains is the need to maintain system closed-loop stability.
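The high-gain robustness argument above can be illustrated with a toy calculation; the gain values are illustrative assumptions, not from any particular system.

```python
# A toy illustration of the high-gain robustness argument: for unity
# feedback with loop gain K around an uncertain plant gain g, the
# closed-loop DC gain g*K/(1 + g*K) barely moves even when g varies
# by a factor of four. The gain values are illustrative assumptions.

K = 1000.0
plant_gains = [0.5, 1.0, 2.0]  # factor-of-two plant uncertainty each way

closed_loop = [g * K / (1.0 + g * K) for g in plant_gains]
spread = max(closed_loop) - min(closed_loop)

for g, t in zip(plant_gains, closed_loop):
    print(f"g = {g:>3}: closed-loop gain = {t:.5f}")
print(f"spread across 4x plant variation: {spread:.5f}")
```

A fourfold plant variation is squeezed down to a fraction-of-a-percent closed-loop variation, which is precisely why stability, not sensitivity, becomes the binding constraint at high gain.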
Loop shaping that achieves stable closed-loop operation can be a technical challenge.\nRobust control systems often incorporate advanced topologies which include multiple feedback loops and feed-forward paths. The control laws may be represented by high-order transfer functions, required to simultaneously accomplish the desired disturbance rejection performance together with robust closed-loop operation.\nHigh-gain feedback is the principle that allows simplified models of operational amplifiers and emitter-degenerated bipolar transistors to be used in a variety of different settings. This idea was already well understood by Bode and Black in 1927.\nThe modern theory of robust control.\nThe modern theory of robust control began in the late 1970s and early 1980s and soon developed a number of techniques for dealing with bounded system uncertainty.\nProbably the most important example of a robust control technique is H-infinity loop-shaping, which was developed by Duncan McFarlane and Keith Glover of Cambridge University; this method minimizes the sensitivity of a system over its frequency spectrum, which guarantees that the system will not deviate greatly from expected trajectories when disturbances enter the system.\nAn emerging area of robust control from an applications point of view is sliding mode control (SMC), which is a variation of variable structure control (VSC). The robustness properties of SMC with respect to matched uncertainty, as well as its simplicity of design, have attracted a variety of applications.\nWhile robust control has traditionally been treated with deterministic approaches, in the last two decades this approach has been criticized on the basis that it is too rigid to describe real uncertainty, while it often also leads to overconservative solutions. Probabilistic robust control has been introduced as an alternative, see e.g.
that interprets robust control within the so-called scenario optimization theory.\nAnother example is loop transfer recovery (LQG/LTR), which was developed to overcome the robustness problems of linear-quadratic-Gaussian (LQG) control.\nOther robust techniques include quantitative feedback theory (QFT), passivity-based control, Lyapunov-based control, etc.\nWhen system behavior varies considerably in normal operation, multiple control laws may have to be devised. Each distinct control law addresses a specific system behavior mode. An example is a computer hard disk drive. Separate robust control system modes are designed in order to address the rapid magnetic-head traversal operation, known as the seek, a transitional settle operation as the magnetic head approaches its destination, and a track-following mode during which the disk drive performs its data access operation.\nOne of the challenges is to design a control system that addresses these diverse system operating modes and enables smooth transition from one mode to the next as quickly as possible.\nSuch a state-machine-driven composite control system is an extension of the gain scheduling idea, where the entire control strategy changes based upon changes in system behavior.
Commonly produced items using this process include gaskets, labels, tokens, corrugated boxes, and envelopes.\nDie cutting started as a process of cutting leather for the shoe industry in the mid-19th century. It is now sophisticated enough to cut through just one layer of a laminate, so it is now used on labels, postage stamps, and other stickers; this type of die cutting is known as \"\".\nDie cutting can be done on either flatbed or rotary presses. Rotary die cutting is often done inline with printing. The primary difference between rotary die cutting and flatbed die cutting is that the flatbed is not as fast but the tools are cheaper. This process lends itself to smaller production runs where it is not as easy to absorb the added cost of a rotary die.\nRotary die cutting.\nRotary die cutting is die cutting using a cylindrical die on a rotary press and may be known as a rotary die cutter or RDC. A long sheet or web of material will be fed through the press into an area known as a \"station\" which holds a rotary tool that will cut out shapes, make perforations or creases, or even cut the sheet or web into smaller parts. A series of gears will force the die to rotate at the same speed as the rest of the press, ensuring that any cuts the die makes line up with the printing on the material. The machines used for this process can incorporate multiple \"stations\" that die cut a particular shape in the material. In each of these stations lie one or more of these geared tools or printing cylinders, and some machines use automatic eye registration to make sure the cuts and/or printing are lined up with one another when lower tolerances are required.\nDies used in rotary die cutting are either solid engraved dies, adjustable dies, or magnetic plate tooling. Engraved dies have a much higher tolerance and are machined out of a solid steel bar normally made out of tool steel. 
Adjustable dies have removable blades that can be easily replaced with other blades, either due to wear or to cut a different material, while magnetic plate tooling uses a base cylinder with magnets placed in it; an engraved metal plate is attached or wrapped around the base cylinder and held on by the force of the magnets.\nDinking.\nDinking is a manufacturing process that uses special dies called dinking dies, which are hollow cutters. The edges of the dies are usually beveled about 20° and sharpened. The material is punched through into a wood or soft metal block so as not to dull the edges. The die may be pressed into the material with a hammer or a mechanical press.", "Automation-Control": 0.6528950334, "Qwen2": "Yes"} {"id": "3285313", "revid": "36945343", "url": "https://en.wikipedia.org/wiki?curid=3285313", "title": "State variable", "text": "A state variable is one of the set of variables that are used to describe the mathematical \"state\" of a dynamical system. Intuitively, the state of a system describes enough about the system to determine its future behaviour in the absence of any external forces affecting the system. Models that consist of coupled first-order differential equations are said to be in state-variable form.\nControl systems engineering.\nIn control engineering and other areas of science and engineering, state variables are used to represent the states of a general system. The set of possible combinations of state variable values is called the state space of the system. The equations relating the current state of a system to its most recent input and past states are called the state equations, and the equations expressing the values of the output variables in terms of the state variables and inputs are called the output equations.
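As a minimal sketch (plain Python lists; the coefficient matrices A, B, C, D follow the standard discrete-time form x[n+1] = A x[n] + B u[n], y[n] = C x[n] + D u[n], and all names here are illustrative), the state and output equations can be stepped forward like this:

```python
def matvec(M, v):
    """Multiply a matrix M (list of rows) by a vector v."""
    return [sum(m_ij * v_j for m_ij, v_j in zip(row, v)) for row in M]

def vadd(a, b):
    """Element-wise sum of two vectors."""
    return [x + y for x, y in zip(a, b)]

def simulate_lti(A, B, C, D, x0, inputs):
    """Step the discrete-time state and output equations:
         x[n+1] = A x[n] + B u[n]   (state equation)
         y[n]   = C x[n] + D u[n]   (output equation)
    Returns the list of outputs y[0], ..., y[len(inputs)-1]."""
    x = list(x0)
    outputs = []
    for u in inputs:
        outputs.append(vadd(matvec(C, x), matvec(D, u)))  # output equation
        x = vadd(matvec(A, x), matvec(B, u))              # state equation
    return outputs

# Example: a position/velocity state vector driven by an acceleration-like
# scalar input, with the position as the single output.
A = [[1.0, 1.0], [0.0, 1.0]]
B = [[0.0], [1.0]]
C = [[1.0, 0.0]]
D = [[0.0]]
ys = simulate_lti(A, B, C, D, x0=[0.0, 0.0], inputs=[[1.0]] * 4)
```

Each loop iteration evaluates the output equation at the current state before advancing the state, matching the convention that y[n] depends on x[n] and u[n].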
As shown below, the state equations and output equations for a linear time invariant system can be expressed using coefficient matrices: \"A\", \"B\", \"C\", and \"D\"\nwhere \"N\", \"L\" and \"M\" are the dimensions of the vectors describing the state, input and output, respectively.\nDiscrete-time systems.\nThe state vector (vector of state variables) representing the current state of a discrete-time system (i.e. digital system) is formula_2, where \"n\" is the discrete point in time at which the system is being evaluated. The discrete-time state equations are \nwhich describes the next state of the system (\"x\"[\"n\"+1]) with respect to current state and inputs \"u\"[\"n\"] of the system. The output equations are\nwhich describes the output \"y\"[\"n\"] with respect to current states and inputs \"u\"[\"n\"] to the system.\nContinuous time systems.\nThe state vector representing the current state of a continuous-time system (i.e. analog system) is formula_5, and the continuous-time state equations giving the evolution of the state vector are\nwhich describes the continuous rate of change formula_7 of the state of the system with respect to current state \"x\"(\"t\") and inputs \"u\"(\"t\") of the system. The output equations are\nwhich describes the output \"y\"(\"t\") with respect to current states \"x\"(\"t\") and inputs \"u\"(\"t\") to the system.", "Automation-Control": 0.9999759197, "Qwen2": "Yes"} {"id": "41790415", "revid": "36083290", "url": "https://en.wikipedia.org/wiki?curid=41790415", "title": "Caratheodory-π solution", "text": "A Carathéodory-π solution is a generalized solution to an ordinary differential equation. The concept is due to I. Michael Ross and named in honor of Constantin Carathéodory. Its practicality was demonstrated in 2008 by Ross et al. in a laboratory implementation of the concept.
The concept is most useful for implementing feedback controls, particularly those generated by an application of Ross' pseudospectral optimal control theory.\nMathematical background.\nA Carathéodory-π solution addresses the fundamental problem of defining a solution to a differential equation,\nwhen \"g\"(\"x\",\"t\") is not differentiable with respect to \"x\". Such problems arise quite naturally in defining the meaning of a solution to a controlled differential equation,\nwhen the control, \"u\", is given by a feedback law,\nwhere the function \"k\"(\"x\",\"t\") may be non-smooth with respect to \"x\". Non-smooth feedback controls arise quite often in the study of optimal feedback controls and have been the subject of extensive study going back to the 1960s.\nRoss' concept.\nAn ordinary differential equation,\nis equivalent to a controlled differential equation,\nwith feedback control,\nformula_6. Then, given an initial value problem, Ross partitions the time interval formula_7 into a grid, formula_8 with formula_9. From formula_10 to formula_11, generate a control trajectory,\nto the controlled differential equation,\nA Carathéodory solution exists for the above equation because formula_14 has discontinuities at most in \"t\", the independent variable. At formula_15, set formula_16 and restart\nthe system with formula_17,\nContinuing in this manner, the Carathéodory segments are stitched together to form a Carathéodory-π solution.\nEngineering applications.\nA Carathéodory-π solution can be applied towards the practical stabilization of a control system.
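The segment-and-restart construction can be sketched as follows. This is a hedged illustration only: freezing the feedback at each segment's initial state and using Euler sub-integration are simplifications chosen for readability, not Ross's published algorithm, and all names are assumptions.

```python
def caratheodory_pi(g, k, x0, t_grid, substeps=100):
    """Stitch Caratheodory segments together: on each interval
    [t_i, t_{i+1}) the feedback u = k(x, t) is replaced by the open-loop
    control trajectory t -> k(x_i, t) built from the segment's initial
    state x_i, so the right-hand side is discontinuous at most in t.
    At t_{i+1} the system is restarted from the state just reached."""
    xs = [x0]
    for t0, t1 in zip(t_grid[:-1], t_grid[1:]):
        x_i = xs[-1]              # state at the start of the segment
        x = x_i
        dt = (t1 - t0) / substeps
        for j in range(substeps):
            t = t0 + j * dt
            u = k(x_i, t)         # feedback frozen at the segment start
            x = x + dt * g(x, u, t)   # Euler step of dx/dt = g(x, u, t)
        xs.append(x)              # restart at t_{i+1}
    return xs

# Example: dx/dt = u with the (non-restrictive) feedback k(x, t) = -x.
grid = [i / 10 for i in range(11)]
xs = caratheodory_pi(lambda x, u, t: u, lambda x, t: -x, 1.0, grid)
```

On each segment the frozen control is constant, so the state contracts by a factor of 0.9 per segment; refining the grid would drive the stitched trajectory toward the exact exponential decay.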
It has been used to stabilize an inverted pendulum, control and optimize the motion of robots, slew and control the NPSAT1 spacecraft and produce guidance commands for low-thrust space missions.", "Automation-Control": 0.7366831899, "Qwen2": "Yes"} {"id": "35314108", "revid": "42522270", "url": "https://en.wikipedia.org/wiki?curid=35314108", "title": "Double integrator", "text": "In systems and control theory, the double integrator is a canonical example of a second-order control system. It models the dynamics of a simple mass in one-dimensional space under the effect of a time-varying force input formula_1.\nDifferential equations.\nThe differential equations which represent a double integrator are:\nwhere both formula_4\nLet us now represent this in state space form with the vector formula_5\nIn this representation, it is clear that the control input formula_1 is the second derivative of the output formula_8. In the scalar form, the control input is the second derivative of the output formula_9.\nState space representation.\nThe normalized state space model of a double integrator takes the form \nAccording to this model, the input formula_1 is the second derivative of the output formula_13, hence the name double integrator.\nTransfer function representation.\nTaking the Laplace transform of the state space input-output equation, we see that the transfer function of the double integrator is given by\nUsing the differential equations dependent on formula_15 and formula_16, and the state space representation:", "Automation-Control": 0.9095054269, "Qwen2": "Yes"} {"id": "35314983", "revid": "16809467", "url": "https://en.wikipedia.org/wiki?curid=35314983", "title": "Viability theory", "text": "Viability theory is an area of mathematics that studies the evolution of dynamical systems under constraints on the system state. 
It was developed to formalize problems arising in the study of various natural and social phenomena, and has close ties to the theories of optimal control and set-valued analysis.\nMotivation.\nMany systems, organizations, and networks arising in biology and the social sciences do not evolve in a deterministic way, nor even in a stochastic way. Rather, they evolve with a Darwinian flavor, driven by random fluctuations yet constrained to remain \"viable\" by their environment. Viability theory started in 1976 by mathematically translating the title of Jacques Monod's book Chance and Necessity into the differential inclusion formula_1 for chance and \nformula_2 for necessity. The differential inclusion is a type of “evolutionary engine” (called an evolutionary system) associating with any initial state x a subset of evolutions starting at x. The system is said to be deterministic if this set is made of one and only one evolution and contingent otherwise. \nNecessity is the requirement that at each instant, the evolution is \"viable\" (remains) in the \"environment\" K described by \"viability constraints\", a word encompassing such polysemous concepts as \"stability, confinement, homeostasis, adaptation\", etc., expressing the idea that some variables must obey some constraints (representing physical, social, biological and economic constraints, etc.) that can never be violated. So, viability theory starts as the confrontation of evolutionary systems governing evolutions and viability constraints that such evolutions must obey. They share common features:\nViability theory thus designs and develops mathematical and algorithmic methods for investigating the \"adaptation to viability constraints\" of evolutions governed by complex systems under uncertainty that are found in many domains involving living beings, from biological evolution to economics, from environmental sciences to financial markets, from control theory and robotics to cognitive sciences.
It required forging a differential calculus of set-valued maps (set-valued analysis), differential inclusions and differential calculus in metric spaces (mutational analysis).\nViability kernel.\nThe basic problem of viability theory is to find the \"viability kernel\" of an environment, the subset of initial states in the environment such that there exists at least one evolution \"viable\" in the environment, in the sense that at each time, the state of the evolution remains confined to the environment. The second question is then to provide the regulation map selecting such viable evolutions starting from the viability kernel. The viability kernel may be equal to the environment, in which case the environment is called viable under the evolutionary system, or the empty set, in which case it is called a repellor, because all evolutions eventually violate the constraints.\nThe viability kernel assumes that some kind of \"decision maker\" controls or regulates evolutions of the system. If not, the next problem looks at the \"tychastic kernel\" (from tyche, meaning chance in Greek) or \"invariance kernel\", the subset of initial states in the environment such that all evolutions are \"viable\" in the environment, an alternative to stochastic differential equations that encapsulates the concept of \"insurance\" against uncertainty, providing a way of eradicating it instead of evaluating it.", "Automation-Control": 0.8965301514, "Qwen2": "Yes"} {"id": "62556833", "revid": "7903804", "url": "https://en.wikipedia.org/wiki?curid=62556833", "title": "Impulse vector", "text": "An impulse vector is a mathematical tool to graphically design and analyze input shapers that can suppress residual vibration. The impulse vector can be applied for both undamped and underdamped systems, and for both positive and negative impulses in a unified way. The impulse vector makes it easy to obtain impulse time and magnitude of the input shaper graphically.
\nA vector concept for an input shaper was first introduced by W. Singhose for undamped systems with positive impulses, and an impulse vector was first introduced by C.-G. Kang to generalize Singhose's idea to underdamped systems with positive and negative impulses. \nDefinition.\nFor a vibratory second-order system formula_1 with undamped natural frequency formula_2 and damping ratio formula_3, the magnitude formula_4 and angle formula_5 of an impulse vector formula_6 corresponding to an impulse function formula_7, formula_8 are defined in a 2-dimensional polar coordinate system as\nwhere formula_11 denotes the magnitude of an impulse function, formula_12 denotes the time location of the impulse function, and formula_13 denotes the damped natural frequency formula_14. For a positive impulse function with formula_15, the initial point of the impulse vector is located at the origin of the polar coordinate system, while for a negative impulse function with formula_16, the terminal point of the impulse vector is located at the origin. □\nIn this definition, the magnitude formula_4 is the product of formula_11 and a scaling factor for damping during time interval formula_12, which represents the magnitude formula_11 before being damped; the angle formula_5 is the product of the impulse time and damped natural frequency. formula_22 represents the Dirac delta function with impulse time at formula_23. \nNote that an impulse function is a purely mathematical quantity, while the impulse vector includes a physical quantity (that is, formula_2 and formula_3 of a second-order system) as well as a mathematical impulse function. Representing more than two impulse vectors in the same polar coordinate system makes an \"impulse vector diagram\".
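Reading the definition concretely (the magnitude is the impulse magnitude scaled back by the damping accumulated up to the impulse time, and the angle is the impulse time times the damped natural frequency), an impulse vector can be computed as in the following sketch; the function name is an illustrative assumption:

```python
import math

def impulse_vector(A_i, t_i, wn, zeta):
    """Magnitude and angle (in radians) of the impulse vector for an
    impulse of magnitude A_i applied at time t_i to a second-order
    system with undamped natural frequency wn and damping ratio zeta.
    The magnitude undoes the damping accumulated up to t_i; the angle
    is the impulse time scaled by the damped natural frequency."""
    wd = wn * math.sqrt(1.0 - zeta**2)        # damped natural frequency
    magnitude = abs(A_i) * math.exp(zeta * wn * t_i)
    angle = wd * t_i
    return magnitude, angle
```

For an undamped system (zeta = 0) this reduces to a magnitude of |A_i| and an angle of wn*t_i, consistent with the undamped special case noted in the text; the sign of A_i is carried separately by the arrow direction convention.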
The impulse vector diagram is a graphical representation of an impulse sequence.\nConsider two impulse vectors formula_26 and formula_27 in the figure on the right-hand side, in which formula_26 is an impulse vector with magnitude formula_29 and angle formula_30 corresponding to a positive impulse with formula_31, and formula_27 is an impulse vector with magnitude formula_33 and angle formula_34 corresponding to a negative impulse with formula_35. Since the two time-responses corresponding to formula_26 and formula_27 are exactly the same after the final impulse time formula_38 as shown in the figure, the two impulse vectors formula_26 and formula_27 can be regarded as the same vector for vector addition and subtraction. Impulse vectors satisfy the commutative and associative laws, as well as the distributive law for scalar multiplication. \nThe magnitude of the impulse vector determines the magnitude of the impulse, and the angle of the impulse vector determines the time location of the impulse. One rotation, an angle of formula_41, on an impulse vector diagram corresponds to one (damped) period of the corresponding impulse response. \nFor an undamped system (formula_42), the magnitude and angle of the impulse vector become formula_43 and formula_44.\nProperties.\nProperty 1: Resultant of two impulse vectors.\nThe impulse response of a second-order system corresponding to the resultant of two impulse vectors is the same as the time response of the system with a two-impulse input corresponding to two impulse vectors after the final impulse time regardless of whether the system is undamped or underdamped. □\nProperty 2: Zero resultant of impulse vectors.\nIf the resultant of impulse vectors is zero, the time response of a second-order system for the input of the impulse sequence corresponding to the impulse vectors also becomes zero after the final impulse time regardless of whether the system is undamped or underdamped.
□\nConsider an underdamped second-order system with the transfer function formula_45. This system has formula_46 and formula_47. For given impulse vectors formula_26 and formula_27 as shown in the figure, the resultant can be represented in two ways, formula_50 and formula_51, in which formula_50 corresponds to a negative impulse with formula_53 and formula_54, and formula_51 corresponds to a positive impulse with formula_56 and formula_57. \nThe resultants formula_50, formula_51 can be found as follows.\nNote that formula_63. The impulse responses formula_64 and formula_65 corresponding to formula_50 and formula_51 are exactly the same as formula_68 after each impulse time location, as shown by the green lines in figure (b).\nNow, place an impulse vector formula_69 on the impulse vector diagram to cancel the resultant formula_70 as shown in the figure. The impulse vector formula_69 is given by\nWhen the impulse sequence corresponding to three impulse vectors formula_73 and formula_69 is applied to a second-order system as an input, the resulting time response causes no residual vibration after the final impulse time formula_75, as shown by the red line in the bottom figure (b). Of course, another canceling vector formula_76 can exist, which is the impulse vector with the same magnitude as formula_69 but with an opposite arrow direction. However, this canceling vector has a longer impulse time, by as much as a half period, than formula_69.\nApplications: Design of input shapers using impulse vectors.\nZVD\"n\" shaper.\nUsing impulse vectors, we can redesign known input shapers such as zero vibration (ZV), zero vibration and derivative (ZVD), and ZVD\"n\" shapers. \nThe ZV shaper is composed of two impulse vectors, in which the first impulse vector is located at 0°, and the second impulse vector with the same magnitude is located at 180° for formula_79.
Then from the impulse vector diagram of the ZV shaper on the right-hand side,\nTherefore, formula_82.\nSince formula_83 (normalization constraint) must hold, and formula_84,\nTherefore, formula_86.\nThus, the ZV shaper formula_87 is given by\nformula_89\nThe ZVD shaper is composed of three impulse vectors, in which the first impulse vector is located at 0 rad, the second vector at formula_90 rad, and the third vector at formula_41 rad, and the magnitude ratio is formula_92. Then formula_93. From the impulse vector diagram,\nTherefore, formula_95.\nAlso from the impulse vector diagram,\nSince formula_97 must hold,\nTherefore, formula_99.\nThus, the ZVD shaper formula_100 is given by\nformula_89\nThe ZVD2 shaper is composed of four impulse vectors, in which the first impulse vector is located at 0 rad, the second vector at formula_90 rad, the third vector at formula_41 rad, and the fourth vector at formula_105 rad, and the magnitude ratio is formula_106. Then formula_107. From the impulse vector diagram,\nTherefore, formula_109. \nAlso, from the impulse vector diagram,\nSince formula_111 must hold,\nTherefore, formula_113.\nThus, the ZVD2 shaper formula_114 is given by\nformula_89\nSimilarly, the ZVD3 shaper with five impulse vectors can be obtained, in which the first vector is located at 0 rad, the second vector at formula_90 rad, the third vector at formula_41 rad, the fourth vector at formula_105 rad, and the fifth vector at formula_120 rad, and the magnitude ratio is formula_121. In general, for the ZVD\"n\" shaper, the \"i\"-th impulse vector is located at formula_122 rad, and the magnitude ratio is formula_123 where formula_124 denotes a mathematical combination.\nETM shaper.\nNow, consider \"equal shaping-time and magnitudes\" (ETM) shapers, with the same magnitude of impulse vectors and with the same angle between impulse vectors.
The ETM\"n\" shaper satisfies the conditions\nThus, the resultant of the impulse vectors of the ETM\"n\" shaper becomes always zero for all formula_128. One merit of the ETM\"n\" shaper is that, unlike the ZVD\"n\" or extra insensitive (EI) shapers, the shaping time is always one (damped) period of the time response even if \"n\" increases. \nThe ETM4 shaper with four impulse vectors is obtained from the above conditions together with impulse vector definitions as\nThe ETM5 shaper with five impulse vectors is obtained similarly as\nIn the same way, the ETM\"n\" shaper with formula_133 can be obtained easily. In general, ETM shapers are less sensitive to modeling errors than ZVD\"n\" shapers in a large positive error range. Note that the ZVD shaper is an ETM3 shaper with formula_134.\nNMe shaper.\nMoreover, impulse vectors can be applied to design input shapers with negative impulses. Consider a \"negative equal-magnitude\" (NMe) shaper, in which the magnitudes of three impulse vectors are formula_135, and the angles are formula_136. Then the resultant of three impulse vectors becomes zero, and thus the residual vibration is suppressed. Impulse time formula_137 of the NMe shaper are obtained as formula_138, and impulse magnitudes are obtained easily by solving the simultaneous equations\nThe resulting NMe shaper formula_141 is\nThe NMe shaper has faster rise time than the ZVD shaper, but it is more sensitive to modeling error than the ZVD shaper. 
Note that the NMe shaper is the same as the UM shaper if the system is undamped (formula_42).\nFigure (a) on the right side shows a typical block diagram of an input-shaping control system, and figure (b) shows residual vibration suppressions in unit-step responses by ZV, ZVD, ETM4 and NMe shapers.\nRefer to the reference for sensitivity curves of the above input shapers, which represent the robustness to modeling errors in formula_2 and formula_3.", "Automation-Control": 0.9279139042, "Qwen2": "Yes"} {"id": "41599694", "revid": "19244234", "url": "https://en.wikipedia.org/wiki?curid=41599694", "title": "Programming station", "text": "A programming station is a terminal or computer that allows a machine operator to control a machine remotely, rather than being on the factory or shop floor. The programming station usually provides all the functionality, including management and diagnostics, found on the main control station.", "Automation-Control": 0.9999012947, "Qwen2": "Yes"} {"id": "13698301", "revid": "11677590", "url": "https://en.wikipedia.org/wiki?curid=13698301", "title": "Out-of-band control", "text": "Out-of-band control is a characteristic of network protocols with which data control is regulated. Out-of-band control passes control data on a separate connection from main data. Protocols such as FTP use out-of-band control. \nFTP sends its control information, which includes user identification, password, and put/get commands, on one connection, and sends data files on a separate parallel connection. Because it uses a separate connection for the control information, FTP uses out-of-band control.", "Automation-Control": 0.9998910427, "Qwen2": "Yes"} {"id": "13569582", "revid": "27823944", "url": "https://en.wikipedia.org/wiki?curid=13569582", "title": "Intelligent pump", "text": "An intelligent pump is a pump that has the ability to regulate and control flow or pressure.
Typical advantages are energy savings, lifetime improvements and system cost reductions. Intelligent pumps are used in boilers and systems, temperature control, water treatment, industrial water supply, wash and clean, machining and desalination.", "Automation-Control": 0.9499266744, "Qwen2": "Yes"} {"id": "25155005", "revid": "1165924360", "url": "https://en.wikipedia.org/wiki?curid=25155005", "title": "Automatic lubrication system", "text": "An automatic lubrication system (ALS), sometimes referred to as a centralized lubrication system (CLS), is a system that delivers controlled amounts of lubricant to multiple locations on a machine while the machine is operating. Even though these systems are usually fully automated, a system that requires a manual pump or button activation is still identified as a centralized lubrication system. The system can be classified into two different categories that share many of the same components.\nOil systems: primarily used for stationary manufacturing equipment such as CNC mills.\nGrease systems: primarily used on mobile units such as trucks and mining or construction equipment.\nAutomatic lubrication systems are key aspects of maintenance and reliability programs. They supply lube points with metered amounts of grease or oil from a central location. The pump supplies the system with the chosen lubricant and is fed from a reservoir that is easily accessible. Depending on the application, the reservoir ranges in size and can be as small as 2 liters all the way up to an intermediate bulk container or even a bulk tank. The options are almost limitless and are application-specific. These systems have the option to be monitored remotely with feedback and can be tied directly into a plant's PLC.
Whether the equipment is an excavator, a ready-mix truck, a crusher, or a steel mill, such monitoring helps ensure that assets are properly lubricated at all times.\nReasons for Automatic Lubrication Systems.\nAutomatic lubrication systems or centralized lubrication systems are designed to apply lubricant in precise, metered amounts over short, frequent time intervals. Time and human resource constraints, and often the physical location of lubrication points on the machine, make it impractical to lubricate the points manually. As a result, production cycles, machine availability, and manpower availability dictate the intervals at which machinery is lubricated, which is not optimal for the point requiring lubrication. Automatic lubrication systems are installed on machinery to circumvent these issues.\nBenefits.\nAuto lube systems have many advantages over traditional methods of manual lubrication:\nComponents.\nA typical system consists of controller/timer, pump w/reservoir, supply line, metering valves, and feed lines. Regardless of the manufacturer or type of system, all automatic lubrication systems share these 5 main components:\nTypes.\nThere are several different types of automatic lubrication systems including:\nThe 4 most commonly used automatic lubrication system types are:\nSingle line progressive.\nA single line progressive system uses lubricant flow to cycle individual metering valves and valve assemblies. The valves consist of dispensing pistons moving back and forth in a specific bore. Each piston depends on flow from the previous piston to shift and displace lubricant. If one piston does not shift, none of the following pistons will shift. Valve output is not adjustable.\nOperation begins when the controller/timer sends a signal to the pump to start the lube event.
The pump then feeds lubricant into the supply line which connects to the primary metering valve, for either a preprogrammed amount of time or number of times as monitored through a designated piston cycle switch. Lubricant is fed to the multiple lubrication points one after another via secondary progressive metering valves sized for each series of lubrication points, and then directly to each point via the feed lines.\nSingle line parallel.\nThe first single-line parallel system for industry was introduced in 1937 by Lincoln Engineering (now known as Lincoln Industrial) in the United States.\nA single line parallel system can service a single machine, different zones on a single machine or even several separate machines and is ideal when the volume of lubricant varies for each point. In this type of system, a central pump station automatically delivers lubricant through a single supply line to multiple branches of injectors. Each injector serves a single lubrication point, operates independently and may be individually adjusted to deliver the desired amount of lubricant.\nOperation begins when the controller/timer sends a signal to the pump starting the lube cycle. The pump begins pumping lubricant to build up pressure in the supply line connecting the pump to the injectors. Once the required pressure is reached, the lube injectors dispense a predetermined amount of lubricant to the lubrication points via feed lines.\nOnce the entire system reaches the required pressure, a pressure switch sends a signal to the controller indicating that grease has cycled through to all the distribution points. The pump shuts off. Pressure is vented out of the system and grease in the line is redirected back to the pump reservoir, until the normal system pressure level is restored.\nDual line parallel.\nA dual line parallel system is similar to the single line parallel system in that it uses hydraulic pressure to cycle adjustable valves to dispense measured shots of lubricant. 
It has 2 main supply lines, which are used alternately as pressure/vent lines. The advantage of a two-line system is that it can handle hundreds of lubrication points from a single pump station over several thousand feet using significantly smaller tubing or pipe.\nOperation begins when the controller/timer sends a signal to the pump to start the lubrication cycle. The pump begins pumping lubricant to build up pressure in the first (the pressure) supply line while simultaneously venting the second (vent) return line. Once the required pressure is reached, a predetermined amount of lubricant is dispensed by the metering devices to half of the lubrication points via feed lines.\nOnce the pressure switch monitoring main supply line pressure indicates a preset pressure in the line has been reached, the system is hydraulically closed. The controller shuts off the pump and signals a changeover valve to redirect lubricant to the second main supply line.\nThe next time the controller activates the system, the second main line now becomes the pressure line while the first line becomes the vent line. The second line is pressurized and the entire process is repeated, lubricating the remaining lube points.\nMulti point direct lubricator.\nWhen the controller in the pump or external controller activates the drive motor, a set of cams turns and activates individual injectors or pump elements to dispense a fixed amount of lubricant to each individual lubrication point. Such systems are easy to design, run directly from pump to lube point without added accessories, and are easy to troubleshoot.
The Routh–Hurwitz theorem is important in dynamical systems and control theory, because the characteristic polynomial of the differential equations of a stable linear system has roots limited to the left half plane (eigenvalues with negative real parts). Thus the theorem provides a mathematical test, the Routh–Hurwitz stability criterion, to determine whether a linear dynamical system is stable without solving the system. The Routh–Hurwitz theorem was proved in 1895, and it was named after Edward John Routh and Adolf Hurwitz.\nNotations.\nLet \"f\"(\"z\") be a polynomial (with complex coefficients) of degree \"n\" with no roots on the imaginary axis (i.e. the line \"z\" = \"ic\" where \"i\" is the imaginary unit and \"c\" is a real number). Let us define formula_1 (a polynomial of degree \"n\") and formula_2 (a nonzero polynomial of degree strictly less than \"n\") by formula_3, respectively the real and imaginary parts of \"f\" on the imaginary line.\nFurthermore, let us denote by:\nStatement.\nWith the notations introduced above, the Routh–Hurwitz theorem states that:\nFrom the first equality we can for instance conclude that when the variation of the argument of \"f\"(\"iy\") is positive, then \"f\"(\"z\") will have more roots to the left of the imaginary axis than to its right.\nThe equality \"p\" − \"q\" = \"w\"(+∞) − \"w\"(−∞) can be viewed as the complex counterpart of Sturm's theorem. Note the differences: in Sturm's theorem, the left member is \"p\" + \"q\" and the \"w\" from the right member is the number of variations of a Sturm chain (while \"w\" refers to a generalized Sturm chain in the present theorem).\nRouth–Hurwitz stability criterion.\nWe can easily determine a stability criterion using this theorem as it is trivial that \"f\"(\"z\") is Hurwitz-stable iff \"p\" − \"q\" = \"n\".
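In practice the criterion is usually applied for real-coefficient polynomials through the tabular Routh array. The following is a minimal Python sketch of that table (the zero-pivot degenerate cases of the full Routh procedure are deliberately left out, and the function name is illustrative):

```python
def is_hurwitz_stable(coeffs):
    """Routh-Hurwitz stability criterion via the Routh array.
    coeffs lists real polynomial coefficients from the highest degree
    down, e.g. [1, 2, 3] for z**2 + 2*z + 3.  Returns True iff all
    roots lie in the open left half-plane.  Zero-pivot degenerate
    cases are not handled in this sketch."""
    n = len(coeffs) - 1
    rows = [list(coeffs[0::2]), list(coeffs[1::2])]
    width = len(rows[0])
    rows[1] += [0.0] * (width - len(rows[1]))   # pad the second row
    for _ in range(n - 1):
        top, mid = rows[-2], rows[-1]
        if mid[0] == 0:
            return False   # zero pivot: not strictly Hurwitz (sketch)
        # Each new entry is the 2x2 cross-determinant over the pivot.
        new = [(mid[0] * top[j + 1] - top[0] * mid[j + 1]) / mid[0]
               for j in range(width - 1)] + [0.0]
        rows.append(new)
    first_col = [r[0] for r in rows[: n + 1]]
    # Stable iff the first column has no sign changes.
    return all(c > 0 for c in first_col) or all(c < 0 for c in first_col)
```

For example, z**3 + 3*z**2 + 3*z + 1 = (z + 1)**3 passes the test, while z**3 + z**2 + z + 1, which has roots on the imaginary axis, fails it.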
We thus obtain conditions on the coefficients of \"f\"(\"z\") by imposing \"w\"(+∞) = \"n\" and \"w\"(−∞) = 0.", "Automation-Control": 0.9709200859, "Qwen2": "Yes"} {"id": "2068993", "revid": "38627444", "url": "https://en.wikipedia.org/wiki?curid=2068993", "title": "Stability radius", "text": "In mathematics, the stability radius of an object (system, function, matrix, parameter) at a given nominal point is the radius of the largest ball, centered at the nominal point, all of whose elements satisfy pre-determined stability conditions. The picture of this intuitive notion is this:\nwhere formula_1 denotes the nominal point, formula_2 denotes the space of all possible values of the object formula_3, and the shaded area, formula_4, represents the set of points that satisfy the stability conditions. The radius of the blue circle, shown in red, is the stability radius.\nAbstract definition.\nThe formal definition of this concept varies, depending on the application area. The following abstract definition is quite useful\nwhere formula_6 denotes a closed ball of radius formula_7 in formula_2 centered at formula_1.\nHistory.\nIt looks like the concept was invented in the early 1960s. In the 1980s it became popular in control theory and optimization. It is widely used as a model of local robustness against small perturbations in a given nominal value of the object of interest.\nRelation to Wald's maximin model.\nIt was shown that the stability radius model is an instance of Wald's maximin model. That is,\nwhere\nThe large penalty (formula_12) is a device to force the formula_13 player not to perturb the nominal value beyond the stability radius of the system. It is an indication that the stability model is a model of local stability/robustness, rather than a global one.\nInfo-gap decision theory.\nInfo-gap decision theory is a recent non-probabilistic decision theory. It is claimed to be radically different from all current theories of decision under uncertainty. 
But it has been shown that its robustness model, namely\nis actually a stability radius model characterized by a simple stability requirement of the form formula_15 where formula_16 denotes the decision under consideration, formula_17 denotes the parameter of interest, formula_18 denotes the estimate of the true value of formula_17, and formula_20 denotes a ball of radius formula_21 centered at formula_18.\nSince stability radius models are designed to deal with small perturbations in the nominal value of a parameter, info-gap's robustness model measures the "local robustness" of decisions in the neighborhood of the estimate formula_18.\nSniedovich argues that for this reason the theory is unsuitable for the treatment of severe uncertainty characterized by a poor estimate and a vast uncertainty space.\nAlternate definition.\nThere are cases where it is more convenient to define the stability radius slightly differently. For example, in many applications in control theory the radius of stability is defined as the size of the smallest destabilizing perturbation in the nominal value of the parameter of interest. The picture is this:\nMore formally,\nwhere formula_25 denotes the "distance" of formula_26 from formula_1.\nStability radius of functions.\nThe stability radius of a continuous function "f" (in a functional space "F") with respect to an open stability domain "D" is the distance between "f" and the set of unstable functions (with respect to "D"). We say that a function is "stable" with respect to "D" if its spectrum is in "D".
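For instance, taking "D" to be the open unit disk and the spectrum of a polynomial to be its set of roots, stability can be checked directly. A minimal Python sketch for the quadratic case, chosen so the roots have a closed form (an illustrative example, not from the article):

```python
import cmath

def schur_stable_quadratic(a, b, c):
    # Roots of a*z^2 + b*z + c via the quadratic formula.
    disc = cmath.sqrt(b * b - 4 * a * c)
    roots = [(-b + disc) / (2 * a), (-b - disc) / (2 * a)]
    # Stable with respect to the open unit disk D iff every root lies inside D.
    return all(abs(r) < 1 for r in roots)

print(schur_stable_quadratic(1, -0.5, 0.06))  # roots 0.2 and 0.3 -> True
print(schur_stable_quadratic(1, -2.5, 1.0))   # roots 0.5 and 2.0 -> False
```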
Here, the notion of spectrum is defined on a case-by-case basis, as explained below.\nDefinition.\nFormally, if we denote the set of stable functions by \"S(D)\" and the stability radius by \"r(f,D)\", then:\nwhere \"C\" is a subset of \"F\".\nNote that if \"f\" is already unstable (with respect to \"D\"), then \"r(f,D)=0\" (as long as \"C\" contains zero).\nApplications.\nThe notion of stability radius is generally applied to special functions as polynomials (the spectrum is then the roots) and matrices (the spectrum is the eigenvalues). The case where \"C\" is a proper subset of \"F\" permits us to consider structured perturbations (e.g. for a matrix, we could only need perturbations on the last row). It is an interesting measure of robustness, for example in control theory.\nProperties.\nLet \"f\" be a (complex) polynomial of degree \"n\", \"C=F\" be the set of polynomials of degree less than (or equal to) \"n\" (which we identify here with the set formula_29 of coefficients). We take for \"D\" the open unit disk, which means we are looking for the distance between a polynomial and the set of Schur stable polynomials. Then:\nwhere \"q\" contains each basis vector (e.g. formula_31 when \"q\" is the usual power basis). This result means that the stability radius is bound with the minimal value that \"f\" reaches on the unit circle.", "Automation-Control": 0.950350523, "Qwen2": "Yes"} {"id": "21036975", "revid": "10027499", "url": "https://en.wikipedia.org/wiki?curid=21036975", "title": "Rubber pad forming", "text": "Rubber pad forming (RPF) is a metalworking process where sheet metal is pressed between a die and a rubber block, made of polyurethane. Under pressure, the rubber and sheet metal are driven into the die and conform to its shape, forming the part. The rubber pads can have a general purpose shape, like a membrane. 
Alternatively, they can be machined in the shape of a die or punch.\nRubber pad forming is a deep drawing technique that is ideally suited for the production of small and medium-sized series. Deep drawing makes it possible to deform sheet metal in two directions, which offers great benefits in terms of function integration, weight reduction, cleanability and the like.\nThe disadvantage of regular deep drawing is that expensive tools consisting of an upper and lower mold are needed. Once these tools have been made, the variable costs are low, which makes regular deep drawing very suitable for large and very large numbers of products.\nTechnique.\nIn the rubber pad forming process, only a milled lower die is required, on which a metal plate is placed. Afterwards, the shape of the lower die is pressed into the plate with the rubber mold. In most cases, the contour, hole patterns and the like will be cut with a 3D laser cutter.\nThe simplicity of the rubber press tool causes tooling costs to be around 85 to 90% lower than those of regular deep drawing, while the variable costs are higher. This combination makes rubber pad pressing very suitable for smaller and medium-sized series (up to 5,000-10,000 pieces per year), even though traditional cutting, welding, finishing and similar processes are still used more often, owing to unfamiliarity with rubber pad forming.\n\n380 Ton Rubber Pad Press\nRubber pad forming has been used in production lines for many years. Up to 60% of all sheet metal parts in the aerospace industry are fabricated using this process.\nThe most relevant applications are indeed in the aerospace field. It is frequently used in prototyping shops and for the production of kitchenware. Over the past decade, rubber pad pressing has developed into a widely used technology for many industrial applications.\nPressing power.\nEnormous pressing forces are required for the rubber presses to work. 
In the Netherlands there are several rubber pad presses, of which the largest has a press force of no less than 8,000 tons and a maximum surface area of 1.10 x 2.20 m; these presses are used for very diverse industrial applications.\nWorldwide, presses are in use up to about 14,000 tonnes.\nPros and cons.\nIn summary, the benefits of rubber pad pressing are:\nAnd the disadvantages:\nDefinition.\nRubber pad forming can be accomplished in many different ways, and as technology has advanced, so have the applications for this simple process. In general, an elastic upper die, usually made of rubber, is connected to a hydraulic press. A rigid lower die, often called a form block, provides the mold for the sheet metal to be formed against. Because the upper (male) die can be used with separate lower (female) dies, the process is relatively cheap and flexible. The worked metal is not worn as quickly as in more conventional processes such as deep drawing; however, rubber pads exert less pressure than non-elastic tooling in the same circumstances, which may lead to less definition in forming, and rubber pads wear more quickly than steel parts.\nThe Guerin process.\nThe Guerin process, also called Guerin Stamping, is a manufacturing process used in the shaping of sheet metals. It is the oldest and most basic of the production rubber-pad forming processes. It was developed in the late 1930s by Henry Guerin, an employee of the Douglas Aircraft Co. in California. 
Thereafter, it was used extensively by all major aircraft manufacturers to shape the many complex shapes inherent in the design of aircraft.", "Automation-Control": 0.9375602007, "Qwen2": "Yes"} {"id": "42443688", "revid": "23914831", "url": "https://en.wikipedia.org/wiki?curid=42443688", "title": "Lead frame", "text": "A lead frame (pronounced ) is a metal structure inside a chip package that carries signals from the die to the outside, used in DIP, QFP and other packages where connections to the chip are made on its edges.\nThe lead frame consists of a central die pad, where the die is placed, surrounded by leads, metal conductors leading away from the die to the outside world. The end of each lead closest to the die ends in a bond pad. Small bond wires connect the die to each bond pad. Mechanical connections fix all these parts into a rigid structure, which makes the whole lead frame easy to handle automatically.\nManufacturing.\nLead frames are manufactured by removing material from a flat plate of copper, copper-alloy, or iron-nickel alloy like alloy 42. Two processes used for this are etching (suitable for high density of leads), or stamping (suitable for low density of leads). The mechanical bending process can be applied after both techniques.\nThe die is glued or soldered to the die pad inside the lead frame, and then bond wires are attached between the die and the bond pads to connect the die to the leads. In a process called encapsulation, a plastic case is moulded around the lead frame and die, exposing only the leads. The leads are cut off outside the plastic body and any exposed supporting structures are cut away. 
The external leads are then bent to the desired shape.\nUses.\nAmongst others, lead frames are used to manufacture a quad flat no-leads package (QFN), a quad flat package (QFP), or a dual in-line package (DIP).", "Automation-Control": 0.7059823871, "Qwen2": "Yes"} {"id": "353763", "revid": "42069556", "url": "https://en.wikipedia.org/wiki?curid=353763", "title": "Moldmaker", "text": "A moldmaker (mouldmaker in English-speaking countries other than the US) or molder is a skilled tradesperson who fabricates molds for use in casting metal products. \nMoldmakers are generally employed in foundries, where molds are used to cast products from metals such as aluminium and cast iron.\nInjection molding.\nThe term moldmaker may also be used to describe workers employed in fabricating dies and metal moulds for use in injection molding and die-casting, such as in the plastics, rubber or ceramics industries, in which case it is sometimes regarded as a variety of the trade of the toolmaker. The process of manufacturing molds is now often highly automated.\nWhile much of the machining involved in mold making uses computer-controlled equipment (particularly for plastic and rubber injection and transfer molds), moldmaking is still a highly skilled trade requiring expertise in manual machining, CNC machining, CNC wire EDM, CNC ram EDM, surface grinding, hand polishing and more. 
Because of the high skill and intense labor involved, much of the mold making in the US has been outsourced to low-wage countries.\nThe majority of plastic and rubber parts in existence today are made using injection or transfer molds, requiring a mold to be manufactured by a moldmaker.", "Automation-Control": 0.9474423528, "Qwen2": "Yes"} {"id": "18656877", "revid": "39166520", "url": "https://en.wikipedia.org/wiki?curid=18656877", "title": "Bearing reducer", "text": "A bearing reducer in engineering is a compact unit that fully integrates a high-precision reduction gear with a high-precision radial-axial bearing. This transmission system allows bearing reducers to be used in many fields, such as robotics and automation, machine tools, measuring equipment, navigation systems, the aircraft industry, military and medical equipment, woodworking, printing, textile machinery, glass treatment, and filling machines.", "Automation-Control": 0.7779442668, "Qwen2": "Yes"} {"id": "59317908", "revid": "7903804", "url": "https://en.wikipedia.org/wiki?curid=59317908", "title": "Triplet loss", "text": "Triplet loss is a loss function for machine learning algorithms where a reference input (called the anchor) is compared to a matching input (called the positive) and a non-matching input (called the negative). The distance from the anchor to the positive is minimized, and the distance from the anchor to the negative is maximized.\nAn early formulation equivalent to triplet loss was introduced (without the idea of using anchors) for metric learning from relative comparisons by M. Schultz and T. Joachims in 2003.\nBy enforcing the order of distances, triplet loss models produce embeddings in which pairs of samples with the same label are closer in distance than pairs with different labels. 
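This ordering is typically enforced with a hinge-style penalty of the form max(0, d(a,p) − d(a,n) + margin). A minimal pure-Python sketch (the margin value is illustrative):

```python
def euclidean(u, v):
    # Euclidean distance between two embedding vectors.
    return sum((x - y) ** 2 for x, y in zip(u, v)) ** 0.5

def triplet_loss(anchor, positive, negative, margin=1.0):
    # Hinge formulation: the loss is zero once the negative is farther
    # from the anchor than the positive by at least the margin.
    return max(0.0, euclidean(anchor, positive)
                    - euclidean(anchor, negative) + margin)

# Satisfied triplet: the negative is far enough away, so the loss is zero.
print(triplet_loss([0, 0], [0, 1], [0, 5]))  # -> 0.0
# Violated triplet: positive and negative are equally close to the anchor.
print(triplet_loss([0, 0], [0, 1], [0, 1]))  # -> 1.0
```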
Unlike t-SNE which preserves embedding orders via probability distributions, triplet loss works directly on embedded distances. Therefore, in its common implementation, it needs soft margin treatment with a slack variable formula_1 in its hinge loss-style formulation. It is often used for learning similarity for the purpose of learning embeddings, such as learning to rank, word embeddings, thought vectors, and metric learning.\nConsider the task of training a neural network to recognize faces (e.g. for admission to a high security zone). A classifier trained to classify an instance would have to be retrained every time a new person is added to the face database. This can be avoided by posing the problem as a similarity learning problem instead of a classification problem. Here the network is trained (using a contrastive loss) to output a distance which is small if the image belongs to a known person and large if the image belongs to an unknown person. However, if we want to output the closest images to a given image, we want to learn a ranking and not just a similarity. A triplet loss is used in this case.\nThe loss function can be described by means of the Euclidean distance function\nThis can then be used in a cost function, that is the sum of all losses, which can then be used for minimization of the posed optimization problem", "Automation-Control": 0.6997202635, "Qwen2": "Yes"} {"id": "59339981", "revid": "76", "url": "https://en.wikipedia.org/wiki?curid=59339981", "title": "Bowl Prechamber Ignition", "text": "Bowl Prechamber Ignition, abbreviated BPI, is a combustion process designed for Otto cycle engines running on an air-fuel mixture leaner than stochiometric formula_1. Its distinguishing feature is a special type of spark plug, capable of reliably igniting very lean air-fuel mixtures. This spark plug is called \"prechamber spark plug\". The ignition electrodes of this spark plug are housed in a perforated enclosure, the \"prechamber\". 
At the engine's compression stroke, some fuel (usually less than 5 % of the total injected fuel) is injected into the piston bowl; this fuel is then forced through the small holes into the prechamber due to the high pressure in the cylinder near top dead centre. Inside the prechamber spark plug, the air-fuel mixture is ignitable by the ignition spark. Flame jets occurring due to the small holes in the prechamber then ignite the air-fuel mixture in the main combustion chamber, that would not catch fire using a regular spark plug.", "Automation-Control": 0.6633177996, "Qwen2": "Yes"} {"id": "4628", "revid": "2810812", "url": "https://en.wikipedia.org/wiki?curid=4628", "title": "Bilinear transform", "text": "The bilinear transform (also known as Tustin's method, after Arnold Tustin) is used in digital signal processing and discrete-time control theory to transform continuous-time system representations to discrete-time and vice versa.\nThe bilinear transform is a special case of a conformal mapping (namely, a Möbius transformation), often used to convert a transfer function formula_1 of a linear, time-invariant (LTI) filter in the continuous-time domain (often called an analog filter) to a transfer function formula_2 of a linear, shift-invariant filter in the discrete-time domain (often called a digital filter although there are analog filters constructed with switched capacitors that are discrete-time filters). It maps positions on the formula_3 axis, formula_4, in the s-plane to the unit circle, formula_5, in the z-plane. 
Other bilinear transforms can be used to warp the frequency response of any discrete-time linear system (for example to approximate the non-linear frequency resolution of the human auditory system) and are implementable in the discrete domain by replacing a system's unit delays formula_6 with first order all-pass filters.\nThe transform preserves stability and maps every point of the frequency response of the continuous-time filter, formula_7 to a corresponding point in the frequency response of the discrete-time filter, formula_8 although to a somewhat different frequency, as shown in the Frequency warping section below. This means that for every feature that one sees in the frequency response of the analog filter, there is a corresponding feature, with identical gain and phase shift, in the frequency response of the digital filter but, perhaps, at a somewhat different frequency. This is barely noticeable at low frequencies but is quite evident at frequencies close to the Nyquist frequency.\nDiscrete-time approximation.\nThe bilinear transform is a first-order Padé approximant of the natural logarithm function that is an exact mapping of the \"z\"-plane to the \"s\"-plane. When the Laplace transform is performed on a discrete-time signal (with each element of the discrete-time sequence attached to a correspondingly delayed unit impulse), the result is precisely the Z transform of the discrete-time sequence with the substitution of\nwhere formula_10 is the numerical integration step size of the trapezoidal rule used in the bilinear transform derivation; or, in other words, the sampling period. 
The above bilinear approximation can be solved for formula_11 or a similar approximation for formula_12 can be performed.\nThe inverse of this mapping (and its first-order bilinear approximation) is\nThe bilinear transform essentially uses this first order approximation and substitutes into the continuous-time transfer function, formula_1\nThat is\nStability and minimum-phase property preserved.\nA continuous-time causal filter is stable if the poles of its transfer function fall in the left half of the complex s-plane. A discrete-time causal filter is stable if the poles of its transfer function fall inside the unit circle in the complex z-plane. The bilinear transform maps the left half of the complex s-plane to the interior of the unit circle in the z-plane. Thus, filters designed in the continuous-time domain that are stable are converted to filters in the discrete-time domain that preserve that stability.\nLikewise, a continuous-time filter is minimum-phase if the zeros of its transfer function fall in the left half of the complex s-plane. A discrete-time filter is minimum-phase if the zeros of its transfer function fall inside the unit circle in the complex z-plane. Then the same mapping property assures that continuous-time filters that are minimum-phase are converted to discrete-time filters that preserve that property of being minimum-phase.\nTransformation of a General LTI System.\nA general LTI system has the transfer function\nformula_17\nThe order of the transfer function is the greater of and (in practice this is most likely as the transfer function must be proper for the system to be stable). 
Applying the bilinear transform\nformula_18\nwhere is defined as either or otherwise if using frequency warping, gives\nformula_19\nMultiplying the numerator and denominator by the largest power of present, gives\nformula_20\nIt can be seen here that after the transformation, the degree of the numerator and denominator are both .\nConsider then the pole-zero form of the continuous-time transfer function\nformula_21\nThe roots of the numerator and denominator polynomials, and , are the zeros and poles of the system. The bilinear transform is a one-to-one mapping, hence these can be transformed to the z-domain using\nformula_22\nyielding some of the discretized transfer function's zeros and poles and \nformula_23\nAs described above, the degree of the numerator and denominator are now both , in other words there is now an equal number of zeros and poles. The multiplication by means the additional zeros or poles are\nformula_24\nGiven the full set of zeros and poles, the z-domain transfer function is then\nformula_25\nExample.\nAs an example take a simple low-pass RC filter. This continuous-time filter has a transfer function\nIf we wish to implement this filter as a digital filter, we can apply the bilinear transform by substituting for formula_27 the formula above; after some reworking, we get the following filter representation:\nThe coefficients of the denominator are the 'feed-backward' coefficients and the coefficients of the numerator are the 'feed-forward' coefficients used to implement a real-time digital filter.\nTransformation for a general first-order continuous-time filter.\nIt is possible to relate the coefficients of a continuous-time, analog filter with those of a similar discrete-time digital filter created through the bilinear transform process. 
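The RC example above can be worked through by hand: substituting s = (2/T)(z − 1)/(z + 1) into H(s) = 1/(RCs + 1) and clearing fractions yields a first-order digital filter. A Python sketch of that calculation (component values are illustrative):

```python
def bilinear_rc_lowpass(R, C, T):
    """Discretize H(s) = 1/(R*C*s + 1) with the bilinear transform.

    Substituting s = (2/T)(z - 1)/(z + 1) and clearing fractions gives
        H(z) = T(z + 1) / ((2RC + T) z + (T - 2RC)),
    returned here as normalized feed-forward (b) and feed-back (a)
    coefficients of a first-order difference equation.
    """
    k = 2 * R * C + T
    b = [T / k, T / k]               # numerator: b0 + b1 z^-1
    a = [1.0, (T - 2 * R * C) / k]   # denominator: 1 + a1 z^-1
    return b, a

b, a = bilinear_rc_lowpass(R=1e3, C=1e-6, T=1e-4)
# Evaluating at z = 1 gives the DC gain, which matches the analog
# filter's gain of 1 at s = 0 (up to rounding).
print((b[0] + b[1]) / (a[0] + a[1]))
```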
Transforming a general, first-order continuous-time filter with the given transfer function\nusing the bilinear transform (without prewarping any frequency specification) requires the substitution of\nwhere\nHowever, if the frequency warping compensation as described below is used in the bilinear transform, so that both analog and digital filter gain and phase agree at frequency formula_31, then\nThis results in a discrete-time digital filter with coefficients expressed in terms of the coefficients of the original continuous time filter:\nNormally the constant term in the denominator must be normalized to 1 before deriving the corresponding difference equation. This results in\nThe difference equation (using the Direct form I) is\nGeneral second-order biquad transformation.\nA similar process can be used for a general second-order filter with the given transfer function\nThis results in a discrete-time digital biquad filter with coefficients expressed in terms of the coefficients of the original continuous time filter:\nAgain, the constant term in the denominator is generally normalized to 1 before deriving the corresponding difference equation. This results in\nThe difference equation (using the Direct form I) is\nFrequency warping.\nTo determine the frequency response of a continuous-time filter, the transfer function formula_1 is evaluated at formula_41 which is on the formula_3 axis. Likewise, to determine the frequency response of a discrete-time filter, the transfer function formula_43 is evaluated at formula_44 which is on the unit circle, formula_5. The bilinear transform maps the formula_3 axis of the \"s\"-plane (of which is the domain of formula_1) to the unit circle of the \"z\"-plane, formula_5 (which is the domain of formula_43), but it is not the same mapping formula_50 which also maps the formula_3 axis to the unit circle. 
When the actual frequency of formula_52 is input to the discrete-time filter designed by use of the bilinear transform, then it is desired to know at what frequency, formula_53, for the continuous-time filter that this formula_52 is mapped to.\nThis shows that every point on the unit circle in the discrete-time filter z-plane, formula_56 is mapped to a point on the formula_57 axis on the continuous-time filter s-plane, formula_58. That is, the discrete-time to continuous-time frequency mapping of the bilinear transform is\nand the inverse mapping is\nThe discrete-time filter behaves at frequency formula_61 the same way that the continuous-time filter behaves at frequency formula_62. Specifically, the gain and phase shift that the discrete-time filter has at frequency formula_61 is the same gain and phase shift that the continuous-time filter has at frequency formula_64. This means that every feature, every \"bump\" that is visible in the frequency response of the continuous-time filter is also visible in the discrete-time filter, but at a different frequency. For low frequencies (that is, when formula_65 or formula_66), then the features are mapped to a \"slightly\" different frequency; formula_67.\nOne can see that the entire continuous frequency range\nis mapped onto the fundamental frequency interval\nThe continuous-time filter frequency formula_70 corresponds to the discrete-time filter frequency formula_71 and the continuous-time filter frequency formula_72 correspond to the discrete-time filter frequency formula_73\nOne can also see that there is a nonlinear relationship between formula_53 and formula_75 This effect of the bilinear transform is called frequency warping. The continuous-time filter can be designed to compensate for this frequency warping by setting formula_59 for every frequency specification that the designer has control over (such as corner frequency or center frequency). 
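The warping relation is easy to evaluate numerically. A short Python sketch of the mapping ω_a = (2/T)·tan(ω_d·T/2) and its inverse (the sampling period is chosen arbitrarily):

```python
import math

def warp_to_analog(w_d, T):
    # Continuous-time frequency that the bilinear transform maps
    # onto the discrete-time frequency w_d.
    return (2.0 / T) * math.tan(w_d * T / 2.0)

def warp_to_digital(w_a, T):
    # Inverse mapping: discrete-time frequency hit by analog frequency w_a.
    return (2.0 / T) * math.atan(w_a * T / 2.0)

T = 1e-3                  # 1 kHz sampling period, arbitrary
w_d = 2 * math.pi * 100   # a 100 Hz feature in the digital filter
w_a = warp_to_analog(w_d, T)
# The round trip recovers the digital frequency; near DC, w_a is close
# to w_d, and the two diverge as w_d approaches the Nyquist frequency.
print(abs(warp_to_digital(w_a, T) - w_d) < 1e-9)  # -> True
```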
This is called pre-warping the filter design.\nIt is possible, however, to compensate for the frequency warping by pre-warping a frequency specification formula_77 (usually a resonant frequency or the frequency of the most significant feature of the frequency response) of the continuous-time system. These pre-warped specifications may then be used in the bilinear transform to obtain the desired discrete-time system. When designing a digital filter as an approximation of a continuous time filter, the frequency response (both amplitude and phase) of the digital filter can be made to match the frequency response of the continuous filter at a specified frequency formula_77, as well as matching at DC, if the following transform is substituted into the continuous filter transfer function. This is a modified version of Tustin's transform shown above.\nHowever, note that this transform becomes the original transform\nas formula_81.\nThe main advantage of the warping phenomenon is the absence of aliasing distortion of the frequency response characteristic, such as observed with Impulse invariance.", "Automation-Control": 0.8241970539, "Qwen2": "Yes"} {"id": "9519121", "revid": "7903804", "url": "https://en.wikipedia.org/wiki?curid=9519121", "title": "Quadratically constrained quadratic program", "text": "In mathematical optimization, a quadratically constrained quadratic program (QCQP) is an optimization problem in which both the objective function and the constraints are quadratic functions. It has the form\nwhere \"P\"0, …, \"P\"\"m\" are \"n\"-by-\"n\" matrices and \"x\" ∈ R\"n\" is the optimization variable.\nIf \"P\"0, …, \"P\"\"m\" are all positive semidefinite, then the problem is convex. If these matrices are neither positive nor negative semidefinite, the problem is non-convex. If \"P\"1, … ,\"P\"\"m\" are all zero, then the constraints are in fact linear and the problem is a quadratic program.\nHardness.\nSolving the general case is an NP-hard problem. 
To see this, note that the two constraints \"x\"1(\"x\"1 − 1) ≤ 0 and \"x\"1(\"x\"1 − 1) ≥ 0 are equivalent to the constraint \"x\"1(\"x\"1 − 1) = 0, which is in turn equivalent to the constraint \"x\"1 ∈ {0, 1}. Hence, any 0–1 integer program (in which all variables have to be either 0 or 1) can be formulated as a quadratically constrained quadratic program. Since 0–1 integer programming is NP-hard in general, QCQP is also NP-hard.\nRelaxation.\nThere are two main relaxations of QCQP: using semidefinite programming (SDP), and using the reformulation-linearization technique (RLT). For some classes of QCQP problems (precisely, QCQPs with zero diagonal elements in the data matrices), second-order cone programming (SOCP) and linear programming (LP) relaxations providing the same objective value as the SDP relaxation are available.\nNonconvex QCQPs with non-positive off-diagonal elements can be exactly solved by the SDP or SOCP relaxations, and there are polynomial-time-checkable sufficient conditions for SDP relaxations of general QCQPs to be exact. Moreover, it was shown that a class of random general QCQPs has exact semidefinite relaxations with high probability as long as the number of constraints grows no faster than a fixed polynomial in the number of variables.\nSemidefinite programming.\nWhen \"P\"0, …, \"P\"\"m\" are all positive-definite matrices, the problem is convex and can be readily solved using interior point methods, as done with semidefinite programming.\nExample.\nMax Cut is a problem in graph theory, which is NP-hard. Given a graph, the problem is to divide the vertices in two sets, so that as many edges as possible go from one set to the other. 
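The Max Cut objective just described is a quadratic function of 0-1 variables: an edge (i, j) contributes (x_i − x_j)^2, which equals 1 exactly when its endpoints are on different sides of the cut. A brute-force Python sketch on a toy graph (the graph is chosen arbitrarily; exhaustive search is feasible only for tiny instances):

```python
from itertools import product

def max_cut_brute_force(n, edges):
    # Maximize the quadratic objective sum over edges of (x_i - x_j)^2
    # over all 0-1 assignments of the n vertices.
    best_value, best_x = -1, None
    for x in product((0, 1), repeat=n):
        value = sum((x[i] - x[j]) ** 2 for i, j in edges)
        if value > best_value:
            best_value, best_x = value, x
    return best_value, best_x

# A 4-cycle: the maximum cut separates alternating vertices and
# takes all 4 edges.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
value, x = max_cut_brute_force(4, edges)
print(value)  # -> 4
```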
Max Cut can be formulated as a QCQP, and SDP relaxation of the dual provides good lower bounds.", "Automation-Control": 0.9951612949, "Qwen2": "Yes"} {"id": "4631993", "revid": "27015025", "url": "https://en.wikipedia.org/wiki?curid=4631993", "title": "SEMCI", "text": "Single Entry Multiple Company Interface (SEMCI) is a computer system based on service-oriented architecture (SOA) that is used to submit the same information to multiple companies. SEMCI is used by insurance agents to obtain insurance quotes from several insurance companies at once. \nSEMCI is an interface which connects the software of the agent to the software of multiple companies so that an inquiry can go to all the companies at once. Previously, the agent would need to send the inquiry to each company individually.", "Automation-Control": 0.8873412013, "Qwen2": "Yes"} {"id": "43125367", "revid": "20254111", "url": "https://en.wikipedia.org/wiki?curid=43125367", "title": "Predictive control of switching power converters", "text": "Predictive controllers rely on optimum control systems theory and aim to solve a cost function minimization problem. Predictive controllers are relatively easy to implement numerically, but electronic power converters are non-linear time-varying dynamic systems, so a different approach to predictive control must be taken.\nPrinciples of non-linear predictive optimum control.\nThe first step in designing a predictive controller is to derive a detailed direct dynamic model (including non-linearities) of the switching power converter. 
This model must contain enough detail of the converter dynamics to allow, from initial conditions, a real-time forecast with negligible error of the future behavior of the converter.\nSliding mode control of switching power converters chooses a vector to reach sliding mode as fast as possible (high switching frequency).\nIt would be better to choose a vector that ensures zero error at the end of the sampling period Δt. To find such a vector, a calculation can be made in advance (prediction):\nThe converter has a finite number of vectors (states) and is usually non-linear: one way is to try all vectors to find the one that minimizes the control errors, prior to the application of that vector to the converter.", "Automation-Control": 0.9999108911, "Qwen2": "Yes"} {"id": "43154679", "revid": "28481209", "url": "https://en.wikipedia.org/wiki?curid=43154679", "title": "Factory automation infrastructure", "text": "Factory automation infrastructure describes the process of incorporating automation into the manufacturing environment and the processing of input goods into final products. \nThe manufacturing environment is defined by its ability to manufacture and/or assemble goods by machines, integrated assembly lines, and robotic arms. Automated environments are also defined by their coordination with (and usually their systematic integration with) the required automatic equipment to form a complete system. \nFactory automation intends to decrease the risks associated with laborious and dangerous work faced by human workers. Such a system is essentially a solution for automating the production process of an intended output or final product.\nAutomation.\nAutomation has produced sophisticated parts with similar or higher output quality and minor quality fluctuation. It also can help cut overall manufacturing costs and create safer working environments for workers. 
\nThe use of automation in manufacturing started by using technologies such as pneumatic and hydraulic systems in applications where their mechanical advantages could be used to raise output quality and efficiency in production. Complex and highly integrated systems have since evolved, composed of many different technologies and innovative procedures controlled under High Language programming environments with sophisticated operation drivers. These drivers often are running languages that support 6, 7, and 8-axis controls for sophisticated robotics.\nRobotic arm.\nA robotic arm is a type of mechanical arm, usually programmable, with functions similar to a human arm; the arm may be the total of the mechanism or may be part of a more complex robot. The links of such a manipulator are connected by joints allowing either rotational motion (such as in an articulated robot) or transnational (linear) displacement. The links of the manipulator can be considered to form a kinematic chain. The terminus of the kinematic chain of the manipulator is called the end effector and is analogous to the human hand.\nAdvantages and disadvantages.\nThe main advantages of automation are:\nThe following methods are often employed to improve productivity, quality, or robustness.\nThe main disadvantages of automation are:\nExternal links.\nkinematic chain. ", "Automation-Control": 0.9425875545, "Qwen2": "Yes"} {"id": "67995256", "revid": "28481209", "url": "https://en.wikipedia.org/wiki?curid=67995256", "title": "Identical-machines scheduling", "text": "Identical-machines scheduling is an optimization problem in computer science and operations research. We are given \"n\" jobs \"J\"1, \"J\"2, ..., \"Jn\" of varying processing times, which need to be scheduled on \"m\" identical machines, such that a certain objective function is optimized, for example, the makespan is minimized. 
\nIdentical machine scheduling is a special case of uniform machine scheduling, which is itself a special case of optimal job scheduling. In the general case, the processing time of each job may be different on different machines; in the case of identical machine scheduling, the processing time of each job is the same on each machine. Therefore, identical machine scheduling is equivalent to multiway number partitioning. A special case of identical machine scheduling is single-machine scheduling.\nIn the standard three-field notation for optimal job scheduling problems, the identical-machines variant is denoted by P in the first field. For example, \"P||formula_1\" is an identical machine scheduling problem with no constraints, where the goal is to minimize the maximum completion time. \nIn some variants of the problem, instead of minimizing the \"maximum\" completion time, it is desired to minimize the \"average\" completion time (averaged over all \"n\" jobs); it is denoted by P||formula_2. More generally, when some jobs are more important than others, it may be desired to minimize a \"weighted average\" of the completion time, where each job has a different weight. This is denoted by P||formula_3. \nAlgorithms.\nMinimizing average and weighted-average completion time.\nMinimizing the \"average\" completion time (P||formula_2) can be done in polynomial time. The SPT algorithm (Shortest Processing Time First) sorts the jobs by their length, shortest first, and then assigns them to the processor with the earliest end time so far. It runs in time O(\"n\" log \"n\"), and minimizes the average completion time on identical machines, P||formula_2.\nMinimizing the \"weighted average\" completion time is NP-hard even on identical machines, by reduction from the knapsack problem. 
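The SPT rule just described is simple to implement; here is a minimal sketch (the function name is my own) that tracks machine end times in a heap and returns the total completion time, whose minimization is equivalent to minimizing the average.

```python
import heapq

def spt_total_completion_time(jobs, m):
    """Shortest Processing Time first: sort the jobs ascending and always
    give the next job to the machine that becomes free earliest.
    Returns the sum of completion times (n times the average)."""
    finish_times = [0] * m          # min-heap of machine end times
    heapq.heapify(finish_times)
    total = 0
    for p in sorted(jobs):
        t = heapq.heappop(finish_times) + p   # this job completes at time t
        total += t
        heapq.heappush(finish_times, t)
    return total

# Two machines, jobs [3, 1, 4, 2]: completions are 1, 2, 1+3=4, 2+4=6.
print(spt_total_completion_time([3, 1, 4, 2], 2))  # -> 13
```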
It is NP-hard even if the number of machines is fixed and at least 2, by reduction from the partition problem.\nSahni presents an exponential-time algorithm and a polynomial-time approximation scheme for solving both these NP-hard problems on identical machines:\nMinimizing the maximum completion time (makespan).\nMinimizing the \"maximum\" completion time (P||formula_1) is NP-hard even for \"identical\" machines, by reduction from the partition problem. Many exact and approximation algorithms are known.\nGraham proved that:\nCoffman, Garey and Johnson presented a different algorithm called the multifit algorithm, using techniques from bin packing, which has an approximation factor of 13/11≈1.182.\nHuang and Lu presented a simple polynomial-time algorithm that attains an 11/9≈1.222 approximation in time O(\"m\" log \"m\" + \"n\"), through the more general problem of \"maximin-share allocation of chores\".\nSahni presented a PTAS that attains (1+ε)OPT in time formula_9. It is an FPTAS if \"m\" is fixed. For m=2, the run-time improves to formula_10. The algorithm uses a technique called \"interval partitioning\".\nHochbaum and Shmoys presented several approximation algorithms for any number of identical machines (even when the number of machines is not fixed):\nLeung improved the run-time of this algorithm to formula_14.\nMaximizing the minimum completion time.\nMaximizing the minimum completion time (P||formula_15) is applicable when the \"jobs\" are actually spare parts that are required to keep the machines running, and they have different life-times. The goal is to keep machines running for as long as possible. The LPT algorithm attains at least formula_16 of the optimum. \nWoeginger presented a PTAS that attains an approximation factor of formula_17 in time formula_18, where formula_19 is a huge constant that is exponential in the required approximation factor ε. 
The algorithm uses Lenstra's algorithm for integer linear programming.\nGeneral objective functions.\nAlon, Azar, Woeginger and Yadid consider a more general objective function. Given a positive real function \"f\", which depends only on the completion times \"Ci\", they consider the objectives of minimizing formula_20, minimizing formula_21, maximizing formula_20, and maximizing formula_23. They prove that, if \"f\" is non-negative, convex, and satisfies a strong continuity assumption that they call \"F*\", then both minimization problems have a PTAS. Similarly, if \"f\" is non-negative, concave, and satisfies F*, then both maximization problems have a PTAS. In both cases, the run-time of the PTAS is O(\"n\"), but with constants that are exponential in 1/\"ε\".", "Automation-Control": 0.8123072982, "Qwen2": "Yes"} {"id": "4911272", "revid": "19921271", "url": "https://en.wikipedia.org/wiki?curid=4911272", "title": "Disco Corporation", "text": "DISCO Corporation is a Japanese precision tools maker, especially for the semiconductor production industry.\nThe company makes dicing saws and laser saws to cut semiconductor silicon wafers and other materials; grinders to process silicon and compound semiconductor wafers to ultra-thin levels; and polishing machines to remove the grinding damage layer from the wafer back-side and to increase chip strength. \nHistory.\nThe company was founded as Daiichi-Seitosho in May 1937, as an industrial abrasive wheel manufacturer.\nAfter World War II, Japan faced a construction boom, which also helped DISCO to boost its sales. The company's grinder discs were in high demand from utility companies, which needed them to manufacture watt-meters.\nIn December 1968 the company developed and released an ultra-thin resinoid cutting wheel, \"Microncut\". The wheel contained diamond powder and as a result was capable of making the sharp, precise cuts demanded in the semiconductor manufacturing process. 
Since there were no cutting machines available on the market on which ultra-thin precision wheels could be mounted and run, DISCO decided to develop its own machine in 1975. The cutting machine, DAD-2h, received instant recognition from semiconductor companies, including Texas Instruments.\nThe company adopted the name of DISCO Corporation in May 1977, was listed with the Japan Securities Dealers' Association in October 1989, and entered the First Section of the Tokyo Stock Exchange in December 1999.", "Automation-Control": 0.7607566118, "Qwen2": "Yes"} {"id": "13187972", "revid": "37080642", "url": "https://en.wikipedia.org/wiki?curid=13187972", "title": "RoboCup Small Size League", "text": "The RoboCup Small Size League (SSL) is one of the RoboCup soccer leagues.\nOld Format.\nTwo teams of six robots, each limited to an 18 cm diameter and a 15 cm height, play soccer with an orange golf ball. They are identified and tracked by four overhead cameras connected to an off-field computer. The field size is 9 meters × 6 meters. The robots' and ball's status, including position and ID, is sent to the teams' computers. Their AI software sends commands to the robots based on the vision data.\nNew Format.\nAs of 2018, there are now two divisions: the B division, which continued to use the same parameters as before (six robots, a 9 × 6 meter field, etc.), and the A division, which changed a lot of parameters. 
This change was introduced to improve the tournament experience by making the more experienced teams face each other while letting the newer teams improve their algorithms in more balanced face-offs.", "Automation-Control": 0.8760839701, "Qwen2": "Yes"} {"id": "57116400", "revid": "5042921", "url": "https://en.wikipedia.org/wiki?curid=57116400", "title": "Pyragas method", "text": "In the mathematics of chaotic dynamical systems, in the Pyragas method of stabilizing a periodic orbit, an appropriate continuous controlling signal is injected into the system, whose intensity is nearly zero as the system evolves close to the desired periodic orbit but increases when it drifts away from the desired orbit. Both the Pyragas and OGY (Ott, Grebogi and Yorke) methods are part of a general class of methods called \"closed loop\" or \"feedback\" methods which can be applied based on knowledge of the system obtained solely through observing the behavior of the system as a whole over a suitable period of time.\nThe method was proposed by Lithuanian physicist Kęstutis Pyragas.", "Automation-Control": 0.9773364067, "Qwen2": "Yes"} {"id": "8017558", "revid": "42522270", "url": "https://en.wikipedia.org/wiki?curid=8017558", "title": "Continuous-flow manufacturing", "text": "Continuous-flow manufacturing, or repetitive-flow manufacturing, is an approach to discrete manufacturing that contrasts with batch production. It is associated with a just-in-time and kanban production approach, and calls for ongoing examination and improvement efforts, which ultimately require integration of all elements of the production system. The goal is an optimally balanced production line with little waste, the lowest possible cost, and on-time, defect-free production.\nThis strategy is typically applied in discrete manufacturing as an attempt to handle production volumes comprising discrete units of product in a flow which is more naturally found in process manufacturing. 
The basic fact is that in most cases, discrete units of a solid product cannot be handled in the same way as continuous quantities of liquid, gas or powder.\nDiscrete manufacturing is more likely to be performed in batches of product units that are routed from process to process in the factory. Each process may add value to the batch during a run-time or work-time. There is usually some time spent waiting for the process during a queue-time or wait-time. The larger the batch, the longer each unit has to wait for the rest of the batch to be completed, before it can go forward to the next process. This queue-time is waste, \"Muda\", and represents time lost that is not value-added in the eyes of the customer. This waste is one of the most important elements targeted for reduction and elimination in lean manufacturing.\nReducing the batch size in discrete manufacturing is therefore a desirable goal: it improves the speed of response to the customer, whilst improving the ratio of value-added to non-value-added work. However, it should be balanced against the finite capacity of resources at the value-adding processes. Capacity is consumed by changeover whenever a process is required to perform work on a different part or product model than the preceding one. Time consumed in changeover is also considered waste, and it reduces the amount of resource capacity that is available to perform value-adding work. Reducing batch sizes can also increase handling time, risk and complexity in planning and controlling production.\nThe paradigm's aim is to achieve single-piece flow, where a single discrete unit of product flows from process to process. In effect, the batch quantity is one. If there is no change in part or product model, then this objective needs to be balanced against the additional handling time, and the work-centres that perform the process will typically have to be arranged in close proximity to one another in a flow-line. 
This is often a characteristic of repetitive-flow manufacturing, and most manual assembly work is performed this way in the modern factory.\nIf there is a change in part or product model, then the process engineer should also consider balancing the changeover time with the run-time. If the changeover time is long, as it might be on a machine, batch size reduction is typically preceded by setup reduction techniques such as Single-Minute Exchange of Die.\nOne methodology for repetitive-flow manufacturing is Demand Flow Technology, which combines the principles of repetitive-flow and demand-driven manufacturing. The production planning and control is linked to a pull signal that is triggered from a customer order or consumption of finished goods stock. A pull signal can also link a process to the down-stream, and synchronize the flow to the demand of the customer.", "Automation-Control": 0.8730199933, "Qwen2": "Yes"} {"id": "7817272", "revid": "7903804", "url": "https://en.wikipedia.org/wiki?curid=7817272", "title": "Hirschberg–Sinclair algorithm", "text": "The Hirschberg–Sinclair algorithm is a distributed algorithm designed for the leader election problem in a synchronous ring network. It is named after its inventors, Dan Hirschberg and J. B. Sinclair.\nThe algorithm requires the use of unique IDs (UID) for each process. The algorithm works in phases, and in each phase a process sends its UID out in both directions. The messages go out a distance of 2^(phase number) hops and then head back to the originating process. While the messages are heading \"out\", each receiving process compares the incoming UID to its own. If the incoming UID is greater than its own UID, it passes the message on; otherwise, if the incoming UID is less than its own UID, it does not pass the information on. At the end of a phase, a process determines whether it will send out messages in the next phase by whether it received both of its own messages back. 
Phases continue until a process receives both of its outgoing messages back from both of its neighbors. At this point the process knows it has the largest UID in the ring and declares itself the leader.", "Automation-Control": 0.9993590713, "Qwen2": "Yes"} {"id": "12480136", "revid": "43051325", "url": "https://en.wikipedia.org/wiki?curid=12480136", "title": "Automation Studio", "text": "Automation Studio is a circuit design, simulation and project documentation software for fluid power systems and electrical projects conceived by Famic Technologies Inc. It is used for CAD, maintenance, and training purposes, mainly by engineers, trainers, and service and maintenance personnel. Automation Studio can be applied in the design, training and troubleshooting of hydraulics, pneumatics, HMI, and electrical control systems.\nTwo versions of the software exist:\nThe educational version of Automation Studio is a limited-features version used by engineering and technical schools to train students who are future engineers or technicians. The software is designed for schools that teach technical subjects such as industrial technologies, mechatronics, electromechanical technologies, electrical & electronics, automation, and maintenance. Modeling and simulation are used to illustrate theoretical aspects.\nLibraries.\nAutomation Studio has various symbol libraries. All libraries follow standards such as ISO, IEC, JIC and NEMA.\nLibraries features.\nAutomation Studio is used as a design and simulation tool in the fields of hydraulics, pneumatics, electrical and automation.\nAutomation Studio Hydraulics.\nAutomation Studio Hydraulics’ functions are used for hydraulic system engineering purposes. 
Automation Studio Hydraulics includes a specific symbol library and uses modeling techniques such as Bernoulli's law and the gradient method.\nAutomation Studio Hydraulics is the main aspect of Automation Studio: it is used to conceive and to test hydraulic systems while taking into account thermal parameters. It displays inside views of the elements in the schematics. The Automation Studio library includes additional elements such as commands and control devices (PID controller, CAN bus, and servo-direction).\nFluid power is one of the central elements in such simulation.\nAutomation Studio Pneumatics.\nAutomation Studio Pneumatics is similar to Automation Studio Hydraulics, but the simulation is done for air rather than for hydraulic fluid. This library, like Automation Studio Hydraulics, is used to design and test models.\nThus, the simulation elements that are used are not the same as those in the hydraulics library.\nAutomation Studio Electrotechnical.\nThe electrotechnical module in Automation Studio is used for design, simulation, validation, documentation and troubleshooting of electrical diagrams. It includes multi-line and one-line representation according to the users' choice. The different aspects of the IEC and NEMA international standards are respected: components’ identification, symbols, ratings, port names, etc.\nThe electrotechnical module works simultaneously with the fluid power technologies, which allows users to design and simulate complete systems.", "Automation-Control": 0.9255579114, "Qwen2": "Yes"} {"id": "74607085", "revid": "10951369", "url": "https://en.wikipedia.org/wiki?curid=74607085", "title": "Conley's fundamental theorem of dynamical systems", "text": "Conley's fundamental theorem of dynamical systems or Conley's decomposition theorem states that every flow of a dynamical system with compact phase portrait admits a decomposition into a chain-recurrent part and a gradient-like flow part. 
Because it gives a concise yet complete description of many dynamical systems, Conley's theorem is also known as the fundamental theorem of dynamical systems. Conley's fundamental theorem has been extended to systems with non-compact phase portraits and also to hybrid dynamical systems.\nComplete Lyapunov functions.\nConley's decomposition is characterized by a function known as a complete Lyapunov function. Unlike traditional Lyapunov functions, which are used to assert the stability of an equilibrium point (or a fixed point) and can be defined only on the basin of attraction of the corresponding attractor, complete Lyapunov functions must be defined on the whole phase portrait.\nIn the particular case of an autonomous differential equation defined on a compact set \"X\", a complete Lyapunov function \"V\" from \"X\" to R is a real-valued function on \"X\" satisfying:\nConley's theorem states that a continuous complete Lyapunov function exists for any differential equation on a compact metric space. Similar results hold for discrete-time dynamical systems.", "Automation-Control": 0.9999958277, "Qwen2": "Yes"} {"id": "61525719", "revid": "7903804", "url": "https://en.wikipedia.org/wiki?curid=61525719", "title": "Control (optimal control theory)", "text": "In optimal control theory, a control is a variable chosen by the controller or agent to manipulate state variables, similar to an actual control valve. Unlike the state variable, it does not have a predetermined equation of motion. The goal of optimal control theory is to find some sequence of controls (within an admissible set) to achieve an optimal path for the state variables (with respect to a loss function).\nA control given as a function of time only is referred to as an \"open-loop control\". 
In contrast, a control that gives the optimal solution over some remaining period as a function of the state variable at the beginning of that period is called a \"closed-loop control\".", "Automation-Control": 0.9992206693, "Qwen2": "Yes"} {"id": "61539873", "revid": "1165951887", "url": "https://en.wikipedia.org/wiki?curid=61539873", "title": "Diamond norm", "text": "In quantum information, the diamond norm, also known as the completely bounded trace norm, is a norm on the space of quantum operations, or more generally on any linear map that acts on complex matrices. Its main application is to measure the \"single use distinguishability\" of two quantum channels. If an agent is randomly given one of two quantum channels, permitted to pass one state through the unknown channel, and then measures the state in an attempt to determine which operation they were given, then their maximal probability of success is determined by the diamond norm of the difference of the two channels.\nAlthough the diamond norm can be efficiently computed via semidefinite programming, it is in general difficult to obtain analytical expressions, and those are known only for a few particular cases.\nDefinition.\nThe diamond norm is the trace norm of the output of a trivial extension of a linear map, maximized over all possible inputs with trace norm at most one. More precisely, let formula_1 be a linear transformation, where formula_2 denotes the formula_3 complex matrices, let formula_4 be the identity map on formula_3 matrices, and formula_6. 
Then the diamond norm of formula_7 is given by\nwhere formula_9 denotes the trace norm.\nThe diamond norm induces the diamond distance, which in the particular case of completely positive, trace non-increasing maps formula_10 is given by\nwhere the maximization is done over all density matrices formula_12 of dimension formula_13.\nDiscrimination of quantum channels.\nIn the task of single-shot discrimination of quantum channels, an agent is given one of the channels formula_10 with probabilities \"p\" and \"1-p\", respectively, and attempts to guess which channel they received by preparing a state formula_12, passing it through the unknown channel, and making a measurement on the resulting state. The maximal probability that the agent guesses correctly is given by\nSemidefinite programming formulation.\nThe diamond norm can be efficiently calculated via semidefinite programming. Let formula_17 be a linear map, as before, and formula_18 its Choi state, defined as \nThe diamond norm of formula_7 is then given by the solution of the following semidefinite programming problem:\nwhere formula_22 are Hermitian matrices and formula_23 is the usual spectral norm.", "Automation-Control": 0.6098927259, "Qwen2": "Yes"} {"id": "31945943", "revid": "28481209", "url": "https://en.wikipedia.org/wiki?curid=31945943", "title": "PhotoModeler", "text": "PhotoModeler is a software application that performs image-based modeling and close range photogrammetry – producing 3D models and measurements from photography. The software is used for close-range, aerial and UAV photogrammetry.\nClose Range Photogrammetry (CRP) can mean photographs taken from the ground with a handheld camera, or taken from a UAV/drone at a relatively low altitude. 
PhotoModeler and CRP are used for performing measurement and modeling in agriculture, archaeology, architecture, biology, engineering, fabrication, film production, forensics, mining, stockpile volumes, etc.\nHow PhotoModeler works.\n1) Take photos from different angles: ensure that enough images are taken to capture the entire object. The number of required images varies according to the size and complexity of the object being captured. \n2) Load images into PhotoModeler software: upload the images onto the computer. \n3) Choose the method: choose one of the four following options.\n4) Review, measure, and export\nApplications.\nSome of the applications of PhotoModeler are:\nNotes and references.\nNotes", "Automation-Control": 0.644770503, "Qwen2": "Yes"} {"id": "4850396", "revid": "1120935820", "url": "https://en.wikipedia.org/wiki?curid=4850396", "title": "Netsh", "text": "In computing, codice_1, or network shell, is a command-line utility included in Microsoft's Windows NT line of operating systems beginning with Windows 2000. 
It allows local or remote configuration of network devices such as the interface.\nOverview.\nA common use of codice_1 is to reset the TCP/IP stack to default, known-good parameters, a task that in Windows 98 required reinstallation of the TCP/IP adapter.\ncodice_1, among many other things, also allows the user to change the IP address on their machine.\nStarting from Windows Vista, one can also edit wireless settings (for example, SSID) using codice_1.\ncodice_1 can also be used to read information from the IPv6 stack.\nThe command codice_5 can be used to reset the TCP/IP stack when there are problems communicating with a networked device.", "Automation-Control": 0.8068981171, "Qwen2": "Yes"} {"id": "2361092", "revid": "38448542", "url": "https://en.wikipedia.org/wiki?curid=2361092", "title": "Reconfigurable manufacturing system", "text": "A reconfigurable manufacturing system (RMS) is one designed at the outset for rapid change in its structure, as well as its hardware and software components, in order to quickly adjust its production capacity and functionality within a part family in response to sudden market changes or intrinsic system change.\nFrom 1996 to 2007 Yoram Koren received an NSF grant of $32.5 million to develop the RMS science base and its software and hardware tools, which were implemented in automotive, aerospace, and engine factories.\nThe term reconfigurability in manufacturing was likely coined by Kusiak and Lee.\nThe RMS, as well as one of its components—the reconfigurable machine tool (RMT)—were invented in 1998 in the Engineering Research Center for Reconfigurable Manufacturing Systems (ERC/RMS) at the University of Michigan College of Engineering. The RMS goal is summarized by the statement: \"Exactly the capacity and functionality needed, exactly when needed\".\nIdeal reconfigurable manufacturing systems possess six core RMS characteristics: modularity, integrability, customized flexibility, scalability, convertibility, and diagnosability. 
A typical RMS will have several of these characteristics, though not necessarily all. When possessing these characteristics, RMS increases the speed of responsiveness of manufacturing systems to unpredicted events, such as sudden market demand changes or unexpected machine failures. The RMS facilitates a quick production launch of new products, and allows for adjustment of production quantities that might unexpectedly vary. The ideal reconfigurable system provides exactly the functionality and production capacity needed, and can be economically adjusted exactly when needed. These systems are designed and operated according to Yoram Koren's RMS principles.\nThe components of RMS are CNC machines, reconfigurable machine tools, reconfigurable inspection machines and material transport systems (such as gantries and conveyors) that connect the machines to form the system. Different arrangements and configurations of these machines will affect the system's productivity. A collection of mathematical tools, which are defined as the RMS science base, may be utilized to maximize system productivity with the smallest possible number of machines.\nRationale for RMS.\nGlobalization has created a new landscape for industry, one of fierce competition, short windows of market opportunity, and frequent changes in product demand. This change presents both a threat and an opportunity. To capitalize on the opportunity, industry needs to possess manufacturing systems that can produce a wide range of products within a product family. That range must meet the requirements of multiple countries and various cultures, not just one regional market. A design for the right mix of products must be coupled with the technical capabilities that allow for quick changeover of product mix and quantities that might vary dramatically, even on a monthly basis. 
Reconfigurable manufacturing systems have these capabilities.\nRMS System Architecture and Operation.\nThe system architecture of a typical RMS is shown below. \nThe system is composed of stages: 10, 20, 30, 40, etc. Each stage consists of identical machines, such as CNC milling machines or RMT machines. The system produces one product, for example, an automotive engine block or a cylinder head. The manufactured product moves on the horizontal conveyor. Then Gantry-10 grips the product and brings it to one of the CNC-10 machines. When CNC-10 finishes the processing, Gantry-10 moves it back to the conveyor. The conveyor moves the product to Gantry-20, which grips the product and loads it onto the RMT-20, and so on. Inspection machines are placed at several stages, and at the end of the manufacturing system. \nRMS is defined as a “system designed at the outset for rapid changes in its structure.” In practice this feature is implemented by designing an open space with access to the gantry at each stage. These spaces enable rapid matching of higher market demand: adding machines in them increases the production rate to match the demand. \nDuring its production, the product may move along many production paths. Three paths are shown in the figure. Although the CNC machines at each stage are identical, in practice there are small variations in the precision of identical machines, which create accumulated errors in the manufactured product. The magnitude of the error depends on the path along which the product moved; each path has its own “stream-of-variations” (a term coined by Y. Koren).\nRMS characteristics.\nIdeal reconfigurable manufacturing systems possess six core characteristics: modularity, integrability, customized flexibility, scalability, convertibility, and diagnosability. 
These characteristics, which were introduced by professor Yoram Koren in 1995, apply to the design of whole manufacturing systems, as well as to some of its components: reconfigurable machines, their controllers, and system control software.\nModularity refers to the modules that reconfigurable manufacturing systems consist of. At the system level the machines are modules. At the machine level the axes of motion are modules (see the RMT Figure). The system control may be composed of control modules. Modules are easier to maintain and update.\nIntegrability is the ability to rapidly integrate modules by mechanical, informational, and control interfaces that enable module integration and communication. At the system level the machines are the modules that are integrated via material transport systems (such as conveyors and gantries) to form a reconfigurable manufacturing system.\nCustomization allows the design of system flexibility just around a product family, thereby obtaining customized flexibility, as opposed to the general flexibility of FMS. Customization allows a reduction in the investment cost without sacrificing performance.\nConvertibility is the ability to easily transform the functionality of existing systems, machines, or controls to suit new production requirements. Examples include changing a machine in the system to another type of machine to respond to a new required functionality, or switching spindles on a milling machine (e.g., from a low-torque high-speed spindle for aluminum to a high-torque low-speed spindle for titanium).\nScalability is the ability to easily change production capacity by adding (or reducing) manufacturing resources. Scalability of a manufacturing system is increased by adding machines to expand the system production rate to match a sudden market growth. 
Adding machines requires extending the reach of the station gantries.\nDiagnosability is the ability to automatically detect and diagnose the source of quality or precision defects in the manufactured product. This automatic diagnosis allows rapid correction of the defects. The RMS must be designed with product inspection machines embedded at optimal locations in the system. \nRMS principles.\nReconfigurable manufacturing systems operate according to a set of basic principles formulated by professor Yoram Koren and called Koren's RMS principles. The more of these principles are applicable to a given manufacturing system, the more reconfigurable that system is. The RMS principles are:\nRMS and FMS.\nReconfigurable manufacturing systems (RMS) and flexible manufacturing systems (FMS) have different goals. FMS aims at increasing the variety of parts produced. RMS aims at increasing the speed of responsiveness to market changes and customers' demand. RMS is also flexible, but only to a limited extent—its flexibility is confined to only that necessary to produce a part family. This is the \"customized flexibility\" or the customization characteristic, which is not the general flexibility that FMS offers. The customized flexibility enables higher production rates. Other important advantages of RMS are rapid scalability to the desired volume, and convertibility, which are obtained within reasonable cost to manufacturers. The best application of FMS is found in the production of small sets of products.\nRMS science base.\nThe RMS technology is based on a systematic approach to the design and operation of reconfigurable manufacturing systems. The approach consists of key elements, the compilation of which is called the RMS science base. 
These elements are summarized below.", "Automation-Control": 0.9784336686, "Qwen2": "Yes"} {"id": "2368264", "revid": "28481209", "url": "https://en.wikipedia.org/wiki?curid=2368264", "title": "Connectionist expert system", "text": "Connectionist expert systems are artificial neural network (ANN) based expert systems where the ANN generates inferencing rules, e.g., fuzzy multilayer perceptrons where linguistic and natural forms of inputs are used. Apart from that, rough set theory may be used to better encode knowledge in the weights, and genetic algorithms may be used to better optimize the search for solutions. Symbolic reasoning methods may also be incorporated (see hybrid intelligent system). (Also see expert system, neural network, clinical decision support system.)", "Automation-Control": 0.7754584551, "Qwen2": "Yes"} {"id": "63965880", "revid": "28481209", "url": "https://en.wikipedia.org/wiki?curid=63965880", "title": "Zero dynamics", "text": "In mathematics, zero dynamics is the concept of evaluating the effect of zeros on systems.\nHistory.\nThe idea was introduced thirty years ago as the nonlinear approach to the concept of transmission zeros. The original purpose of introducing the concept was to develop an asymptotic stabilization with a set of guaranteed regions of attraction (semi-global stabilizability), to make the overall system stable.\nInitial working.\nGiven the internal dynamics of any system, zero dynamics refers to the internal behavior observed when the control action is chosen so that the output variables of the system are kept identically zero. Various systems have distinctive sets of zeros, such as decoupling zeros, invariant zeros, and transmission zeros. The reason for developing this concept was to control non-minimum-phase and nonlinear systems effectively.\nApplications.\nThe concept is widely utilized in SISO mechanical systems, where, by applying a few heuristic approaches, zeros can be identified for various linear systems. 
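For linear SISO state-space models, the transmission zeros mentioned above can be computed numerically as the finite generalized eigenvalues of the Rosenbrock system matrix pencil; here is a sketch (the example system and function name are my own, and SciPy is assumed to be available):

```python
import numpy as np
from scipy.linalg import eig

def transmission_zeros(A, B, C, D):
    """Finite generalized eigenvalues s of the Rosenbrock pencil
    [[A, B], [C, D]] - s * [[I, 0], [0, 0]]; these are the points
    where the system matrix loses rank, i.e. the transmission zeros."""
    n = A.shape[0]
    M = np.block([[A, B], [C, D]])
    N = np.zeros_like(M)
    N[:n, :n] = np.eye(n)
    vals = eig(M, N, right=False)      # generalized eigenvalues, some infinite
    return vals[np.isfinite(vals)]     # keep only the finite ones

# G(s) = (s + 2) / ((s + 1)(s + 3)) in controllable canonical form:
# it has a single transmission zero at s = -2.
A = np.array([[0.0, 1.0], [-3.0, -4.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[2.0, 1.0]])
D = np.array([[0.0]])
print(transmission_zeros(A, B, C, D))
```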
Zero dynamics adds an essential feature to the analysis of the overall system and the design of its controllers. In particular, its behavior plays a significant role in determining the performance limitations of specific feedback systems. In a single-input single-output (SISO) system, the zero dynamics can be identified by using junction structure patterns; in other words, concepts such as bond graph models can help to identify the zero dynamics of SISO systems.\nApart from its application to standard nonlinear systems, similar control results can be obtained by applying zero dynamics to nonlinear discrete-time systems. In this setting, zero dynamics is an interesting tool for measuring the performance of nonlinear digital control designs (nonlinear discrete-time systems).\nBefore the advent of zero dynamics, the problem of achieving non-interacting control while preserving internal stability was not specifically discussed. However, when the zero dynamics of a system are asymptotically stable, internal stability under static feedback can be ensured. Such results make zero dynamics an interesting tool for guaranteeing the internal stability of non-interacting control systems.", "Automation-Control": 0.9971656203, "Qwen2": "Yes"} {"id": "58383744", "revid": "7903804", "url": "https://en.wikipedia.org/wiki?curid=58383744", "title": "Separation principle in stochastic control", "text": "The separation principle is one of the fundamental principles of stochastic control theory, which states that the problems of optimal control and state estimation can be decoupled under certain conditions. 
In its most basic formulation it deals with a linear stochastic system\nwith a state process formula_2, an output process formula_3 and a control formula_4, where formula_5 is a vector-valued Wiener process, formula_6 is a zero-mean Gaussian random vector independent of formula_5, formula_8, and formula_9, formula_10, formula_11, formula_12, formula_13 are matrix-valued functions which are generally taken to be continuous and of bounded variation. Moreover, formula_14 is nonsingular on some interval formula_15. The problem is to design an output feedback law formula_16 which maps the observed process formula_3 to the control input formula_4 in a nonanticipatory manner so as to minimize the functional\nwhere formula_20 denotes expected value, prime (formula_21) denotes transpose, and formula_22 and formula_23 are continuous matrix functions of bounded variation, formula_24 is positive semi-definite and formula_25 is positive definite for all formula_26. Under suitable conditions, which need to be properly stated, the optimal policy formula_27 can be chosen in the form\nwhere formula_29 is the linear least-squares estimate of the state vector formula_30 obtained from the Kalman filter\nwhere formula_32 is the gain of the optimal linear-quadratic regulator obtained by taking formula_33 and formula_6 deterministic, and where formula_35 is the Kalman gain. There is also a non-Gaussian version of this problem (to be discussed below) where the Wiener process formula_5 is replaced by a more general square-integrable martingale with possible jumps. 
In this case, the Kalman filter needs to be replaced by a nonlinear filter providing an estimate of the (strict sense) conditional mean\nwhere\nis the \"filtration\" generated by the output process; i.e., the family of increasing sigma fields representing the data as it is produced.\nIn the early literature on the separation principle it was common to allow as admissible controls formula_4 all processes that are \"adapted\" to the filtration formula_40. The optimal control in the class of deterministically well-posed control laws minimizes formula_41, and it is given by\nwhere formula_32 is the deterministic control gain and formula_44 is given by the linear (distributed) filter\nwhere formula_46 is the innovation process\nand the gain formula_2 is as defined on page 120 of Lindquist.", "Automation-Control": 0.9938882589, "Qwen2": "Yes"} {"id": "54965789", "revid": "1151190200", "url": "https://en.wikipedia.org/wiki?curid=54965789", "title": "IEC 63110", "text": "IEC 63110 is an international standard, currently under development, that defines a protocol for the management of electric vehicle charging and discharging infrastructures. 
IEC 63110 is one of the International Electrotechnical Commission's group of standards for electric road vehicles and electric industrial trucks, and is the responsibility of Joint Working Group 11 (JWG11) of IEC Technical Committee 69 (TC69).\nStandard documents.\nIEC 63110 consists of the following parts, detailed in separate IEC 63110 standard documents:", "Automation-Control": 0.6953084469, "Qwen2": "Yes"} {"id": "19806267", "revid": "38627444", "url": "https://en.wikipedia.org/wiki?curid=19806267", "title": "Level luffing crane", "text": "A level-luffing crane is a crane mechanism where the hook remains at the same level while luffing: moving the jib up and down, so as to move the hook inwards and outwards relative to the base.\nUsually the description is only applied to those with a luffing jib that have some \"additional\" mechanism applied to keep the hook level when luffing.\nLevel-luffing is most important when careful movement of a load near ground level is required, such as in construction or shipbuilding. This partially explains the popularity of fixed horizontal jibs in these fields.\nToplis cable luffing.\nAn early form of level-luffing gear was the \"Toplis\" design, invented by a Stothert & Pitt engineer in 1914. \nThe crane jibs luffs as for a conventional crane, with the end of the jib rising and falling. The crane's hook is kept level by automatically paying out enough extra cable to compensate for this. This is also a purely mechanical linkage, arranged by the reeving of the hoist cables to the jib over a number of pulleys at the crane's apex above the cab, so that luffing the jib upwards allows more free cable and lowers the hook to compensate.\nHorse-head jibs.\nThe usual mechanism for level-luffing in modern cranes is to add an additional \"horse head\" section to the top of the jib. 
By careful design of the geometry, the hook is kept level merely by the linked action of the pivots.\nPowered level-luffing.\nAs cranes and their control systems became more sophisticated, it became possible to control the level of luffing directly, by winching the hoist cable in and out as needed. The first of these systems used mechanical clutches between the luffing and hoist drums, giving simplicity and a \"near level\" result.\nLater systems have used modern electronic controls and quickly reversible hoist winch motors with good slow-speed control, so as to give a positioning accuracy of inches. Some early systems used controllable hydraulic gearboxes to achieve the same result, but these added complexity and cost and so were only popular where high accuracy was needed, such as for shipbuilding.\nLuffing cabs.\nLuffing mechanisms have also been applied to the driver's cab, mounted on its own jib that follows the movement of the crane's main jib. These are used for tasks such as ship unloading, where the view from the driver's cab is greatly improved by cantilevering it forwards and over the ship.", "Automation-Control": 0.7028137445, "Qwen2": "Yes"} {"id": "3058037", "revid": "9784415", "url": "https://en.wikipedia.org/wiki?curid=3058037", "title": "Process optimization", "text": "Process optimization is the discipline of adjusting a process so as to optimize (make the best or most effective use of) some specified set of parameters without violating some constraint. The most common goals are minimizing cost and maximizing throughput and/or efficiency. This is one of the major quantitative tools in industrial decision making.\nWhen optimizing a process, the goal is to maximize one or more of the process specifications, while keeping all others within their constraints. 
This can be done by using a process mining tool, discovering the critical activities and bottlenecks, and acting only on them.\nAreas.\nFundamentally, there are three parameters that can be adjusted to affect optimal performance. They are:\nThe first step is to verify that the existing equipment is being used to its fullest advantage by examining operating data to identify equipment bottlenecks.\nOperating procedures may vary widely from person-to-person or from shift-to-shift. Automation of the plant can help significantly. But automation will be of no help if the operators take control and run the plant in manual.\nIn a typical processing plant, such as a chemical plant or oil refinery, there are hundreds or even thousands of control loops. Each control loop is responsible for controlling one part of the process, such as maintaining a temperature, level, or flow.\nIf the control loop is not properly designed and tuned, the process runs below its optimum. The process will be more expensive to operate, and equipment will wear out prematurely. For each control loop to run optimally, identification of sensor, valve, and tuning problems is important. It has been well documented that over 35% of control loops typically have problems.\nThe process of continuously monitoring and optimizing the entire plant is sometimes called performance supervision.", "Automation-Control": 0.9107630253, "Qwen2": "Yes"} {"id": "3063552", "revid": "6716295", "url": "https://en.wikipedia.org/wiki?curid=3063552", "title": "Partially observable Markov decision process", "text": "A partially observable Markov decision process (POMDP) is a generalization of a Markov decision process (MDP). A POMDP models an agent decision process in which it is assumed that the system dynamics are determined by an MDP, but the agent cannot directly observe the underlying state. 
Instead, it must maintain a sensor model (the probability distribution of different observations given the underlying state) and the underlying MDP. Unlike the policy function in MDP which maps the underlying states to the actions, POMDP's policy is a mapping from the history of observations (or belief states) to the actions.\nThe POMDP framework is general enough to model a variety of real-world sequential decision processes. Applications include robot navigation problems, machine maintenance, and planning under uncertainty in general. The general framework of Markov decision processes with imperfect information was described by Karl Johan Åström in 1965 in the case of a discrete state space, and it was further studied in the operations research community where the acronym POMDP was coined. It was later adapted for problems in artificial intelligence and automated planning by Leslie P. Kaelbling and Michael L. Littman.\nAn exact solution to a POMDP yields the optimal action for each possible belief over the world states. The optimal action maximizes the expected reward (or minimizes the cost) of the agent over a possibly infinite horizon. The sequence of optimal actions is known as the optimal policy of the agent for interacting with its environment.\nDefinition.\nFormal definition.\nA discrete-time POMDP models the relationship between an agent and its environment. Formally, a POMDP is a 7-tuple formula_1, where\nAt each time period, the environment is in some state formula_9. The agent takes an action formula_10, which causes the environment to transition to state formula_11 with probability formula_12. At the same time, the agent receives an observation formula_13 which depends on the new state of the environment, formula_11, and on the just taken action, formula_15, with probability formula_16 (or sometimes formula_17 depending on the sensor model). Finally, the agent receives a reward formula_18 equal to formula_19. Then the process repeats. 
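The per-period dynamics just described (transition, observation, reward, repeat) can be sketched as a toy simulation. All of the tables below are illustrative assumptions, not from the article, and the observation here depends only on the new state (a simplification of the general sensor model):

```python
import random

# Toy POMDP (illustrative numbers): states S = {0, 1},
# actions A = {"stay", "switch"}, observations {"lo", "hi"}.
T = {  # T[(s, a)] -> probability that the next state is 1
    (0, "stay"): 0.1, (0, "switch"): 0.9,
    (1, "stay"): 0.9, (1, "switch"): 0.1,
}
Z = {0: 0.2, 1: 0.8}          # P(observation "hi" | new state)
R = {(0, "stay"): 0.0, (0, "switch"): -1.0,
     (1, "stay"): 1.0, (1, "switch"): -1.0}

def step(s, a, rng):
    """One period: the agent earns R(s, a), the environment transitions,
    and the agent observes (only) a noisy signal of the new state."""
    r = R[(s, a)]
    s_next = 1 if rng.random() < T[(s, a)] else 0
    o = "hi" if rng.random() < Z[s_next] else "lo"
    return s_next, o, r

rng = random.Random(0)
s, total = 0, 0.0
for t in range(5):
    s, o, r = step(s, "stay", rng)
    total += r
```

The agent never sees `s` directly; it only receives the observations `o`, which is what forces it to plan over beliefs rather than states.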
The goal is for the agent to choose actions at each time step that maximize its expected future discounted reward: formula_20, where formula_21 is the reward earned at time formula_22. The discount factor formula_23 determines how much immediate rewards are favored over more distant rewards. When formula_24 the agent only cares about which action will yield the largest expected immediate reward; when formula_25 the agent cares about maximizing the expected sum of future rewards.\nDiscussion.\nBecause the agent does not directly observe the environment's state, the agent must make decisions under uncertainty of the true environment state. However, by interacting with the environment and receiving observations, the agent may update its belief in the true state by updating the probability distribution of the current state. A consequence of this property is that the optimal behavior may often include (information gathering) actions that are taken purely because they improve the agent's estimate of the current state, thereby allowing it to make better decisions in the future.\nIt is instructive to compare the above definition with the definition of a Markov decision process. An MDP does not include the observation set, because the agent always knows with certainty the environment's current state. Alternatively, an MDP can be reformulated as a POMDP by setting the observation set to be equal to the set of states and defining the observation conditional probabilities to deterministically select the observation that corresponds to the true state.\nBelief update.\nAfter having taken the action formula_15 and observing formula_27, an agent needs to update its belief in the state the environment may (or not) be in. Since the state is Markovian (by assumption), maintaining a belief over the states solely requires knowledge of the previous belief state, the action taken, and the current observation. The operation is denoted formula_28. 
Below we describe how this belief update is computed.\nAfter reaching formula_11, the agent observes formula_13 with probability formula_31. Let formula_32 be a probability distribution over the state space formula_2. formula_34 denotes the probability that the environment is in state formula_35. Given formula_34, then after taking action formula_15 and observing formula_27,\nwhere formula_40 is a normalizing constant with formula_41.\nBelief MDP.\nA Markovian belief state allows a POMDP to be formulated as a Markov decision process where every belief is a state. The resulting \"belief MDP\" will thus be defined on a continuous state space (even if the \"originating\" POMDP has a finite number of states: there are infinite belief states (in formula_42) because there are an infinite number of probability distributions over the states (of formula_2)).\nFormally, the belief MDP is defined as a tuple formula_44 where\nOf these, formula_47 and formula_18 need to be derived from the original POMDP. formula_47 is\nformula_54\nwhere formula_55 is the value derived in the previous section and\nformula_56\nThe belief MDP reward function (formula_18) is the expected reward from the POMDP reward function over the belief state distribution:\nformula_58.\nThe belief MDP is not partially observable anymore, since at any given time the agent knows its belief, and by extension the state of the belief MDP.\nPolicy and value function.\nUnlike the \"originating\" POMDP (where each action is available from only one state), in the corresponding Belief MDP all belief states allow all actions, since you (almost) always have \"some\" probability of believing you are in any (originating) state. As such, formula_59 specifies an action formula_60 for any belief formula_32.\nHere it is assumed the objective is to maximize the expected total discounted reward over an infinite horizon. 
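The belief update above can be sketched directly from its definition: the new belief in each state s' is proportional to the observation likelihood times the probability of transitioning into s', normalized by the constant eta. The two-state transition and observation numbers below are illustrative assumptions, not from the article:

```python
def belief_update(b, a, o, T, O, states):
    """POMDP belief update b' = tau(b, a, o):
    b'(s') proportional to O(o | s', a) * sum_s T(s' | s, a) * b(s)."""
    b_new = {}
    for s2 in states:
        b_new[s2] = O[(o, s2, a)] * sum(T[(s2, s, a)] * b[s] for s in states)
    eta = sum(b_new.values())            # normalizing constant
    if eta == 0:
        raise ValueError("observation has zero probability under b and a")
    return {s2: p / eta for s2, p in b_new.items()}

# Two-state example: a static hidden state observed through a noisy sensor.
states = ["left", "right"]
T = {(s2, s, "listen"): 1.0 if s2 == s else 0.0
     for s2 in states for s in states}
O = {("hear-left", "left", "listen"): 0.85,
     ("hear-left", "right", "listen"): 0.15,
     ("hear-right", "left", "listen"): 0.15,
     ("hear-right", "right", "listen"): 0.85}
b = {"left": 0.5, "right": 0.5}
b = belief_update(b, "listen", "hear-left", T, O, states)
print(b["left"])   # 0.85
```

Starting from a uniform belief, one "hear-left" observation with an 85%-accurate sensor shifts the belief to 0.85, matching the normalized product in the formula above.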
When formula_62 defines a cost, the objective becomes the minimization of the expected cost.\nThe expected reward for policy formula_59 starting from belief formula_64 is defined as\nwhere formula_66 is the discount factor. The optimal policy formula_67 is obtained by optimizing the long-term reward.\nwhere formula_64 is the initial belief.\nThe optimal policy, denoted by formula_67, yields the highest expected reward value for each belief state, compactly represented by the optimal value function formula_71. This value function is solution to the Bellman optimality equation:\nFor finite-horizon POMDPs, the optimal value function is piecewise-linear and convex. It can be represented as a finite set of vectors. In the infinite-horizon formulation, a finite vector set can approximate formula_71 arbitrarily closely, whose shape remains convex. Value iteration applies dynamic programming update to gradually improve on the value until convergence to an formula_74-optimal value function, and preserves its piecewise linearity and convexity. By improving the value, the policy is implicitly improved. Another dynamic programming technique called policy iteration explicitly represents and improves the policy instead.\nApproximate POMDP solutions.\nIn practice, POMDPs are often computationally intractable to solve exactly, so computer scientists have developed methods that approximate solutions for POMDPs.\nGrid-based algorithms comprise one approximate solution technique. In this approach, the value function is computed for a set of points in the belief space, and interpolation is used to determine the optimal action to take for other belief states that are encountered which are not in the set of grid points. More recent work makes use of sampling techniques, generalization techniques and exploitation of problem structure, and has extended POMDP solving into large domains with millions of states. 
For example, adaptive grids and point-based methods sample random reachable belief points to constrain the planning to relevant areas in the belief space.\nDimensionality reduction using PCA has also been explored.\nAnother line of approximate solution techniques for solving POMDPs relies on using (a subset of) the history of previous observations, actions and rewards up to the current time step as a pseudo-state. The usual techniques for solving MDPs based on these pseudo-states can then be used (e.g. Q-learning). Ideally the pseudo-states should contain the most important information from the whole history (to reduce bias) while being as compressed as possible (to reduce overfitting).\nPOMDP theory.\nPlanning in POMDPs is undecidable in general. However, some settings have been identified to be decidable (see Table 2, reproduced below). Different objectives have been considered. Büchi objectives are defined by Büchi automata. Reachability is an example of a Büchi condition (for instance, reaching a good state in which all robots are home). coBüchi objectives correspond to traces that do not satisfy a given Büchi condition (for instance, not reaching a bad state in which some robot died). Parity objectives are defined via parity games; they make it possible to define complex objectives, such as reaching a good state every 10 timesteps. The objective can be satisfied:\nWe also consider the finite-memory case, in which the agent is a finite-state machine, and the general case, in which the agent has infinite memory.\nApplications.\nPOMDPs can be used to model many kinds of real-world problems. 
Notable applications include the use of a POMDP in the management of patients with ischemic heart disease, assistive technology for persons with dementia, the conservation of the critically endangered and difficult-to-detect Sumatran tiger, and aircraft collision avoidance.", "Automation-Control": 0.6768934727, "Qwen2": "Yes"} {"id": "1481123", "revid": "757572", "url": "https://en.wikipedia.org/wiki?curid=1481123", "title": "Maria (reachability analyzer)", "text": "Maria: The Modular Reachability Analyzer is a reachability analyzer for concurrent systems that uses Algebraic System Nets (a high-level variant of Petri nets) as its modelling formalism.", "Automation-Control": 0.7618442774, "Qwen2": "Yes"} {"id": "30862921", "revid": "159620", "url": "https://en.wikipedia.org/wiki?curid=30862921", "title": "Roll forming", "text": "Roll forming, also spelled roll-forming or rollforming, is a type of rolling involving the continuous bending of a long strip of sheet metal (typically coiled steel) into a desired cross-section. The strip passes through sets of rolls mounted on consecutive stands, each set performing only an incremental part of the bend, until the desired cross-section (profile) is obtained. Roll forming is ideal for producing constant-profile parts with long lengths and in large quantities.\nOverview.\nA variety of cross-section profiles can be produced, but each profile requires a carefully crafted set of roll tools. Design of the rolls starts with a \"flower pattern\", which is the sequence of profile cross-sections, one profile for each stand of rolls. The roll contours are then derived from the flower pattern profiles. Because of the high cost of the roll sets, computer simulation is often used to develop or validate the roll designs and optimize the forming process to minimize the number of stands and material stresses in the final product.\nRoll-formed sections may have advantages over extrusions of similar shapes. 
Roll formed parts may be much lighter, with thinner walls possible than in the extrusion process, and stronger, having been work hardened in a cold state. Parts can be made having a finish or already painted. In addition, the roll forming process is more rapid and takes less energy than extrusion.\nRoll forming machines are available that produce shapes of different sizes and material thicknesses using the same rolls. Variations in size are achieved by making the distances between the rolls variable by manual adjustment or computerized controls, allowing for rapid changeover. These specialized mills are prevalent in the light gauge framing industry where metal studs and tracks of standardized profiles and thicknesses are used. For example, a single mill may be able to produce metal studs of different web (e.g. 3-5/8\" to 14 inches), flange (e.g. 1-3/8\" to 2-1/2\") and lip (e.g. 3/8\" to 5/8\") dimensions, from different gauges (e.g. 20 to 12 GA) of galvanized steel sheet.\nRoll forming lines can be set up with multiple configurations to punch and cut off parts in a continuous operation. For cutting a part to length, the lines can be set up to use a pre-cut die where a single blank runs through the roll mill, or a post-cut die where the profile is cut off after the roll forming process. Features may be added in a hole, notch, embossment, or shear form by punching in a roll forming line. These part features can be done in a pre-punch application (before roll forming starts), in a mid-line punching application (in the middle of a roll forming line/process) or a post punching application (after roll forming is done). Some roll forming lines incorporate only one of the above punch or cut off applications, others incorporate some or all of the applications in one line.\nProcess.\nRoll forming is, among the manufacturing processes, one of the simplest. It typically begins with a large coil of sheet metal, between and in width, and and thick, supported on an uncoiler. 
The strip is fed through an entry guide to properly align the material as it passes through the rolls of the mill, each set of rolls forming a bend until the material reaches its desired shape. Roll sets are typically mounted one over the other on a pair of horizontal parallel shafts supported by a stand(s). Side rolls and cluster rolls may also be used to provide greater precision and flexibility and to limit stresses on the material. The shaped strips can be cut to length ahead of a roll forming mill, between mills, or at the end of the roll forming line.\nGeometric possibilities.\nThe geometric possibilities can be very broad and even include enclosed shapes as long as the cross-section is uniform. Typical sheet thicknesses range from to , but they can exceed that. Length is almost unaffected by the rolling process. The part widths typically are not smaller than however they can exceed . The primary limitation is profile depth, which is generally limited to less than and rarely larger than due to roll-imparted stresses and surface speed differentials that increase with depth.\nProduction rates.\nThe production rate depends greatly on the material thickness and the bend radius; it is however also affected by the number of required stations or steps. For bend radii of 50 times the material thickness of a low carbon steel thick can range from through eight stations to through 12 stations or through 22 stations.\nThe time for one product to take shape can be represented by a simple function t = f(L, n, d, v), where L is the length of the piece being formed, n is the number of forming stands, d is the distance between stands, and v is the velocity of the strip through the rolls.\nIn general, roll forming lines can run from or higher, depending on the application. 
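The text gives the forming time as a function of piece length, number of stands, stand spacing, and strip velocity without specifying the formula. A common back-of-the-envelope version, stated here purely as an assumption, treats the time as the piece length plus the length of the mill (stands times spacing), divided by strip velocity:

```python
def forming_time(piece_length_m, n_stands, stand_spacing_m, velocity_m_per_s):
    """Assumed forming-time sketch: t = (L + n*d) / v.

    The exact formula is not specified in the text; this variant simply
    adds the mill length n*d to the piece length L and divides by the
    strip velocity v.
    """
    return (piece_length_m + n_stands * stand_spacing_m) / velocity_m_per_s

# A 6 m piece through 12 stands spaced 0.5 m apart at 0.5 m/s:
print(forming_time(6.0, 12, 0.5, 0.5))   # 24.0 seconds
```

Whatever the exact formula, the qualitative behavior is the same: time grows with piece length and number of stands, and shrinks with line speed.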
In some cases the limiting factor is the punching or cut-off applications.\nOther considerations.\nThings to consider in manufacturing include, for example, lubrication, the effect of the process on material properties, cost, and, of course, safety.\nLubrication provides an essential barrier between the roll dies and the workpiece surface. It helps reduce tool wear and allows the material to move through the rolls faster. This table shows the different kinds of lubricants, their applications, and the metals they are best suited to.\nThe effects of the process on the material's properties are minimal. The physical and chemical properties are virtually unchanged, but with respect to mechanical properties the process may cause work-hardening, micro-cracks, or thinning at bends.\nThe cost of roll forming is relatively low. When calculating the cost of the process, factors such as setup time, equipment and tool costs, load/unload time, direct labor rate, overhead rate, and the amortization of equipment and tooling must be considered. Safety is also a concern with this process. The main hazards to take into consideration are moving workpieces (up to ), high-pressure rolls, and sharp, sheared metal edges.", "Automation-Control": 0.922480464, "Qwen2": "Yes"} {"id": "30864591", "revid": "1461430", "url": "https://en.wikipedia.org/wiki?curid=30864591", "title": "Networked control system", "text": "A networked control system (NCS) is a control system wherein the control loops are closed through a communication network. The defining feature of an NCS is that control and feedback signals are exchanged among the system's components in the form of information packages through a network.\nOverview.\nThe functionality of a typical NCS is established by the use of four basic elements: \nThe most important feature of an NCS is that it connects cyberspace to physical space, enabling the execution of several tasks over long distances. 
In addition, NCSs eliminate unnecessary wiring, reducing the complexity and overall cost of designing and implementing control systems. They can also be easily modified or upgraded by adding sensors, actuators, and controllers at relatively low cost and with no major change in their structure. Furthermore, through efficient sharing of data between their controllers, NCSs can easily fuse global information to make intelligent decisions over large physical spaces. \nTheir potential applications are numerous and cover a wide range of industries, such as space and terrestrial exploration, access in hazardous environments, factory automation, remote diagnostics and troubleshooting, experimental facilities, domestic robots, aircraft, automobiles, manufacturing plant monitoring, nursing homes and tele-operations. While the potential applications of NCSs are numerous, the proven applications are few, and the real opportunity in the area of NCSs is in developing real-world applications that realize the area's potential.\nProblems and solutions.\nThe advent and development of the Internet, combined with the advantages provided by NCSs, attracted the interest of researchers around the globe. Along with the advantages, several challenges also emerged, giving rise to many important research topics. New control strategies, kinematics of the actuators in the systems, reliability and security of communications, bandwidth allocation, development of data communication protocols, corresponding fault detection and fault-tolerant control strategies, real-time information collection, and efficient processing of sensor data are some of the related topics studied in depth.\nThe insertion of the communication network in the feedback control loop makes the analysis and design of an NCS complex, since it imposes additional time delays in control loops and the possibility of packet loss. 
Depending on the application, time delays can severely degrade system performance.\nTo alleviate the time-delay effect, Y. Tipsuwan and M-Y. Chow, in the ADAC Lab at North Carolina State University, proposed the gain scheduler middleware (GSM) methodology and applied it in iSpace. S. Munir and W.J. Book (Georgia Institute of Technology) used a Smith predictor, a Kalman filter and an energy regulator to perform teleoperation through the Internet.\nK.C. Lee, S. Lee and H.H. Lee used a genetic algorithm to design a controller used in an NCS. Many other researchers provided solutions using concepts from several control areas such as robust control, optimal stochastic control, model predictive control, fuzzy logic, etc.\nA critical issue in the design of distributed NCSs of ever-increasing complexity is meeting the requirements on system reliability and dependability while guaranteeing high system performance over a wide operating range. As a result, network-based fault detection and diagnosis techniques, which are essential for monitoring system performance, are receiving more and more attention.", "Automation-Control": 0.9817029238, "Qwen2": "Yes"} {"id": "3754843", "revid": "9784415", "url": "https://en.wikipedia.org/wiki?curid=3754843", "title": "Supervisory control theory", "text": "The supervisory control theory (SCT), also known as the Ramadge–Wonham framework (RW framework), is a method for automatically synthesizing supervisors that restrict the behavior of a plant so that the given specifications are fulfilled as much as possible. The plant is assumed to spontaneously generate events. The events fall into one of two categories: controllable or uncontrollable. The supervisor observes the string of events generated by the plant and may prevent the plant from generating a subset of the controllable events. 
However, the supervisor has no means of forcing the plant to generate an event.\nIn its original formulation the SCT considered the plant and the specification to be modeled by general formal languages, not necessarily the regular languages generated by finite automata that were used in most subsequent work.", "Automation-Control": 1.0000060797, "Qwen2": "Yes"} {"id": "4162069", "revid": "5042921", "url": "https://en.wikipedia.org/wiki?curid=4162069", "title": "Hybrid bond graph", "text": "A hybrid bond graph is a graphical description of a physical dynamic system with discontinuities (i.e., a hybrid dynamical system). Similar to\na regular bond graph, it is an energy-based technique. However, it allows instantaneous switching of the junction structure, which may violate the principle of continuity of power (Mosterman and Biswas, 1998). ", "Automation-Control": 0.7686002254, "Qwen2": "Yes"} {"id": "1674411", "revid": "30402483", "url": "https://en.wikipedia.org/wiki?curid=1674411", "title": "Convex optimization", "text": "Convex optimization is a subfield of mathematical optimization that studies the problem of minimizing convex functions over convex sets (or, equivalently, maximizing concave functions over convex sets). Many classes of convex optimization problems admit polynomial-time algorithms, whereas mathematical optimization is in general NP-hard.\nConvex optimization has applications in a wide range of disciplines, such as automatic control systems, estimation and signal processing, communications and networks, electronic circuit design, data analysis and modeling, finance, statistics (optimal experimental design), and structural optimization, where the approximation concept has proven to be efficient. 
\nWith recent advancements in computing and optimization algorithms, convex programming is nearly as straightforward as linear programming.\nDefinition.\nA convex optimization problem is an optimization problem in which the objective function is a convex function and the feasible set is a convex set. A function formula_1 mapping some subset of formula_2 into formula_3 is convex if its domain is convex and for all formula_4 and all formula_5 in its domain, the following condition holds: formula_6. A set S is convex if for all members formula_7 and all formula_4, we have that formula_9.\nConcretely, a convex optimization problem is the problem of finding some formula_10 attaining\nwhere the objective function formula_12 is convex, as is the feasible set formula_13.\nIf such a point exists, it is referred to as an \"optimal point\" or \"solution\"; the set of all optimal points is called the \"optimal set\". If formula_1 is unbounded below over formula_13 or the infimum is not attained, then the optimization problem is said to be \"unbounded\". Otherwise, if formula_13 is the empty set, then the problem is said to be \"infeasible\".\nStandard form.\nA convex optimization problem is in \"standard form\" if it is written as\nwhere:\nThis notation describes the problem of finding formula_18 that minimizes formula_28 among all formula_29 satisfying formula_30, formula_21 and formula_32, formula_23. The function formula_1 is the objective function of the problem, and the functions formula_35 and formula_36 are the constraint functions.\nThe feasible set formula_13 of the optimization problem consists of all points formula_38 satisfying the constraints. This set is convex because formula_39 is convex, the sublevel sets of convex functions are convex, affine sets are convex, and the intersection of convex sets is convex.\nA solution to a convex optimization problem is any point formula_40 attaining formula_11. 
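The defining inequality of convexity can be checked numerically on sample points. The sketch below is an added illustration, not part of the original article; the test functions, sample grid, and weights are arbitrary choices made for the example.

```python
# Numerically check the convexity inequality
#   f(theta*x + (1-theta)*y) <= theta*f(x) + (1-theta)*f(y)
# over a finite grid of points and mixing weights.

def is_convex_on_samples(f, xs, thetas, tol=1e-12):
    """Return False if any sampled pair violates the convexity inequality."""
    for x in xs:
        for y in xs:
            for t in thetas:
                lhs = f(t * x + (1 - t) * y)
                rhs = t * f(x) + (1 - t) * f(y)
                if lhs > rhs + tol:
                    return False
    return True

samples = [-2.0, -0.5, 0.0, 1.0, 3.0]
weights = [0.0, 0.25, 0.5, 0.75, 1.0]

assert is_convex_on_samples(lambda x: x * x, samples, weights)       # convex
assert not is_convex_on_samples(lambda x: -x * x, samples, weights)  # concave
```

Passing such a sampled check does not prove convexity, but a single violation disproves it; here the concave function −x² is caught immediately.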
In general, a convex optimization problem may have zero, one, or many solutions.\nMany optimization problems can be equivalently formulated in this standard form. For example, the problem of maximizing a concave function formula_1 can be re-formulated equivalently as the problem of minimizing the convex function formula_43. The problem of maximizing a concave function over a convex set is commonly called a convex optimization problem.\nProperties.\nThe following are useful properties of convex optimization problems:\nThese results are used by the theory of convex minimization along with geometric notions from functional analysis (in Hilbert spaces) such as the Hilbert projection theorem, the separating hyperplane theorem, and Farkas' lemma.\nApplications.\nThe following problem classes are all convex optimization problems, or can be reduced to convex optimization problems via simple transformations:\nConvex optimization has practical applications for the following.\nLagrange multipliers.\nConsider a convex minimization problem given in standard form by a cost function formula_44 and inequality constraints formula_45 for formula_46. 
Then the domain formula_47 is:\nThe Lagrangian function for the problem is\nFor each point formula_50 in formula_51 that minimizes formula_1 over formula_51, there exist real numbers formula_54, called Lagrange multipliers, that satisfy these conditions simultaneously:\nIf there exists a \"strictly feasible point\", that is, a point formula_61 satisfying\nthen the statement above can be strengthened to require that formula_63.\nConversely, if some formula_50 in formula_51 satisfies (1)–(3) for scalars formula_66 with formula_63, then formula_50 is certain to minimize formula_1 over formula_51.\nAlgorithms.\nUnconstrained convex optimization can be easily solved with gradient descent (a special case of steepest descent) or Newton's method, combined with line search for an appropriate step size; these can be mathematically proven to converge quickly, especially the latter method. Convex optimization with linear equality constraints can also be solved using KKT matrix techniques if the objective function is a quadratic function (which generalizes to a variation of Newton's method, which works even if the point of initialization does not satisfy the constraints), but can also generally be solved by eliminating the equality constraints with linear algebra or by solving the dual problem. Finally, convex optimization with both linear equality constraints and convex inequality constraints can be solved by applying an unconstrained convex optimization technique to the objective function plus logarithmic barrier terms. (When the starting point is not feasible, that is, not satisfying the constraints, this is preceded by so-called \"phase I\" methods, which either find a feasible point or show that none exists. Phase I methods generally consist of reducing the search in question to yet another convex optimization problem.)\nConvex optimization problems can also be solved by the following contemporary methods:\nSubgradient methods can be implemented simply and so are widely used. 
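As a hedged sketch of that simplicity (the objective f(x) = |x − 2| and the diminishing step-size schedule are assumptions chosen for illustration, not taken from the article), a basic subgradient method tracks the best value seen, since it is not a descent method:

```python
# Subgradient method: x_{k+1} = x_k - alpha_k * g_k, where g_k is any
# subgradient of f at x_k and the steps alpha_k diminish but are non-summable.

def subgradient_method(subgrad, f, x=0.0, iters=10000):
    best = f(x)
    for k in range(iters):
        alpha = 1.0 / (k + 1)          # diminishing, non-summable step sizes
        x = x - alpha * subgrad(x)
        best = min(best, f(x))         # iterates may oscillate: keep the best
    return best

f = lambda x: abs(x - 2.0)             # non-differentiable at its minimizer
subgrad = lambda x: 1.0 if x > 2.0 else (-1.0 if x < 2.0 else 0.0)

assert subgradient_method(subgrad, f) < 1e-3   # best value approaches f(2) = 0
```

The diminishing steps trade speed for guaranteed convergence of the best iterate, which is why subgradient methods are simple but comparatively slow.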
Dual subgradient methods are subgradient methods applied to a dual problem. The drift-plus-penalty method is similar to the dual subgradient method, but takes a time average of the primal variables.\nSoftware.\nThere is a large software ecosystem for convex optimization. This ecosystem has two main categories: \"solvers\" on the one hand and \"modeling tools\" (or \"interfaces\") on the other hand.\nSolvers implement the algorithms themselves and are usually written in C. They require users to specify optimization problems in very specific formats which may not be natural from a modeling perspective. Modeling tools are separate pieces of software that let the user specify an optimization in higher-level syntax. They manage all transformations to and from the user's high-level model and the solver's input/output format. \nThe table below shows a mix of modeling tools (such as CVXPY and Convex.jl) and solvers (such as CVXOPT and MOSEK). This table is by no means exhaustive.\nExtensions.\nExtensions of convex optimization include the optimization of biconvex, pseudo-convex, and quasiconvex functions. Extensions of the theory of convex analysis and iterative methods for approximately solving non-convex minimization problems occur in the field of generalized convexity, also known as abstract convex analysis.", "Automation-Control": 0.9837296009, "Qwen2": "Yes"} {"id": "60574580", "revid": "23013856", "url": "https://en.wikipedia.org/wiki?curid=60574580", "title": "Luch Design Bureau", "text": "Luch Design Bureau , located in Kyiv, Ukraine, is a major Ukrainian developer of components for the defense industry.\nThe company is in close co-operation with the Artem holding company, also located in Kyiv. 
Artem is the main manufacturer of the models developed by the Luch Design Bureau.\nThe company was first established in Ukraine in 1965 and quickly became a leading Soviet developer of automated control systems and diagnostics systems in aviation engineering.\nHistory.\n1965 – development and release for production of the PPP-3SM mobile position of preliminary preparation for application (mounted on a UAZ car chassis) with manual control of air-to-air missiles, and of the PPP-3SAM mobile position of preliminary preparation for application with an automatic missile control system, as a basis for the creation of further missile automatic control systems for the Air and Navy Forces.\n1965-1969 – creation and release for production of the SAK-46 automatic control system for air missiles and the AKIPS-80 automatic control and test mobile station for anti-submarine missiles. In the Soviet Union, these developments were recognized by the State Commissions as the first automatic control systems of their kind.\n1968-1972 – development and release for production of the “Ingul” system (mounted on a GAZ-66 car chassis) for preparation for application and maintenance of 9 types of air missiles.\n1968-1977 – development and release for production of automatic test and control mobile stations:\n- AKIPS-125 for underwater missiles control; \n- AKIPS-4U for anti-submarine missiles control; \n- AKIPS-4U1 for torpedoes control.\n1969-1977 – development and release for production of the “Trubezh” system for preparation for application and maintenance of 12 types of air missiles.\n1975-1977 – creation of the RIU current-information aircraft recorder.\n1977-1980 – development and release for production of the “Ingul-A” and “Trubezh-A” systems, using modular (container) equipment construction, for preparation for application and control of 26 types of missiles and guided air bombs (“Ingul-A”) and 18 types of missiles (“Trubezh-A”).\n1978-1989 – development and release for production of 
automatic control and test mobile stations AKIPS-1 and AKIPS-3.2 for control of air anti-submarine missiles.\nAll mentioned stations were put into service by the Armed Forces of the Soviet Union and a number of foreign countries.\n1981-1983 – development and release for production of the multipurpose modular “Gurt” system, which provides the preparation for application of more than 40 types of missiles and their various modifications. The “Gurt” system replaced the “Ingul” and “Trubezh” systems that were in service.\nIn addition to developing new equipment, SKDB “Luch” is actively engaged in the modernization and overhaul-period renewal of existing equipment.\nBy the respective coordinated decisions of the ministries of Ukraine, the SKDB “Luch” was assigned as the Principal enterprise of Ukraine that conducts and coordinates work on the specified life cycle and lifetime prolongation of air and anti-aircraft missiles, mine and torpedo weapons, the “Ingul” and “Gurt” systems, as well as AKIPS stations.\nSince 1979, SKDB “Luch” has been developing and producing servo electric control surface actuator units for the control systems of air missiles, anti-aircraft missiles and torpedoes. More than ten steering modules with electric actuators were developed for products of various classes whose characteristics are highly competitive with the best world analogs.\nFor example, in 1986 a block of servo electric control surface actuators was created for the R-77 air-to-air missile with its lattice control surfaces. On the basis of the compact commutatorless electric motors created in Ukraine over the previous decade, the SKDB “Luch” developed and has been producing a number of small-sized servo electric control surface actuator units for guided tank missiles. These actuators are part of the digital control systems developed and produced by the SKDB “Luch”, which provide precise guidance of rounds and missiles. 
The actuators provide high-accuracy characteristics, have increased jamming resistance and withstand the overloads that occur during firing from guns. \nSince 2002, the SKDB “Luch” has been supplying the “Gurt-M” system (in place of the “Gurt” system), which provides:\n- control and preparation for application of more than 50 various modifications of air missiles and guided air bombs;\n- final checking of missiles at manufacturing plants;\n- fault diagnosis during missile repair;\n- forecasting of the missiles' technical state during overhaul-period renewal.\nMore recently, the SKDB “Luch” has been the Principal enterprise developing conceptually new special military equipment for Ukraine, with the participation of more than 30 Ukrainian enterprises. Products developed under this program are included in the top-priority category.\nProducts.\n \nList of products ", "Automation-Control": 0.6094111204, "Qwen2": "Yes"} {"id": "29006979", "revid": "9021902", "url": "https://en.wikipedia.org/wiki?curid=29006979", "title": "Enterprise appliance transaction module", "text": "An enterprise appliance transaction module (EATM) is a device, typically used in the manufacturing automation marketplace, for the transfer of plant floor equipment and product status to manufacturing execution systems (MES), enterprise resource planning (ERP) systems and the like.\nSolutions that deliver manufacturing floor integration have evolved over time. Initially, they took the form of custom integrated systems, designed and delivered by system integrators. These solutions were largely based on separate commercial off-the-shelf (COTS) products integrated into a custom system.\nModern EATM products may not need any software development or custom integration.\nComponents.\nHardware platform – embedded computer, computer appliance\nDevice communications software – Support for the automation protocols from which data will be extracted. 
Device communications software typically operates through polled or change-based protocols that are vendor-specific. Data to be extracted is typically organized into related items, and transferred based on a machine status such as Cycle Complete, Job Start, System Downtime Event, Operator Change, etc.\nTypical protocols: Rockwell Automation CIP, ControlLogix backplane, EtherNet/IP, Siemens Industrial Ethernet, Modbus TCP. There are hundreds of automation device protocols; EATM solutions typically target certain market segments and are based on automation vendor relationships.\nEnterprise communications software – Software that enables communications to enterprise systems. Communications at this level are typically transaction-oriented and require data transactions to be sent and acknowledged to ensure data integrity. Examples include: relational database adapters, Java Message Services (JMS), Oracle Database interfaces and proprietary interfaces to specific products.\nTransaction application – Software that is configured to watch and collect device variables, format them into the required transactions, and transfer the results securely and reliably to other systems. The transaction application resides between the device communications and enterprise communications.\nOverall, a manufacturing environment is portrayed as a three-layer manufacturing pyramid. At the base, device control systems – programmable logic controllers (PLC) and supervisory control and data acquisition (SCADA) systems – perform the process automation functions. A layer above that encompasses plant execution systems that deliver the functions of asset management, genealogy, statistical process control (SPC), MES, order tracking, quality assurance and scheduling. 
At the topmost level, enterprise resource planning (ERP) systems offer final control over the enterprise and track overall enterprise performance.\nIt is the job of the EATM to act as a bi-directional bridge between field devices and the supervisory control systems. These field devices could be located in a work cell or an assembly or process line. They could be very simple devices, or programmable controllers, machine controls, or PLCs. The upstream business systems could be ANDON and Kanban systems for that line, manufacturing execution systems (MES), and archival-quality databases.", "Automation-Control": 0.6565307975, "Qwen2": "Yes"} {"id": "1697331", "revid": "28481209", "url": "https://en.wikipedia.org/wiki?curid=1697331", "title": "Nyquist stability criterion", "text": "In control theory and stability theory, the Nyquist stability criterion or Strecker–Nyquist stability criterion, independently discovered by the German electrical engineer Felix Strecker at Siemens in 1930 and the Swedish-American electrical engineer Harry Nyquist at Bell Telephone Laboratories in 1932, is a graphical technique for determining the stability of a dynamical system. \nBecause it only looks at the Nyquist plot of the open-loop system, it can be applied without explicitly computing the poles and zeros of either the closed-loop or open-loop system (although the number of each type of right-half-plane singularities must be known). As a result, it can be applied to systems defined by non-rational functions, such as systems with delays. In contrast to Bode plots, it can handle transfer functions with right half-plane singularities. In addition, there is a natural generalization to more complex systems with multiple inputs and multiple outputs, such as control systems for airplanes.\nThe Nyquist stability criterion is widely used in electronics and control system engineering, as well as other fields, for designing and analyzing systems with feedback. 
While Nyquist is one of the most general stability tests, it is still restricted to linear time-invariant (LTI) systems. Nevertheless, there are generalizations of the Nyquist criterion (and plot) for non-linear systems, such as the circle criterion and the scaled relative graph of a nonlinear operator. Additionally, other stability criteria like Lyapunov methods can also be applied for non-linear systems.\nAlthough Nyquist is a graphical technique, it only provides a limited amount of intuition for why a system is stable or unstable, or how to modify an unstable system to be stable. Techniques like Bode plots, while less general, are sometimes a more useful design tool.\nNyquist plot.\nA Nyquist plot is a parametric plot of a frequency response used in automatic control and signal processing. The most common use of Nyquist plots is for assessing the stability of a system with feedback. In Cartesian coordinates, the real part of the transfer function is plotted on the \"X\"-axis while the imaginary part is plotted on the \"Y\"-axis. The frequency is swept as a parameter, resulting in one point per frequency. The same plot can be described using polar coordinates, where gain of the transfer function is the radial coordinate, and the phase of the transfer function is the corresponding angular coordinate. The Nyquist plot is named after Harry Nyquist, a former engineer at Bell Laboratories.\nAssessment of the stability of a closed-loop negative feedback system is done by applying the Nyquist stability criterion to the Nyquist plot of the open-loop system (i.e. the same system without its feedback loop). This method is easily applicable even for systems with delays and other non-rational transfer functions, which may appear difficult to analyze with other methods. Stability is determined by looking at the number of encirclements of the point (−1, 0). The range of gains over which the system will be stable can be determined by looking at crossings of the real axis. 
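The construction just described (sweep the frequency logarithmically and record the complex value of the open-loop response at each point) can be sketched as follows. This is an added illustration: the first-order transfer function G(s) = 1/(s + 1) is an assumed example, whose Nyquist curve is known to lie on the circle of radius 1/2 centred at 1/2.

```python
# Sample the Nyquist curve G(j*omega) with a logarithmic frequency sweep,
# as recommended for computational plotting.

def nyquist_points(G, w_min=1e-3, w_max=1e3, n=400):
    """Return G(j*omega) for n logarithmically spaced frequencies."""
    pts = []
    for k in range(n):
        w = w_min * (w_max / w_min) ** (k / (n - 1))  # logarithmic sweep
        pts.append(G(1j * w))
    return pts

G = lambda s: 1 / (s + 1)
pts = nyquist_points(G)

# For this G, every sampled point satisfies |G(jw) - 1/2| = 1/2,
# i.e. the curve traces a circle of radius 1/2 centred at 1/2.
assert all(abs(abs(p - 0.5) - 0.5) < 1e-9 for p in pts)
```

Plotting the real part of each point against its imaginary part yields the Cartesian Nyquist plot described above; negative frequencies mirror the curve across the real axis.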
\nThe Nyquist plot can provide some information about the shape of the transfer function. For instance, the plot provides information on the difference between the number of zeros and poles of the transfer function by the angle at which the curve approaches the origin.\nWhen drawn by hand, a cartoon version of the Nyquist plot is sometimes used, which shows the linearity of the curve, but where coordinates are distorted to show more detail in regions of interest. When plotted computationally, one needs to be careful to cover all frequencies of interest. This typically means that the parameter is swept logarithmically, in order to cover a wide range of values.\nBackground.\nThe mathematics uses the Laplace transform, which transforms integrals and derivatives in the time domain to simple multiplication and division in the \"s\" domain.\nWe consider a system whose transfer function is formula_1; when placed in a closed loop with negative feedback formula_2, the closed loop transfer function (CLTF) then becomes:\nStability can be determined by examining the roots of the desensitivity factor polynomial formula_4, e.g. using the Routh array, but this method is somewhat tedious. Conclusions can also be reached by examining the open loop transfer function (OLTF) formula_5, using its Bode plots or, as here, its polar plot using the Nyquist criterion, as follows.\nAny Laplace domain transfer function formula_6 can be expressed as the ratio of two polynomials:\nThe roots of formula_8 are called the \"zeros\" of formula_6, and the roots of formula_10 are the \"poles\" of formula_6. The poles of formula_6 are also said to be the roots of the \"characteristic equation\" formula_13.\nThe stability of formula_6 is determined by the values of its poles: for stability, the real part of every pole must be negative. 
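The pole test just stated can be made concrete with the Routh array mentioned above: for a cubic characteristic polynomial it collapses to a closed-form condition. The sketch below is an added illustration with an assumed example polynomial.

```python
# For a cubic a3*s^3 + a2*s^2 + a1*s + a0 with real coefficients, the Routh
# array reduces to: all poles have negative real part iff all coefficients
# share the same sign and a2*a1 > a3*a0.

def cubic_is_stable(a3, a2, a1, a0):
    coeffs = (a3, a2, a1, a0)
    same_sign = all(c > 0 for c in coeffs) or all(c < 0 for c in coeffs)
    return same_sign and a2 * a1 > a3 * a0

# (s+1)(s+2)(s+3) = s^3 + 6s^2 + 11s + 6 has poles -1, -2, -3: stable.
assert cubic_is_stable(1, 6, 11, 6)
# Raising the constant term past 66 pushes a pole pair into the right
# half-plane (a2*a1 = 66 is the boundary): unstable.
assert not cubic_is_stable(1, 6, 11, 67)
```

The marginal case a2·a1 = a3·a0 places poles on the imaginary axis, which this strict test correctly reports as not stable.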
If formula_6 is formed by closing a negative unity feedback loop around the open-loop transfer function,\nthen the roots of the characteristic equation are also the zeros of formula_17, or simply the roots of formula_18.\nCauchy's argument principle.\nFrom complex analysis, a contour formula_19 drawn in the complex formula_20 plane, encompassing but not passing through any number of zeros and poles of a function formula_21, can be mapped to another plane (named formula_21 plane) by the function formula_23. Precisely, each complex point formula_20 in the contour formula_19 is mapped to the point formula_21 in the new formula_21 plane yielding a new contour. \nThe Nyquist plot of formula_21, which is the contour formula_29 will encircle the point formula_30 of the formula_21 plane formula_32 times, where formula_33 by Cauchy's argument principle. Here formula_34 and formula_35 are, respectively, the number of zeros of formula_36 and poles of formula_21 inside the contour formula_19. Note that we count encirclements in the formula_21 plane in the same sense as the contour formula_19 and that encirclements in the opposite direction are \"negative\" encirclements. That is, we consider clockwise encirclements to be positive and counterclockwise encirclements to be negative.\nInstead of Cauchy's argument principle, the original paper by Harry Nyquist in 1932 uses a less elegant approach. The approach explained here is similar to the approach used by Leroy MacColl (Fundamental theory of servomechanisms 1945) or by Hendrik Bode (Network analysis and feedback amplifier design 1945), both of whom also worked for Bell Laboratories. This approach appears in most modern textbooks on control theory.\nDefinition.\nWe first construct the Nyquist contour, a contour that encompasses the right-half of the complex plane:\nThe Nyquist contour mapped through the function formula_47 yields a plot of formula_47 in the complex plane. 
By the argument principle, the number of clockwise encirclements of the origin must be the number of zeros of formula_47 in the right-half complex plane minus the number of poles of formula_47 in the right-half complex plane. If instead, the contour is mapped through the open-loop transfer function formula_1, the result is the Nyquist Plot of formula_1. By counting the resulting contour's encirclements of −1, we find the difference between the number of poles and zeros in the right-half complex plane of formula_47. Recalling that the zeros of formula_47 are the poles of the closed-loop system, and noting that the poles of formula_47 are same as the poles of formula_1, we now state the Nyquist Criterion:\"Given a Nyquist contour formula_19, let formula_35 be the number of poles of formula_1 encircled by formula_19, and formula_34 be the number of zeros of formula_47 encircled by formula_19. Alternatively, and more importantly, if formula_34 is the number of poles of the closed loop system in the right half plane, and formula_35 is the number of poles of the open-loop transfer function formula_1 in the right half plane, the resultant contour in the formula_1-plane, formula_68 shall encircle (clockwise) the point formula_69 formula_32 times such that formula_71.\"If the system is originally open-loop unstable, feedback is necessary to stabilize the system. Right-half-plane (RHP) poles represent that instability. For closed-loop stability of a system, the number of closed-loop roots in the right half of the \"s\"-plane must be zero. Hence, the number of counter-clockwise encirclements about formula_72 must be equal to the number of open-loop poles in the RHP. Any clockwise encirclements of the critical point by the open-loop frequency response (when judged from low frequency to high frequency) would indicate that the feedback control system would be destabilizing if the loop were closed. 
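The encirclement count in the criterion can be approximated numerically as the winding number of the sampled Nyquist curve about −1. The sketch below is an added illustration: the third-order open loop G(s) = k/((s+1)(s+2)(s+3)) is an assumed example (open-loop stable, so P = 0, and closed-loop stable exactly for k < 60 by the Routh test), and the infinite arc of the Nyquist contour is neglected since G vanishes there.

```python
import cmath
import math

# Count clockwise encirclements of -1 by G(j*omega) as omega sweeps the
# imaginary axis, by accumulating the unwrapped phase of G(j*omega) + 1.

def encirclements_of_minus_one(G, n=40001, w_max=1e3):
    total = 0.0
    prev = None
    for i in range(n):
        w = -w_max + 2 * w_max * i / (n - 1)  # sweep omega over [-w_max, w_max]
        z = G(1j * w) + 1                     # vector from -1 to the curve
        ang = cmath.phase(z)
        if prev is not None:
            d = ang - prev
            if d > math.pi:                   # unwrap jumps across the
                d -= 2 * math.pi              # +/- pi branch cut
            elif d < -math.pi:
                d += 2 * math.pi
            total += d
        prev = ang
    # clockwise encirclements accumulate negative total angle
    return round(-total / (2 * math.pi))

def G(k):
    return lambda s: k / ((s + 1) * (s + 2) * (s + 3))

assert encirclements_of_minus_one(G(6)) == 0    # Z = N + P = 0: stable loop
assert encirclements_of_minus_one(G(100)) == 2  # two closed-loop RHP poles
```

With P = 0 the count equals Z, the number of closed-loop right-half-plane poles, matching the criterion stated above.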
(Using RHP zeros to \"cancel out\" RHP poles does not remove the instability, but rather ensures that the system will remain unstable even in the presence of feedback, since the closed-loop roots travel between open-loop poles and zeros in the presence of feedback. In fact, the RHP zero can make the unstable pole unobservable and therefore not stabilizable through feedback.)\nThe Nyquist criterion for systems with poles on the imaginary axis.\nThe above consideration was conducted under the assumption that the open-loop transfer function formula_1 does not have any pole on the imaginary axis (i.e. poles of the form formula_74). This results from the requirement of the argument principle that the contour cannot pass through any pole of the mapping function. The most common case is systems with integrators (poles at zero).\nTo be able to analyze systems with poles on the imaginary axis, the Nyquist Contour can be modified to avoid passing through the point formula_74. One way to do it is to construct a semicircular arc with radius formula_76 around formula_74, that starts at formula_78 and travels anticlockwise to formula_79. Such a modification implies that the phasor formula_1 travels along an arc of infinite radius by formula_81, where formula_82 is the multiplicity of the pole on the imaginary axis.\nMathematical derivation.\nOur goal is to check, through this process, the stability of the transfer function of our unity feedback system with gain \"k\", which is given by\nThat is, we would like to check whether the characteristic equation of the above transfer function, given by\nhas zeros outside the open left-half-plane (commonly abbreviated as OLHP).\nWe suppose that we have a clockwise (i.e. negatively oriented) contour formula_19 enclosing the right half plane, with indentations as needed to avoid passing through zeros or poles of the function formula_1. 
Cauchy's argument principle states that \nwhere formula_34 denotes the number of zeros of formula_10 enclosed by the contour and formula_35 denotes the number of poles of formula_10 enclosed by the same contour. Rearranging, we have\nformula_92, which is to say\nWe then note that formula_94 has exactly the same poles as formula_1. Thus, we may find formula_35 by counting the poles of formula_1 that appear within the contour, that is, within the open right half plane (ORHP).\nWe will now rearrange the above integral via substitution. That is, setting formula_98, we have\nWe then make a further substitution, setting formula_100. This gives us\nWe now note that formula_102 gives us the image of our contour under formula_1, which is to say our Nyquist plot. We may further reduce the integral\nby applying Cauchy's integral formula. In fact, we find that the above integral corresponds precisely to the number of times the Nyquist plot encircles the point formula_105 clockwise. Thus, we may finally state that\nWe thus find that formula_107 as defined above corresponds to a stable unity-feedback system when formula_34, as evaluated above, is equal to 0.", "Automation-Control": 0.7796010971, "Qwen2": "Yes"} {"id": "1699649", "revid": "45653908", "url": "https://en.wikipedia.org/wiki?curid=1699649", "title": "Tebis", "text": "Tebis (\"Technische Entwicklung Beratung und Individuelle Software\") is CAD/CAM software provided by Tebis AG, with headquarters in Martinsried near Munich, Germany.\nDevelopment locations: Martinsried and Norderstedt, Germany\nInternational locations: China, Spain, France, Italy, Portugal, Sweden, United Kingdom, USA.\nFunctionality.\nTebis is CAD/CAM software for industries such as die, mold or model manufacturing. The software is primarily used to create toolpaths for machining operations such as drilling, milling and turning, but also for wire EDM and sinker EDM. These toolpaths control multi-axis CNC machines. 
Other applications include manufacturing planning, design, reverse engineering, quality assurance, CNC machining and assembly. The software features interfaces for neutral file formats as well as proprietary formats of third-party manufacturers (STEP 203/214, VDAFS, IGES, DXF, STL, Parasolid, Catia V4/V5, Creo, SolidWorks, NX, JT, Inventor, Nastran, AutoForm).\nIndustrial application.\nThe programs are used in manufacturing companies of all sizes, from small and medium-sized companies to OEMs in the automotive and aerospace industries and their suppliers.\nThe following is a small sampling of companies who use the CAM system from Tebis.\nThe history of Tebis.\nTebis was founded in 1984. Following initial consulting jobs and business software projects, Tebis shifted its focus after six months to CAD/CAM. The first technical product was a PC-based station, which used a drawing board equipped with a position-measuring system to digitize transparent plans and convert them to scribed programs for milling machines.\nVersions 1.0 to 1.0.4 constituted the first Tebis CAD/CAM system. As one of the first 3D systems, Tebis ran exclusively on PCs (DOS). Two monitors were required for its operation: One monitor displayed the real commands, while the other showed the geometries in 4 panels. The input commands were entered using a digitizer tablet. The milling programs were calculated only for individual surfaces. Because of the small RAM (256 bytes) in the NC machines of the 1980s, Tebis provided a DNC connection to enable postprocessing via a V24 line next to the NC machine.\nThe Tebis Version 2.0 with a graphical user interface was introduced in 1989. It is still used today in a much more advanced form, and is distinct from common Windows interfaces. This version made it possible to animate geometries onscreen in real-time. 
Tebis Automill technology, which allows users to calculate milling paths across surfaces, was introduced in Version 2.1.\nTebis Version 3.0 was presented in 1993. The system was modularized and expanded for operation under the SCO UNIX, HP-UX, IRIX and AIX operating systems. Version 3.1 included the Milling Wizard, Version 3.2 featured interactive CAD and Version 3.3 offered the first integration of a tool library and parameterized administration for all NC calculations. In Version 3.4, modules were added for the simulation of machining on a virtual CNC machine, the design of electrodes for EDM, and 2.5D milling and drilling. Starting with Version 3.5, variable machining templates can be used for even better NC programming automation. For the first time, this version also included the Job Manager as a central control element for all machining steps. The CAD module for BREP design was integrated in the software, enabling Tebis to be used for the entire manufacturing process in die, mold and model manufacturing. Version 4.0 was provided with a new user interface specifically designed for CAD/CAM applications and a new platform for 2.5D and 3D feature-based NC automation. For the first time, this version supported CNC lathes and industrial robots, and the manufacturing technologies of laser hardening and laser weld cladding.\nThe current Tebis Version 4.1 was launched in 2020 with an internally developed parametric-associative CAD system base. The hybrid CAD system combines free-form surfaces, solids and digitized data, and also provides Tebis template technology in the CAD environment. Parametric CAD templates automate design and CAD manufacturing preparation. The user interface has also been optimized in terms of simplicity and automation for CAM users.\nTebis is one of the global market leaders in CAM software. The owner-managed company also has its own consulting unit working with companies primarily in die, mold and model manufacturing. 
Services include industry-specific process and management consulting and optimizing the processes of these companies.\nAfter acquiring a division of ID Ingenieurgesellschaft für Datentechnik mbH, Tebis now also offers a manufacturing execution system (MES) called ProLeiS that can be integrated into the CAD/CAM application.", "Automation-Control": 0.8785930872, "Qwen2": "Yes"} {"id": "54568728", "revid": "626912", "url": "https://en.wikipedia.org/wiki?curid=54568728", "title": "RigExpert", "text": "Rig Expert Ukraine Ltd is a manufacturer of ham and PMR two-way radio RF antenna analysis and antenna tuning equipment. The company was founded in 2003 and is headquartered in Kyiv, Ukraine.\nCurrent products.\nThe AA-30, AA-54 & AA-170 are essentially the same product, differing only in frequency range. Similarly, the AA-600, AA-1000 & AA-1400 differ only in frequency range.", "Automation-Control": 0.8974816203, "Qwen2": "Yes"} {"id": "39587805", "revid": "16809467", "url": "https://en.wikipedia.org/wiki?curid=39587805", "title": "Proximal gradient method", "text": "Proximal gradient methods are a generalized form of projection used to solve non-differentiable convex optimization problems. \nMany interesting problems can be formulated as convex optimization problems of the form\nformula_1\nwhere formula_2 are possibly non-differentiable convex functions. The lack of differentiability rules out conventional smooth optimization techniques like the steepest descent method and the conjugate gradient method, but proximal gradient methods can be used instead. \nProximal gradient methods start with a splitting step, in which the functions formula_3 are used individually so as to yield an easily implementable algorithm. They are called proximal because each non-differentiable function among formula_3 is involved via its proximity operator. 
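The proximity operator just mentioned has a well-known closed form for the absolute-value penalty: soft-thresholding. The sketch below is an added illustration; the one-dimensional objective ½(x − 3)² + |x|, whose minimizer is x = 2, is an assumed toy problem.

```python
# Proximal gradient iteration for f(x) = smooth(x) + lam*|x|:
#   x_{k+1} = prox_{step*lam*|.|}( x_k - step * grad_smooth(x_k) )
# where the prox of thresh*|.| is the soft-thresholding map.

def soft_threshold(v, thresh):
    """Proximity operator of thresh*|.| evaluated at v."""
    if v > thresh:
        return v - thresh
    if v < -thresh:
        return v + thresh
    return 0.0

def proximal_gradient(grad_smooth, lam, x=0.0, step=0.5, iters=200):
    for _ in range(iters):
        x = soft_threshold(x - step * grad_smooth(x), step * lam)
    return x

# minimize 0.5*(x-3)**2 + 1.0*|x|; setting x - 3 + sign(x) = 0 gives x = 2
x_star = proximal_gradient(lambda x: x - 3.0, lam=1.0)
assert abs(x_star - 2.0) < 1e-8
```

Only the smooth term is differentiated; the non-differentiable penalty enters solely through its proximity operator, which is exactly the splitting idea described above.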
The iterative shrinkage thresholding algorithm, projected Landweber, projected gradient, alternating projections, the alternating-direction method of multipliers, and alternating\nsplit Bregman are special instances of proximal algorithms. \nFor the theory of proximal gradient methods from the perspective of statistical learning theory, and for their applications there, see proximal gradient methods for learning.\nProjection onto convex sets (POCS).\nOne of the widely used convex optimization algorithms is projections onto convex sets (POCS). This algorithm is employed to recover/synthesize a signal that simultaneously satisfies several convex constraints. Let formula_5 be the indicator function of the non-empty closed convex set formula_6 modeling a constraint. This reduces to the convex feasibility problem, which requires us to find a solution lying in the intersection of all the convex sets formula_6. In the POCS method each set formula_6 is incorporated via its projection operator formula_9, so in each iteration formula_10 is updated as\nHowever, beyond such feasibility problems projection operators are no longer appropriate, and more general operators are required. Among the various generalizations of the notion of a convex projection operator, proximity operators are best suited for this purpose.\nExamples.\nSpecial instances of Proximal Gradient Methods are", "Automation-Control": 0.9933717251, "Qwen2": "Yes"} {"id": "41199359", "revid": "1090170660", "url": "https://en.wikipedia.org/wiki?curid=41199359", "title": "Tegra Note 7", "text": "The Tegra Note 7 is a mini tablet computer and the second Tegra 4 based mobile device designed by Nvidia that runs the Android operating system.\nRelease.\nRevealed on September 18, 2013, the device was first released by launch partner EVGA on November 19, 2013.
Other manufacturers Nvidia has partnered with include Advent, ZOTAC, PNY, Gigabyte, Xolo, HP and Cherry Mobile.\nThe device is known as EVGA Tegra Note 7 in the US, Advent Vega Tegra Note 7 in the United Kingdom, Gradiente Tegra Note 7 in Brazil, Gazer Tegra Note 7 in Ukraine and Russia, Gigabyte Tegra Note 7 in Australia and New Zealand, Cherry Mobile Tegra Note 7 in the Philippines and XOLO PLAY Tegra Note 7 in India. HP has rebranded the Tegra Note 7 as the HP Slate 7 Extreme.\nFeatures.\nThe Note 7 uses a 1.8 GHz quad-core Tegra 4 chipset with 1 GB of RAM; Nvidia claims that the chipset and other improvements make it the fastest 7-inch tablet on the market, offering a 50% faster browsing experience than tablets costing twice the price. Notable features of the device include enhanced capacitive \"DirectStylus\" technology that is three times more responsive, premium Tegra 4 audio processing with PureAudio, and the world's first HDR camera in a tablet with Nvidia Chimera computational photography. The 1280×800 display uses Nvidia PRISM 2 display processing, which modulates the display backlight and per-pixel color values to extend battery life by 40% for up to 10 hours of HD video playback.\nSoftware updates.\nNvidia released Tegra Note 7 System Update 3.0 (Android 5.1) on July 23, 2015.", "Automation-Control": 1.0000100136, "Qwen2": "Yes"} {"id": "41200806", "revid": "7903804", "url": "https://en.wikipedia.org/wiki?curid=41200806", "title": "Proximal gradient methods for learning", "text": "Proximal gradient (forward backward splitting) methods for learning is an area of research in optimization and statistical learning theory which studies algorithms for a general class of convex regularization problems where the regularization penalty may not be differentiable.
One such example is formula_1 regularization (also known as Lasso) of the form\nProximal gradient methods offer a general framework for solving regularization problems from statistical learning theory with penalties that are tailored to a specific problem application. Such customized penalties can help to induce certain structure in problem solutions, such as \"sparsity\" (in the case of lasso) or \"group structure\" (in the case of group lasso).\nRelevant background.\nProximal gradient methods are applicable in a wide variety of scenarios for solving convex optimization problems of the form\nwhere formula_4 is convex and differentiable with Lipschitz continuous gradient, formula_5 is a convex, lower semicontinuous function which is possibly nondifferentiable, and formula_6 is some set, typically a Hilbert space. The usual criterion of formula_7 minimizes formula_8 if and only if formula_9 in the convex, differentiable setting is now replaced by\nwhere formula_11 denotes the subdifferential of a real-valued, convex function formula_12.\nGiven a convex function formula_13 an important operator to consider is its proximal operator formula_14 defined by\nwhich is well-defined because of the strict convexity of the formula_16 norm. The proximal operator can be seen as a generalization of a projection.\nWe see that the proximity operator is important because formula_17 is a minimizer to the problem formula_18 if and only if\nMoreau decomposition.\nOne important technique related to proximal gradient methods is the Moreau decomposition, which decomposes the identity operator as the sum of two proximity operators. Namely, let formula_21 be a lower semicontinuous, convex function on a vector space formula_22. We define its Fenchel conjugate formula_23 to be the function\nThe general form of Moreau's decomposition states that for any formula_25 and any formula_20 that\nwhich for formula_28 implies that formula_29. 
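A concrete numerical check of Moreau's decomposition: take the function to be the scaled norm λ‖·‖₁, so that its Fenchel conjugate is the indicator of the ℓ∞-ball of radius λ. The two proximity operators are then the well-known soft-thresholding and clipping operations, and the decomposition says they sum to the identity (the test vector and λ below are illustrative assumptions):

```python
import numpy as np

def prox_l1(v, lam):
    """Proximity operator of f(x) = lam * ||x||_1 (soft thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def prox_l1_conjugate(v, lam):
    """The Fenchel conjugate of lam*||.||_1 is the indicator of the
    l-infinity ball of radius lam; its prox is the projection onto
    that ball, i.e. componentwise clipping."""
    return np.clip(v, -lam, lam)

# Moreau decomposition: v = prox_f(v) + prox_{f*}(v)
v = np.array([2.0, -0.3, 0.7, -5.0])
lam = 1.0
recomposed = prox_l1(v, lam) + prox_l1_conjugate(v, lam)
```

Each component either survives thresholding (and the clipped part contributes ±λ) or is zeroed (and clipping returns it unchanged), so the identity holds entrywise.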
The Moreau decomposition can be seen to be a generalization of the usual orthogonal decomposition of a vector space, analogous with the fact that proximity operators are generalizations of projections.\nIn certain situations it may be easier to compute the proximity operator for the conjugate formula_30 instead of the function formula_31, and therefore the Moreau decomposition can be applied. This is the case for group lasso.\nLasso regularization.\nConsider the regularized empirical risk minimization problem with square loss and with the formula_1 norm as the regularization penalty:\nwhere formula_34 The formula_1 regularization problem is sometimes referred to as \"lasso\" (least absolute shrinkage and selection operator). Such formula_1 regularization problems are interesting because they induce \" sparse\" solutions, that is, solutions formula_37 to the minimization problem have relatively few nonzero components. Lasso can be seen to be a convex relaxation of the non-convex problem\nwhere formula_39 denotes the formula_40 \"norm\", which is the number of nonzero entries of the vector formula_37. Sparse solutions are of particular interest in learning theory for interpretability of results: a sparse solution can identify a small number of important factors.\nSolving for L1 proximity operator.\nFor simplicity we restrict our attention to the problem where formula_42. To solve the problem\nwe consider our objective function in two parts: a convex, differentiable term formula_44 and a convex function formula_45. Note that formula_46 is not strictly convex.\nLet us compute the proximity operator for formula_47. 
First we find an alternative characterization of the proximity operator formula_48 as follows:\nformula_49\nFor formula_45 it is easy to compute formula_51: the formula_52th entry of formula_51 is precisely\nUsing the recharacterization of the proximity operator given above, for the choice of formula_45 and formula_20 we have that formula_57 is defined entrywise by\nwhich is known as the soft thresholding operator formula_59.\nFixed point iterative schemes.\nTo finally solve the lasso problem we consider the fixed point equation shown earlier:\nGiven that we have computed the form of the proximity operator explicitly, then we can define a standard fixed point iteration procedure. Namely, fix some initial formula_61, and for formula_62 define\nNote here the effective trade-off between the empirical error term formula_64 and the regularization penalty formula_47. This fixed point method has decoupled the effect of the two different convex functions which comprise the objective function into a gradient descent step (formula_66) and a soft thresholding step (via formula_67).\nConvergence of this fixed point scheme is well-studied in the literature and is guaranteed under appropriate choice of step size formula_68 and loss function (such as the square loss taken here). Accelerated methods were introduced by Nesterov in 1983 which improve the rate of convergence under certain regularity assumptions on formula_4. Such methods have been studied extensively in previous years.\nFor more general learning problems where the proximity operator cannot be computed explicitly for some regularization term formula_46, such fixed point schemes can still be carried out using approximations to both the gradient and the proximity operator.\nPractical considerations.\nThere have been numerous developments within the past decade in convex optimization techniques which have influenced the application of proximal gradient methods in statistical learning theory. 
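The fixed point scheme described above can be sketched in a few lines: with the soft-thresholding closed form of the proximity operator, each iteration is a gradient step on the smooth term followed by shrinkage. The problem data below are randomly generated assumptions for illustration:

```python
import numpy as np

def soft_threshold(v, thresh):
    """Componentwise soft thresholding operator."""
    return np.sign(v) * np.maximum(np.abs(v) - thresh, 0.0)

def ista(A, y, lam, step=None, n_iter=2000):
    """Fixed-point (proximal gradient / ISTA) iteration for
    min_w 0.5*||A w - y||^2 + lam*||w||_1."""
    if step is None:
        # 1/L with L = ||A||_2^2, the Lipschitz constant of the gradient
        step = 1.0 / np.linalg.norm(A, 2) ** 2
    w = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ w - y)                         # gradient step
        w = soft_threshold(w - step * grad, step * lam)  # prox (shrinkage) step
    return w

# Illustrative sparse recovery problem (noiseless, assumed data)
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
w_true = np.array([1.5, 0.0, 0.0, -2.0, 0.0])
y = A @ w_true
w_hat = ista(A, y, lam=0.01)
```

With a small penalty and noiseless data the iterate recovers the sparse coefficient vector up to the usual lasso shrinkage bias.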
Here we survey a few important topics which can greatly improve practical algorithmic performance of these methods.\nAdaptive step size.\nIn the fixed point iteration scheme\none can allow variable step size formula_72 instead of a constant formula_68. Numerous adaptive step size schemes have been proposed throughout the literature. Applications of these schemes suggest that these can offer substantial improvement in number of iterations required for fixed point convergence.\nElastic net (mixed norm regularization).\nElastic net regularization offers an alternative to pure formula_1 regularization. The problem of lasso (formula_1) regularization involves the penalty term formula_45, which is not strictly convex. Hence, solutions to formula_77 where formula_4 is some empirical loss function, need not be unique. This is often avoided by the inclusion of an additional strictly convex term, such as an formula_79 norm regularization penalty. For example, one can consider the problem\nwhere formula_34\nFor formula_82 the penalty term formula_83 is now strictly convex, and hence the minimization problem now admits a unique solution. It has been observed that for sufficiently small formula_84, the additional penalty term formula_85 acts as a preconditioner and can substantially improve convergence while not adversely affecting the sparsity of solutions.\nExploiting group structure.\nProximal gradient methods provide a general framework which is applicable to a wide variety of problems in statistical learning theory. Certain problems in learning can often involve data which has additional structure that is known \" a priori\". In the past several years there have been new developments which incorporate information about group structure to provide methods which are tailored to different applications. Here we survey a few such methods.\nGroup lasso.\nGroup lasso is a generalization of the lasso method when features are grouped into disjoint blocks. 
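The group-lasso proximity operator amounts to soft thresholding applied blockwise: each group's coefficient vector is shrunk toward zero by an amount proportional to the threshold, and zeroed entirely when its Euclidean norm falls below it. A sketch (the groups and values are illustrative assumptions):

```python
import numpy as np

def prox_group_lasso(v, groups, step, lam):
    """Block soft thresholding: proximity operator of
    step * lam * sum_g ||w_g||_2 over disjoint index blocks."""
    out = np.zeros_like(v)
    for idx in groups:
        block = v[idx]
        norm = np.linalg.norm(block)
        if norm > step * lam:
            # shrink the whole block radially toward zero
            out[idx] = (1.0 - step * lam / norm) * block
        # otherwise the block is zeroed (out is initialized to zero)
    return out

# Illustrative example: the first group survives, the second is zeroed
v = np.array([3.0, 4.0, 0.1, 0.1])
w = prox_group_lasso(v, groups=[[0, 1], [2, 3]], step=1.0, lam=1.0)
```

The first block has norm 5 and is scaled by (1 − 1/5); the second has norm ≈ 0.14 < 1 and is set to zero, exactly the groupwise analogue of componentwise soft thresholding.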
Suppose the features are grouped into blocks formula_86. Here we take as a regularization penalty\nwhich is the sum of the formula_79 norm on corresponding feature vectors for the different groups. A similar proximity operator analysis as above can be used to compute the proximity operator for this penalty. Where the lasso penalty has a proximity operator which is soft thresholding on each individual component, the proximity operator for the group lasso is soft thresholding on each group. For the group formula_89 we have that proximity operator of formula_90 is given by\nwhere formula_89 is the formula_93th group.\nIn contrast to lasso, the derivation of the proximity operator for group lasso relies on the Moreau decomposition. Here the proximity operator of the conjugate of the group lasso penalty becomes a projection onto the ball of a dual norm.\nOther group structures.\nIn contrast to the group lasso problem, where features are grouped into disjoint blocks, it may be the case that grouped features are overlapping or have a nested structure. Such generalizations of group lasso have been considered in a variety of contexts. For overlapping groups one common approach is known as \"latent group lasso\" which introduces latent variables to account for overlap. Nested group structures are studied in \"hierarchical structure prediction\" and with directed acyclic graphs.", "Automation-Control": 0.7842086554, "Qwen2": "Yes"} {"id": "41217487", "revid": "39166520", "url": "https://en.wikipedia.org/wiki?curid=41217487", "title": "MOSFET Gate Driver", "text": "MOSFET gate driver is a specialized circuit that is used to drive the gate (gate driver) of power MOSFETs effectively and efficiently in high-speed switching applications. 
A high-speed gate driver is the final element needed if turn-on is to fully enhance the MOSFET's conducting channel.\nMOSFET technology.\nThe gate driver works on the same principle as the MOSFET itself: it supplies an output current that delivers charge to the semiconductor through a control electrode. The MOSFET is also simple to drive, and its conducting channel behaves resistively, which suits power applications.", "Automation-Control": 0.6213434935, "Qwen2": "Yes"} {"id": "44577560", "revid": "9813438", "url": "https://en.wikipedia.org/wiki?curid=44577560", "title": "Occam learning", "text": "In computational learning theory, Occam learning is a model of algorithmic learning where the objective of the learner is to output a succinct representation of received training data. This is closely related to probably approximately correct (PAC) learning, where the learner is evaluated on its predictive power of a test set.\nOccam learnability implies PAC learning, and for a wide variety of concept classes, the converse is also true: PAC learnability implies Occam learnability.\nIntroduction.\nOccam Learning is named after Occam's razor, which is a principle stating that, all other things being equal, a shorter explanation for observed data should be favored over a lengthier explanation. The theory of Occam learning is a formal and mathematical justification for this principle. It was first shown by Blumer, et al. that Occam learning implies PAC learning, which is the standard model of learning in computational learning theory. In other words, \"parsimony\" (of the output hypothesis) implies \"predictive power\".\nDefinition of Occam learning.\nThe succinctness of a concept formula_1 in concept class formula_2 can be expressed by the length formula_3 of the shortest bit string that can represent formula_1 in formula_2.
Occam learning connects the succinctness of a learning algorithm's output to its predictive power on unseen data.\nLet formula_2 and formula_7 be concept classes containing target concepts and hypotheses respectively. Then, for constants formula_8 and formula_9, a learning algorithm formula_10 is an formula_11-Occam algorithm for formula_2 using formula_7 iff, given a set formula_14 of formula_15 samples labeled according to a concept formula_16, formula_10 outputs a hypothesis formula_18 such that\nwhere formula_24 is the maximum length of any sample formula_25. An Occam algorithm is called \"efficient\" if it runs in time polynomial in formula_24, formula_15, and formula_28 We say a concept class formula_2 is \"Occam learnable\" with respect to a hypothesis class formula_7 if there exists an efficient Occam algorithm for formula_2 using formula_32\nThe relation between Occam and PAC learning.\nOccam learnability implies PAC learnability, as the following theorem of Blumer, et al. shows:\nTheorem (\"Occam learning implies PAC learning\").\nLet formula_10 be an efficient formula_11-Occam algorithm for formula_2 using formula_7. Then there exists a constant formula_37 such that for any formula_38, for any distribution formula_39, given formula_40 samples drawn from formula_39 and labelled according to a concept formula_42 of length formula_24 bits each, the algorithm formula_10 will output a hypothesis formula_45 such that formula_46 with probability at least formula_47 .Here, formula_48 is with respect to the concept formula_1 and distribution formula_39. This implies that the algorithm formula_10 is also a PAC learner for the concept class formula_2 using hypothesis class formula_7. A slightly more general formulation is as follows:\nTheorem (\"Occam learning implies PAC learning, cardinality version\").\nLet formula_38. 
Let formula_10 be an algorithm such that, given formula_15 samples drawn from a fixed but unknown distribution formula_57 and labeled according to a concept formula_42 of length formula_24 bits each, outputs a hypothesis formula_60 that is consistent with the labeled samples. Then, there exists a constant formula_61 such that if formula_62, then formula_10 is guaranteed to output a hypothesis formula_60 such that formula_46 with probability at least formula_47.\nWhile the above theorems show that Occam learning is sufficient for PAC learning, it doesn't say anything about \"necessity.\" Board and Pitt show that, for a wide variety of concept classes, Occam learning is in fact necessary for PAC learning. They proved that for any concept class that is \"polynomially closed under exception lists,\" PAC learnability implies the existence of an Occam algorithm for that concept class. Concept classes that are polynomially closed under exception lists include Boolean formulas, circuits, deterministic finite automata, decision-lists, decision-trees, and other geometrically-defined concept classes.\nA concept class formula_2 is polynomially closed under exception lists if there exists a polynomial-time algorithm formula_68 such that, when given the representation of a concept formula_42 and a finite list formula_70 of \"exceptions\", outputs a representation of a concept formula_71 such that the concepts formula_1 and formula_73 agree except on the set formula_70.\nProof that Occam learning implies PAC learning.\nWe first prove the Cardinality version. Call a hypothesis formula_75 \"bad\" if formula_76, where again formula_48 is with respect to the true concept formula_1 and the underlying distribution formula_57. The probability that a set of samples formula_21 is consistent with formula_19 is at most formula_82, by the independence of the samples. 
By the union bound, the probability that there exists a bad hypothesis in formula_83 is at most formula_84, which is less than formula_85 if formula_86. This concludes the proof of the second theorem above.\nUsing the second theorem, we can prove the first theorem. Since we have a formula_11-Occam algorithm, this means that any hypothesis output by formula_10 can be represented by at most formula_89 bits, and thus formula_90. This is less than formula_91 if we set formula_92 for some constant formula_37. Thus, by the Cardinality version Theorem, formula_10 will output a consistent hypothesis formula_19 with probability at least formula_96. This concludes the proof of the first theorem above.\nImproving sample complexity for common problems.\nThough Occam and PAC learnability are equivalent, the Occam framework can be used to produce tighter bounds on the sample complexity of classical problems including conjunctions, conjunctions with few relevant variables, and decision lists.\nExtensions.\nOccam algorithms have also been shown to be successful for PAC learning in the presence of errors, probabilistic concepts, function learning and Markovian non-independent examples.", "Automation-Control": 0.6662199497, "Qwen2": "Yes"} {"id": "44586101", "revid": "374440", "url": "https://en.wikipedia.org/wiki?curid=44586101", "title": "SIM.JS", "text": "SIM.JS is an event-based discrete-event simulation library based on standard\nJavaScript. The library has been written in order to enable simulation within standard browsers by utilizing web technology.\nSIM.JS supports entities, resources (Facility, Buffers and Stores), communication (via Timers, Events and Messages) and statistics \n(with Data Series, Time Series and Population statistics).\nThe SIM.JS distribution contains tutorials, in-depth documentation, and a large\nnumber of examples.\nSIM.JS is released as open source software under the LGPL license. 
The first version was released in January 2011.\nExample.\nThere are several examples bundled with the library download. Traffic-light simulation is a standard simulation problem, which may be simulated as in this example:", "Automation-Control": 0.7800732851, "Qwen2": "Yes"} {"id": "44587746", "revid": "12023796", "url": "https://en.wikipedia.org/wiki?curid=44587746", "title": "GoIP", "text": "GoIP is a GSM gateway[1] and SIM bank produced by the Hybertone and DBL technology companies. It enables connections between the GSM network and VoIP.\nUsage concept.\nA SIM card is put into the GSM gateway (or into a SIM bank connected to a GSM gateway) in order to register it with the GSM network; at the same time, the gateway is connected to VoIP through a software switch. Accordingly, traffic can be converted in both directions between GSM and VoIP channels. The SIP and H.323 protocols are used for media traffic termination. GoIP equipment is compatible with all major IP PBXs: Asterisk, Mera, Oktell, 3CX, etc.\nCompared with PSTN, GSM gateways provide substantial savings by simplifying the infrastructure and lowering expenditure on technical support.\nGoIP includes integrated support of the SIP and H.323 protocols with flexible settings. Two-way password authentication and trust-list backup can significantly decrease telecommunication expenditure while maintaining an adaptable system of call transfer.
The GoIP gateway supports several device groups, with flexible settings for large groups of GSM gateways with different channels.\nThere are several models of GoIP GSM gateways.\nApplications of GoIP gateways: they are widely used by system integrators, TCP, call centers, companies large and small, and home VoIP users.\nVoIP-GSM gateways built on the GoIP platform help to accomplish the following:\nAdding mobile lines to an existing telephone system (GoIP provides a GSM link between telephone systems and IP PBXs, and ensures a fast connection to the PSTN where ordinary telephone lines are unavailable)\nOrganization of outbound call centers\nCall transfer from GSM into SIP and back (inbound and outbound calls between GSM and VoIP)\nDefinition of SIM bank.\nA SIM bank is a SIM-card controller that provides remote control over VoIP GSM gateways and simplifies and automates many procedures, such as:\n- fast connection of SIM cards to the gateway, rapid exchange of SIM cards via the Internet, and their prearranged synchronous replacement.\nThe combined use of GoIP gateways and a GoIP SIM bank enables the management of unattended devices and reduces the manual effort of SIM-card operations such as SIM-card replacement and replenishment.\nThe advantages of remote access for SIM cards.\nCentral management of SIM cards,\ntheir dynamic allocation,\nno need to change or reinsert the SIM card in the device,\nrapid replacement of SIM cards with no breakdowns in service.\nFor instance, a remote GoIP gateway will no longer require the replacement or reinsertion of SIM cards in the device. With the help of a SIM bank, a group of GoIP gateways can be managed remotely, effectively, and with minimal loss.
The main advantages of SIM-bank appliances include 1) the replacement of SIM cards in gateway channels that are geographically remote from the SIM bank itself and from other gateways, so that SIM cards can be moved between the base stations of a GSM operator, and 2) autonomous operation supporting 32 to 128 channels for independent SIM-card connections.\nSources:", "Automation-Control": 0.9604424238, "Qwen2": "Yes"} {"id": "11000264", "revid": "7903804", "url": "https://en.wikipedia.org/wiki?curid=11000264", "title": "Homomorphic secret sharing", "text": "In cryptography, homomorphic secret sharing is a type of secret sharing algorithm in which the secret is encrypted via homomorphic encryption. A homomorphism is a transformation from one algebraic structure into another of the same type so that the structure is preserved. Importantly, this means that for every kind of manipulation of the original data, there is a corresponding manipulation of the transformed data.\nTechnique.\nHomomorphic secret sharing is used to transmit a secret to several recipients as follows:\nExamples.\nSuppose a community wants to perform an election, using a decentralized voting protocol, but they want to ensure that the vote-counters won't lie about the results. Using a type of homomorphic secret sharing known as Shamir's secret sharing, each member of the community can add their vote to a form that is split into pieces, each piece then being submitted to a different vote-counter. The pieces are designed so that the vote-counters can't predict how any alterations to a piece will affect the whole, thus discouraging vote-counters from tampering with their pieces.
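A minimal sketch of such a scheme using Shamir sharing over a prime field: each ballot is split into shares, each counter sums the shares it receives, and the summed shares reconstruct only the total, never an individual ballot. The ballots, the prime, and the 3-of-5 threshold below are illustrative assumptions:

```python
import random

P = 2_147_483_647  # a Mersenne prime; all arithmetic is mod P

def make_shares(secret, k, n):
    """Split `secret` into n shares, any k of which reconstruct it:
    evaluations of a random degree-(k-1) polynomial at x = 1..n."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    shares = []
    for x in range(1, n + 1):
        y = 0
        for c in reversed(coeffs):   # Horner evaluation mod P
            y = (y * x + c) % P
        shares.append((x, y))
    return shares

def reconstruct(shares):
    """Lagrange interpolation at x = 0, mod P."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

# Additive homomorphism: each counter sums the shares it holds, and the
# combined sums reconstruct the total vote without revealing any ballot.
votes = [1, 0, 1, 1, 0]                     # hypothetical yes/no ballots
all_shares = [make_shares(v, k=3, n=5) for v in votes]
summed = [(x + 1, sum(s[x][1] for s in all_shares) % P)
          for x in range(5)]
tally = reconstruct(summed[:3])             # any 3 counters suffice
```

Because the sum of the sharing polynomials is itself a degree-2 polynomial whose constant term is the vote total, any three summed shares interpolate to the tally.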
When all votes have been received, the vote-counters combine them, allowing them to recover the aggregate election results.\nIn detail, suppose we have an election with:\nFeatures.\nThis protocol works as long as not all of the \"k\" authorities are corrupt — if they were, then they could collaborate to reconstruct \"P\"(\"x\") for each voter and also subsequently alter the votes.\nThe protocol requires a minimum number of authorities to complete; if more authorities than this minimum participate, the surplus can be corrupted without breaking the protocol, which gives it a certain degree of robustness.\nThe protocol manages the IDs of the voters (the IDs were submitted with the ballots) and therefore can verify that only legitimate voters have voted.\nUnder the assumptions on \"t\":\nThe protocol implicitly prevents corruption of ballots.\nThis is because the authorities have no incentive to change a ballot: each authority holds only a share of the ballot and has no knowledge of how changing this share would affect the outcome.", "Automation-Control": 0.6242913008, "Qwen2": "Yes"} {"id": "11020732", "revid": "39166520", "url": "https://en.wikipedia.org/wiki?curid=11020732", "title": "SecureLog", "text": "In cryptology, SecureLog is an algorithm used to convert digital data into trusted data that can be verified if the authenticity is questioned. SecureLog is used in IT solutions that generate data to support compliance regulations like SOX.\nHistory.\nThe algorithm was developed to make data logs secure from manipulation.
The first infrastructure supporting the algorithm was available on the Internet in 2006.\nOperation.\nSecureLog involves an \"active key provider\", a \"managed data store\" and a \"verification provider\".\nUses.\nThe algorithm is used in several different use cases:", "Automation-Control": 0.7138609886, "Qwen2": "Yes"} {"id": "61700257", "revid": "7903804", "url": "https://en.wikipedia.org/wiki?curid=61700257", "title": "NETDATA", "text": "NETDATA is a file format used primarily for data transfer and storage on IBM mainframe systems, although implementations are available for other systems.\nDescription.\nNETDATA files are 80-byte card image files containing unloaded file data plus metadata to allow the original file to be reconstituted on the receiving system. A complete NETDATA file consists of a number of \"control records\", followed by \"data records\" and terminated by a \"trailer record\". All records have the same format:\nControl records.\nControl records have a six-character EBCDIC identifier in bytes 2-7 following the length and flags. They contain a number of self-defining fields, called \"text units\". Each text unit consists of a two byte \"text unit key\" identifying this text unit, a two-byte big-endian binary number of length-data pairs that follow for this key (usually one), a two byte length field identifying the length of the text unit data, and a text unit of the specified length. Implementations are expected to ignore any text unit information not relevant to the receiving system.\nHeader Control Record\nThe header record must be the first record of a NETDATA file. It has the identifier \"INMR01\". It contains information identifying the sender: node (host), timestamp, and user id, the length of the control record segments, and the target (receiving) node and user id. 
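Returning to the text-unit encoding described above (a two-byte key, a two-byte big-endian count of length-data pairs, then a two-byte length before each data field), the layout can be sketched as a small parser; the sample bytes below are a hypothetical text unit, not a real key:

```python
def parse_text_unit(buf, off=0):
    """Parse one NETDATA text unit at offset `off` in `buf`:
    a 2-byte key, a 2-byte big-endian pair count, then for each pair
    a 2-byte length followed by that many data bytes.
    Returns (key, list_of_data_fields, next_offset)."""
    key = int.from_bytes(buf[off:off + 2], "big")
    count = int.from_bytes(buf[off + 2:off + 4], "big")
    off += 4
    fields = []
    for _ in range(count):
        length = int.from_bytes(buf[off:off + 2], "big")
        off += 2
        fields.append(buf[off:off + length])
        off += length
    return key, fields, off

# Hypothetical text unit: key 0x0002, one length-data pair of 3 bytes
tu = bytes.fromhex("000200010003") + b"ABC"
key, fields, nxt = parse_text_unit(tu)
```

The returned offset allows walking a control record's text units one after another, skipping any keys the receiving system does not recognize, as the format requires.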
It may optionally contain a request for acknowledgement of receipt, the version number of the data format, the number of files in the transmission, and a \"user parameter string.\" CMS allows only one file per transmission, but TSO/E and other systems may allow more than one.\nFile Utility Control Record\nThis record describes how the file's data is to be reconstituted. Its identifier is \"INMR02\". Bytes 8-11 contain the big-endian binary number of the file to which this record applies. If there are multiple files in a transmission they are numbered starting with one. The rest of this record describes the file's format, and one or more steps (\"utility programs\") which must be executed in order to rebuild this file. The text units identify the file's organization (INMDSORG: sequential, partitioned, etc.), its fixed or maximum record length (INMLRECL), its record format (INMRECFM: fixed, variable, etc.), the approximate size of the file (IBMSIZE), and the utility program name(s) (INMUTILN). It may also contain the file's block size, creation date, number of directory blocks, name, expiration date, file mode number, last change date, last reference date, member name list (for partitioned datasets), a note file, and a user parameter string.\nData Control Record\nThe Data Control Record immediately precedes the data and describes its format, similar to the Utility Control Record. Its identifier is \"INMR03\". This record is ignored by CMS, but is used by TSO/E. It contains the file's organization (INMDSORG), its record length (INMLRECL), its record format (INMRECFM), and the file size (IBMSIZE).\nUser Control Record\nThe User Control Record can appear at any point in the data stream. Its identifier is \"INMR04\". If present it is ignored by CMS, but may be used by other systems. It contains only a User Parameter String (INMUSERP).\nTrailer Control Record\nThis record marks the end of the file. Its identifier is \"INMR06\".
No other data is defined for this record.\nAcknowledgement Control Record\nThis record has an id of \"INMR07\". It is used by the receiving system to acknowledge receipt of a transmission. It contains one of the text units File Name (INMDSNM) or Note File (INMTERM) plus, optionally, the Origin Time Stamp (INMFTIME).\nA note file (sometimes called a \"PROFS note\") \"is a short communication, the kind usually done by letter.\".\nData records.\nData records (identified by their flag value) follow the Data Control Record, if present, and precede the Trailer Control Record. Records can be any size up to INMLRECL. They are sent as multiple segments of up to 253 bytes, split into 80-byte records for transmission, and reassembled by the receiver. Settings of the flags byte in each record mark the beginning, end, or a complete record of the file. Bytes of a record can contain any bit pattern. No character values are reserved.", "Automation-Control": 0.8987518549, "Qwen2": "Yes"} {"id": "46467403", "revid": "11521989", "url": "https://en.wikipedia.org/wiki?curid=46467403", "title": "CNC machine tool monitoring by AE sensors", "text": "A machine tool monitoring system is a flow of information and system processing in which information selection, data acquisition, information processing and decision making on the refined information are integrated. The aim of tool condition monitoring is to detect disturbances in the machining process and wear of machine tool components early. \nTool condition has been researched extensively in the past, with work focusing on the detection of tool wear and tool breakage and the estimation of remaining tool life. On-line identification of tool condition in the machining process is very important for enhanced productivity, better part quality and lower costs in unmanned, automated manufacturing systems.\nTechniques of machine tool monitoring.\nMachine tool monitoring can be done with or without additional sensors.
Using additional sensors, monitoring can be done by measuring:\nSensor-less machine tool monitoring is done by measuring internal drive signals such as:\nCombined measuring of multiple quantities is also possible.\nAcoustic emission sensor.\nMachine tool monitoring is explained here with acoustic emission (AE) sensors. Acoustic emission is commonly defined as the sound emitted as an elastic wave by a solid when it is deformed or struck, caused by the rapid release of localized stress energy. It is thus a phenomenon that releases elastic energy into the material, which then propagates as an elastic wave. The detection frequency range of acoustic emission is from 1 kHz to 1 MHz. \nRapid stress-releasing events generate a spectrum of stress waves starting at 0 Hz and typically falling off at several MHz. AE can be related to an irreversible release of energy. It can also be generated from sources not involving material failure, including friction, cavitation and impact. The three major applications of AE sensor phenomena are: a) source location - determining where an event occurred; b) material mechanical performance - evaluating and characterizing materials/structures; and c) health monitoring - monitoring safe operation.\nHow an AE sensor monitors a machine tool.\nAn AE sensor works on the principle of measuring the high-frequency energy signals produced during the cutting process. It also measures the AE energy resulting from the fracture when a tool breaks. It is best suited to applications where the level of background AE signal is low compared to the sound of tool breakage. This makes the AE sensor ideal for breakage detection of small drills and taps. It is easy to install on both new and existing machines. \nAn AE sensor detects force-proportional monitoring signals even in machining operations which generate very small cutting forces. In combination with true power, it increases the reliability of breakage monitoring.
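A minimal sketch of the threshold idea behind AE-based breakage detection: when the background AE level is low, a breakage burst stands out as a spike in short-time signal energy. The windowing scheme, threshold factor, and function names below are illustrative assumptions, not features of any particular commercial monitoring system.

```python
import numpy as np

def ae_rms_energy(signal, window):
    """Short-time RMS energy over non-overlapping windows of
    `window` samples (illustrative pre-processing step)."""
    n = len(signal) // window
    frames = np.asarray(signal[:n * window]).reshape(n, window)
    return np.sqrt((frames ** 2).mean(axis=1))

def detect_breakage(signal, window=64, factor=5.0):
    """Flag windows whose RMS energy exceeds `factor` times the
    median window energy -- a crude stand-in for a breakage burst
    standing out against a low AE background."""
    energy = ae_rms_energy(signal, window)
    return np.flatnonzero(energy > factor * np.median(energy))
```

With a quiet background signal plus one injected burst, only the window containing the burst is flagged; real systems would of course tune the window length and threshold to the process.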
It is used especially with solid carbide tools, or very small tools on large machines and multi-spindle machines. Most of the sensors have to be attached to the machine tool surface. However, there are alternative methods of transmitting AE waves. A rotating, wireless AE sensor consists of a rotating sensor and a fixed receiver. An AE sensor can also receive the acoustic waves via a jet of cooling lubricant, which can be connected directly to the tool or workpiece.\nMachine tool monitoring systems commonly use sensors for measuring cutting force components or quantities related to cutting force (power, torque, distance/displacement and strain). AE sensors are relatively easy to install in existing or new machines, and do not influence machine integrity and stiffness. All system suppliers also use acoustic emission sensors, especially for monitoring small tools and for grinding. \nAll sensors used in machine tool monitoring systems are well adjusted to harsh machine tool environments. The difficulties in designing reliable machine tool monitoring can be related to the complexity of the machining process itself, which may have one or more of the following characteristics, apart from the changes of the machine tool itself.", "Automation-Control": 0.8814160824, "Qwen2": "Yes"} {"id": "5686380", "revid": "910180", "url": "https://en.wikipedia.org/wiki?curid=5686380", "title": "Integral windup", "text": "Integral windup, also known as integrator windup or reset windup, refers to the situation in a PID controller where a large change in setpoint occurs (say a positive change) and the integral term accumulates a significant error during the rise (windup), thus overshooting and continuing to increase as this accumulated error is unwound (offset by errors in the other direction).
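The windup mechanism just described can be illustrated with a minimal PI loop driving a saturated actuator. The first-order plant, the gains, and the conditional-integration scheme below are illustrative assumptions, not a prescription; they simply show the accumulated integral causing overshoot, and how freezing the integrator at saturation reduces it.

```python
def simulate_pi(kp, ki, setpoint, steps, dt=0.01, u_min=0.0, u_max=1.0,
                anti_windup=True):
    """PI control of a hypothetical first-order plant dy/dt = u - y
    through a saturated actuator. With anti_windup=True the integrator
    is frozen (conditional integration) whenever the output is pinned
    at a limit and the error would push it further past that limit."""
    y = 0.0
    integral = 0.0
    history = []
    for _ in range(steps):
        error = setpoint - y
        u = kp * error + ki * integral
        u_sat = min(max(u, u_min), u_max)
        pushing_further = (u_sat >= u_max and error > 0) or \
                          (u_sat <= u_min and error < 0)
        if not (anti_windup and pushing_further):
            integral += error * dt
        y += dt * (u_sat - y)   # forward-Euler step of the plant
        history.append(y)
    return history
```

Running the same loop with and without anti-windup shows a markedly larger overshoot in the wound-up case, while both settle to the setpoint.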
The specific problem is the excess overshooting.\nSolutions.\nThis problem can be addressed in several ways.\nOccurrence.\nIntegral windup particularly occurs as a limitation of physical systems, compared with ideal systems, due to the ideal output being physically impossible (for example, the output of the process being limited at the top or bottom of its scale, making the error constant). For example, the position of a valve cannot be any more open than fully open and also cannot be closed any more than fully closed. In this case, anti-windup can actually involve the integrator being turned off for periods of time until the response falls back into an acceptable range.\nThis usually occurs when the controller's output can no longer affect the controlled variable, or if the controller is part of a selection scheme and is not the currently selected controller.\nIntegral windup was more of a problem in analog controllers. Within modern distributed control systems and programmable logic controllers, it is much easier to prevent integral windup by limiting the controller output, limiting the integral to produce feasible output, or by using external reset feedback, which is a means of feeding back the selected output to the integral circuit of all controllers in the selection scheme so that a closed loop is maintained.", "Automation-Control": 0.9937853217, "Qwen2": "Yes"} {"id": "35013582", "revid": "82697", "url": "https://en.wikipedia.org/wiki?curid=35013582", "title": "Admission control", "text": "Admission control is a validation process in communication systems where a check is performed before a connection is established to see if current resources are sufficient for the proposed connection.\nApplications.\nFor some applications, dedicated resources (such as a wavelength across an optical network) may be needed, in which case admission control has to verify availability of such resources before a request can be admitted.\nFor more elastic applications, a total volume of resources may be needed
prior to some deadline in order to satisfy a new request, in which case admission control needs to verify availability of resources at the time and perform scheduling to guarantee satisfaction of an admitted request.", "Automation-Control": 0.6010066271, "Qwen2": "Yes"} {"id": "1100516", "revid": "28481209", "url": "https://en.wikipedia.org/wiki?curid=1100516", "title": "Model predictive control", "text": "Model predictive control (MPC) is an advanced method of process control that is used to control a process while satisfying a set of constraints. It has been in use in the process industries in chemical plants and oil refineries since the 1980s. In recent years it has also been used in power system balancing models and in power electronics. Model predictive controllers rely on dynamic models of the process, most often linear empirical models obtained by system identification. The main advantage of MPC is the fact that it allows the current timeslot to be optimized while taking future timeslots into account. This is achieved by optimizing over a finite time horizon, but only implementing the current timeslot and then optimizing again, repeatedly, thus differing from a linear–quadratic regulator (LQR). MPC also has the ability to anticipate future events and can take control actions accordingly. PID controllers do not have this predictive ability. MPC is nearly universally implemented as a digital control, although there is research into achieving faster response times with specially designed analog circuitry.\nGeneralized predictive control (GPC) and dynamic matrix control (DMC) are classical examples of MPC.\nOverview.\nThe models used in MPC are generally intended to represent the behavior of complex and simple dynamical systems. The additional complexity of the MPC control algorithm is not generally needed to provide adequate control of simple systems, which are often controlled well by generic PID controllers.
Common dynamic characteristics that are difficult for PID controllers include large time delays and high-order dynamics.\nMPC models predict the change in the dependent variables of the modeled system that will be caused by changes in the independent variables. In a chemical process, independent variables that can be adjusted by the controller are often either the setpoints of regulatory PID controllers (pressure, flow, temperature, etc.) or the final control element (valves, dampers, etc.). Independent variables that cannot be adjusted by the controller are used as disturbances. Dependent variables in these processes are other measurements that represent either control objectives or process constraints.\nMPC uses the current plant measurements, the current dynamic state of the process, the MPC models, and the process variable targets and limits to calculate future changes in the dependent variables. These changes are calculated to hold the dependent variables close to target while honoring constraints on both independent and dependent variables. The MPC typically sends out only the first change in each independent variable to be implemented, and repeats the calculation when the next change is required.\nWhile many real processes are not linear, they can often be considered to be approximately linear over a small operating range. Linear MPC approaches are used in the majority of applications with the feedback mechanism of the MPC compensating for prediction errors due to structural mismatch between the model and the process. In model predictive controllers that consist only of linear models, the superposition principle of linear algebra enables the effect of changes in multiple independent variables to be added together to predict the response of the dependent variables. 
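The superposition idea can be sketched in a few lines: with linear (e.g. step-response) models, each independent variable's predicted contribution is computed separately and the contributions are simply summed, in the spirit of DMC. The step-response coefficients and variable names below are made up for illustration.

```python
import numpy as np

def predict_output(step_responses, input_moves):
    """Predict one dependent variable by superposition: each
    independent variable's contribution is its sequence of future
    moves (delta-u) convolved with its step-response coefficients,
    and the contributions are added (valid for linear models only)."""
    horizon = len(next(iter(step_responses.values())))
    y = np.zeros(horizon)
    for name, coeffs in step_responses.items():
        for k, du in enumerate(input_moves.get(name, [])):
            # a move du at step k contributes du * S[t - k] for t >= k
            y[k:] += du * np.asarray(coeffs, dtype=float)[:horizon - k]
    return y
```

For two hypothetical inputs with known step responses, the combined prediction equals the sum of the individual predictions, which is exactly the superposition property the text describes.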
This simplifies the control problem to a series of direct matrix algebra calculations that are fast and robust.\nWhen linear models are not sufficiently accurate to represent the real process nonlinearities, several approaches can be used. In some cases, the process variables can be transformed before and/or after the linear MPC model to reduce the nonlinearity. The process can be controlled with nonlinear MPC that uses a nonlinear model directly in the control application. The nonlinear model may be in the form of an empirical data fit (e.g. artificial neural networks) or a high-fidelity dynamic model based on fundamental mass and energy balances. The nonlinear model may be linearized to derive a Kalman filter or specify a model for linear MPC.\nAn algorithmic study by El-Gherwi, Budman, and El Kamel shows that utilizing a dual-mode approach can provide significant reduction in online computations while maintaining comparative performance to a non-altered implementation. The proposed algorithm solves N convex optimization problems in parallel based on exchange of information among controllers.\nTheory behind MPC.\n MPC is based on iterative, finite-horizon optimization of a plant model. At time formula_1 the current plant state is sampled and a cost minimizing control strategy is computed (via a numerical minimization algorithm) for a relatively short time horizon in the future: formula_2. Specifically, an online or on-the-fly calculation is used to explore state trajectories that emanate from the current state and find (via the solution of Euler–Lagrange equations) a cost-minimizing control strategy until time formula_3. Only the first step of the control strategy is implemented, then the plant state is sampled again and the calculations are repeated starting from the new current state, yielding a new control and new predicted state path. The prediction horizon keeps being shifted forward and for this reason MPC is also called receding horizon control. 
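The receding-horizon recipe above can be sketched for a scalar linear plant, where the finite-horizon problem has a closed-form solution via a backward Riccati recursion; only the first input of each optimal sequence is applied before re-optimizing. The system, weights, and terminal-cost choice below are illustrative assumptions.

```python
def mpc_step(x, a, b, q, r, horizon):
    """One receding-horizon step for the scalar system x' = a*x + b*u
    with stage cost q*x^2 + r*u^2: solve the finite-horizon problem by
    a backward Riccati recursion, but return only the first input."""
    p = q                          # terminal cost weight, assumed = q
    gain = 0.0
    for _ in range(horizon):       # backward pass toward time 0
        gain = (a * b * p) / (r + b * b * p)
        p = q + a * a * p - gain * a * b * p   # scalar Riccati update
    return -gain * x               # first move of the optimal sequence

def run_mpc(x0, a=1.2, b=1.0, q=1.0, r=0.1, horizon=10, steps=30):
    """Closed loop: apply the first optimal move, advance the plant,
    shift the horizon forward, and repeat."""
    x = x0
    traj = []
    for _ in range(steps):
        u = mpc_step(x, a, b, q, r, horizon)
        x = a * x + b * u
        traj.append(x)
    return traj
```

Even though the open-loop plant here is unstable (a = 1.2), the repeated finite-horizon optimization steers the state to the origin.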
Although this approach is not optimal, in practice it has given very good results. Much academic research has been done to find fast methods of solution of Euler–Lagrange type equations, to understand the global stability properties of MPC's local optimization, and in general to improve the MPC method.\nPrinciples of MPC.\nModel predictive control is a multivariable control algorithm that uses:\nAn example of a quadratic cost function for optimization is given by:\nwithout violating constraints (low/high limits) with\netc.\nNonlinear MPC.\nNonlinear model predictive control, or NMPC, is a variant of model predictive control that is characterized by the use of nonlinear system models in the prediction. As in linear MPC, NMPC requires the iterative solution of optimal control problems on a finite prediction horizon. While these problems are convex in linear MPC, in nonlinear MPC they are not necessarily convex anymore. This poses challenges for both NMPC stability theory and numerical solution.\nThe numerical solution of the NMPC optimal control problems is typically based on direct optimal control methods using Newton-type optimization schemes, in one of the variants: direct single shooting, direct multiple shooting methods, or direct collocation. NMPC algorithms typically exploit the fact that consecutive optimal control problems are similar to each other. This allows the Newton-type solution procedure to be initialized efficiently from a suitably shifted guess based on the previously computed optimal solution, saving considerable amounts of computation time. The similarity of subsequent problems is even further exploited by path following algorithms (or \"real-time iterations\") that never attempt to iterate any optimization problem to convergence, but instead only take a few iterations towards the solution of the most current NMPC problem, before proceeding to the next one, which is suitably initialized.
Another promising candidate for the nonlinear optimization problem is to use a randomized optimization method. Optimum solutions are found by generating random samples that satisfy the constraints in the solution space and finding the optimum one based on the cost function. \nWhile NMPC applications have in the past been mostly used in the process and chemical industries with comparatively slow sampling rates, NMPC is being increasingly applied, with advancements in controller hardware and computational algorithms, e.g., preconditioning, to applications with high sampling rates, e.g., in the automotive industry, or even when the states are distributed in space (distributed parameter systems). As an application in aerospace, NMPC has recently been used to track optimal terrain-following/avoidance trajectories in real time.\nExplicit MPC.\nExplicit MPC (eMPC) allows fast evaluation of the control law for some systems, in stark contrast to the online MPC. Explicit MPC is based on the parametric programming technique, where the solution to the MPC control problem formulated as an optimization problem is pre-computed offline. This offline solution, i.e., the control law, is often in the form of a piecewise affine function (PWA), hence the eMPC controller stores the coefficients of the PWA for each subset (control region) of the state space where the PWA is constant, as well as coefficients of some parametric representations of all the regions. Every region turns out geometrically to be a convex polytope for linear MPC, commonly parameterized by coefficients for its faces, requiring quantization accuracy analysis. Obtaining the optimal control action is then reduced to first determining the region containing the current state and second a mere evaluation of the PWA using the coefficients stored for that region.
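The two-step online evaluation just described (region lookup, then affine evaluation) can be sketched as follows. The region table here is a hand-made one-dimensional example standing in for the output of a real parametric solver; in practice the regions and gains come from offline multi-parametric programming.

```python
import numpy as np

def empc_control(x, regions):
    """Explicit-MPC online step: find the stored polytopic region
    A x <= b containing the state, then evaluate the affine law
    u = F x + g kept for that region."""
    x = np.asarray(x, dtype=float)
    for A, b, F, g in regions:
        if np.all(A @ x <= b + 1e-9):   # point-in-polytope test
            return F @ x + g
    raise ValueError("state outside all stored regions")

# Hand-made 1-D region table encoding u = clip(-x, -1, 1):
regions = [
    (np.array([[1.0]]), np.array([-1.0]),
     np.array([[0.0]]), np.array([1.0])),             # x <= -1
    (np.array([[1.0], [-1.0]]), np.array([1.0, 1.0]),
     np.array([[-1.0]]), np.array([0.0])),            # -1 <= x <= 1
    (np.array([[-1.0]]), np.array([-1.0]),
     np.array([[0.0]]), np.array([-1.0])),            # x >= 1
]
```

The online cost is dominated by the region search, which is why the exponential growth in the number of regions discussed next is the method's main limitation.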
If the total number of the regions is small, the implementation of the eMPC does not require significant computational resources (compared to the online MPC) and is uniquely suited to control systems with fast dynamics. A serious drawback of eMPC is exponential growth of the total number of control regions with respect to some key parameters of the controlled system, e.g., the number of states, thus dramatically increasing controller memory requirements and making the first step of PWA evaluation, i.e. searching for the current control region, computationally expensive.\nRobust MPC.\nRobust variants of model predictive control are able to account for set-bounded disturbances while still ensuring state constraints are met. Some of the main approaches to robust MPC are given below.\nCommercially available MPC software.\nCommercial MPC packages are available and typically contain tools for model identification and analysis, controller design and tuning, as well as controller performance evaluation.\nA survey of commercially available packages has been provided by S.J. Qin and T.A. Badgwell in \"Control Engineering Practice\" 11 (2003) 733–764.\nMPC vs. LQR.\nModel predictive control and linear-quadratic regulators are both expressions of optimal control, with different schemes of setting up optimisation costs.\nWhile a model predictive controller often looks at fixed-length, often gradually weighted sets of error functions, the linear-quadratic regulator looks at all linear system inputs and provides the transfer function that will reduce the total error across the frequency spectrum, trading off state error against input frequency.\nDue to these fundamental differences, LQR has better global stability properties, but MPC often has more locally optimal
and complex performance.\nThe main differences between MPC and LQR are that LQR optimizes across the entire time window (horizon) whereas MPC optimizes in a receding time window, and that with MPC a new solution is computed often whereas LQR uses the same single (optimal) solution for the whole time horizon. Therefore, MPC typically solves the optimization problem in a smaller time window than the whole horizon and hence may obtain a suboptimal solution. However, because MPC makes no assumptions about linearity, it can handle hard constraints as well as migration of a nonlinear system away from its linearized operating point, both of which are major drawbacks to LQR.\nThis means that LQR can become weak when operating away from stable fixed points. MPC can chart a path between these fixed points, but convergence of a solution is not guaranteed, especially if consideration of the convexity and complexity of the problem space has been neglected.", "Automation-Control": 0.9984244108, "Qwen2": "Yes"} {"id": "47959004", "revid": "7852030", "url": "https://en.wikipedia.org/wiki?curid=47959004", "title": "DFM analysis for stereolithography", "text": "In design for additive manufacturing (DFAM), there are both broad themes (which apply to many additive manufacturing processes) and optimizations specific to a particular AM process. Described here is DFM analysis for stereolithography, in which design for manufacturability (DFM) considerations are applied in designing a part (or assembly) to be manufactured by the stereolithography (SLA) process. In SLA, parts are built from a photocurable liquid resin that cures when exposed to a laser beam that scans across the surface of the resin (photopolymerization). Resins containing acrylate, epoxy, and urethane are typically used. Complex parts and assemblies can be directly made in one go, to a greater extent than in earlier forms of manufacturing such as casting, forming, metal fabrication, and machining.
Realization of such a seamless process requires the designer to take into consideration the manufacturability of the part (or assembly) by the process. In any product design process, DFM considerations are important to reduce iterations, time and material wastage.\nChallenges in stereolithography.\nMaterial.\nExcessive setup-specific material cost and lack of support for 3rd-party resins are major challenges with the SLA process. The choice of material (a design process) is restricted by the supported resin. Hence, the mechanical properties are also fixed. When scaling up dimensions selectively to deal with expected stresses, post curing is done by further treatment with UV light and heat. Although advantageous to mechanical properties, the additional polymerization and cross linkage can result in shrinkage, warping and residual thermal stresses. Hence, the part should be designed in its 'green' stage, i.e. the pre-treatment stage.\nSetup and process.\nThe SLA process is an additive manufacturing process. Hence, design considerations such as orientation, process latitude, support structures etc. have to be considered.\nOrientation affects the support structures, manufacturing time, part quality and part cost. Complex structures may fail to manufacture properly due to an infeasible orientation, resulting in undesirable stresses. This is when the DFM guidelines can be applied. Design feasibility for stereolithography can be validated analytically as well as on the basis of simulation and/or guidelines.\nRule-based DFM considerations.\nRule-based considerations in DFM refer to certain criteria that the part has to meet in order to avoid failures during manufacturing. Given the layer-by-layer manufacturing technique the process follows, there is no constraint on the overall complexity that the part may have.
But some rules have been developed through experience by the printer developer/academia which must be followed to ensure that the individual features that make up the part are within certain 'limits of feasibility'.\nPrinter constraints.\nConstraints/limitations in SLA manufacturing come from the printer's accuracy, layer thickness, speed of curing, speed of printing, etc. Various printer constraints are to be considered during design such as:\nSupport structures.\nA point needs support if:\nWhile printing, support structures act as a part of the design; hence, their limitations and advantages are kept in mind while designing. Major considerations include:\nPart deposition orientation.\nPart orientation is a very crucial decision in DFM analysis for the SLA process. The build time, surface quality, volume/number of support structures etc. depend on this. In many cases, it is also possible to address the manufacturability issues just by reorienting the part. For example, an overhanging geometry with a shallow angle may be oriented to ensure steep angles. Hence, major considerations include:\nPlan-based DFM considerations.\nPlan-based considerations in DFM refer to criteria that arise due to the process plan. These are to be met in order to avoid failures during manufacturing of a part that may satisfy the rule-based criteria but may have some manufacturing difficulties due to the sequence in which features are produced.\nGeometric tailoring.\nGeometric tailoring bridges the mismatch of material properties and process differences described above. Both functionality and manufacturability issues are addressed. Functionality issues are addressed through 'tailoring' of dimensions of the part to compensate for the stress and deflection behavior anomalies. Manufacturability issues are tackled through identification of difficult-to-manufacture geometric attributes (an approach used in most DFM handbooks) or through simulations of manufacturing processes.
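Rule-based checks of the kind discussed above can be sketched as simple predicates over part geometry. As one hypothetical example, a support-needed test on a facet normal: the 45-degree default is a common rule of thumb in additive manufacturing generally, not an SLA-specific standard, and the representation of a facet by its outward normal is an assumption for illustration.

```python
import math

def needs_support(normal, max_overhang_deg=45.0):
    """Flag a downward-facing mesh facet as needing support when its
    outward unit normal points within `max_overhang_deg` of straight
    down (0, 0, -1). Purely illustrative rule-based DFM check."""
    nx, ny, nz = normal
    length = math.sqrt(nx * nx + ny * ny + nz * nz)
    if length == 0.0:
        raise ValueError("degenerate facet normal")
    nz /= length
    if nz >= 0.0:            # facet faces up or sideways: self-supporting
        return False
    angle_from_down = math.degrees(math.acos(max(-1.0, min(1.0, -nz))))
    return angle_from_down < max_overhang_deg
```

A flat downward-facing surface is flagged, while vertical walls and upward-facing surfaces pass, matching the intuition that shallow overhangs are the problem case.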
For RP-produced parts (as in SLA), the problem formulations are called material-process geometric tailoring (MPGT)/RP.\nFirst, the designer specifies information such as: a parametric CAD model of the part; constraints and goals on functional, geometric, cost and time characteristics; analysis models for these constraints and goals; target values of goals; and preferences for the goals.\nThe DFM problem is then formulated as the designer fills in the MPGT template with this information and sends it to the manufacturer, who fills in the remaining 'manufacturing relevant' information. With the completed formulation, the manufacturer is now able to solve the DFM problem, performing GT of the part design. Hence, the MPGT serves as the digital interface between the designer and the manufacturer.\nVarious Process Planning (PP) strategies have been developed for geometric tailoring in the SLA process.\nDFM frameworks.\nThe constraints imposed by the manufacturing process are mapped onto the design. This helps in identification of DFM problems while exploring process plans by acting as a retrieval method. Various DFM frameworks have been developed in the literature. These frameworks help in various decision-making steps such as:\nExternal links.\nDfm2U Live
Such training can also detect faults in the brakes or steering of a vehicle which can have a major impact on emergency maneuvers.\nTrack construction.\nIn order to be able to simulate wet icy conditions all year round, slippery tracks have traditionally been made using iron plates on the ground upon which an oil film is sprayed. On newer courses, epoxy is often used instead, which needs only to be wetted with water to achieve the same effect.\nTraining requirement for driving license.\nNorway.\nIn Norway, slippery road training is a mandatory part of training which must be completed for everyone taking their driving test, and goes under the name \"glattkjøring\" (or officially: \"sikkerhetskurs på bane\"). The exam task consists of regaining control of the vehicle after skidding when braking or turning, as well as adapting driving behavior on slippery roads. A standard slippery track therefore has a long straight for braking, and a curve for turning training. Some courses also have a slippery slope. Normally, a slippery road also has markers made of cardboard or similar soft material which can be remotely controlled by the driving instructor to simulate obstacles in the road that the driver must avoid (e.g. a running moose or pedestrians). The Norwegian Automobile Federation owns and operates 26 slippery road tracks in Norway.", "Automation-Control": 0.7669929266, "Qwen2": "Yes"} {"id": "27380463", "revid": "1157912680", "url": "https://en.wikipedia.org/wiki?curid=27380463", "title": "Sensitivity (control systems)", "text": "In control engineering, the sensitivity (or more precisely, the sensitivity function) of a control system measures how variations in the plant parameters affect the closed-loop transfer function.
Since the controller parameters are typically matched to the process characteristics and the process may change, it is important that the controller parameters are chosen in such a way that the closed-loop system is not sensitive to variations in process dynamics. Moreover, the sensitivity function is also important to analyse how disturbances affect the system. \nSensitivity function.\nLet formula_1 and formula_2 denote the plant and controller's transfer function in a basic closed loop control system written in the Laplace domain using unity negative feedback.\nSensitivity function as a measure of robustness to parameter variation.\nThe closed-loop transfer function is given by\nformula_3\nDifferentiating formula_4 with respect to formula_5 yields \nformula_6\nwhere formula_7 is defined as the function \nformula_8 \nand is known as the sensitivity function. Lower values of formula_9 imply that relative errors in the plant parameters have less effect on the relative error of the closed-loop transfer function. \nSensitivity function as a measure of disturbance attenuation.\nThe sensitivity function also describes the transfer function from external disturbance to process output. In fact, assuming an additive disturbance \"n\" after the output \nof the plant, the transfer functions of the closed loop system are given by\nformula_10\nHence, lower values of formula_9 suggest further attenuation of the external disturbance. The sensitivity function tells us how the disturbances are influenced by feedback. Disturbances with frequencies such that formula_12 is less than one are reduced by an amount equal to the distance to the critical point formula_13 and disturbances with frequencies such that formula_12 is larger than one are amplified by the feedback.\nSensitivity peak and sensitivity circle.\nSensitivity peak.\nIt is important that the largest value of the sensitivity function be limited for a control system.
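This largest value can be checked numerically by evaluating |S(jω)| = |1/(1 + P(jω)C(jω))| on a frequency grid and taking its maximum. The integrator-plus-lag plant and proportional controller below are illustrative choices, not drawn from the text.

```python
import numpy as np

def sensitivity_peak(P, C, w_min=1e-2, w_max=1e2, n=2000):
    """Peak magnitude of the sensitivity function S = 1/(1 + P*C)
    evaluated on a logarithmic grid of frequencies. P and C are
    callables returning transfer-function values at s = j*w."""
    w = np.logspace(np.log10(w_min), np.log10(w_max), n)
    s = 1j * w
    return np.abs(1.0 / (1.0 + P(s) * C(s))).max()

# Illustrative loop: integrator-plus-lag plant, proportional controller.
P = lambda s: 1.0 / (s * (s + 1.0))
C = lambda s: 2.0
```

For this particular loop the computed peak comes out near 1.8, so the grid-based estimate gives a direct, if approximate, robustness check.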
The nominal sensitivity peak formula_15 is defined as\nformula_16\nand it is common to require that the maximum value of the sensitivity function, formula_15, be in a range of 1.3 to 2.\nSensitivity circle.\nThe quantity formula_15 is the inverse of the shortest distance from the Nyquist curve of the loop transfer function to the critical point formula_13. A sensitivity peak formula_15 guarantees that the distance from the critical point to the Nyquist curve is always greater than formula_21 and the Nyquist curve of the loop transfer function is always outside a circle around the critical point formula_22 with the radius formula_21, known as the sensitivity circle. formula_15 defines the maximum value of the sensitivity function and the inverse of formula_15 gives the shortest distance from the open-loop transfer function formula_26 to the critical point formula_22.", "Automation-Control": 0.7835273743, "Qwen2": "Yes"} {"id": "637857", "revid": "7903804", "url": "https://en.wikipedia.org/wiki?curid=637857", "title": "Symbol level", "text": "In knowledge-based systems, agents choose actions based on the principle of rationality to move closer to a desired goal. The agent is able to make decisions based on knowledge it has about the world (see knowledge level). But for the agent to actually change its state, it must use whatever means it has available. This level of description for the agent's behavior is the symbol level. The term was coined by Allen Newell in 1982.\nFor example, in a computer program, the knowledge level consists of the information contained in its data structures that it uses to perform certain actions.
The symbol level consists of the program's algorithms, the data structures themselves, and so on.", "Automation-Control": 0.9654397964, "Qwen2": "Yes"} {"id": "523009", "revid": "4796325", "url": "https://en.wikipedia.org/wiki?curid=523009", "title": "Kite control systems", "text": "Kite types, kite mooring, and kite applications result in a wide variety of kite control systems. Contemporary manufacturers, kite athletes, kite pilots, scientists, and engineers are expanding the possibilities.\nSingle-line kite control systems.\nHigh-altitude attempt single-line control systems.\nOn-board angle-of-attack mechanisms were used in the 2000 altitude record-making flight; the operators designed an adjuster that limited kite line tension to not more than 100 pounds by altering the angle of attack of the kite's wing body. The kite's line had a control: a line payout meter that did not function in the record-setting flight. However, a special arrangement of bungees and pulleys at the tether's lower end was used to lower the impact of gusts on the long tether. Control of a kite includes how other aircraft see the kite system; the team placed a radio beacon (using two-meter frequency detectable for 50 miles) on the kite; for sight visibility, strobe lights were hung from the kite's nose. Control via use of reels and pulleys becomes critical when tension is high; the team had to repair and replace parts during the flight session.\nAuxiliary control.\nAuxiliary devices have been invented and used for controlling single-line kites. Devices on board the kite's wing can react to the kite-line's tension or to the kite's angle of attack with the ambient stream in which the kite is flying. Special reel devices allow kite-line length and tension control. Moving the kite's line lower end left or right or windward or anti-windward forms part of the control system of single-line kites.
Devices at the kite's bridle can be set to alter the relative lengths of sub-bridle lines in order to set the attitude of the kite so that the kite flies at a certain one of the potential positions; this can be done for one setting while the kite is readied for flight; but Kenneth C. Howard invented a device that can be operated on single-line kites during the flight session for variable settings: \nFighter-kite control systems.\nTraditional fighter kiting with single-line control dominates kite fighting, while multi-line kite fighting is still a minor activity. The human operator of the single line aims to master movements (tugs, jerks, releases, directional movements) in order to have the unstable kite temporarily move in one direction or another. The intents of the controls are offensive and defensive: escaping from an attack or positioning for an attack. Building the kite so that motions by the kite's human operator or pilot allow a temporary, limited stability takes special care.\nHistorical kite control systems.\nA piano-wire based kite control system.\nMedium-length-tethered power kites.\nPower kites are controlled by two to five lines. The simplest systems provide steering by pulling either end of the kite. More lines can provide different functions. These are:\nThe lines attach to different controllers:\nControl of high-altitude electricity-generating wind-power kite systems.\nHuman control of high altitude wind power systems is typically accomplished through servo mechanisms, as the tether tensions are too great for direct manual operation.\nThere are a number of patents in this area:\nOther concepts include:\nControl of kite rigs.\nKite rigs are systems for propelling a vehicle, such as a boat, buggy, or a vehicle with snow and ice runners. They may be as simple as a person flying a kite while standing on a specialized skateboard, or be complex systems fixed to the vehicle with powered and automated controls.
They differ from conventional sails in that they are flown from lines, not supported by masts.\nCommercial transport propulsion.\nShip-pulling kites run to hundreds of square meters of area and require special attachment points, a launch and recovery system, and fly-by-wire controls.\nThe SkySails ship propulsion system consists of a large foil kite, an electronic control system for the kite, and an automatic system to retract the kite.\nThe kite, while over ten times larger, bears similarities to the arc kites used in kitesurfing. However, the kite is an inflatable rather than a ram-air kite. Additionally, a control pod is used rather than direct tension on multiple kite control lines; only one line runs the full distance from kite to ship, with the bridle lines running from kite to control pod. Power to the pod is provided by cables embedded in the line; the same line also carries commands to the control pod from the ship.\nThe kite is launched and recovered by an articulated mast or arm, which grips the kite by its leading edge. The mast also inflates and deflates the kite. When not in use, mast and deflated kite fold away.\nTarget-kites.\nThe term \"target kite\" generally refers to the war-time kites used for shipboard anti-aircraft gunnery practice. These were the invention of Paul E. Garber, doing war work while on leave from the Smithsonian (where he was responsible for the acquisition of much of the Air and Space collection).\nThe kites were ordinary two-spar Eddy style kites with a height of about five feet. The sail was sky blue with the profile of a Japanese Zero or German aircraft painted in black. Attached at the lower end of the vertical spar was a small rudder, much like a boat's rudder. The rudder was controlled by two kite lines, which were also used to fly the kite.
The two lines come down to earth and terminate at either a \"flying bar\" (a bar with spools at either end) or a special two-spool reel which incorporated a ratchet mechanism to assist in equalizing line length. The spool was in the center of a wooden bar which held the lines a fixed distance apart.\nIndoor.\nA wand or pole with a string on the end is often used to lead indoor kites around.\nHang-gliders.\nUnpowered short-tethered hang-gliders.\nUnlike the long-lined power kites used in extreme kiting sport, the focus in this section is the short-lined, framed, large kite. The length of the kite line or \"hang line\" needs to be carefully set for best control of the hang glider kite's flight; the line then frequently splits into two, three, or four main tethers that connect to the hung kite operator's or pilot's harness. Hang gliding author Mike Meier wrote \"How To Get The Right Hang Height\". NASA used mass-shifting in the Paresev hung-pilot aircraft with a stiffened-frame kite; the Paresev's hang tether was also stiffened, unlike in sport systems. In sport hang gliding kite systems using the short hang line, the hang loop or first section of the hanging kite line is flexible webbing, and the main lines to the harness are flexible cords and sometimes webbing.
Control of the attitude of the kite's wing is most often achieved by the pilot grabbing the kite's stiffened airframe part, called the control frame, and pushing or pulling the airframe left or right or forward and aft in various combinations; this control system is most commonly called \"weight-shifting\", although mechanically it alters positions of mass to shift the center of gravity of the entire system relative to the aerodynamic center of pressure, producing leveraging moments that control the flight.\nThe place on the kite airframe where the tether is tied is very important, as in all kites; such connection or bridling takes into consideration the aerodynamic center of pressure and the system's center of gravity. A key article by Mike Meier, \"Pitch Stability & Center of Mass Location\", focuses on this concern of control.\nWhile flying the kite hang glider, there are times during flight instruction when instructors will have the student fully release the triangle control frame and simply hang. The hanging student (gravity pulls the student's body downwards and results in a tensional tugging of the kite's wing) experiences that the properly bridled and trimmed wing will fly stably. However, since gusts occur, the student learns that hands-off flying is not the normal status; rather, the kite pilot is almost always handling the control frame.\nPowered short-tethered hang-gliders.\nHere the unpowered kite is tethered to a pilot wearing a harness to which a thrusting engine or motor is attached; the total system is a powered aircraft while the kite itself remains unpowered (very different from when an engine is mounted on the wing). The control system includes that of the similar unpowered system; however, in controlling flight, adjustments for center of mass are respected.
Further, while the pilot's thrust is on, the pilot positions himself or herself so that the kite line is angled and tugging of the wing is accomplished in the familiar kiting manner, where the kite line begins upwind and angles upward downwind (the wind in question here is the relative wind).\nUnder static-line tow.\nHere the tug kite line stays the same length during the kiting operation. The ground vehicle driver has special control duties. The kited hang glider pilot controls the kite in some ways differently from other tow methods; careful distinctions are learned in professional instruction. Handling unexpected events is a large part of instruction.\nUnder non-static-line tow.\nThe complex control system includes the operator of the winch. The length of line starts long and then gets shorter as the winch reels in the tug line; this alters the control decisions of the kited hang glider pilot. Instruction in controls is available for new winch operators and for hang glider pilots who want to be so kited. Distinguish this method from static-line tow, where the tug line stays the same length during the tow; the control system for this line-shortening method of kiting is different.\nUnder bungee-line launch.\nBungee launch control systems for kited hang gliders have their own special details. The tug kite line is very elastic; when tensed, the line is long, and during use for launch the kite line shortens. Controlling the kite's wing attitudes is up to the pilot, who frequently is hung from a short kite line while controlling a triangle control frame, another airframe part, or even aerodynamic surface controls. Professional instruction is highly recommended. An inelastic portion of the bungee assembly helps guard against the bungee breaking and snapping back toward the pilot; a tug-line parachute can be used to lower the speed at which the released bungee falls.
Bungee launch is used most frequently for launching off slopes when free foot-launch is not easy (because of site structure, or for pilots who do not have the use of their legs), or for flatland short-flight demonstrations.\nParagliders.\nThe non-stiffened Francis Rogallo parawing, Domina Jalbert's parafoil wing, and other fully flexible wings (Barish sailwing, KiteShip wing, parasails, modified conical parachutes) do not lend themselves to the mounting of an engine or motor; rather, the kiting lines from the unpowered wing terminate below the wing at a static or mobile anchor. That anchor may itself have an active thrusting engine or motor, or the anchor (which could be payload, pilot, or both) may simply fall under gravity and thus tug the wing through the kite lines. When the payload or pilot is simply falling without an added engine or motor, the kited flexible wing is a paragliding wing; when the payload or pilot additionally carries a thrust engine or motor, the kited unpowered flexible wing with such a thrusting payload or pilot is a powered aircraft system or powered paragliding system. The control systems vary by application (lowering military payloads, autonomous powered paragliders or drones, sport paragliding, sport powered paragliding, scale-model paragliding, scale-model powered paragliding). All variations have in common the unpowered kite, whether or not the payload and/or pilot is powered.\nGovernable gliding parachutes.\nThese free-flight kites are governable parachutes and are used as payload delivery systems and in sport gliding parachuting or skydiving, BASE jumping, and scale-model parachuting. When used for delivery of sensitive payloads or carrying humans, the fast opening from packed format is damped by use of a slider. The wing remains unpowered and kited by bridle tethering lines; the lines attach to platforms or harnesses.
The size and design of the kited wing are customized for the final type of use, where packing, opening, and sink rate are important features. Control systems are specialized for the specific use and sometimes include radio control from remote locations.\nKite aerial photography.\nKites used in kite aerial photography (KAP) are typically controlled using the same reels and spools as non-KAP kite flyers use. The best KAP work seems to be done at lower altitudes than expected (100–200'), so no special equipment is required. The most problematic KAP flights are those where the best camera shot requires the kite to be flown amongst tall trees or buildings, so quick haul-in can be a plus.\nThe camera rig itself is attached to the kite line some distance beneath the kite, preferably with a pulley scheme that will permit the camera to float in a level attitude regardless of the kite's gyrations. The \"Picavet\" system is one such scheme.\nFurther sophistication in kite photography comes with live video and radio control features to control where the camera is pointing. This is superior to the minimal rig, which simply clicks the camera every few minutes and must be hauled down to earth to change the direction in which the camera points. The penalty of radio control rigs is weight, which requires higher winds to do photography. So in addition to clear skies, high winds are also necessary, which will limit opportunities for photography.\nSolar sail and plasma kites.\nScientists working on one type of solar kite take pride that there will be a minimum of moving parts to control the movement of the solar kite through space and around the Earth, Moon, a comet, or another Solar System body. A collection of scientists and engineers are expanding the definition of what a kite is; the solar kite described by authors C. Jack and C. Welch has the inertia of the kite's mass providing resistance against photonic flow.
Also, controlling the kite to alter its acceleration sets up a kiting scenario: causing the kite to deflect away from the pull of gravity to keep it flying on its intended path supports the inclusion of the solar sail as a kite in photonic flow. The kite is fed start data; it tracks the stars and operates three elements to control its attitude and effect the deflections that result in the flight path desired by the ground-directing kite operators. The position of the payload is changed to alter the relative positions of the kite's center of pressure and center of mass; this is done in part by piezoelectric actuators. Also, the struts that hold the centered payload are differentially heated; this causes one strut to become longer than the cooler struts, thereby changing the center of mass relative to the center of pressure of the kite. Further, to cause an attitude change, tiny photo thrusters (heated wire) tweak the attitude of the kite; these thrusters do not propel the kite, but are used only to change the attitude of the kite's sail. These mechanisms aim to give authoritative control at minimum power use for giving direction to the kite.
Working solar kite groups are considering at least seventeen means of control of the solar kite/solar sail.", "Automation-Control": 0.6970364451, "Qwen2": "Yes"} {"id": "2542615", "revid": "7903804", "url": "https://en.wikipedia.org/wiki?curid=2542615", "title": "Krylov subspace", "text": "In linear algebra, the order-\"r\" Krylov subspace generated by an \"n\"-by-\"n\" matrix \"A\" and a vector \"b\" of dimension \"n\" is the linear subspace spanned by the images of \"b\" under the first \"r\" powers of \"A\" (starting from formula_1), that is,\nBackground.\nThe concept is named after Russian applied mathematician and naval engineer Alexei Krylov, who published a paper about it in 1931.\nUse.\nKrylov subspaces are used in algorithms for finding approximate solutions to high-dimensional linear algebra problems. Many linear dynamical system tests in control theory, especially those related to controllability and observability, involve checking the rank of the Krylov subspace. These tests are equivalent to finding the span of the Gramians associated with the system/output maps so the uncontrollable and unobservable subspaces are simply the orthogonal complement to the Krylov subspace.\nModern iterative methods such as Arnoldi iteration can be used for finding one (or a few) eigenvalues of large sparse matrices or solving large systems of linear equations. They try to avoid matrix-matrix operations, but rather multiply vectors by the matrix and work with the resulting vectors. Starting with a vector formula_13, one computes formula_23, then one multiplies that vector by formula_12 to find formula_25 and so on. All algorithms that work this way are referred to as Krylov subspace methods; they are among the most successful methods currently available in numerical linear algebra. 
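This iterative construction (multiply the current vector by the matrix, orthogonalize against the basis built so far, repeat) can be sketched with a minimal Arnoldi iteration; the matrix, starting vector, and subspace order below are illustrative:

```python
import numpy as np

def arnoldi(A, b, r):
    """Build an orthonormal basis Q of the order-r Krylov subspace
    span{b, Ab, ..., A^(r-1) b} by modified Gram-Schmidt orthogonalization."""
    n = len(b)
    Q = np.zeros((n, r))
    Q[:, 0] = b / np.linalg.norm(b)
    for j in range(r - 1):
        v = A @ Q[:, j]                 # one matrix-vector product per step
        for i in range(j + 1):          # orthogonalize against earlier basis vectors
            v -= (Q[:, i] @ v) * Q[:, i]
        nv = np.linalg.norm(v)
        if nv < 1e-12:                  # subspace became invariant: stop early
            return Q[:, :j + 1]
        Q[:, j + 1] = v / nv
    return Q

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50))       # illustrative large-ish matrix
b = rng.standard_normal(50)
Q = arnoldi(A, b, 10)                   # orthonormal basis of the order-10 Krylov subspace
```

Each loop pass costs only one matrix-vector product, which is why these methods suit large sparse or matrix-free problems.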
These methods can be used in situations where there is an algorithm to compute the matrix-vector multiplication without there being an explicit representation of formula_12, giving rise to Matrix-free methods.\nIssues.\nBecause the vectors usually soon become almost linearly dependent due to the properties of power iteration, methods relying on Krylov subspace frequently involve some orthogonalization scheme, such as Lanczos iteration for Hermitian matrices or Arnoldi iteration for more general matrices.\nExisting methods.\nThe best known Krylov subspace methods are the Conjugate gradient, IDR(s) (Induced dimension reduction), GMRES (generalized minimum residual), BiCGSTAB (biconjugate gradient stabilized), QMR (quasi minimal residual), TFQMR (transpose-free QMR) and MINRES (minimal residual method).", "Automation-Control": 0.9244790673, "Qwen2": "Yes"} {"id": "2544318", "revid": "42405773", "url": "https://en.wikipedia.org/wiki?curid=2544318", "title": "Coloured Petri net", "text": "Coloured Petri nets are a backward compatible extension of the mathematical concept of Petri nets.\nColoured Petri nets preserve useful properties of Petri nets and at the same time extend the initial formalism to allow the distinction between tokens.\nColoured Petri nets allow tokens to have a data value attached to them. This attached data value is called the token color. Although the color can be of arbitrarily complex type, places in coloured Petri nets usually contain tokens of one type. This type is called the color set of the place.\nDefinition 1. 
A \"net\" is a tuple \"N\" = (\"P\", \"T\", \"A\", Σ, \"C\", \"N\", \"E\", \"G\", \"I\" ) where:\nIn coloured Petri nets, the sets of places, transitions and arcs are pairwise disjoint: \"P\" ∩ \"T\" = \"P\" ∩ \"A\" = \"T\" ∩ \"A\" = ∅\nUse of the node function and the arc expression function allows multiple arcs to connect the same pair of nodes with different arc expressions.\nA well-known program for working with coloured Petri nets is cpntools.", "Automation-Control": 0.9869635105, "Qwen2": "Yes"} {"id": "15516517", "revid": "28032115", "url": "https://en.wikipedia.org/wiki?curid=15516517", "title": "Application Level Events", "text": "Application Level Events (ALE) is a standard created by EPCglobal, an organization of industry leaders devoted to the development of standards for the Electronic Product Code (EPC) and Radio-frequency identification (RFID) technologies and standards. The ALE specification is a software specification indicating required functionality and behavior, as well as a common API expressed through XML Schema Definition (XSD) and Web Services Description Language (WSDL).", "Automation-Control": 0.9504310489, "Qwen2": "Yes"} {"id": "66097812", "revid": "6972236", "url": "https://en.wikipedia.org/wiki?curid=66097812", "title": "Maria Pia Fanti", "text": "Maria Pia Fanti (born 21 February 1957) is an Italian control theorist known for her research on topics that include discrete event dynamic systems, Petri nets, consensus, fault detection and isolation, agile manufacturing, and road traffic control. She is a professor in the Department of Electrical and Information Engineering at the Polytechnic University of Bari, where she heads the Laboratory for Control and Automation.\nEducation and career.\nFanti studied electronic engineering at the University of Pisa, and earned a laurea there in 1983.
She has been a full professor at the Polytechnic University of Bari since 2012.\nRecognition.\nFanti was named an IEEE Fellow in 2017 \"for contributions to modeling and control of discrete event systems\".", "Automation-Control": 0.9730104804, "Qwen2": "Yes"} {"id": "66115813", "revid": "35936988", "url": "https://en.wikipedia.org/wiki?curid=66115813", "title": "Kirsten Morris", "text": "Kirsten Anna Morris (born 1960) is a Canadian applied mathematician specializing in control theory, including work on flexible structures, smart materials, hysteresis, and infinite-dimensional optimization. She is a professor at the University of Waterloo, the former chair of the Society for Industrial and Applied Mathematics Activity Group on Control and Systems, the author of two books on control theory, and an IEEE Fellow.\nEducation and career.\nMorris was motivated to study mathematical economics at Queen's University at Kingston by a job doing econometrics at a bank, but lost interest in the economic applications of mathematics after a year, instead switching into a program in mathematics and engineering, which she finished in 1982. She became interested in control theory while studying for a master's degree at the University of Waterloo. After completing the degree in 1984, she continued at Waterloo as a doctoral student, and earned her Ph.D. there in 1989. Her dissertation, \"Finite-Dimensional Control of Infinite-Dimensional Systems\", was supervised by Mathukumalli Vidyasagar.\nAfter a year as a staff scientist at the NASA Langley Research Center, she returned to Waterloo as an assistant professor in the Department of Applied Mathematics in 1990. She became a full professor there in 2003, and also holds a cross-appointment in the Department of Mechanical and Mechatronics Engineering. 
She chaired the Society for Industrial and Applied Mathematics Activity Group on Control and Systems from 2018 to 2019, and has held leadership positions in the IEEE Control Systems Society and the International Federation of Automatic Control.\nBooks.\nMorris is the author of the books \"Introduction to Feedback Control\" (Harcourt-Brace, 2001) and \"Controller Design for Distributed Parameter Systems\" (Springer, 2020). She is the editor of \"Control of Flexible Structures: Papers from the Workshop on Problems in Sensing, Identification and Control of Flexible Structures held in Waterloo, Ontario, June 1992\" (American Mathematical Society, 1993).\nRecognition.\nIn 2020, Morris was named an IEEE Fellow, affiliated with the IEEE Control Systems Society, \"for contributions to control and estimator design for infinite-dimensional systems\". She was named a SIAM Fellow in the 2021 class of fellows, \"for contributions to modeling, approximation, and control design for distributed parameter systems\". She is also a Fellow of the International Federation of Automatic Control.", "Automation-Control": 0.8081744909, "Qwen2": "Yes"} {"id": "24933015", "revid": "20483999", "url": "https://en.wikipedia.org/wiki?curid=24933015", "title": "Margin-infused relaxed algorithm", "text": "Margin-infused relaxed algorithm (MIRA) is a machine learning algorithm, an online algorithm for multiclass classification problems. It is designed to learn a set of parameters (vector or matrix) by processing all the given training examples one-by-one and updating the parameters according to each training example, so that the current training example is classified correctly with a margin against incorrect classifications at least as large as their loss. The change of the parameters is kept as small as possible.\nA two-class version called binary MIRA simplifies the algorithm by not requiring the solution of a quadratic programming problem (see below). 
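Because the single-example constraint admits a closed-form clipped step, binary MIRA needs no quadratic programming solver. A minimal sketch of this style of update follows; the margin target of 1, the aggressiveness cap C, and the toy data are illustrative assumptions, not the exact formulation:

```python
import numpy as np

def binary_mira_update(w, x, y, C=1.0):
    """One binary MIRA-style step: make the smallest change to w (capped by C)
    so that example (x, y), with y in {-1, +1}, is classified with a margin."""
    loss = max(0.0, 1.0 - y * (w @ x))    # hinge loss of the current weights
    if loss == 0.0:
        return w                          # already correct with margin: no change
    tau = min(C, loss / (x @ x))          # closed-form step size (no QP needed)
    return w + tau * y * x

def train(examples, n_features, epochs=10, C=1.0):
    """Online training: process the examples one by one, updating after each."""
    w = np.zeros(n_features)
    for _ in range(epochs):
        for x, y in examples:
            w = binary_mira_update(w, x, y, C)
    return w

# Tiny linearly separable toy problem (labels in {-1, +1})
data = [(np.array([2.0, 1.0]), +1),
        (np.array([-1.0, -2.0]), -1),
        (np.array([1.5, 2.0]), +1),
        (np.array([-2.0, -0.5]), -1)]
w = train(data, n_features=2)
```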
When used in a one-vs-all configuration, binary MIRA can be extended to a multiclass learner that approximates full MIRA, but may be faster to train.\nThe flow of the algorithm looks as follows:\n Input: Training examples formula_1\n Output: Set of parameters formula_2\n formula_3 ← 0, formula_4 ← 0\n for formula_5 ← 1 to formula_6\n for formula_7 ← 1 to formula_8\n formula_9 ← update formula_10 according to formula_11\n formula_3 ← formula_13\n end for\n end for\n return formula_14\nThe update step is then formalized as a quadratic programming problem: Find formula_15, so that formula_16, i.e. the score of the current correct training output formula_17 must be greater than the score of any other possible output formula_18 by at least the loss (number of errors) of that formula_18 in comparison to formula_17.", "Automation-Control": 0.6762406826, "Qwen2": "Yes"} {"id": "24952021", "revid": "10611664", "url": "https://en.wikipedia.org/wiki?curid=24952021", "title": "Tensor product model transformation", "text": "In mathematics, the tensor product (TP) model transformation was proposed by Baranyi and Yam as a key concept for the higher-order singular value decomposition of functions. It transforms a function (which can be given via closed formulas or neural networks, fuzzy logic, etc.) into TP function form if such a transformation is possible. If an exact transformation is not possible, then the method determines a TP function that is an approximation of the given function. Hence, the TP model transformation can provide a trade-off between approximation accuracy and complexity.\nA free MATLAB implementation of the TP model transformation can be downloaded at or an old version of the toolbox is available at MATLAB Central .
A key underpinning of the transformation is the higher-order singular value decomposition.\nBesides being a transformation of functions, the TP model transformation is also a new concept in qLPV based control which plays a central role in bridging identification and polytopic systems theories. The TP model transformation is uniquely effective in manipulating the convex hull of polytopic forms, and, as a result, has shown that convex hull manipulation is a necessary and crucial step in achieving optimal solutions and decreasing conservativeness in modern LMI based control theory. Thus, although it is a transformation in a mathematical sense, it has established a conceptually new direction in control theory and has laid the ground for further new approaches towards optimality. Further details on the control theoretical aspects of the TP model transformation can be found here: TP model transformation in control theory.\nThe TP model transformation motivated the definition of the \"HOSVD canonical form of TP functions\", on which further information can be found here. It has been proved that the TP model transformation is capable of numerically reconstructing this HOSVD based canonical form. Thus, the TP model transformation can be viewed as a numerical method to compute the HOSVD of functions, which provides exact results if the given function has a TP function structure and approximate results otherwise.\nThe TP model transformation has recently been extended in order to derive various types of convex TP functions and to manipulate them.
This feature has led to new optimization approaches in qLPV system analysis and design, as described at TP model transformation in control theory.\nDefinitions.\nthat is, using compact tensor notation (using the tensor product operation formula_4 of ):\nwhere core tensor formula_6 is constructed from formula_7, and row vector formula_8 contains continuous univariate weighting functions formula_9. The function formula_10 is the formula_11-th weighting function defined on the formula_12-th dimension, and formula_13 is the formula_12-th element of vector formula_15. Finite element means that formula_16 is bounded for all formula_17. For qLPV modelling and control applications, a higher-structured TP function is referred to as a TP model.\nHere formula_19 is a tensor as formula_20, thus the size of the core tensor is formula_21. The product operator formula_22 has the same role as formula_23, but expresses the fact that the tensor product is applied to the formula_24 sized tensor elements of the core tensor formula_25. Vector formula_26 is an element of the closed hypercube formula_27.\nThis means that formula_30 is inside the convex hull defined by the core tensor for all formula_31.\nnamely it generates the core tensor formula_35 and the weighting functions formula_36 for all formula_37. Its free MATLAB implementation is downloadable at or at MATLAB Central .\nIf the given formula_38 does not have TP structure (i.e. it is not in the class of TP models), then the TP model transformation determines its approximation:\nwhere a trade-off is offered by the TP model transformation between complexity (the number of components in the core tensor or the number of weighting functions) and the approximation accuracy. The TP model can be generated according to various constraints. Typical TP models generated by the TP model transformation are:\nReferences.\nBaranyi, P. (2018). Extension of the Multi-TP Model Transformation to Functions with Different Numbers of Variables.
Complexity, 2018.", "Automation-Control": 0.9786996245, "Qwen2": "Yes"} {"id": "10799089", "revid": "28398017", "url": "https://en.wikipedia.org/wiki?curid=10799089", "title": "Temper mill", "text": "A temper mill is a steel sheet or steel plate processing line composed of a horizontal pass cold rolling mill stand, entry and exit conveyor tables and upstream and downstream equipment depending on the design and nature of the processing system.\nThe primary purpose of a temper mill is to improve the surface finish on steel products.\nComponents.\nA typical type of temper mill installation includes entry equipment for staging and accepting hot rolled coils of steel which have been hot wound at the end of a hot strip mill or hot rolled plate mill. Also included in a typical temper mill installation are pinch rolls, a leveler (sometimes two levelers), a shear for cutting the finished product to pre-determined lengths, and a stacker for accumulating cut lengths of product.\nSometimes a temper mill installation includes a re-coil line where the finished product is a coil instead of bundles of cut lengths of product. Maximum product flexibility can be attained if the installation is arranged to produce both coils and bundles of cut-to-length product.\nThe heart of the temper mill is the cold rolling mill stand which produces the temper pass. It will include electric powered drive motors and speed reduction gearing suited to the process desired. The design of the rolling mill can be a 2-high or 4-high (even 6-high in some cases). The mill stand can be work roll driven or back up roll driven. The mill can be designed with hydraulic work roll bending or back up roll bending. Installations typically have a single rolling mill stand, but may have two.
Pinch rolls provide back tension for the pay off reel in the entry section and entry and exit tension for the temper pass.\nFunction.\nThe process goal is physical property enhancement through cold forming of the steel product in the bite of the work rolls. The physical properties that are enhanced by the temper pass due to elongation of the product include:\nTypical elongation produced in the product is 0.5% to 2%. Product dimensions vary. Thicknesses include typical sheet metal gauges up to 1.00\" thick plate. Widths vary from 36\" to 125\".\nThe finish of the rolled product is controlled by using rolls having a variety of surface finishes designed to impart the desired finish to the product. Roll finishes range from ground and polished rolls to impart a bright finish, to shot-blasted or electric-discharged textured rolls that produce a dull, velvety finish on the steel surface.\nTypical auxiliary equipment includes PLC based controls, overhead traveling cranes, roll changing equipment, roll grinding equipment, hydraulic power unit(s), bundle lifting devices, Coil handling devices, etc.", "Automation-Control": 0.7038122416, "Qwen2": "Yes"} {"id": "172777", "revid": "23646674", "url": "https://en.wikipedia.org/wiki?curid=172777", "title": "Perceptron", "text": "In machine learning, the perceptron (or McCulloch-Pitts neuron) is an algorithm for supervised learning of binary classifiers. A binary classifier is a function which can decide whether or not an input, represented by a vector of numbers, belongs to some specific class. It is a type of linear classifier, i.e. a classification algorithm that makes its predictions based on a linear predictor function combining a set of weights with the feature vector.\nHistory.\nThe perceptron was invented in 1943 by Warren McCulloch and Walter Pitts. 
The first implementation was a machine built in 1958 at the Cornell Aeronautical Laboratory by Frank Rosenblatt, funded by the United States Office of Naval Research.\nThe perceptron was intended to be a machine, rather than a program, and while its first implementation was in software for the IBM 704, it was subsequently implemented in custom-built hardware as the \"Mark 1 perceptron\". This machine was designed for image recognition: it had an array of 400 photocells, randomly connected to the \"neurons\". Weights were encoded in potentiometers, and weight updates during learning were performed by electric motors.\nIn a 1958 press conference organized by the US Navy, Rosenblatt made statements about the perceptron that caused a heated controversy among the fledgling AI community; based on Rosenblatt's statements, \"The New York Times\" reported the perceptron to be \"the embryo of an electronic computer that [the Navy] expects will be able to walk, talk, see, write, reproduce itself and be conscious of its existence.\"\nAlthough the perceptron initially seemed promising, it was quickly proved that perceptrons could not be trained to recognise many classes of patterns. This caused the field of neural network research to stagnate for many years, before it was recognised that a feedforward neural network with two or more layers (also called a multilayer perceptron) had greater processing power than perceptrons with one layer (also called a single-layer perceptron).\nSingle-layer perceptrons are only capable of learning linearly separable patterns. For a classification task with some step activation function, a single node will have a single line dividing the data points forming the patterns. More nodes can create more dividing lines, but those lines must somehow be combined to form more complex classifications. 
A second layer of perceptrons, or even linear nodes, is sufficient to solve many otherwise non-separable problems.\nIn 1969, a famous book entitled \"Perceptrons\" by Marvin Minsky and Seymour Papert showed that it was impossible for these classes of network to learn an XOR function. It is often believed (incorrectly) that they also conjectured that a similar result would hold for a multi-layer perceptron network. However, this is not true, as both Minsky and Papert already knew that multi-layer perceptrons were capable of producing an XOR function. (See the page on \"Perceptrons (book)\" for more information.) Nevertheless, the often-miscited Minsky/Papert text caused a significant decline in interest and funding of neural network research. It took ten more years until neural network research experienced a resurgence in the 1980s. This text was reprinted in 1987 as \"Perceptrons - Expanded Edition\" where some errors in the original text are shown and corrected.\nA 2022 article states that the Mark 1 Perceptron was \"part of a previously secret four-year NPIC [the US' National Photographic Interpretation Center] effort from 1963 through 1966 to develop this algorithm into a useful tool for photo-interpreters\".\nThe kernel perceptron algorithm was already introduced in 1964 by Aizerman et al. Margin bound guarantees were given for the Perceptron algorithm in the general non-separable case first by Freund and Schapire (1998), and more recently by Mohri and Rostamizadeh (2013), who extended previous results and gave new L1 bounds.\nThe perceptron is a simplified model of a biological neuron.
While the complexity of biological neuron models is often required to fully understand neural behavior, research suggests a perceptron-like linear model can produce some behavior seen in real neurons.\nDefinition.\nIn the modern sense, the perceptron is an algorithm for learning a binary classifier called a threshold function: a function that maps its input formula_1 (a real-valued vector) to an output value formula_2 (a single binary value):\nwhere formula_4 is a vector of real-valued weights, formula_5 is the dot product formula_6, where m is the number of inputs to the perceptron, and b is the \"bias\". The bias shifts the decision boundary away from the origin and does not depend on any input value.\nThe value of formula_2 (0 or 1) is used to classify formula_1 as either a positive or a negative instance, in the case of a binary classification problem. If b is negative, then the weighted combination of inputs must produce a positive value greater than formula_9 in order to push the classifier neuron over the 0 threshold. Spatially, the bias alters the position (though not the orientation) of the decision boundary. The perceptron learning algorithm does not terminate if the learning set is not linearly separable. If the vectors are not linearly separable, learning will never reach a point where all vectors are classified properly. The most famous example of the perceptron's inability to solve problems with linearly nonseparable vectors is the Boolean exclusive-or problem. The solution spaces of decision boundaries for all binary functions and learning behaviors are studied in the reference.\nRosenblatt described the details of the perceptron in a 1958 paper. He organized a perceptron from three kinds of cells (\"units\"): AI, AII, R, which stand for \"projection\", \"association\" and \"response\".\nIn the context of neural networks, a perceptron is an artificial neuron using the Heaviside step function as the activation function.
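The threshold function just defined can be written out directly. The following minimal sketch (our own variable names, with NumPy assumed) hard-codes weights that make the unit compute logical AND:

```python
import numpy as np

def heaviside_predict(w, b, x):
    """Threshold unit: output 1 if w.x + b > 0, else 0 (Heaviside step)."""
    return 1 if np.dot(w, x) + b > 0 else 0

# Hand-picked weights and bias implementing logical AND of two inputs:
w, b = np.array([1.0, 1.0]), -1.5
outputs = [heaviside_predict(w, b, np.array(x))
           for x in [(0, 0), (0, 1), (1, 0), (1, 1)]]
```

No such fixed-weight unit exists for XOR, which is exactly the non-separable counterexample mentioned above.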
The perceptron algorithm is also termed the single-layer perceptron, to distinguish it from a multilayer perceptron, which is a misnomer for a more complicated neural network. As a linear classifier, the single-layer perceptron is the simplest feedforward neural network.\nLearning algorithm.\nBelow is an example of a learning algorithm for a single-layer perceptron. For multilayer perceptrons, where a hidden layer exists, more sophisticated algorithms such as backpropagation must be used. If the activation function or the underlying process being modeled by the perceptron is nonlinear, alternative learning algorithms such as the delta rule can be used as long as the activation function is differentiable. Nonetheless, the learning algorithm described in the steps below will often work, even for multilayer perceptrons with nonlinear activation functions.\nWhen multiple perceptrons are combined in an artificial neural network, each output neuron operates independently of all the others; thus, learning each output can be considered in isolation.\nDefinitions.\nWe first define some variables:\nWe show the values of the features as follows:\nTo represent the weights: \nTo show the time-dependence of formula_4, we use:\nSteps.\nThe algorithm updates the weights after every training sample in step 2b.\nConvergence.\nThe perceptron is a linear classifier, therefore it will never get to the state with all the input vectors classified correctly if the training set is not linearly separable, i.e. if the positive examples cannot be separated from the negative examples by a hyperplane. In this case, no \"approximate\" solution will be gradually approached under the standard learning algorithm, but instead, learning will fail completely. Hence, if linear separability of the training set is not known a priori, one of the training variants below should be used.\nIf the training set \"is\" linearly separable, then the perceptron is guaranteed to converge. 
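The per-sample update of step 2b can be sketched as follows; this is an illustrative NumPy implementation under our own naming (the learning rate, epoch count, and the toy OR data set are assumptions, not part of the original description):

```python
import numpy as np

def train_perceptron(samples, labels, epochs=10, lr=1.0):
    """Single-layer perceptron learning: after each training sample, add
    lr * (target - prediction) * input to the weights.  The bias is handled
    as an extra, always-1 input."""
    x = np.hstack([samples, np.ones((len(samples), 1))])  # append bias input
    w = np.zeros(x.shape[1])
    for _ in range(epochs):
        for xi, yi in zip(x, labels):
            pred = 1 if np.dot(w, xi) > 0 else 0
            w += lr * (yi - pred) * xi   # updates only on mistakes
    return w

# A linearly separable training set: logical OR
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 1])
w = train_perceptron(X, y)
```

On this linearly separable set the loop stops changing the weights once every sample is classified correctly.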
Furthermore, there is an upper bound on the number of times the perceptron will adjust its weights during the training.\nSuppose that the input vectors from the two classes can be separated by a hyperplane with a margin formula_31, i.e. there exists a weight vector formula_32 and a bias term b such that formula_33 for all formula_19 with formula_35 and formula_36 for all formula_19 with formula_38, where formula_39 is the desired output value of the perceptron for input formula_19. Also, let R denote the maximum norm of an input vector. Novikoff (1962) proved that in this case the perceptron algorithm converges after making formula_41 updates. The idea of the proof is that the weight vector is always adjusted by a bounded amount in a direction with which it has a negative dot product, and thus its norm can be bounded above by a quantity proportional to the square root of t, where t is the number of changes to the weight vector. However, it can also be bounded below by a quantity proportional to t, because if there exists an (unknown) satisfactory weight vector, then every change makes progress in this (unknown) direction by a positive amount that depends only on the input vector.\nWhile the perceptron algorithm is guaranteed to converge on \"some\" solution in the case of a linearly separable training set, it may still pick \"any\" solution and problems may admit many solutions of varying quality. The \"perceptron of optimal stability\", nowadays better known as the linear support-vector machine, was designed to solve this problem (Krauth and Mezard, 1987).\nVariants.\nThe pocket algorithm with ratchet (Gallant, 1990) solves the stability problem of perceptron learning by keeping the best solution seen so far \"in its pocket\". The pocket algorithm then returns the solution in the pocket, rather than the last solution. It can also be used for non-separable data sets, where the aim is to find a perceptron with a small number of misclassifications.
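Novikoff's bound can be illustrated numerically. The sketch below (our own toy data; labels in {-1, +1} and no separate bias, a common simplification of the setting above) counts weight updates and compares the count with (R/γ)², where R is the largest input norm and γ is the margin of a known unit-norm separator:

```python
import numpy as np

def perceptron_mistakes(X, y, max_epochs=100):
    """Perceptron with labels in {-1, +1} and no bias term; returns the
    final weights and the total number of weight updates (mistakes)."""
    w = np.zeros(X.shape[1])
    mistakes = 0
    for _ in range(max_epochs):
        errors_this_pass = 0
        for xi, yi in zip(X, y):
            if yi * np.dot(w, xi) <= 0:   # misclassified (or on the boundary)
                w += yi * xi
                mistakes += 1
                errors_this_pass += 1
        if errors_this_pass == 0:         # converged: one full clean pass
            break
    return w, mistakes

# Toy data separated by the unit vector w* = (1, 0) with margin 1:
X = np.array([[2.0, 1.0], [1.0, -1.0], [-1.0, 0.5], [-2.0, -1.0]])
y = np.array([1, 1, -1, -1])
w, mistakes = perceptron_mistakes(X, y)
R = max(np.linalg.norm(xi) for xi in X)           # largest input norm
gamma = min(yi * xi[0] for xi, yi in zip(X, y))   # margin w.r.t. w* = (1, 0)
```

With this data the algorithm makes a single update, well under the bound of (R/γ)² = 5.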
However, these solutions appear purely stochastically and hence the pocket algorithm neither approaches them gradually in the course of learning, nor are they guaranteed to show up within a given number of learning steps.\nThe Maxover algorithm (Wendemuth, 1995) is \"robust\" in the sense that it will converge regardless of (prior) knowledge of linear separability of the data set. In the linearly separable case, it will solve the training problem – if desired, even with optimal stability (maximum margin between the classes). For non-separable data sets, it will return a solution with a small number of misclassifications. In all cases, the algorithm gradually approaches the solution in the course of learning, without memorizing previous states and without stochastic jumps. Convergence is to global optimality for separable data sets and to local optimality for non-separable data sets.\nThe Voted Perceptron (Freund and Schapire, 1999) is a variant using multiple weighted perceptrons. The algorithm starts a new perceptron every time an example is wrongly classified, initializing the weights vector with the final weights of the last perceptron. Each perceptron is also given another weight corresponding to how many examples it correctly classifies before wrongly classifying one, and at the end the output is a weighted vote over all perceptrons.\nIn separable problems, perceptron training can also aim at finding the largest separating margin between the classes. The so-called perceptron of optimal stability can be determined by means of iterative training and optimization schemes, such as the Min-Over algorithm (Krauth and Mezard, 1987) or the AdaTron (Anlauf and Biehl, 1989). AdaTron uses the fact that the corresponding quadratic optimization problem is convex.
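The pocket-with-ratchet idea can be sketched as follows (an illustrative implementation, not Gallant's original code; the random sampling scheme and the XOR data are our assumptions): ordinary perceptron updates are run, but a copy of the best weights seen so far is kept and returned instead of the final weights.

```python
import numpy as np

def pocket_perceptron(X, y, steps=500, seed=0):
    """Perceptron updates, but 'pocket' (remember) the weights with the
    fewest training errors seen so far and return those instead of the
    final weights."""
    Xb = np.hstack([X, np.ones((len(X), 1))])    # bias as an extra input
    w = np.zeros(Xb.shape[1])

    def n_errors(v):
        return sum((1 if xi @ v > 0 else 0) != yi for xi, yi in zip(Xb, y))

    pocket, pocket_err = w.copy(), n_errors(w)
    rng = np.random.default_rng(seed)
    for _ in range(steps):
        i = rng.integers(len(Xb))                # sample a training example
        pred = 1 if Xb[i] @ w > 0 else 0
        if pred != y[i]:
            w = w + (y[i] - pred) * Xb[i]        # ordinary perceptron update
            err = n_errors(w)
            if err < pocket_err:                 # ratchet: keep only the best
                pocket, pocket_err = w.copy(), err
    return pocket, pocket_err

# XOR is not linearly separable, so plain perceptron learning cycles forever;
# the pocket still returns the best weights encountered along the way.
X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
y = np.array([0, 1, 1, 0])
pocket, pocket_err = pocket_perceptron(X, y)
```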
The perceptron of optimal stability, together with the kernel trick, forms the conceptual foundation of the support-vector machine.\nThe formula_42-perceptron further used a pre-processing layer of fixed random weights, with thresholded output units. This enabled the perceptron to classify analogue patterns, by projecting them into a binary space. In fact, for a projection space of sufficiently high dimension, patterns can become linearly separable.\nAnother way to solve nonlinear problems without using multiple layers is to use higher-order networks (sigma-pi units). In this type of network, each element in the input vector is extended with each pairwise combination of multiplied inputs (second order). This can be extended to an \"n\"-order network.\nIt should be kept in mind, however, that the best classifier is not necessarily that which classifies all the training data perfectly. Indeed, under the prior constraint that the data come from equi-variant Gaussian distributions, linear separation in the input space is optimal, and a nonlinear solution is overfitted.\nOther linear classification algorithms include Winnow, support-vector machine, and logistic regression.\nMulticlass perceptron.\nLike most other techniques for training linear classifiers, the perceptron generalizes naturally to multiclass classification. Here, the input formula_43 and the output formula_44 are drawn from arbitrary sets. A feature representation function formula_45 maps each possible input/output pair to a finite-dimensional real-valued feature vector. As before, the feature vector is multiplied by a weight vector formula_46, but now the resulting score is used to choose among many possible outputs:\nLearning again iterates over the examples, predicting an output for each, leaving the weights unchanged when the predicted output matches the target, and changing them when it does not.
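With one weight vector per class (the joint feature map formula_45 specialized to stacked per-class weights), the multiclass scheme just described can be sketched as follows; the data and names are our own illustrative choices:

```python
import numpy as np

def train_multiclass(X, y, n_classes, epochs=10):
    """Multiclass perceptron with one weight vector per class: predict by
    argmax of the per-class scores; on a mistake, add the input to the
    true class's weights and subtract it from the predicted class's."""
    W = np.zeros((n_classes, X.shape[1]))
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            pred = int(np.argmax(W @ xi))
            if pred != yi:
                W[yi] += xi
                W[pred] -= xi
    return W

# One prototype point per class (a minimal toy data set):
X = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, -1.0]])
y = np.array([0, 1, 2])
W = train_multiclass(X, y, n_classes=3)
```

When the output set has two elements and the feature map is essentially the identity on the input, this reduces to the binary perceptron described earlier.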
The update becomes:\nThis multiclass feedback formulation reduces to the original perceptron when formula_43 is a real-valued vector, formula_44 is chosen from formula_51, and formula_52.\nFor certain problems, input/output representations and features can be chosen so that formula_53 can be found efficiently even though formula_44 is chosen from a very large or even infinite set.\nSince 2002, perceptron training has become popular in the field of natural language processing for such tasks as part-of-speech tagging and syntactic parsing (Collins, 2002). It has also been applied to large-scale machine learning problems in a distributed computing setting.\nRoll-to-roll processing.\nIn the field of electronic devices, roll-to-roll processing, also known as web processing, reel-to-reel processing or R2R, is the process of creating electronic devices on a roll of flexible plastic, metal foil, or flexible glass. In other fields predating this use, it can refer to any process of applying coating, printing, or performing other processes starting with a roll of a flexible material and re-reeling after the process to create an output roll. These processes, and others such as sheeting, can be grouped together under the general term converting. When the rolls of material have been coated, laminated or printed they can be subsequently slit to their finished size on a slitter rewinder.\nIn electronic devices.\nLarge circuits made with thin-film transistors and other devices can be patterned onto these large substrates, which can be up to a few meters wide and long. Some of the devices can be patterned directly, much like an inkjet printer deposits ink.
For most semiconductors, however, the devices must be patterned using photolithography techniques.\nRoll-to-roll processing of large-area electronic devices reduces manufacturing cost. The most notable example is solar cells, which are still prohibitively expensive for most markets due to the high cost per unit area of traditional bulk (mono- or polycrystalline) silicon manufacturing. Other applications could arise which take advantage of the flexible nature of the substrates, such as electronics embedded into clothing, large-area flexible displays, and roll-up portable displays.\nThin-film cells.\nA crucial issue for a roll-to-roll thin-film cell production system is the deposition rate of the microcrystalline layer, and this can be tackled using four approaches:\nIn electrochemical devices.\nRoll-to-roll processing has been used in the manufacture of electrochemical devices such as batteries, supercapacitors, fuel cells, and water electrolyzers. Here, roll-to-roll processing is used for electrode manufacturing and is the key to reducing manufacturing cost through stable production of electrodes on various film substrates such as metal foils, membranes, diffusion media, and separators.\nPunching machine.\nA punching machine is a machine tool for punching and embossing flat sheet materials to produce form features needed as mechanical elements and/or to extend the static stability of a sheet section.\nCNC punching.\nPunch presses are designed for high flexibility and efficient processing of metal stampings. The main areas of application are small and medium runs. These machines are typically equipped with a linear die carrier (tool carrier) and quick-change tools. Today the method is used where the application of lasers is inefficient or technically impractical.
CNC stands for Computer Numerical Control.\nPrinciple of operation.\nAfter the workpieces have been programmed and the bar lengths entered, the control automatically calculates the maximum number of pieces to be punched (for example, 18 pieces from a bar of 6000 mm). Once the desired number of workpieces is entered, the bar is pushed toward the stop. The machine is fully automated once the production process is launched.\nThe third CNC axis always moves the cylinder exactly over the tool, which keeps the wear on the bearings and tools to a minimum. All pieces are sent down a slat conveyor and are pushed sideways onto a table. Any scrap is carried to the end of the conveyor and dropped into a bin. Different workpieces can be produced within one work cycle to optimize production.\nProgramming.\nProgramming is done on a PC equipped with appropriate software, which can be part of the machine or a connected external workstation. To generate a new program, engineering data can be imported or entered with mouse and keyboard. Thanks to a graphical, menu-driven user interface, no previous CNC programming skills are required. All the punches in a workpiece are shown on the screen, so programming mistakes are easily detected. Ideally each program is stored in one database, so programs can easily be retrieved using search and sort functions. When selecting a new piece, all the necessary tooling changes are displayed. Before transferring a program to the control unit, the software scans it for possible collisions. This eliminates most handling errors.\nTool change system.\nThe linear tool carrier (y-axis) has several stations that hold the punching tools and one cutting tool. Setup times are a crucial cost factor for flexible and efficient processing, so downtimes should be reduced to a minimum. Therefore, recent tool systems are designed for fast and convenient change of punches and dies.
They are equipped with a special plug-in system for a quick and easy change of tools.\nThere is no need to screw anything together. The punch and die plate are adjusted to each other automatically. Punches and dies can be changed rapidly, meaning less machine downtime.\nNetworking with the whole production line.\nA lot of organizational effort and interface management is saved if the CNC punch press is connected to the previous and subsequent processes. For a connection to other machines and external workstations, corporate interfaces have to be established.\nIntegration of further production steps.\nBesides punching, high-end machines can be equipped with special functions. For example:\nConfiguration design.\nConfiguration design is a kind of design where a fixed set of predefined components that can be interfaced (connected) in predefined ways is given, and an assembly (i.e. designed artifact) of components selected from this fixed set is sought that satisfies a set of requirements and obeys a set of constraints.\nThe associated \"design configuration problem\" consists of the following three constituent tasks:\nTypes of knowledge involved in configuration design include:\nHall circles.\nHall circles (also known as M-circles and N-circles) are a graphical tool in control theory used to obtain values of a closed-loop transfer function from the Nyquist plot (or the Nichols plot) of the associated open-loop transfer function. Hall circles were introduced in control theory by Albert C.
Hall in his thesis.\nConstruction.\nConsider a closed-loop linear control system with open-loop transfer function formula_1 and with a unit gain in the feedback loop. The closed-loop transfer function is given by formula_2.\nTo check the stability of \"T\"(\"s\"), it is possible to use the Nyquist stability criterion with the Nyquist plot of the open-loop transfer function \"G\"(\"s\"). Note, however, that the Nyquist plot of \"G\"(\"s\") alone does not give the actual values of \"T\"(\"s\"). To get this information from the \"G\"(\"s\")-plane, Hall proposed to construct the locus of points in the \"G\"(\"s\")-plane such that \"T\"(\"s\") has constant magnitude, and also the locus of points in the \"G\"(\"s\")-plane such that \"T\"(\"s\") has constant phase angle.\nGiven a positive real value \"M\" representing a fixed magnitude, and denoting \"G\"(\"s\") by \"z\", the points satisfying formula_3 are given by the points \"z\" in the \"G\"(\"s\")-plane such that the ratio of the distance between \"z\" and 0 to the distance between \"z\" and -1 is equal to \"M\". The points \"z\" satisfying this locus condition lie on circles of Apollonius, and this locus is known in the context of control systems as \"M-circles\".\nGiven a positive real value \"N\" representing a phase angle, the points satisfying formula_4 are given by the points \"z\" in the \"G\"(\"s\")-plane such that the difference between the angle of the segment from -1 to \"z\" and the angle of the segment from 0 to \"z\" is constant. In other words, the angle subtending the line segment between -1 and 0 must be constant. This implies that the points \"z\" satisfying this locus condition are arcs of circles, and this locus is known in the context of control systems as \"N-circles\".\nUsage.\nTo use the Hall circles, the M- and N-circles are overlaid on the Nyquist plot of the open-loop transfer function.
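The Apollonius-circle property can be checked concretely. The sketch below (NumPy, our own construction) uses the standard closed form for an M-circle: for M ≠ 1, the locus |z|/|z + 1| = M is a circle centred at -M²/(M² - 1) on the real axis with radius M/|M² - 1|:

```python
import numpy as np

# For a fixed magnitude M != 1, the locus |z| / |1 + z| = M is a circle
# of Apollonius with the following centre (on the real axis) and radius:
M = 2.0
centre = -M**2 / (M**2 - 1)
radius = M / abs(M**2 - 1)

# Sample points on the circle and verify the defining distance ratio:
theta = np.linspace(0.0, 2.0 * np.pi, 100, endpoint=False)
z = centre + radius * np.exp(1j * theta)
ratios = np.abs(z) / np.abs(1 + z)    # should all equal M
```

Every sampled point on the circle reproduces the fixed closed-loop magnitude M.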
The points of intersection between these curves give the corresponding values of the closed-loop transfer function.\nHall circles are also used with the Nichols plot; in this setting, the combined plot is known as the Nichols chart. Rather than overlaying the Hall circles directly on the Nichols plot, the points of the circles are transferred to a new coordinate system where the ordinate is given by formula_5 and the abscissa is given by formula_6. The advantage of using the Nichols chart is that adjusting the gain of the open-loop transfer function translates the Nichols plot directly up or down in the chart.\nLamp rerating.\nLamp rerating is modelling the predicted properties of a filament lamp when running the lamp at a voltage other than its specified rating, using a power-law function of voltage. The following equations can be used to estimate the new operating point. The exact values of the exponent parameters will typically vary slightly with the particular lamp design.\nIndustrial dryer.\nIndustrial dryers are used to efficiently process large quantities of bulk materials that need reduced moisture levels. Depending on the amount and the makeup of material needing to be dried, industrial dryers come in many different models constructed specifically for the type and quantity of material to be processed. The most common types of industrial dryers are fluidized bed dryers, rotary dryers, rolling bed dryers, conduction dryers, convection dryers, pharmaceutical dryers, suspension/paste dryers, toroidal bed or TORBED dryers and dispersion dryers.
Various factors are considered in determining the correct type of dryer for any given application, including the material to be dried, drying process requirements, production requirements, final product quality requirements and available facility space.\nTimer coalescing.\nTimer coalescing is a computer system energy-saving technique that reduces central processing unit (CPU) power consumption by reducing the precision of software timers used for synchronization of process wake-ups, minimizing the number of times the CPU is forced to perform the relatively power-costly operation of entering and exiting idle states.\nSemi-automation.\nSemi-automation is a process or procedure performed by the combined activities of human and machine, with both human and machine steps typically orchestrated by a centralized computer controller.\nWithin manufacturing, production processes may be fully manual, semi-automated, or fully automated, and semi-automated processes vary in their mix of manual and automated steps.\nSemi-automated manufacturing processes are typically orchestrated by a computer controller which sends messages to the worker at the time a step should be performed. The controller typically waits for feedback that the human-performed step has been completed, either via a human-machine interface or via electronic sensors distributed within the process. Controllers within semi-automated processes may either directly control machinery or send signals to machinery distributed within the process.
Centralized computer controllers within semi-automated processes orchestrate processes by instructing the worker and providing electronic communication and control to process equipment, tools, or machines, as well as performing data management to record and ensure that the process meets established process criteria.\nMany manufacturers choose not to fully automate a process, and instead implement semi-automation, because the task is too complex or the production volume is too low to justify the investment in full automation. Other processes may not be fully automated because full automation may reduce the flexibility to easily adapt the process to changing production needs.\nTipped tool.\nA tipped tool is any cutting tool in which the cutting edge consists of a separate piece of material that is brazed, welded, or clamped onto a body made of another material. In the types in which the cutter portion is an indexable part clamped by a screw, the cutters are called inserts (because they are inserted into the tool body). Tipped tools allow each part of the tool, the shank and the cutter(s), to be made of the material with the best properties for its job. Common materials for the cutters (brazed tips or clamped inserts) include cemented carbide, polycrystalline diamond, and cubic boron nitride. Tools that are commonly tipped include milling cutters (such as end mills, face mills, and fly cutters), tool bits, router bits, and saw blades (especially the metal-cutting ones).\nAdvantages and disadvantages.\nThe advantage of tipped tools is that only a small insert of the cutting material is needed to provide the cutting ability. The small size makes manufacturing of the insert easier than making a solid tool of the same material.
This also reduces cost because the tool holder can be made of a less-expensive and tougher material. In some situations a tipped tool is better than its solid counterpart because it combines the toughness of the tool holder with the hardness of the insert.\nIn other situations this is less than optimal, because the joint between the tool holder and the insert reduces rigidity. However, these tools may still be used because the overall cost savings is still greater.\nIn industry today, insert tools are perhaps slightly more common than solid tools, but solid tools are still used in many applications. Entire catalogs of solid–high-speed steel (HSS) and solid-carbide end mills, for example, play prominent parts in some areas of milling practice, including diesinking, moldmaking, and aerospace job or batch production. Most machine shops with lathes have many solid-HSS and solid-carbide tool bits as well as many insert-tipped tool bits, and most commercial operations that involve routers (such as cabinetry and furniture shops) use plenty of solid-HSS and solid-carbide router bits as well as some tipped bits. \nIndexable inserts.\nInserts are removable cutting tips, which means they are not brazed or welded to the tool body. They are usually indexable, meaning that they can be exchanged, and often also rotated or flipped, without disturbing the overall geometry of the tool (effective diameter, tool length offset, etc.). This saves time in manufacturing by allowing fresh cutting edges to be presented periodically without the need for tool grinding, setup changes, or entering of new values into a CNC program.\nWiper insert.\nA \"wiper insert\" is an insert used in a milling machine or a lathe. It is designed for finished cutting, to give a smooth surface on the surface being cut. It uses special geometry to give a good finish on the workpiece at a higher-than-normal feedrate. 
Wiper inserts generally have a larger area in contact with the workpiece, so they exert higher force on the workpiece. This makes them unsuitable for fragile workpieces.\nISO insert coding.\n\"Inserts used for turning and milling\" are often numbered according to ISO standard 1832. This standard aims to make the naming, specifying and ordering of inserts a simple, consistent and traceable process. This standard takes into account both metric and imperial systems of units, although certain elements differ for each unit system. The code consists of up to 13 symbols, with the first 12 being compulsory for inserts composed of cubic boron nitride or polycrystalline diamond and the first 7 being compulsory for all other types of composition.\nRecursive Bayesian estimation.\nIn probability theory, statistics, and machine learning, recursive Bayesian estimation, also known as a Bayes filter, is a general probabilistic approach for estimating an unknown probability density function (PDF) recursively over time using incoming measurements and a mathematical process model. The process relies heavily upon mathematical concepts and models that are theorized within a study of prior and posterior probabilities known as Bayesian statistics.\nIn robotics.\nA Bayes filter is an algorithm used in computer science for calculating the probabilities of multiple beliefs to allow a robot to infer its position and orientation. Essentially, Bayes filters allow robots to continuously update their most likely position within a coordinate system, based on the most recently acquired sensor data. This is a recursive algorithm. It consists of two parts: prediction and innovation.
If the variables are normally distributed and the transitions are linear, the Bayes filter becomes equal to the Kalman filter.\nIn a simple example, a robot moving throughout a grid may have several different sensors that provide it with information about its surroundings. The robot may start out with certainty that it is at position (0,0). However, as it moves farther and farther from its original position, the robot has progressively less certainty about its position; using a Bayes filter, a probability can be assigned to the robot's belief about its current position, and that probability can be continuously updated from additional sensor information.\nModel.\nThe measurements formula_1 are the manifestations of a hidden Markov model (HMM), which means the true state formula_2 is assumed to be an unobserved Markov process. The following picture presents a Bayesian network of an HMM.\nBecause of the Markov assumption, the probability of the current true state given the immediately previous one is conditionally independent of the other earlier states.\nSimilarly, the measurement at the \"k\"-th timestep is dependent only upon the current state, so it is conditionally independent of all other states given the current state.\nUsing these assumptions, the probability distribution over all states of the HMM can be written simply as:\nHowever, when using the Kalman filter to estimate the state x, the probability distribution of interest is associated with the current states conditioned on the measurements up to the current timestep. (This is achieved by marginalising out the previous states and dividing by the probability of the measurement set.)\nThis leads to the \"predict\" and \"update\" steps of the Kalman filter written probabilistically.
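For the grid-localization example above, the predict/update cycle can be sketched as a discrete (histogram) Bayes filter; the 1-D cyclic world, the motion model, and the sensor likelihood below are our own illustrative assumptions:

```python
import numpy as np

def predict(belief):
    """Prediction step: push the belief through an assumed motion model
    (move right one cell with p=0.8, stay with p=0.1, move two cells
    with p=0.1) on a cyclic 1-D grid."""
    n = len(belief)
    return np.array([0.1 * belief[i]
                     + 0.8 * belief[(i - 1) % n]
                     + 0.1 * belief[(i - 2) % n] for i in range(n)])

def update(belief, likelihood):
    """Innovation/update step: multiply by the measurement likelihood
    and renormalize so the belief stays a probability distribution."""
    posterior = belief * likelihood
    return posterior / posterior.sum()

belief = np.full(5, 0.2)                      # uniform prior over 5 cells
sensor = np.array([0.1, 0.1, 0.8, 0.1, 0.1])  # measurement favours cell 2
belief = update(belief, sensor)               # robot senses, then moves
belief = predict(belief)
```

After sensing, the belief peaks at cell 2; after the move, the peak shifts to cell 3. With Gaussian beliefs and linear models, these two steps reduce to the Kalman filter mentioned above.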
The probability distribution associated with the predicted state is the sum (integral) of the products of the probability distribution associated with the transition from the (\"k\" - 1)-th timestep to the \"k\"-th and the probability distribution associated with the previous state, over all possible formula_6.\nThe probability distribution of the update is proportional to the product of the measurement likelihood and the predicted state.\nThe denominator\nis constant relative to formula_2, so we can always replace it with a coefficient formula_11, which can usually be ignored in practice. The numerator can be calculated and then simply normalized, since its integral must be unity.\nSequential Bayesian filtering.\nSequential Bayesian filtering is the extension of Bayesian estimation for the case when the observed value changes in time. It is a method to estimate the real value of an observed variable that evolves in time.\nThe method is named:\nThe notion of sequential Bayesian filtering is extensively used in control and robotics.\nStrict-feedback form.\nIn control theory, dynamical systems are in strict-feedback form when they can be expressed as\nwhere\nHere, \"strict feedback\" refers to the fact that the nonlinear functions formula_11 and formula_12 in the formula_13 equation only depend on states formula_14 that are \"fed back\" to that subsystem. That is, the system has a kind of lower triangular form.\nStabilization.\nSystems in strict-feedback form can be stabilized by recursive application of backstepping. That is,\nThis process is known as backstepping because it starts with the requirements on some internal subsystem for stability and progressively \"steps back\" out of the system, maintaining stability at each step.
Because\nthen the resulting system has an equilibrium at the origin (i.e., where formula_58, formula_59, formula_60, ... , formula_61, and formula_62) that is globally asymptotically stable.", "Automation-Control": 0.9832543731, "Qwen2": "Yes"} {"id": "35188921", "revid": "7903804", "url": "https://en.wikipedia.org/wiki?curid=35188921", "title": "Bellman pseudospectral method", "text": "The Bellman pseudospectral method is a pseudospectral method for optimal control based on Bellman's principle of optimality. It is part of the larger theory of pseudospectral optimal control, a term coined by Ross. The method is named after Richard E. Bellman. It was introduced by Ross et al.\nfirst as a means to solve multiscale optimal control problems, and later expanded to obtain suboptimal solutions for general optimal control problems.\nTheoretical foundations.\nThe multiscale version of the Bellman pseudospectral method is based on the spectral convergence property of the Ross–Fahroo pseudospectral methods. That is, because the Ross–Fahroo pseudospectral method converges at an exponentially fast rate, pointwise convergence to a solution is obtained at a very low number of nodes even when the solution has high-frequency components, which are then undersampled between the nodes. This aliasing phenomenon in optimal control was first discovered by Ross et al. Rather than use signal processing techniques to anti-alias the solution, Ross et al. proposed that Bellman's principle of optimality can be applied to the converged solution to extract information between the nodes. Because the Gauss–Lobatto nodes cluster at the boundary points, Ross et al. 
suggested that if the node density around the initial conditions satisfies the Nyquist–Shannon sampling theorem, then the complete solution can be recovered by solving the optimal control problem in a recursive fashion over piecewise segments known as Bellman segments.\nIn an expanded version of the method, Ross et al. proposed that the method could also be used to generate feasible solutions that were not necessarily optimal. In this version, one can apply the Bellman pseudospectral method with an even lower number of nodes, even with the knowledge that the solution may not have converged to the optimal one. In this situation, one obtains a feasible solution.\nA remarkable feature of the Bellman pseudospectral method is that it automatically determines several measures of suboptimality based on the original pseudospectral cost and the cost generated by the sum of the Bellman segments.\nComputational efficiency.\nOne of the computational advantages of the Bellman pseudospectral method is that it allows one to escape Gaussian rules in the distribution of node points. That is, in a standard pseudospectral method, the distribution of node points is Gaussian (typically Gauss-Lobatto for finite horizon and Gauss-Radau for infinite horizon). The Gaussian points are sparse in the middle of the interval (middle is defined in a shifted sense for infinite-horizon problems) and dense at the boundaries. The second-order accumulation of points near the boundaries has the effect of wasting nodes. The Bellman pseudospectral method takes advantage of the node accumulation at the initial point to anti-alias the solution and discards the remainder of the nodes. Thus the final distribution of nodes is non-Gaussian and dense while the computational method retains a sparse structure.\nApplications.\nThe Bellman pseudospectral method was first applied by Ross et al. to solve the challenging problem of very low thrust trajectory optimization. 
It has been successfully applied to the practical problem of generating very-high-accuracy solutions to a trans-Earth-injection problem: bringing a space capsule from a lunar orbit to a pin-pointed Earth-interface condition for successful reentry.\nThe Bellman pseudospectral method is most commonly used as an additional check on the optimality of a pseudospectral solution generated by the Ross–Fahroo pseudospectral methods. That is, in addition to the use of Pontryagin's minimum principle in conjunction with the solutions obtained by the Ross–Fahroo pseudospectral methods, the Bellman pseudospectral method is used as a primal-only test on the optimality of the computed solution.", "Automation-Control": 0.9988169074, "Qwen2": "Yes"} {"id": "17361422", "revid": "10951369", "url": "https://en.wikipedia.org/wiki?curid=17361422", "title": "SpaceAge Control", "text": "SpaceAge Control is a design, manufacturing, and service company focused on 3D displacement sensing and measurement.\nThe company has supplied precision displacement sensors to industries worldwide since 1969. During its history, the company created ongoing displacement sensing innovations starting with miniature and subminiature string potentiometers (1968) and 2D and 3D cable-actuated displacement sensors (1974).\nHistory.\nSpaceAge Control was established in 1968 to design, develop, and manufacture pilot protection devices in support of space-based and high-performance test aircraft programs. In 1970, the company was awarded a NASA contract to produce precision, small-format position transducers for aircraft flight control testing. The successful completion of this contract led to the development and production of a complete line of innovative, small-size position transducers.\nIn 1974, the company was tasked with producing a multi-dimensional \"swivel head\" air data probe to enhance total and static pressure accuracy at the high angles of attack associated with rotary wing aircraft. 
The resulting product, the 100510 air data boom, is used for flight test air data sensing on aircraft including STOL, VSTOL, rotary wing, business jet, military transport, and general aviation types.\nThrough the 1970s, 1980s, and 1990s, nearly all U.S., Canadian, and European aerospace companies used the company's air data products and position transducers in their research, development, and test activities. Often, these products were designed and manufactured to custom specifications.\nIn 1989, the company began its focus on unmanned aerial vehicles (UAVs) with the development and introduction of the 100400 miniature air data boom. That product's use led to the adoption of SpaceAge Control air data products on a broad range of unmanned aircraft, including aerial targets, autonomous vehicles, and experimental vehicles.\nAlso in 1989, a single auto racing team began using these position transducers to monitor throttle movement and suspension travel. This use resulted in the adoption of the products in automotive test and measurement projects including anthropomorphic dummy instrumentation, impact testing, and control verification.", "Automation-Control": 0.9745721817, "Qwen2": "Yes"} {"id": "63153366", "revid": "18872885", "url": "https://en.wikipedia.org/wiki?curid=63153366", "title": "IEC 63119", "text": "IEC 63119 is an international standard defining a protocol for information exchange for electric vehicle charging roaming services, which is currently under development. 
IEC 63119 is one of the International Electrotechnical Commission's group of standards for electric road vehicles and electric industrial trucks, and is the responsibility of Working Group 9 (WG9) of IEC Technical Committee 69 (TC69).\nStandard documents.\nIEC 63119 consists of the following parts, detailed in separate IEC 63119 standard documents:", "Automation-Control": 0.9878700376, "Qwen2": "Yes"} {"id": "47067520", "revid": "10248457", "url": "https://en.wikipedia.org/wiki?curid=47067520", "title": "Transfer stamping", "text": "Sheet metal forming in medium-high volume production environments is often completed through the use of a transfer press operating a number of dies as a complete system. Each die in the system is responsible for adding more shape to the part until the metal work piece attains its final shape. What makes transfer stamping unique is that a single press operates a number of tools; the movement of the sheet metal work piece from one operation to the next is performed by automation built either into the press or onto the dies. With each closing of the press, the entire system of tools closes, each performing its designed work on the sheet metal. Upon opening, the built-in transfer mechanism moves the workpiece from one operation to the next in the sequence. \nIn the past, these operations may have been performed using individual presses, and the workpieces may have been moved from press to press, and die to die, by hand. As automation improved, hand loading was replaced by pick-and-place automation and by robots. The transfer press is a natural extension of this practice, simplifying the operation by having all tools in a single large press and using automation which is specifically designed for the press operations.\nTransfer mechanisms.\nTri-axis transfer.\nNamed for the movement of the transfer mechanism, a tri-axis transfer mechanism's motion is defined by the three axes of movement made by the part manipulators on each press stroke. 
On the press downstroke, the automation, which will typically be holding the work piece, lowers it to the tool and retracts to leave the part on the tool. At the bottom of the stroke, the automation mechanism, still retracted, cycles backward one pitch to position itself adjacent to the next workpiece. As the press cycles upward, the part manipulators index inward to pick up the next work piece, continue upward following the press ram, then index forward to the next station. As the press reaches the top of its stroke, the process repeats. The three axes of motion are up-down, in-out, and forward-back. \nCross-bar transfer.\nWith a cross bar transfer mechanism, the in-out axis of movement is constrained by an automation bar spanning the die space. Commonly mounted with suction cups, this cross bar will pick the workpiece up from above and release the part, dropping it into place at the next station. With only two axes of motion and the automation spanning the die space, the cross bar transfer mechanism must \"dwell\" between adjacent tools between each press stroke.", "Automation-Control": 0.9989761114, "Qwen2": "Yes"} {"id": "47079000", "revid": "15951685", "url": "https://en.wikipedia.org/wiki?curid=47079000", "title": "Jet mill", "text": "A jet mill grinds materials by using a high speed jet of compressed air or inert gas to impact particles into each other. Jet mills can be designed to output particles below a certain size while continuing to mill particles above that size, resulting in a narrow size distribution of the resulting product. Particles leaving the mill can be separated from the gas stream by cyclonic separation.\nParticle size.\nA jet mill consists of a short cylinder, meaning the cylinder's height is less than its diameter. Compressed gas is forced into the mill through nozzles tangent to the cylinder wall, creating a vortex. The gas leaves the mill through a tube along the axis of the cylinder. 
Solid particles in the mill are subject to two competing forces:\nThe drag on small particles is less than that on large particles, according to the settling velocity formula derived from Stokes' law,\n\"V\" = 2(\"ρp\" − \"ρf\")\"gR\"^2/(9\"μ\"),\nwhere \"V\" is the flow settling velocity (m/s) (vertically downwards if \"ρp\" > \"ρf\", upwards if \"ρp\" < \"ρf\"), \"g\" is the gravitational acceleration (m/s2), \"ρp\" is the mass density of the particles (kg/m3), \"ρf\" is the mass density of the fluid (kg/m3), \"μ\" is the dynamic viscosity (kg/(m·s)), and \"R\" is the radius of the spherical particle (m).\nThe formula shows that the settling velocity grows with the square of the particle radius, so large particles are pulled toward the wall of the mill. Large particles will continue the comminution process until they are small enough to stay in the center of the mill, where the discharge port is located.", "Automation-Control": 0.8608970046, "Qwen2": "Yes"} {"id": "5621338", "revid": "43767367", "url": "https://en.wikipedia.org/wiki?curid=5621338", "title": "Raster passes", "text": "Raster passes are the most basic of all machining strategies for the finishing or semi-finishing of a part during computer-aided manufacturing (CAM). In raster passes machining, the milling cutter moves along curves on the cutter location surface (CL surface) obtained by intersecting the CL surface with vertical, parallel planes. 
Many CAM systems implement this strategy by sampling cutter location points on these curves, calculating intersection points of the CL surface with as many vertical lines as needed to approximate each curve to the desired accuracy.", "Automation-Control": 0.9989444613, "Qwen2": "Yes"} {"id": "871280", "revid": "28481209", "url": "https://en.wikipedia.org/wiki?curid=871280", "title": "Observability", "text": "Observability is a measure of how well internal states of a system can be inferred from knowledge of its external outputs.\nIn control theory, the observability and controllability of a linear system are mathematical duals.\nThe concept of observability was introduced by the Hungarian-American engineer Rudolf E. Kálmán for linear dynamic systems. A dynamical system designed to estimate the state of a system from measurements of the outputs is called a state observer or simply an observer for that system.\nDefinition.\nConsider a physical system modeled in state-space representation. A system is said to be observable if, for every possible evolution of state and control vectors, the current state can be estimated using only the information from outputs (physically, this generally corresponds to information obtained by sensors). In other words, one can determine the behavior of the entire system from the system's outputs. On the other hand, if the system is not observable, there are state trajectories that are not distinguishable by only measuring the outputs.\nLinear time-invariant systems.\nFor time-invariant linear systems in the state space representation, there are convenient tests to check whether a system is observable. Consider a SISO system with formula_1 state variables (see state space for details about MIMO systems) given by\nObservability matrix.\nIf and only if the column rank of the \"observability matrix\", defined as\nis equal to formula_1, then the system is observable. 
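The rank test can be checked numerically; the matrices below are hypothetical, chosen only to illustrate the construction of the observability matrix:

```python
import numpy as np

# Hypothetical LTI system x' = A x, y = C x with n = 2 states.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
C = np.array([[1.0, 0.0]])  # only the first state is measured directly

n = A.shape[0]
# Observability matrix: stack C, CA, CA^2, ..., CA^(n-1).
O = np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])

observable = np.linalg.matrix_rank(O) == n  # full rank => observable
```

Here the second row CA = [0, 1] supplies the direction missing from C, so the matrix has rank 2 and the system is observable even though only one state is measured.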
The rationale for this test is that if the formula_1 columns are linearly independent, then each of the formula_1 state variables is viewable through linear combinations of the output variables formula_8.\nRelated concepts.\nObservability index.\nThe \"observability index\" formula_9 of a linear time-invariant discrete system is the smallest natural number for which the following is satisfied: formula_10, where\nUnobservable subspace.\nThe \"unobservable subspace\" formula_12 of the linear system is the kernel of the linear map formula_13 given by formula_14, where formula_15 is the set of continuous functions from formula_16 to formula_17. formula_12 can also be written as \nSince the system is observable if and only if formula_20, the system is observable if and only if formula_12 is the zero subspace.\nThe following properties for the unobservable subspace are valid:\nDetectability.\nA slightly weaker notion than observability is \"detectability\". A system is detectable if all the unobservable states are stable.\nDetectability conditions are important in the context of sensor networks.\nLinear time-varying systems.\nConsider the continuous linear time-variant system\nSuppose that the matrices formula_27, formula_28 and formula_29 are given, as well as inputs and outputs formula_30 and formula_8 for all formula_32; then it is possible to determine formula_33 to within an additive constant vector which lies in the null space of formula_34 defined by\nwhere formula_36 is the state-transition matrix.\nIt is possible to determine a unique formula_33 if formula_34 is nonsingular. 
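As a numerical illustration (not from the article), the nonsingularity of this Gramian-type matrix can be checked for a hypothetical time-invariant pair; the state-transition matrix has a closed form here because the chosen A is nilpotent:

```python
import numpy as np

# Hypothetical LTI pair (A, C); for a time-invariant system the
# state-transition matrix is phi(t, t0) = expm(A*(t - t0)).
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
C = np.array([[1.0, 0.0]])

t0, t1, steps = 0.0, 1.0, 1000
dt = (t1 - t0) / steps

# Riemann-sum approximation of M(t0, t1) = integral of phi^T C^T C phi dt.
# Since A @ A = 0 here, expm(A*t) is exactly I + A*t.
M = np.zeros((2, 2))
for k in range(steps):
    t = t0 + k * dt
    phi = np.eye(2) + A * t
    M += phi.T @ C.T @ C @ phi * dt

# A nonsingular M means the initial state can be determined uniquely
# from the output over [t0, t1].
nonsingular = abs(np.linalg.det(M)) > 1e-12
```

For this pair the exact Gramian is [[1, 1/2], [1/2, 1/3]] with determinant 1/12, so the matrix is nonsingular and the initial state is uniquely determined.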
In fact, it is not possible to distinguish the initial state for formula_39 from that of formula_40 if formula_41 is in the null space of formula_34.\nNote that the matrix formula_43 defined as above has the following properties:\nObservability matrix generalization.\nThe system is observable in formula_51 if and only if there exists an interval formula_51 in formula_16 such that the matrix formula_34 is nonsingular.\nIf formula_55 are analytic, then the system is observable in the interval [formula_56,formula_57] if there exists formula_58 and a positive integer \"k\" such that\nwhere formula_60 and formula_61 is defined recursively as\nExample.\nConsider a system varying analytically in formula_63 and matrices formula_64. Then formula_65, and since this matrix has rank = 3, the system is observable on every nontrivial interval of formula_16.\nNonlinear systems.\nGiven the system formula_67, formula_68, where formula_69 is the state vector, formula_70 the input vector, and formula_71 the output vector, the vector fields formula_72 are assumed to be smooth.\nDefine the observation space formula_73 to be the space containing all repeated Lie derivatives; then the system is observable in formula_74 if and only if formula_75, where\nEarly criteria for observability in nonlinear dynamic systems were discovered by Griffith and Kumar, Kou, Elliot and Tarn, and Singh.\nThere also exist observability criteria for nonlinear time-varying systems.\nStatic systems and general topological spaces.\nObservability may also be characterized for steady state systems (systems typically defined in terms of algebraic equations and inequalities), or more generally, for sets in formula_77. Just as observability criteria are used to predict the behavior of Kalman filters or other observers in the dynamic system case, observability criteria for sets in formula_77 are used to predict the behavior of data reconciliation and other static estimators. 
In the nonlinear case, observability can be characterized for individual variables, and also for local estimator behavior rather than just global behavior.", "Automation-Control": 0.9999494553, "Qwen2": "Yes"} {"id": "1779503", "revid": "16416757", "url": "https://en.wikipedia.org/wiki?curid=1779503", "title": "Pneumatic tool", "text": "A pneumatic tool, air tool, air-powered tool or pneumatic-powered tool is a type of power tool, driven by compressed air supplied by an air compressor. Pneumatic tools can also be driven by compressed carbon dioxide stored in small cylinders allowing for portability.\nMost pneumatic tools convert the compressed air to work using a pneumatic motor. Compared to electric power tool equivalents, pneumatic tools are safer to run and maintain, without risk of sparks, short-circuiting or electrocution, and have a higher power to weight ratio, allowing a smaller, lighter tool to accomplish the same task. Furthermore, they are less likely to self-destruct in case the tool is jammed or overloaded.\nGeneral grade pneumatic tools with a short life span are commonly less expensive and considered “disposable tools” in tooling industries, while industrial grade pneumatic tools with long life span are more expensive. In general, pneumatic tools are cheaper than the equivalent electric-powered tools. Regular lubrication of the tools is still needed however.\nMost pneumatic tools are to be supplied with compressed air at 4 to 6 bar.\nAdvantages and disadvantages.\nPneumatic tools have many benefits which have contributed to their rise in popularity. The benefits of using compressed air to power tools are:\nThe primary disadvantage of pneumatic tools is the need for an air compressor, which can be expensive. Pneumatic tools also need to be properly maintained and oiled regularly. 
Failing to maintain tools can lead to deterioration, due to a build-up of residual oil and water.\nTechnical terms.\nPneumatic tools are rated using several metrics: Free Speed (rpm), Air Pressure (psi/bar), Air Consumption (cfm/scfm or m3/min), Horse Power (hp), and spindle size. Each individual tool has its own specific requirements which determine its compatibility with air compressor systems.\nFlow or airflow, related to air consumption in pneumatic tools, represents the quantity of compressed air that passes through a section over a unit of time. It is expressed in l/min or m3/min as the equivalent value in free air under standard reference atmosphere (SRA) conditions, for example +20 °C, 65% relative humidity, and 1013 mbar, in accordance with NF E standards.\nTypes of pneumatic tools.\nPneumatic tools come in many shapes and forms, including small and large-sized hand tools.\nThe most common types of pneumatic tools include:", "Automation-Control": 0.8273084164, "Qwen2": "Yes"} {"id": "304604", "revid": "222130", "url": "https://en.wikipedia.org/wiki?curid=304604", "title": "Mechatronics", "text": "Mechatronics engineering, also called mechatronics, is an interdisciplinary branch of engineering that focuses on the integration of mechanical, electrical and electronic engineering systems, and also includes a combination of robotics, electronics, computer science, telecommunications, systems, control, and product engineering.\nAs technology advances over time, various subfields of engineering have succeeded in both adapting and multiplying. The intention of mechatronics is to produce a design solution that unifies each of these various subfields. 
Originally, the field of mechatronics was intended to be nothing more than a combination of mechanics and electronics, hence the name being a portmanteau of the words \"mechanics\" and \"electronics\"; however, as the complexity of technical systems continued to evolve, the definition has been broadened to include more technical areas.\nThe word \"mechatronics\" originated in Japanese-English and was created by Tetsuro Mori, an engineer of Yaskawa Electric Corporation. The word \"mechatronics\" was registered as a trademark by the company in Japan with the registration number of \"46-32714\" in 1971. The company later released the right to use the word to the public, and the word began being used globally. Currently, the word is translated into many languages and is considered an essential term for advanced automated industry.\nMany people treat \"mechatronics\" as a modern buzzword synonymous with automation, robotics and electromechanical engineering.\nFrench standard NF E 01-010 gives the following definition: \"approach aiming at the synergistic integration of mechanics, electronics, control theory, and computer science within product design and manufacturing, in order to improve and/or optimize its functionality\".\nHistory.\nWith the advent of information technology in the 1980s, microprocessors were introduced into mechanical systems, improving performance significantly. 
By the 1990s, advances in computational intelligence were applied to mechatronics in ways that revolutionized the field.\nDescription.\nA mechatronics engineer unites the principles of mechanics, electrical engineering, electronics, and computing to generate a simpler, more economical and reliable system.\nEngineering cybernetics deals with the question of control engineering of mechatronic systems. It is used to control or regulate such a system (see control theory). Through collaboration, the mechatronic modules achieve the production goals and give the production scheme flexible and agile manufacturing properties. Modern production equipment consists of mechatronic modules that are integrated according to a control architecture. The best-known architectures involve hierarchy, polyarchy, heterarchy, and hybrid. The methods for achieving a technical effect are described by control algorithms, which might or might not utilize formal methods in their design. Hybrid systems important to mechatronics include production systems, synergy drives,\nexploration rovers, automotive subsystems such as anti-lock braking systems and spin-assist, and everyday equipment such as autofocus cameras, video, hard disks, CD players and phones.\nCourse structure.\nMechatronics students take courses in various fields:\nApplications.\nPhysical implementations.\nMechanical modeling calls for modeling and simulating complex physical phenomena in the scope of a multi-scale and multi-physical approach. 
This implies implementing and managing modeling and optimization methods and tools, integrated in a systemic approach.\nThe specialty is aimed at students in mechanics who want to open their minds to systems engineering and be able to integrate different physics or technologies, as well as students in mechatronics who want to increase their knowledge of optimization and multidisciplinary simulation techniques.\nThe specialty educates students in robust and/or optimized design methods for structures and many technological systems, and in the main modeling and simulation tools used in R&D. Special courses are also offered on original applications (multi-material composites, innovative transducers and actuators, integrated systems, …) to prepare students for coming breakthroughs in the domains covering materials and systems.\nFor some mechatronic systems, the main issue is no longer how to implement a control system, but how to implement actuators. Within the mechatronic field, mainly two technologies are used to produce movement/motion.\nSubdisciplines.\nMechanical.\nMechanical engineering is an important part of mechatronics engineering. It includes the study of the mechanical nature of how an object works. Mechanical elements refer to the mechanical structure, mechanism, thermo-fluid, and hydraulic aspects of a mechatronics system, and involve the study of thermodynamics, dynamics, fluid mechanics, pneumatics and hydraulics. A mechatronics engineer who works as a mechanical engineer can specialize in hydraulic and pneumatic systems, and can be found working in the automobile industry. A mechatronics engineer can also design a vehicle, since they have a strong mechanical and electronics background. Knowledge of software applications such as computer-aided design and computer-aided manufacturing is essential for designing products. 
Mechatronics covers part of the mechanical engineering syllabus that is widely applied in the automobile industry.\nMechatronic systems represent a large part of the functions of an automobile. The control loop formed by sensor—information processing—actuator—mechanical (physical) change is found in many systems. The system size can be very different. The anti-lock braking system (ABS) is a mechatronic system. The brake itself is also one. And the control loop formed by driving control (for example cruise control), the engine, the vehicle's driving speed in the real world, and speed measurement is a mechatronic system, too. The great importance of mechatronics for automotive engineering is also evident from the fact that vehicle manufacturers often have development departments with \"Mechatronics\" in their names.\nElectronics and Electricals.\nElectronics and telecommunication engineering specializes in the electronic and telecom devices of a mechatronics system. A mechatronics engineer specialized in electronics and telecommunications has knowledge of computer hardware devices. The transmission of signals is the main application of this subfield of mechatronics, and digital and analog systems also form an important part of mechatronics systems. Telecommunications engineering deals with the transmission of information across a medium.\nElectronics engineering is related to computer engineering and electrical engineering. Control engineering has a wide range of electronic applications, from the flight and propulsion systems of commercial airplanes to the cruise control present in many modern cars. VLSI design is important for creating integrated circuits. Mechatronics engineers have deep knowledge of microprocessors, microcontrollers, microchips and semiconductors. Mechatronics applied in the electronics manufacturing industry can support research and development on consumer electronic devices such as mobile phones, computers, cameras, etc. 
For mechatronics engineers it is necessary to learn to operate computer applications such as MATLAB and Simulink for designing and developing electronic products.\nMechatronics engineering is an interdisciplinary course; it includes concepts of both electrical and mechanical systems. A mechatronics engineer may engage in designing high-power transformers or radio-frequency transmitter modules.\nAvionics.\nAvionics is also considered a variant of mechatronics, as it combines several fields such as electronics and telecom with aerospace engineering. It is a subdiscipline of mechatronics engineering and aerospace engineering focusing on the electronic systems of aircraft. The word avionics is a blend of aviation and electronics. The electronic systems of aircraft include the aircraft communication addressing and reporting system, air navigation, the aircraft flight control system, aircraft collision avoidance systems, the flight recorder, weather radar and lightning detectors. These can be as simple as a searchlight for a police helicopter or as complicated as the tactical system for an airborne early warning platform.\nAdvanced Mechatronics.\nAnother variant is motion control for advanced mechatronics, presently recognized as a key technology in mechatronics. The robustness of motion control is represented as a function of stiffness and a basis for practical realization. The target of motion is parameterized by control stiffness, which can vary according to the task reference. The system robustness of motion always requires very high stiffness in the controller.\nIndustrial.\nThe industrial branch includes the design of machinery and the assembly and process lines of various manufacturing industries. This branch is somewhat similar to automation and robotics. Mechatronics engineers who work as industrial engineers design and develop the infrastructure of a manufacturing plant; they can be called the architects of machines. 
One can work as an industrial designer, designing the industrial layout and plan for setting up a manufacturing plant, or as an industrial technician, looking over the technical requirements and repairs of a particular factory.\nRobotics.\nRobotics is one of the newest emerging subfields of mechatronics. It is the study of how robots are manufactured and operated. Since 2000, this branch of mechatronics has attracted a number of aspirants. Robotics is interrelated with automation because here, too, little human intervention is required. In a large number of factories, especially automobile factories, robots are found on assembly lines, where they perform the jobs of drilling, installation and fitting. Programming skills are necessary for specialization in robotics; knowledge of a programming language such as ROBOTC is important for operating robots. An industrial robot is a prime example of a mechatronics system; it includes aspects of electronics, mechanics, and computing to do its day-to-day jobs.\nComputer.\nThe Internet of things (IoT) is the inter-networking of physical devices, embedded with electronics, software, sensors, actuators, and network connectivity, which enable these objects to collect and exchange data. IoT and mechatronics are complementary. Many of the smart components associated with the Internet of Things will be essentially mechatronic. The development of the IoT is forcing mechatronics engineers, designers, practitioners and educators to research the ways in which mechatronic systems and components are perceived, designed and manufactured. This allows them to face up to new issues such as data security, machine ethics and the human-machine interface.\nKnowledge of programming is very important. A mechatronics engineer has to program at different levels, for example PLC programming, drone programming, hardware programming, CNC programming, etc. Because of the combination with electronics engineering, software skills from the computing side are important. 
Important programming languages for a mechatronics engineer to learn are Java, Python, C++, and C.", "Automation-Control": 0.8251159787, "Qwen2": "Yes"} {"id": "2220565", "revid": "37823666", "url": "https://en.wikipedia.org/wiki?curid=2220565", "title": "Rudder ratio", "text": "Rudder ratio refers to a value that is monitored by the computerized flight control systems in modern aircraft. The ratio relates the aircraft airspeed to the rudder deflection setting that is in effect at the time. As an aircraft accelerates, the deflection of the rudder needs to be reduced proportionately within the range of the rudder pedal depression by the pilot. This automatic reduction is needed because if the rudder is fully deflected in high-speed flight, the plane will yaw sharply and violently, or swing from side to side, leading to loss of control, damage to the rudder, tail, and other structures, and potentially a crash.", "Automation-Control": 0.9975515604, "Qwen2": "Yes"} {"id": "36457556", "revid": "28481209", "url": "https://en.wikipedia.org/wiki?curid=36457556", "title": "I. Michael Ross", "text": "Isaac Michael Ross is a Distinguished Professor and Program Director of Control and Optimization at the Naval Postgraduate School in Monterey, CA. He has published a highly regarded textbook on optimal control theory and seminal papers in pseudospectral optimal control theory,\nenergy-sink theory, the optimization and deflection of near-Earth asteroids and comets,\nrobotics, attitude dynamics and control, orbital mechanics, real-time optimal control and \nunscented optimal control.
The Kang–Ross–Gong theorem, Ross' lemma, Ross' time constant, the Ross–Fahroo lemma, and the Ross–Fahroo pseudospectral method are all named after him.\nTheoretical contributions.\nAlthough Ross has made contributions to energy-sink theory, attitude dynamics and control, and planetary defense, he is best known for his work on pseudospectral optimal control. In 2001, Ross and Fahroo announced the covector mapping principle, first as a special result in pseudospectral optimal control, and later as a general result in optimal control. This principle is based on the Ross–Fahroo lemma, which shows that dualization and discretization are not necessarily commutative operations and that certain steps must be taken to promote commutation. When discretization commutes with dualization, then, under appropriate conditions, Pontryagin's minimum principle emerges as a consequence of the convergence of the discretization. \nTogether with F. Fahroo, W. Kang and Q. Gong, Ross proved a series of results on the convergence of pseudospectral discretizations of optimal control problems. Ross and his coworkers showed that the Legendre and Chebyshev pseudospectral discretizations converge to an optimal solution of a problem under the mild condition of boundedness of variations.\nSoftware contributions.\nIn 2001, Ross created DIDO, a software package for solving optimal control problems. DIDO is powered by pseudospectral methods, and Ross created a user-friendly set of objects that require no knowledge of his theory to run the software. This work was used in research on pseudospectral methods for solving optimal control problems. DIDO is used for solving optimal control problems in aerospace applications, search theory, and robotics.
Ross' constructs have been licensed to other software products, and have been used by NASA to solve flight-critical problems on the International Space Station.\nFlight contributions.\nIn 2006, NASA used DIDO to implement zero propellant maneuvering of the International Space Station. In 2007, \"SIAM News\" printed a page 1 article announcing the use of Ross' theory. This led other researchers to explore the mathematics of pseudospectral optimal control theory. DIDO is also used to maneuver the Space Station and operate various ground and flight equipment to incorporate autonomy and performance efficiency for nonlinear control systems.\nAwards and distinctions.\nIn 2010, Ross was elected a Fellow of the American Astronautical Society for \"his pioneering contributions to the theory, software and flight demonstration of pseudospectral optimal control.\" He also received (jointly with Fariba Fahroo), the AIAA Mechanics and Control of Flight Award for \"fundamentally changing the landscape of flight mechanics\". His research has made headlines in \"SIAM News\", \"IEEE Control Systems Magazine\", \"IEEE Spectrum\", and \"Space Daily\".", "Automation-Control": 0.9888228774, "Qwen2": "Yes"} {"id": "57191145", "revid": "4637213", "url": "https://en.wikipedia.org/wiki?curid=57191145", "title": "Ultrasonic welding of thermoplastics", "text": "Ultrasonic welding is a method of joining thermoplastic components by heating and subsequent melting of surfaces in contact. Mechanical vibration with frequency between 10 and 70 kHz and amplitude of 10 to 250 μm is applied to joining parts. After ultrasonic energy is turned off, the parts remain in contact under pressure for some time while the melt layer cools down creating a weld.\nDifferent join designs and process controls are used in ultrasonic welding. A sharp surface feature is typically introduced to one of the parts ensuring consistency of the welding process. 
Components of ultrasonic welding systems as well as the areas of application are described in the article Ultrasonic welding.\nAdvantages and disadvantages.\nThe following advantages are typically attributed to ultrasonic welding:\nThe following are the disadvantages of ultrasonic welding:\nProcess description.\nPlunge and continuous welding are the two welding modes for thermoplastics.\nPlunge ultrasonic welding.\nIn plunge ultrasonic welding the parts are first secured in a fixture. Ultrasonic energy is then applied to create a weld. After the weld has cooled down, the parts are removed from the fixture.\nAt the start of the process, an actuator is moved toward a part. This stage is called \"downstroke.\" Ultrasonic energy can be applied during this phase depending on the size of the horn used: the larger the horn, the harder it is to set into vibration, so applying ultrasonic energy during the downstroke becomes necessary (\"pre-triggering\"). In other cases, the vibration is applied after the horn has come into contact with the part and some pressure has built up. The force then continues to increase linearly up to a predefined value.\nThe power rises while ultrasonic energy is being applied in order to sustain the stack vibration. After some period of time a steady state, which indicates sufficient melting at the interface, is reached. At this point, ultrasonic energy is turned off. In production, this often happens before the steady state is reached, since the desired strength of the weld joint for a particular application is typically achieved by then. The tooling stays on the part for a period of time called \"hold time.\" This allows a certain pressure (\"hold force\") to be applied to the part. Hold time typically lasts for one half of the weld time, allowing the weld to solidify.\nThe tooling is removed from the part during a phase called “up-stroke,” which takes place at the completion of the hold time.
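The plunge-weld cycle described above (downstroke with optional pre-triggering, force ramp-up, weld, hold, up-stroke) can be sketched as an ordered sequence of phases. This is only an illustrative sketch: the phase names, forces, and times below are invented for the example and do not come from any real welder's control software.

```python
# Illustrative sketch of one plunge ultrasonic weld cycle (hypothetical names/values).
from dataclasses import dataclass

@dataclass
class WeldParams:
    pre_trigger: bool     # apply ultrasound already during downstroke (large horns)
    trigger_force: float  # N: force at which ultrasound starts otherwise
    weld_time: float      # s: duration of ultrasound application
    hold_time: float      # s: typically about half of weld_time

def plunge_weld_cycle(p: WeldParams) -> list[str]:
    """Return the ordered phases of one weld cycle."""
    phases = ["downstroke"]
    if p.pre_trigger:
        # Large horns are hard to set into vibration, so pre-triggering is used.
        phases.append("ultrasound_on_during_downstroke")
    else:
        phases.append(f"ultrasound_on_at_{p.trigger_force:g}N")
    phases += ["force_ramp", "weld", "ultrasound_off", "hold", "upstroke"]
    return phases

# Hold time chosen as half the weld time, as the text suggests.
params = WeldParams(pre_trigger=False, trigger_force=50.0,
                    weld_time=0.5, hold_time=0.25)
print(plunge_weld_cycle(params))
```

The sketch only encodes the ordering of the phases; a real controller would additionally monitor force, power, and displacement in each phase.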
Some amount of plastic substrate can remain on the tooling surface after the welding process. To clean the surface, ultrasonic energy is applied as the tool is retracted from the part (\"post weld burst\").\nContinuous ultrasonic welding.\nContinuous ultrasonic welding is used for joining thin layers of material and is often employed for manufacturing hospital products such as gowns and sterile garments, among other applications.\nTwo layers of material are pulled through the space between a disk – a rotary drum (anvil) – and a horn (image). The anvil's surface carries a certain pattern; the weld is created at these asperities, and the areas between the peaks remain unbonded. The surface of the horn is typically round, which prevents undesirable seizing of material. A round horn also allows for proper force distribution at the contact interface.\nMore than two layers of material can be welded at once. The materials to be welded experience vibrations similar to those in plunge welding but shorter in duration. Hold force on the newly welded region is provided by the previously welded section that has come out of the tooling and cooled down.\nScan welding is a type of continuous ultrasonic welding in which large plates or sheets can be welded. In scan welding, a part is secured on a stationary table and the horn moves across the part creating a weld joint. A combination of a stationary horn and a mobile table can also be employed. The horn has round edges, as in continuous welding, and the ultrasonic vibrations are similar to those in plunge welding. The horn can be used to provide the hold force.\nProcess control.\nDifferent ultrasonic welding machines offer different process controls, and each application determines the level of process control required. For ordinary spot welds, a hand-held welder is sufficient.
More sophisticated equipment with computerized controls and built-in statistical process control (SPC) software may be appropriate in the medical device industry and other applications requiring narrow tolerances and high-quality welds.\nThe following modes of process control are used in ultrasonic welding:\nMost of these modes require a microprocessor-based controller, while a basic welding system is enough for time mode. Welding parameters can be monitored in real time with microprocessor-based controllers.\nThe majority of welding systems include \"time mode\", which allows the operator to specify the duration of the welding process independently of other parameters. In \"energy control mode\", the vibration of the tool continues until a preset energy level is reached. Energy mode can be used in conjunction with time mode to improve the quality of welded parts: time limits can be defined for reaching the necessary energy level, and should the actual time deviate from those limits, this would indicate a potential issue with the weld.\nIn \"collapse mode\", ultrasonic energy is applied until the parts have collapsed (moved relative to one another) to a certain height. In addition to a microprocessor-based controller, a linear encoder is used in systems with collapse control. In this mode, the final height of the weld joint can also be controlled by detecting the position of the horn. In \"peak power mode\", the vibration continues until a predefined power level is reached. The final dimensions of the weld joint can also be controlled with \"ground detect mode\".
In this case, ultrasonic energy is applied until the horn makes electrical contact with the fixture, which is positioned at the desired height.\nVarious combinations of process control modes can be employed to define an operating window and aid in quality control.\nJoint design.\nAs with other welding processes, joint design is an essential step in product development. Many factors should be considered in joint design, such as the materials to be welded, the thickness of the parts, the operating conditions of the final product, and aesthetics. A narrow contact area between the joining parts is an essential design attribute: it allows a lower energy input to generate a surface layer of molten plastic. The parts' fit-up should provide the necessary alignment without interfering with their surface features.\nEnergy directors.\n \nAn energy director is a triangular section molded onto one of the joint parts. Because the parts initially contact each other through this triangle, it carries the highest stress and is therefore the first portion to melt under application of ultrasonic energy. The purpose of energy directors is to ensure that a sufficient amount of material is melted, as the molten material fills the gap between the mating parts. Energy directors are most commonly used with amorphous polymers, but can also be used with semi-crystalline polymers.\nWhile flash is commonly produced with such a joint design, it can be covered with a flash trap. This joint feature conceals the flash, providing an aesthetic appearance. The minimum recommended wall thickness in this case is 2.03 – 2.29 mm. Since a smaller surface area is used to create the weld in a step joint design, it can have lower strength than a butt joint design.\nThe following dimensions for energy directors are typically recommended:\nA sharper apex angle provides greater weld strength and ensures a tight seal. Such a design also works well with polycarbonate and acrylic.\nThe butt joint is one of the most common weld configurations used with energy directors.
A hermetic seal may not be easily achieved in semi-crystalline polymers, as they crystallize faster when exposed to air. However, a hermetic seal can be obtained with amorphous polymers, provided the mating parts are well aligned using additional fixtures.\nA tongue-and-groove joint design does not require additional fixtures for proper alignment. The molten plastic is fully enclosed in the groove, providing an aesthetic appearance. The minimum recommended wall thickness is 3.05 – 3.12 mm.\nShear joints.\nSome applications require the plastic welds to be hermetically stable. To satisfy this requirement, shear joints are typically used instead of energy directors. To achieve proper sealing, small tolerances are necessary, a stipulation that can be difficult to satisfy with larger parts.\nWhen a shear joint is assembled, the top part contacts the bottom one along a thin edge, which is the first section to melt. The molten material then flows along the side wall of the bottom part, filling the gap between the parts. While a shear joint provides alignment for the mating parts, it often requires additional fixtures to keep the top part of the joint from deflecting outwards as it experiences pressing force from the tool. To mitigate the risk of deflection, a robust design of the top part ensuring sufficient stiffness is essential.\nShear joints can be used with all polymers and are well suited for cylindrical parts.", "Automation-Control": 0.8246959448, "Qwen2": "Yes"} {"id": "57019837", "revid": "15996738", "url": "https://en.wikipedia.org/wiki?curid=57019837", "title": "Counting point", "text": "In logistics, a counting point (CP; also known as a status point, data acquisition point, check point, or control point) is a certain spot designated for planning, controlling, and monitoring material flow items (e.g.
single parts, assembly groups, final products, bins, racks, containers, and freight carriers).\nInstallation.\nAs production and material flow become more complex, more counting points must be installed in the transport, shipping, and manufacturing process. Check points for quality control and quality assurance are particularly well suited to serve as counting points, as are data acquisition points in material handling processes. For better planning and monitoring of material flow items, it is helpful to arrange all counting points so that the requirements of an ideal Boolean interval algebra are fulfilled. Boolean intervals are half-open: a counting point always lies inside its interval at the beginning, while the end point lies outside and is the entry point of the next interval. Such an interval can represent any kind of stretch in production and material flow, e.g. an assembly line, a storage area or warehouse, or a transport route. Alternative production and transportation stretches are mapped as parallel intervals, which are logically equivalent but have their own distinct data acquisition points. When a flow item passes a certain CP, it has left the preceding interval and simultaneously entered the interval in question. This ensures that a flow object can be in only one interval at any given moment, which also holds for parallel intervals. This way of mapping the material flow structure is necessary for a consistent calculation of the lead time and complete cycle time of flow items, which is extremely important not only for material flow planning but also for production planning and manufacturing operations management in general.\nUsage.\nCounting points are used in different logistic areas such as transportation, material handling, and goods receipt and goods issue at the border of a plant, because the latter often marks a transfer of ownership.
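The half-open interval scheme described in the Installation section can be sketched in code: each counting point opens an interval that contains its own position and excludes the position of the next counting point, so every flow-item position maps to exactly one interval. The positions and their interpretation below are invented purely for illustration.

```python
# Counting points along one linear stretch (positions are illustrative, e.g.
# goods receipt, line entry, line exit, goods issue).
counting_points = [0, 10, 25, 40]

def interval_of(position: float):
    """Return the index of the unique half-open interval [cp_i, cp_{i+1})
    containing the position, or None if it lies outside the mapped stretch."""
    for i in range(len(counting_points) - 1):
        if counting_points[i] <= position < counting_points[i + 1]:
            return i
    return None

# A flow item exactly at a counting point has left the preceding interval
# and entered the next one at the same moment:
print(interval_of(9.9), interval_of(10))
```

Because the intervals are half-open, membership is unambiguous at the counting points themselves, which is exactly the property the text requires for consistent lead-time and cycle-time calculation.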
Other well-known counting points are the receiving and issuing of material items at the border of a storage area or warehouse. Counting points also play an important role in manufacturing and production scheduling and in different concepts of material requirements planning (e.g. the concept of cumulative quantities and the gross-net method). Counting points also appear in the automotive industry, where the production flow of a car is controlled, scheduled, and monitored continuously at exactly defined check points for different manufacturing departments and shops, and where the data is used for scheduling and optimization.", "Automation-Control": 0.603758812, "Qwen2": "Yes"} {"id": "56275884", "revid": "43392054", "url": "https://en.wikipedia.org/wiki?curid=56275884", "title": "Data-driven control system", "text": "Data-driven control systems are a broad family of control systems, in which the identification of the process model and/or the design of the controller are based entirely on \"experimental data\" collected from the plant.\nIn many control applications, writing a mathematical model of the plant is a hard task, requiring effort and time from the process and control engineers. This problem is overcome by \"data-driven\" methods, which fit a system model to the collected experimental data, choosing it from a specific model class. The control engineer can then exploit this model to design a proper controller for the system. However, it is still difficult to find a simple yet reliable model of a physical system that includes only those dynamics that are of interest for the control specifications. \"Direct\" data-driven methods allow a controller, belonging to a given class, to be tuned without the need for an identified model of the system.
In this way, one can also simply weight the process dynamics of interest inside the control cost function and exclude the dynamics that are not of interest.\nOverview.\nThe \"standard\" approach to control systems design is organized in two steps: \nTypical objectives of system identification are to have formula_9 as close as possible to formula_7, and to have formula_6 as small as possible. However, from an identification-for-control perspective, what really matters is the performance achieved by the controller, not the intrinsic quality of the model.\nOne way to deal with uncertainty is to design a controller that has acceptable performance with all models in formula_6, including formula_7. This is the main idea behind the robust control design procedure, which aims at building frequency-domain uncertainty descriptions of the process. However, being based on worst-case assumptions rather than on the idea of averaging out the noise, this approach typically leads to \"conservative\" uncertainty sets. Data-driven techniques instead deal with uncertainty by working on experimental data, avoiding excessive conservatism.\nIn the following, the main classifications of data-driven control systems are presented.\nIndirect and direct methods.\nMany controller design methods are available. \nThe fundamental distinction is between indirect and direct controller design methods. The former group of techniques retains the standard two-step approach, \"i.e.\" first a model is identified, then a controller is tuned based on that model. The main issue in doing so is that the controller is computed from the estimated model formula_9 (according to the certainty equivalence principle), but in practice formula_16.
To overcome this problem, the idea behind the latter group of techniques is to map the experimental data \"directly\" onto the controller, without any model being identified in between.\nIterative and noniterative methods.\nAnother important distinction is between iterative and noniterative (or one-shot) methods. In the former group, repeated iterations are needed to estimate the controller parameters: the optimization at each iteration is based on the results of the previous one, and the estimate is expected to become more and more accurate at each iteration. This approach also lends itself to on-line implementation (see below). In the latter group, the (optimal) controller parametrization is provided by a single optimization problem. This is particularly important for systems in which iterations or repetitions of data collection experiments are limited or not allowed at all (for example, due to economic considerations). In such cases, one should select a design technique capable of delivering a controller from a single data set. This approach is often implemented off-line (see below).\nOn-line and off-line methods.\nSince, in practical industrial applications, open-loop or closed-loop data are often available continuously, on-line data-driven techniques use those data to improve the quality of the identified model and/or the performance of the controller each time new information is collected on the plant.
Off-line approaches, instead, work on batches of data, which may be collected only once, or multiple times at a regular (but rather long) time interval.\nIterative feedback tuning.\nThe iterative feedback tuning (IFT) method was introduced in 1994, starting from the observation that, in identification for control, each iteration is based on the (wrong) certainty equivalence principle.\nIFT is a model-free technique for the direct iterative optimization of the parameters of a fixed-order controller; such parameters can be successively updated using information coming from standard (closed-loop) system operation.\nLet formula_17 be the desired output for the reference signal formula_18; the error between the achieved and desired response is formula_19. The control design objective can be formulated as the minimization of the objective function:\nGiven the objective function to minimize, the \"quasi-Newton method\" can be applied, i.e. a gradient-based minimization using a gradient search of the type:\nThe value formula_22 is the step size, formula_23 is an appropriate positive definite matrix and formula_24 is an approximation of the gradient; the true value of the gradient is given by the following:\nThe value of formula_26 is obtained through the following three-step methodology:\nA crucial factor for the convergence speed of the algorithm is the choice of formula_23; when formula_38 is small, a good choice is the approximation given by the Gauss–Newton direction:\nNoniterative correlation-based tuning.\nNoniterative correlation-based tuning (nCbT) is a noniterative method for data-driven tuning of a fixed-structure controller. It provides a one-shot method to directly synthesize a controller based on a single dataset.\nSuppose that formula_4 denotes an unknown LTI stable SISO plant, formula_41 a user-defined reference model and formula_42 a user-defined weighting function.
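The quasi-Newton gradient search used by IFT can be illustrated with a toy sketch. Everything here is an illustrative stand-in: the closed-loop cost is an analytic surrogate, and the gradient is obtained by finite differences, whereas in real IFT the gradient estimate comes from dedicated closed-loop experiments. The decreasing step size and the positive scaling factor play the roles of the step size and positive definite matrix in the update above.

```python
# Toy sketch of an IFT-style iterative gradient update for one controller gain.
def closed_loop_cost(theta: float) -> float:
    # Analytic surrogate for the tracking-error cost; minimized at theta = 2.
    return (theta - 2.0) ** 2

def estimated_gradient(theta: float, eps: float = 1e-4) -> float:
    # Stand-in for IFT's experiment-based gradient estimate (central difference).
    return (closed_loop_cost(theta + eps) - closed_loop_cost(theta - eps)) / (2 * eps)

theta = 0.0
for i in range(50):
    gamma = 1.0 / (i + 1)  # decreasing step size
    R = 2.0                # positive "Gauss-Newton-like" scaling of the step
    theta -= gamma * (1.0 / R) * estimated_gradient(theta)

print(theta)
```

The loop mirrors the structure of the IFT iteration (estimate gradient, scale, step, repeat) rather than its experimental details; the gain converges toward the cost minimizer.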
An LTI fixed-order controller is indicated as formula_43, where formula_44, and formula_45 is a vector of LTI basis functions. Finally, formula_46 is an ideal LTI controller of any structure, guaranteeing a closed-loop function formula_41 when applied to formula_4.\nThe goal is to minimize the following objective function:\nformula_50 is a convex approximation of the objective function obtained from a model reference problem, supposing that formula_51.\nWhen formula_4 is stable and minimum-phase, the approximated model reference problem is equivalent to the minimization of the norm of formula_53 in the scheme in figure.\nThe input signal formula_54 is supposed to be a persistently exciting input signal and formula_55 to be generated by a stable data-generation mechanism. The two signals are thus uncorrelated in an open-loop experiment; hence, the ideal error formula_56 is uncorrelated with formula_54. The control objective thus consists in finding formula_58 such that formula_54 and formula_56 are uncorrelated.\nThe vector of \"instrumental variables\" formula_61 is defined as:\nwhere formula_63 is large enough and formula_64, where formula_65 is an appropriate filter.\nThe correlation function is:\nand the optimization problem becomes:\nDenoting with formula_68 the spectrum of formula_54, it can be demonstrated that, under some assumptions, if formula_65 is selected as:\nthen, the following holds:\nStability constraint.\nThere is no guarantee that the controller formula_73 that minimizes formula_74 is stable. Instability may occur in the following cases:\nConsider a stabilizing controller formula_81 and the closed loop transfer function formula_82.\nDefine:\nCondition 1. is enforced when:\nThe model reference design with stability constraint becomes:\nA convex data-driven estimation of formula_94 can be obtained through the discrete Fourier transform. 
\nDefine the following:\nFor stable minimum phase plants, the following convex data-driven optimization problem is given:\nVirtual reference feedback tuning.\nVirtual Reference Feedback Tuning (VRFT) is a noniterative method for data-driven tuning of a fixed-structure controller. It provides a one-shot method to directly synthesize a controller based on a single dataset.\nVRFT was first proposed in and then extended to LPV systems. VRFT also builds on ideas given in as formula_97. \nThe main idea is to define a desired closed loop model formula_41 and to use its inverse dynamics to obtain a virtual reference formula_99 from the measured output signal formula_100.\nThe virtual signals are formula_101 and formula_102\nThe optimal controller is obtained from noiseless data by solving the following optimization problem:\nwhere the optimization function is given as follows:", "Automation-Control": 0.9975546002, "Qwen2": "Yes"} {"id": "2397362", "revid": "1170455257", "url": "https://en.wikipedia.org/wiki?curid=2397362", "title": "Karush–Kuhn–Tucker conditions", "text": "In mathematical optimization, the Karush–Kuhn–Tucker (KKT) conditions, also known as the Kuhn–Tucker conditions, are first derivative tests (sometimes called first-order necessary conditions) for a solution in nonlinear programming to be optimal, provided that some regularity conditions are satisfied.\nAllowing inequality constraints, the KKT approach to nonlinear programming generalizes the method of Lagrange multipliers, which allows only equality constraints. Similar to the Lagrange approach, the constrained maximization (minimization) problem is rewritten as a Lagrange function whose optimal point is a (global) saddle point, i.e. a global maximum (minimum) over the domain of the choice variables and a global minimum (maximum) over the multipliers, which is why the Karush–Kuhn–Tucker theorem is sometimes referred to as the saddle-point theorem.\nThe KKT conditions were originally named after Harold W. 
Kuhn and Albert W. Tucker, who first published the conditions in 1951. Later scholars discovered that the necessary conditions for this problem had been stated by William Karush in his master's thesis in 1939.\nNonlinear optimization problem.\nConsider the following nonlinear optimization problem in standard form:\nwhere formula_4 is the optimization variable chosen from a convex subset of formula_5, formula_6 is the objective or utility function, formula_7 are the inequality constraint functions and formula_8 are the equality constraint functions. The numbers of inequalities and equalities are denoted by formula_9 and formula_10 respectively. Corresponding to the constrained optimization problem one can form the Lagrangian function\nformula_11\nwhere\nformula_12 The Karush–Kuhn–Tucker theorem then states the following.\nSince the idea of this approach is to find a supporting hyperplane on the feasible set formula_13, the proof of the Karush–Kuhn–Tucker theorem makes use of the hyperplane separation theorem.\nThe system of equations and inequalities corresponding to the KKT conditions is usually not solved directly, except in the few special cases where a closed-form solution can be derived analytically. In general, many optimization algorithms can be interpreted as methods for numerically solving the KKT system of equations and inequalities.\nNecessary conditions.\nSuppose that the objective function formula_14 and the constraint functions formula_15 and formula_16 have subderivatives at a point formula_17. 
If formula_18 is a local optimum and the optimization problem satisfies some regularity conditions (see below), then there exist constants formula_19 and formula_20, called KKT multipliers, such that the following four groups of conditions hold:\nThe last condition is sometimes written in the equivalent form: formula_29\nIn the particular case formula_30, i.e., when there are no inequality constraints, the KKT conditions turn into the Lagrange conditions, and the KKT multipliers are called Lagrange multipliers.\nInterpretation: KKT conditions as balancing constraint-forces in state space.\nThe primal problem can be interpreted as moving a particle in the space of formula_31, and subjecting it to three kinds of force fields:\nPrimal stationarity states that the \"force\" of formula_40 is exactly balanced by a linear sum of forces formula_41 and formula_42.\nDual feasibility additionally states that all the formula_42 forces must be one-sided, pointing inwards into the feasible set for formula_31.\nDual slackness states that if formula_45, then the formula_42 force must be zero: since the particle is not on the boundary, the one-sided constraint force cannot activate.\nMatrix representation.\nThe necessary conditions can be written with Jacobian matrices of the constraint functions. Let formula_47 be defined as formula_48 and let formula_49 be defined as formula_50. Let formula_51 and formula_52. Then the necessary conditions can be written as:\nRegularity conditions (or constraint qualifications).\nOne can ask whether a minimizer point formula_18 of the original, constrained optimization problem (assuming one exists) has to satisfy the above KKT conditions. This is similar to asking under what conditions the minimizer formula_18 of a function formula_21 in an unconstrained problem has to satisfy the condition formula_64.
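As a concrete check of the four condition groups, consider minimizing f(x, y) = x^2 + y^2 subject to x + y >= 1, written as g(x, y) = 1 - x - y <= 0. The candidate point (1/2, 1/2) with multiplier mu = 1 satisfies stationarity, primal and dual feasibility, and complementary slackness. The problem and all names below are chosen purely for illustration:

```python
# Verify the KKT conditions for: minimize x^2 + y^2  s.t.  g(x, y) = 1 - x - y <= 0.
import numpy as np

def grad_f(p):  # gradient of the objective
    return np.array([2.0 * p[0], 2.0 * p[1]])

def grad_g(p):  # gradient of the inequality constraint
    return np.array([-1.0, -1.0])

def g(p):
    return 1.0 - p[0] - p[1]

p, mu = np.array([0.5, 0.5]), 1.0

stationarity = np.allclose(grad_f(p) + mu * grad_g(p), 0.0)  # grad f + mu * grad g = 0
primal_feasible = g(p) <= 1e-12                              # g(x*) <= 0
dual_feasible = mu >= 0.0                                    # mu >= 0
complementary = abs(mu * g(p)) < 1e-12                       # mu * g(x*) = 0

print(stationarity, primal_feasible, dual_feasible, complementary)
```

Here the constraint is active (g = 0), so the positive multiplier is consistent with complementary slackness; at an inactive constraint the multiplier would have to vanish.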
For the constrained case, the situation is more complicated, and one can state a variety of (increasingly complicated) \"regularity\" conditions under which a constrained minimizer also satisfies the KKT conditions. Some common examples of conditions that guarantee this are tabulated in the following, with the LICQ the most frequently used one:\nThe strict implications between these constraint qualifications can be shown.\nIn practice, weaker constraint qualifications are preferred since they apply to a broader selection of problems.\nSufficient conditions.\nIn some cases, the necessary conditions are also sufficient for optimality. In general, however, the necessary conditions are not sufficient for optimality, and additional information is required, such as the second-order sufficient conditions (SOSC). For smooth functions, SOSC involve the second derivatives, which explains their name.\nThe necessary conditions are sufficient for optimality if the objective function formula_6 of a maximization problem is a differentiable concave function, the inequality constraints formula_66 are differentiable convex functions, the equality constraints formula_67 are affine functions, and Slater's condition holds. Similarly, if the objective function formula_6 of a minimization problem is a differentiable convex function, the necessary conditions are also sufficient for optimality.\nIt was shown by Martin in 1985 that the broader class of functions in which the KKT conditions guarantee global optimality are the so-called Type 1 invex functions.\nSecond-order sufficient conditions.\nFor smooth, non-linear optimization problems, a second-order sufficient condition is given as follows.\nThe solution formula_69 found in the above section is a constrained local minimum if for the Lagrangian,\nthen,\nwhere formula_72 is a vector satisfying the following,\nwhere only those active inequality constraints formula_74 corresponding to strict complementarity (i.e. where formula_75) are applied.
The solution is a strict constrained local minimum if the inequality is also strict.\nIf formula_76, the third order Taylor expansion of the Lagrangian should be used to verify whether formula_18 is a local minimum. The minimization of formula_78 is a good counter-example; see also the Peano surface.\nEconomics.\nOften in mathematical economics the KKT approach is used in theoretical models in order to obtain qualitative results. For example, consider a firm that maximizes its sales revenue subject to a minimum profit constraint. Letting formula_79 be the quantity of output produced (to be chosen), formula_80 be sales revenue with a positive first derivative and with a zero value at zero output, formula_81 be production costs with a positive first derivative and with a non-negative value at zero output, and formula_82 be the positive minimal acceptable level of profit, then the problem is a meaningful one if the revenue function levels off so that it eventually is less steep than the cost function. The problem expressed in the previously given minimization form is\nand the KKT conditions are\nSince formula_87 would violate the minimum profit constraint, we have formula_88, and hence the third condition implies that the first condition holds with equality.
Solving that equality gives\nBecause it was given that formula_90 and formula_91 are strictly positive, this inequality along with the non-negativity condition on formula_92 guarantees that formula_92 is positive and so the revenue-maximizing firm operates at a level of output at which marginal revenue formula_94 is less than marginal cost formula_95, a result that is of interest because it contrasts with the behavior of a profit-maximizing firm, which operates at a level at which they are equal.\nValue function.\nIf we reconsider the optimization problem as a maximization problem with constant inequality constraints:\nThe value function is defined as\nso the domain of formula_103 is formula_104\nGiven this definition, each coefficient formula_105 is the rate at which the value function increases as formula_106 increases. Thus if each formula_106 is interpreted as a resource constraint, the coefficients indicate how much increasing a resource will increase the optimum value of the function formula_6. This interpretation is especially important in economics and is used, for instance, in utility maximization problems.\nGeneralizations.\nWith an extra multiplier formula_109, which may be zero (as long as formula_110), in front of formula_111 the KKT stationarity conditions turn into\nwhich are called the Fritz John conditions. This optimality condition holds without constraint qualifications, and it is equivalent to the optimality condition \"KKT or (not-MFCQ)\".\nThe KKT conditions belong to a wider class of the first-order necessary conditions (FONC), which allow for non-smooth functions using subderivatives.", "Automation-Control": 0.8132656813, "Qwen2": "Yes"} {"id": "32867182", "revid": "1011116033", "url": "https://en.wikipedia.org/wiki?curid=32867182", "title": "Waffles (machine learning)", "text": "Waffles is a collection of command-line tools for performing machine learning operations developed at Brigham Young University.
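The firm example from the KKT discussion above can be solved numerically. In the sketch below (Python with SciPy; the specific revenue and cost functions and the profit floor are illustrative assumptions, since the article leaves them abstract), the profit constraint binds at the optimum and marginal revenue ends up below marginal cost, as the derivation concludes.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative functional forms (assumptions; the article leaves R and C abstract):
R = lambda Q: 20*Q - Q**2        # revenue: R(0) = 0, R' > 0 below Q = 10
C = lambda Q: 4*Q + 0.2*Q**2     # cost: C(0) = 0, C' > 0
G_min = 50.0                     # minimum acceptable profit

# Maximize revenue subject to profit R(Q) - C(Q) >= G_min and Q >= 0.
res = minimize(lambda v: -R(v[0]), x0=[6.0],
               constraints=[{"type": "ineq",
                             "fun": lambda v: R(v[0]) - C(v[0]) - G_min}],
               bounds=[(0, None)], method="SLSQP")

Q_star = res.x[0]                # analytic answer here is Q* = 25/3
MR = 20 - 2*Q_star               # marginal revenue R'(Q*)
MC = 4 + 0.4*Q_star              # marginal cost C'(Q*)
print(round(Q_star, 3), MR < MC) # constraint binds and MR < MC at the optimum
```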
These tools are written in C++, and are available under the GNU Lesser General Public License.\nDescription.\nThe Waffles machine learning toolkit contains command-line tools for performing various operations related to machine learning, data mining, and predictive modeling. The primary focus of Waffles is to provide tools that are simple to use in scripted experiments or processes. For example, the supervised learning algorithms included in Waffles are all designed to support multi-dimensional labels, classification and regression, automatically impute missing values, and automatically apply necessary filters to transform the data to a type that the algorithm can support, such that arbitrary learning algorithms can be used with arbitrary data sets. Many other machine learning toolkits provide similar functionality, but require the user to explicitly configure data filters and transformations to make the data compatible with a particular learning algorithm. The algorithms provided in Waffles also have the ability to automatically tune their own parameters (at the cost of additional computational overhead).\nBecause Waffles is designed for scriptability, it deliberately avoids presenting its tools in a graphical environment. It does, however, include a graphical \"wizard\" tool that guides the user to generate a command that will perform a desired task. This wizard does not actually perform the operation, but requires the user to paste the command that it generates into a command terminal or a script. The idea motivating this design is to prevent the user from becoming \"locked in\" to a graphical interface.\nAll of the Waffles tools are implemented as thin wrappers around functionality in a C++ class library. This makes it possible to convert scripted processes into native applications with minimal effort.\nWaffles was first released as an open source project in 2005.
Since that time, it has been developed at Brigham Young University, with a new version having been released approximately every 6–9 months. Waffles is not an acronym—the toolkit was named after the food for historical reasons.\nAdvantages.\nSome of the advantages of Waffles in contrast with other popular open source machine learning toolkits include:", "Automation-Control": 0.9434501529, "Qwen2": "Yes"} {"id": "22765540", "revid": "41380957", "url": "https://en.wikipedia.org/wiki?curid=22765540", "title": "IEEE Robotics and Automation Society", "text": "The IEEE Robotics and Automation Society (IEEE RAS) is a professional society of the IEEE that supports the development and the exchange of scientific knowledge in the fields of robotics and automation, including applied and theoretical issues.\nHistory.\nThe initial IEEE Robotics and Automation (R&A) entity, the Robotics and Automation Council, was founded in 1984 by a number of IEEE Societies including Aerospace and Electronic Systems, Circuits and Systems, Components, Hybrids, and Manufacturing Technology, Computers, Control Systems, Industrial Electronics, Industry Applications, and Systems, Man and Cybernetics. In 1987 the council became the IEEE Robotics and Automation Society.\nField of interest.\nThe Society states in its constitution that it \"is interested in both applied and theoretical issues in robotics and automation. 
Robotics is here defined to include intelligent machines and systems used, for example, in space exploration, human services, or manufacturing; whereas automation includes the use of automated methods in various applications, for example, factory, office, home, or transportation systems to improve performance and productivity.\"\nPublications.\nThe society publishes a range of peer-reviewed journals, including\nCo-sponsored publications include:\nConferences.\nThe IEEE Robotics and Automation Society sponsors and co-sponsors a number of annual international conferences such as the International Conference on Robotics and Automation and the International Conference on Intelligent Robots and Systems.", "Automation-Control": 0.9999985695, "Qwen2": "Yes"} {"id": "22769766", "revid": "27015025", "url": "https://en.wikipedia.org/wiki?curid=22769766", "title": "Global information system", "text": "Global information system is an information system which is developed and/or used in a global context. Some examples of GIS are SAP, The Global Learning Objects Brokered Exchange and other systems.\nDefinition.\nThere are a variety of definitions and understandings of a global information system (GIS, GLIS), such as\nCommon to this class of information systems is that the context is a global setting, either for its use or development process. This means that it is closely related to distributed systems / distributed computing where the distribution is global. The term also incorporates aspects of global software development, of its outsourcing (when the outsourcing locations are globally distributed), and of offshoring. A specific aspect of global information systems is the case (domain) of global software development. A main research aspect in this field concerns the coordination of and collaboration between virtual teams.
Further important aspects are the internationalization and language localization of system components.\nTasks in designing global information systems.\nCritical tasks in designing global information systems are \nA variety of examples can be given. Basically, every multi-lingual website can be seen as a global information system. However, the term GLIS is mostly used to refer to a specific system developed or used in a global context.\nExamples.\nSpecific examples are ", "Automation-Control": 0.6185835004, "Qwen2": "Yes"} {"id": "722503", "revid": "7903804", "url": "https://en.wikipedia.org/wiki?curid=722503", "title": "Linear system", "text": "In systems theory, a linear system is a mathematical model of a system based on the use of a linear operator.\nLinear systems typically exhibit features and properties that are much simpler than the nonlinear case.\nAs a mathematical abstraction or idealization, linear systems find important applications in automatic control theory, signal processing, and telecommunications. For example, the propagation medium for wireless communication systems can often be modeled by linear systems.\nDefinition.\nA general deterministic system can be described by an operator that maps an input, as a function of time, to an output; this is a type of black-box description.\nA system is linear if and only if it satisfies the superposition principle, or equivalently both the additivity and homogeneity properties, without restrictions (that is, for all inputs, all scaling constants, and all time).\nThe superposition principle means that a linear combination of inputs to the system produces a linear combination of the individual zero-state outputs (that is, outputs setting the initial conditions to zero) corresponding to the individual inputs.\nIn a system that satisfies the homogeneity property, scaling the input always results in scaling the zero-state response by the same factor.
In a system that satisfies the additivity property, adding two inputs always results in adding the corresponding two zero-state responses due to the individual inputs.\nMathematically, for a continuous-time system, given two arbitrary inputs\nformula_1\nas well as their respective zero-state outputs\nformula_2\nthen a linear system must satisfy\nformula_3\nfor any scalar values \"a\" and \"b\", for any input signals \"x\"1(\"t\") and \"x\"2(\"t\"), and for all time \"t\".\nThe system is then defined by the equation \"H\"(\"x\"(\"t\")) = \"y\"(\"t\"), where \"y\"(\"t\") is some arbitrary function of time and \"x\"(\"t\") is the system state. Given \"y\"(\"t\") and \"H\", the system can be solved for \"x\"(\"t\").\nThe behavior of the resulting system subjected to a complex input can be described as a sum of responses to simpler inputs. In nonlinear systems, there is no such relation.\nThis mathematical property makes the solution of modelling equations simpler than for many nonlinear systems.\nFor time-invariant systems this is the basis of the impulse response or the frequency response methods (see LTI system theory), which describe a general input function in terms of unit impulses or frequency components.\nTypical differential equations of linear time-invariant systems are well adapted to analysis using the Laplace transform in the continuous case, and the Z-transform in the discrete case (especially in computer implementations).\nAnother perspective is that solutions to linear systems comprise a system of functions which act like vectors in the geometric sense.\nA common use of linear models is to describe a nonlinear system by linearization. This is usually done for mathematical convenience.\nThe previous definition of a linear system is applicable to SISO (single-input single-output) systems.
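Additivity and homogeneity can be verified numerically for a discrete-time example. The sketch below (Python; the particular difference equation is an illustrative assumption) checks the superposition principle for a simple linear system and shows a squaring system failing it.

```python
import numpy as np

rng = np.random.default_rng(0)

def linear_sys(x):
    # y[n] = x[n] - 0.5*x[n-1] with zero initial state: linear and time-invariant
    y = np.copy(x)
    y[1:] -= 0.5 * x[:-1]
    return y

def nonlinear_sys(x):
    return x**2          # fails both homogeneity and additivity

x1, x2 = rng.standard_normal(100), rng.standard_normal(100)
a, b = 2.3, -1.7

for sys, name in [(linear_sys, "linear"), (nonlinear_sys, "square")]:
    lhs = sys(a*x1 + b*x2)           # response to the combined input
    rhs = a*sys(x1) + b*sys(x2)      # combination of the individual responses
    print(name, np.allclose(lhs, rhs))
```

The first comparison prints True (superposition holds) and the second False.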
For MIMO (multiple-input multiple-output) systems, input and output signal vectors (formula_4, formula_5, formula_6, formula_7) are considered instead of input and output signals (formula_8, formula_9, formula_10, formula_11).\nThis definition of a linear system is analogous to the definition of a linear differential equation in calculus, and a linear transformation in linear algebra.\nExamples.\nA simple harmonic oscillator obeys the differential equation:\nformula_12\nIf formula_13\nthen \"H\" is a linear operator. Letting \"y\"(\"t\") = 0, we can rewrite the differential equation as \"H\"(\"x\"(\"t\")) = \"y\"(\"t\"), which shows that a simple harmonic oscillator is a linear system.\nOther examples of linear systems include those described by formula_14, formula_15, formula_16, and any system described by ordinary linear differential equations. Systems described by formula_17, formula_18, formula_19, formula_20, formula_21, formula_22, formula_23, and a system with odd-symmetry output consisting of a linear region and a saturation (constant) region, are non-linear because they do not always satisfy the superposition principle.\nThe output versus input graph of a linear system need not be a straight line through the origin. For example, consider a system described by formula_15 (such as a constant-capacitance capacitor or a constant-inductance inductor). It is linear because it satisfies the superposition principle. However, when the input is a sinusoid, the output is also a sinusoid, and so its output-input plot is an ellipse centered at the origin rather than a straight line passing through the origin.\nAlso, the output of a linear system can contain harmonics (and have a smaller fundamental frequency than the input) even when the input is a sinusoid. For example, consider a system described by formula_25. It is linear because it satisfies the superposition principle.
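The claim above that a linear system's output-versus-input plot need not be a straight line can be checked numerically. In the sketch below (Python; a differentiator y = dx/dt stands in for the capacitor-like relation, and the finite-difference step is an assumption), superposition holds while the (x, y) plot of a sinusoidal input traces an ellipse:

```python
import numpy as np

# The system y = dx/dt is linear, yet its output-versus-input plot for a
# sinusoidal input is an ellipse, not a straight line (finite-difference sketch).
t = np.linspace(0, 2*np.pi, 10_000)
x = np.sin(t)
y = np.gradient(x, t)            # approximates dx/dt = cos(t)

# Superposition still holds for the derivative operator:
x2 = np.sin(3*t)
lhs = np.gradient(2*x + 5*x2, t)
rhs = 2*np.gradient(x, t) + 5*np.gradient(x2, t)
print(np.allclose(lhs, rhs))

# Points (x, y) trace the unit circle x^2 + y^2 = 1, an ellipse through
# the origin's neighborhood rather than a line through the origin:
print(np.allclose(x**2 + y**2, 1.0, atol=1e-3))
```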
However, when the input is a sinusoid of the form formula_26, using product-to-sum trigonometric identities it can easily be shown that the output is formula_27; that is, the output does not consist only of sinusoids of the same frequency as the input, but also of sinusoids of two other frequencies; furthermore, taking the least common multiple of the fundamental periods of the sinusoids of the output, it can be shown that the fundamental angular frequency of the output is different from that of the input.\nTime-varying impulse response.\nThe time-varying impulse response of a linear system is defined as the response of the system at time \"t\" = \"t\"2 to a single impulse applied at time \"t\" = \"t\"1. In other words, if the input to a linear system is\nformula_28\nwhere \"δ\" represents the Dirac delta function, and the corresponding response of the system is\nformula_29\nthen the function \"h\"(\"t\"2, \"t\"1) is the time-varying impulse response of the system. Since the system cannot respond before the input is applied, the following causality condition must be satisfied:\nformula_30\nThe convolution integral.\nThe output of any general continuous-time linear system is related to the input by an integral which may be written over a doubly infinite range because of the causality condition:\nformula_31\nIf the properties of the system do not depend on the time at which it is operated, then it is said to be time-invariant and \"h\" is a function only of the time difference \"τ\" = \"t\" − \"t\"′, which is zero for \"τ\" < 0 (namely \"t\" < \"t\"′). By redefinition of \"h\" it is then possible to write the input-output relation equivalently in any of the ways,\nformula_32\nLinear time-invariant systems are most commonly characterized by the Laplace transform of the impulse response function, called the \"transfer function\", which is:\nformula_33\nIn applications this is usually a rational algebraic function of \"s\".
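For a discrete-time, time-invariant system, the convolution relation can be checked directly. The sketch below (Python; the first-order recursion and its geometric impulse response are illustrative assumptions) confirms that running the recursion and convolving the input with the impulse response give the same output.

```python
import numpy as np

# A first-order recursion y[n] = 0.5*y[n-1] + x[n] has impulse response
# h[n] = 0.5**n; its output equals the convolution sum of h with x.
N = 64
h = 0.5 ** np.arange(N)                 # impulse response (truncated at N taps)

def recursive_filter(x):
    y = np.zeros_like(x, dtype=float)
    for n, xn in enumerate(x):
        y[n] = (0.5 * y[n-1] if n else 0.0) + xn
    return y

x = np.random.default_rng(1).standard_normal(N)
y_direct = recursive_filter(x)
y_conv = np.convolve(h, x)[:N]          # convolution sum, kept to N samples
print(np.allclose(y_direct, y_conv))    # the two outputs agree
```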
Because \"h\"(\"τ\") is zero for negative \"τ\", the integral may equally be written over the doubly infinite range, and putting \"s\" = \"iω\" follows the formula for the \"frequency response function\":\nformula_34\nDiscrete-time systems.\nThe output of any discrete time linear system is related to the input by the time-varying convolution sum:\nformula_35\nor equivalently, for a time-invariant system on redefining \"h\",\nformula_36\nwhere formula_37 represents the lag time between the stimulus at time \"m\" and the response at time \"n\".", "Automation-Control": 0.9932386875, "Qwen2": "Yes"} {"id": "49809057", "revid": "42268508", "url": "https://en.wikipedia.org/wiki?curid=49809057", "title": "Linear control", "text": "Linear control comprises control systems and control theory based on \"negative feedback\" for producing a control signal to maintain the controlled process variable (PV) at the desired setpoint (SP). There are several types of linear control systems with different capabilities.\nProportional control.\nProportional control is a type of linear feedback control system in which a correction is applied to the controlled variable which is proportional to the difference between the desired value (SP) and the measured value (PV). Two classic mechanical examples are the toilet bowl float proportioning valve and the fly-ball governor.\nThe proportional control system is more complex than an on–off control system but simpler than a proportional-integral-derivative (PID) control system used, for instance, in an automobile cruise control. On–off control will work for systems that do not require high accuracy or responsiveness, but it is not effective for rapid and timely corrections and responses.
Proportional control overcomes this by modulating the manipulated variable (MV), such as a control valve, at a gain level that avoids instability, while applying correction as fast as practicable.\nA drawback of proportional control is that it cannot eliminate the residual SP–PV error, as it requires an error to generate a proportional output. A PI controller can be used to overcome this. The PI controller uses a proportional term (P) to remove the gross error, and an integral term (I) to eliminate the residual offset error by integrating the error over time.\nIn some systems, there are practical limits to the range of the MV. For example, a heater has a limit to how much heat it can produce and a valve can open only so far. Adjustments to the gain simultaneously alter the range of error values over which the MV is between these limits. The width of this range, in units of the error variable and therefore of the PV, is called the \"proportional band\" (PB).\nFurnace example.\nWhen controlling the temperature of an industrial furnace, it is usually better to control the opening of the fuel valve \"in proportion to\" the current needs of the furnace. This helps avoid thermal shocks and applies heat more effectively.\nAt low gains, only a small corrective action is applied when errors are detected. The system may be safe and stable but may be sluggish in response to changing conditions. Errors will remain uncorrected for relatively long periods of time and the system is overdamped. If the proportional gain is increased, such systems become more responsive and errors are dealt with more quickly. There is an optimal value for the gain setting at which the overall system is said to be critically damped. Increases in loop gain beyond this point lead to oscillations in the PV and such a system is underdamped.
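The effect of gain on damping can be illustrated with a toy simulation. The sketch below (Python; the two-lag \"furnace\" model, time constants, and gains are illustrative assumptions, not from the article) shows essentially no overshoot at low gain, and oscillatory overshoot plus a shrinking steady-state offset as the gain rises:

```python
import numpy as np

# Toy furnace: two first-order lags in series under proportional-only control.
def simulate(Kp, T=60.0, dt=0.01, sp=1.0):
    x1 = y = 0.0
    ys = []
    for _ in range(int(T/dt)):
        u = Kp * (sp - y)               # proportional correction
        x1 += dt * (u - x1) / 1.0       # heater lag, tau1 = 1 s
        y  += dt * (x1 - y) / 5.0       # furnace lag, tau2 = 5 s
        ys.append(y)
    return np.array(ys)

for Kp in (0.5, 2.0, 20.0):
    y = simulate(Kp)
    overshoot = max(0.0, y.max() - 1.0)
    offset = abs(1.0 - y[-1])
    print(f"Kp={Kp:>4}: overshoot={overshoot:.3f}  steady-state error={offset:.3f}")
```

At Kp = 0.5 the loop is overdamped with a large offset; at Kp = 20 it overshoots the setpoint and rings before settling, with a much smaller but still nonzero offset.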
Adjusting gain to achieve critically damped behavior is known as \"tuning\" the control system.\nIn the underdamped case, the furnace heats quickly. Once the setpoint is reached, stored heat within the heater sub-system and in the walls of the furnace will keep the measured temperature rising beyond what is required. After rising above the setpoint, the temperature falls back and eventually heat is applied again. Any delay in reheating the heater sub-system allows the furnace temperature to fall further below the setpoint and the cycle repeats. The temperature oscillations that an underdamped furnace control system produces are undesirable.\nIn a critically damped system, as the temperature approaches the setpoint, the heat input begins to be reduced, the rate of heating of the furnace has time to slow and the system avoids overshoot. Overshoot is also avoided in an overdamped system but an overdamped system is unnecessarily slow to initially reach a setpoint response to external changes to the system, e.g. opening the furnace door.\nPID control.\nPure proportional controllers must operate with residual error in the system. Though PI controllers eliminate this error they can still be sluggish or produce oscillations. 
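The residual offset of pure proportional control, and its removal by the integral term, can be seen in a minimal simulation (Python; the first-order process, gains, and horizon are illustrative assumptions):

```python
# Proportional-only vs PI control of a first-order process dy/dt = (u - y)/tau.
def run(Kp, Ki, T=40.0, dt=0.01, sp=1.0, tau=2.0):
    y = integral = 0.0
    for _ in range(int(T/dt)):
        e = sp - y
        integral += e * dt               # accumulated error for the I term
        u = Kp * e + Ki * integral
        y += dt * (u - y) / tau
    return y                             # value reached after T seconds

print(f"P only: y(final) = {run(Kp=4.0, Ki=0.0):.3f}")   # offset remains
print(f"PI    : y(final) = {run(Kp=4.0, Ki=1.0):.3f}")   # offset removed
```

With P only, the process settles at Kp/(1 + Kp) = 0.8 of the setpoint; adding the integral term drives the final value to 1.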
The PID controller addresses these final shortcomings by introducing a derivative (D) action to retain stability while responsiveness is improved.\nDerivative action.\nThe derivative is concerned with the rate of change of the error with time: if the measured variable approaches the setpoint rapidly, then the actuator is backed off early to allow it to coast to the required level; conversely, if the measured value begins to move rapidly away from the setpoint, extra effort is applied, in proportion to that rapidity, to help move it back.\nOn control systems involving motion control of a heavy item like a gun or camera on a moving vehicle, the derivative action of a well-tuned PID controller can allow it to reach and maintain a setpoint better than most skilled human operators. If derivative action is over-applied, it can, however, lead to oscillations.\nIntegral action.\nThe integral term magnifies the effect of long-term steady-state errors, applying an ever-increasing effort until the error is removed. In the example of the furnace above working at various temperatures, if the heat being applied does not bring the furnace up to setpoint, for whatever reason, integral action increasingly \"moves\" the proportional band relative to the setpoint until the PV error is reduced to zero and the setpoint is achieved.\nRamp up % per minute.\nSome controllers include the option to limit the \"ramp up % per minute\". This option can be very helpful in stabilizing small boilers (3 MBTUH), especially during the summer, during light loads. A utility boiler \"unit may be required to change load at a rate of as much as 5% per minute (IEA Coal Online - 2, 2007)\".\nOther techniques.\nIt is possible to filter the PV or error signal. Doing so can help reduce instability or oscillations by reducing the response of the system to undesirable frequencies. Many systems have a resonant frequency.
By filtering out that frequency, stronger overall feedback can be applied before oscillation occurs, making the system more responsive without shaking itself apart.\nFeedback systems can be combined. In cascade control, one control loop applies control algorithms to a measured variable against a setpoint but then provides a varying setpoint to another control loop rather than affecting process variables directly. If a system has several different measured variables to be controlled, separate control systems will be present for each of them.\nControl engineering in many applications produces control systems that are more complex than PID control. Examples of such field applications include fly-by-wire aircraft control systems, chemical plants, and oil refineries. Model predictive control systems are designed using specialized computer-aided-design software and empirical mathematical models of the system to be controlled.", "Automation-Control": 0.8738815188, "Qwen2": "Yes"} {"id": "8477282", "revid": "28481209", "url": "https://en.wikipedia.org/wiki?curid=8477282", "title": "Gradient network", "text": "In network science, a gradient network is a directed subnetwork of an undirected \"substrate\" network where each node has an associated scalar potential and one out-link that points to the node with the smallest (or largest) potential in its neighborhood, defined as the union of itself and its neighbors on the substrate network.\nDefinition.\nTransport takes place on a fixed network formula_1 called the substrate graph. It has \"N\" nodes, formula_2 and the set\nof edges formula_3. Given a node \"i\", we can define its set of neighbors in G by Si(1) = {j ∈ V | (i,j)∈ E}. \nLet us also consider a scalar field, \"h\" = {\"h\"0, .., \"h\"\"N\"−1} defined on the set of nodes V, so that every node i has a scalar value \"h\"\"i\" associated to it.\nGradient ∇\"h\"\"i\" on a network: ∇hiformula_4(i, μ(i))\ni.e. 
the directed edge from \"i\" to \"μ(i)\", where \"μ\"(\"i\") ∈ Si(1) ∪ {i}, and hμ has the maximum value in formula_5.\nGradient network : ∇formula_6 ∇formula_7 formula_8\nwhere \"F\" is the set of gradient edges on \"G\".\nIn general, the scalar field depends on time, due to the flow, external sources and sinks on the network. Therefore, the gradient network ∇formula_7 will be dynamic.\nMotivation and history.\nThe concept of a gradient network was first introduced by Toroczkai and Bassler (2004).\nGenerally, real-world networks (such as citation graphs, the Internet, cellular metabolic networks, the worldwide airport network), which often evolve to transport entities such as information, cars, power, water, forces, and so on, are not globally designed; instead, they evolve and grow through local changes. For example, if a router on the Internet is frequently congested and packets are lost or delayed due to that, it will be replaced by several interconnected new routers.\nMoreover, this flow is often generated or influenced by local gradients of a scalar. For example: electric current is driven by a gradient of electric potential. In information networks, properties of nodes will generate a bias in the way information is transmitted from a node to its neighbors. This idea motivated the approach of studying the flow efficiency of a network by using gradient networks, when the flow is driven by gradients of a scalar field distributed on the network.\nRecent research investigates the connection between network topology and the flow efficiency of the transport.\nIn-degree distribution of gradient networks.\nIn a gradient network, the in-degree of a node i, \"ki (in)\" is the number of gradient edges pointing into i, and the in-degree distribution is formula_10 .\nWhen the substrate G is a random graph, each pair of nodes is connected with probability \"P\" (i.e. an Erdős–Rényi random graph), and the scalars \"h\"\"i\" are i.i.d.
(independent and identically distributed), the exact expression for R(l) is given by\nIn the limit formula_11 and formula_12, the degree distribution becomes the power law\nThis shows that, in this limit, the gradient network of a random network is scale-free.\nFurthermore, if the substrate network G is scale-free, as in the Barabási–Albert model, then the gradient network also follows a power law with the same exponent as that of G.\nThe congestion on networks.\nThe fact that the topology of the substrate network influences the level of network congestion can be illustrated by a simple example: if the network has a star-like structure, then the flow would become congested at the central node, because the central node must handle the flow from all other nodes. However, if the network has a ring-like structure, since every node plays the same role, there is no flow congestion.\nUnder the assumption that the flow is generated by gradients in the network, flow efficiency on networks can be characterized through the jamming factor (or congestion factor), defined as follows:\nwhere \"N\"receive is the number of nodes that receive gradient flow and Nsend is the number of nodes that send gradient flow.\nThe value of \"J\" is between 0 and 1; formula_14 means no congestion, and formula_15 corresponds to maximal congestion.\nIn the limit formula_16, for an Erdős–Rényi random graph, the congestion factor becomes\nThis result shows that random networks are maximally congested in that limit.\nOn the contrary, for a scale-free network, \"J\" is a constant for any \"N\", which means that scale-free networks are not prone to maximal jamming.\nApproaches to control congestion.\nOne problem in communication networks is understanding how to control congestion and maintain normal and efficient network function.\nZonghua Liu et al.
(2006) showed that congestion is more likely to occur at the nodes with high degrees in networks, and an efficient approach of selectively enhancing the message-processing capability of a small fraction (e.g. 3%) of nodes is shown to perform just as well as enhancing the capability of all nodes.\nAna L Pastore y Piontti et al. (2008) showed that relaxational dynamics can reduce network congestion.\nPan et al. (2011) studied jamming properties in a scheme where edges are given weights of a power of the scalar difference between node potentials.\nNiu and Pan (2016) showed that congestion can be reduced by introducing a correlation between the gradient field and the local network topology.", "Automation-Control": 0.6951434612, "Qwen2": "Yes"} {"id": "47326243", "revid": "39507937", "url": "https://en.wikipedia.org/wiki?curid=47326243", "title": "IBM Machine Code Printer Control Characters", "text": "Early mainframe printers were usually line printers. Line printers provide a limited set of commands to control how the paper is advanced when print lines are printed. An application writing reports, lists, etc. to be printed has to include those commands in the print data. These single-character print commands are called \"printer control characters\".\nIntroduction to Print Control Characters.\n\"Printer control characters\" and \"Carriage control characters\" are IBM mainframe terms that denote the special meaning which the first character on a line of printable text may have.
The first character of each line of text is interpreted as a \"control character\" or \"printer command\" instead of a character to be printed if a corresponding attribute is set for the print data set (\"data set\" is mainframe speak for what is known as a \"file\" on other operating systems).\nHow \"Printer Control Characters\" work.\nWhile mostly replaced by electronic versions later on, line printers initially used a loop of punched paper tape to control the movement of the paper while printing. This tape is called a \"carriage control tape\" and is mounted on the printer. The looped carriage tape moves synchronously with the stream of fanfold paper.\nThe line printers have 12 sensors to recognize 12 independent positions on the carriage control tape. Each position is called a \"channel\", numbered from 1 to 12. If a hole is punched in a channel, then this hole marks a position on the page that the printer can 'jump to' quickly by advancing until the hole is sensed by the corresponding channel sensor. This is called \"skip to channel number n\".\nCommands are implemented so that each of the 12 channels can be jumped to. Instead of having to write empty print lines, applications can simply jump to a predefined channel if nothing is to be written between the current position and the target position, a huge performance gain at that time.\nIn addition to those \"skip to channel\" commands, there are other commands that the printer interprets to either stay at the current line or to space one, two, or three lines. By staying on the current line, one can create:\nInstructing the printer to skip to a channel which is not punched will cause the printer to continue to feed paper at high speed. This might be caused by a mismatch between the tape installed and the one the application expects.\nSpecial Meaning of \"Channel 1\".\nBy convention, the position on a sheet of paper where the first print line has to be written is associated with \"Channel 1\".
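The \"skip to channel\" mechanism can be sketched in a few lines (Python; the tape layout, form length, and function names are made-up illustrations, not IBM specifications):

```python
# Sketch of a carriage control tape: each entry lists the channels punched
# at that physical line of the form (line numbers and punches are made up).
tape = {3: [1], 30: [2], 60: [12]}   # channel 1 at line 3, channel 12 near bottom
FORM_LENGTH = 66                     # lines per fanfold page (an assumption)

def skip_to_channel(current_line, channel):
    """Advance the paper until a hole in `channel` passes the sensor."""
    line = current_line
    for _ in range(FORM_LENGTH):     # guard: a real printer would run away
        line = line % FORM_LENGTH + 1
        if channel in tape.get(line, []):
            return line
    raise RuntimeError(f"channel {channel} not punched: paper feeds at high speed")

print(skip_to_channel(45, 1))        # wraps to the next form, stops at line 3
```

Skipping to an unpunched channel raises the error, mirroring the runaway paper feed described above.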
For example, if the first line of text always has to be on physical line 3 for a given form, then the channel 1 hole has to be punched in line 3 of the carriage control tape.\nBy convention, IBM mainframe applications always jump to channel 1 when beginning a new logical page.\nTypes of Printer Control Characters.\nPrint data sets on IBM mainframe operating systems may have either of two variants of printer control characters:\nThe attribute for specifying the presence of print control characters is part of the \"Record Format\" (aka RECFM) attribute, which must therefore allow for two variants:\nASA Control Characters.\nASA control characters are logical printer commands. They tell the printer how far to advance the paper \"before\" printing the current line of text. ASA control characters are all displayable characters. Printers do not understand these characters themselves; therefore, the printer driver must translate them to the corresponding printer commands when sending the print data to the printer.\nIBM Machine Control Characters.\nMachine control characters, in contrast, are the hardware commands which IBM line printers understand. This is why they are hardware dependent or hardware determined. IBM defined this set of commands for their line printers and made sure all their line printers understood them. Other (mainframe) line printer manufacturers also had to make sure their printers understood those commands. Since machine control characters are hardware commands, many of them are not displayable characters, and therefore machine control characters are always specified as hexadecimal values.\nMain difference between ASA and Machine Control Characters.\nThe main difference between the two sets of printer control characters is the portability of ASA control characters versus the hardware dependency of machine control characters.
The fact that the ASA controls were \"space before write\", while the machine controls were \"space after write\", could require some data streams to be converted.\nLanguage support for printer control.\nMany programming languages simply place the desired control character in the first byte of the line to be printed. COBOL and PL/I also have a system-independent method of specifying printer controls. The compiler or run-time will translate these options into the appropriate control character.\nCOBOL.\nCOBOL uses the syntax codice_1, where \"record-name\" is the name of the area containing the line and \"n\" is the number of lines. Additionally, codice_2 or codice_3 can be used to skip to the top of a new page.\nPL/I.\nPL/I uses the syntax codice_4 to skip \"n\" lines before printing, or codice_5 to skip to a new page.\nList of IBM Machine Print Control Characters.\nIBM defined two sets of printer commands, and therefore two sets of printer control characters are available. The first set of commands does not send any data to be printed but only a paper movement instruction; these are called \"immediate commands\". The second set of commands sends data to be printed on the current line plus a paper movement instruction. Note that in contrast to the ASA control characters, the IBM machine print control characters ask the printer to print the data on the current line first, and then advance the paper.\nImmediate commands.\nThese commands do not send any data to the printer.
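The \"space before write\" semantics of ASA control characters can be illustrated with a small interpreter. This is a hedged sketch, not IBM's actual printer driver: it uses the standard ASA codes (blank = single space, '0' = double space, '-' = triple space, '+' = suppress spacing/overprint, '1' = skip to channel 1, i.e. a new page), and the sample records are invented.

```python
# Expand ASA carriage-control characters ("space BEFORE write") into plain
# text lines. A form feed character stands in for the jump to channel 1.
ASA_BLANK_LINES = {" ": 0, "0": 1, "-": 2}   # extra blank lines before printing

def render_asa(records):
    lines = []
    for rec in records:
        cc, text = rec[:1], rec[1:]
        if cc == "1":                  # skip to channel 1: start a new page
            lines.append("\f")
            lines.append(text)
        elif cc == "+":                # suppress spacing: overprint previous line
            if lines:
                lines[-1] += text      # crude merge; a real printer overstrikes
            else:
                lines.append(text)
        else:                          # blank / '0' / '-': space then print
            lines.extend([""] * ASA_BLANK_LINES.get(cc, 0))
            lines.append(text)
    return lines

report = ["1HEADING", " line one", "0line two", "+___"]
print(render_asa(report))   # ['\f', 'HEADING', 'line one', '', 'line two___']
```

A driver for machine control characters would differ in exactly the way the text describes: it would emit the line first and apply the movement afterwards.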
The commands only ask the printer to advance the paper.\nWrite and Space Commands.\nWrite and space commands ask the printer to write the data on the line and afterwards move the paper.", "Automation-Control": 0.9631364346, "Qwen2": "Yes"} {"id": "44628821", "revid": "5846", "url": "https://en.wikipedia.org/wiki?curid=44628821", "title": "Matrix regularization", "text": "In the field of statistical learning theory, matrix regularization generalizes notions of vector regularization to cases where the object to be learned is a matrix. The purpose of regularization is to enforce conditions, for example sparsity or smoothness, that can produce stable predictive functions. For example, in the more common vector framework, Tikhonov regularization optimizes over\nto find a vector formula_2 that is a stable solution to the regression problem. When the system is described by a matrix rather than a vector, this problem can be written as\nwhere the vector norm enforcing a regularization penalty on formula_2 has been extended to a matrix norm on formula_5.\nMatrix regularization has applications in matrix completion, multivariate regression, and multi-task learning. Ideas of feature and group selection can also be extended to matrices, and these can be generalized to the nonparametric case of multiple kernel learning.\nBasic definition.\nConsider a matrix formula_6 to be learned from a set of examples, formula_7, where formula_8 goes from formula_9 to formula_10, and formula_11 goes from formula_9 to formula_13. Let each input matrix formula_14 be formula_15, and let formula_6 be of size formula_17. A general model for the output formula_18 can be posed as\nwhere the inner product is the Frobenius inner product. 
For different applications the matrices formula_14 will have different forms, but for each of these the optimization problem to infer formula_6 can be written as\nwhere formula_23 defines the empirical error for a given formula_6, and formula_25 is a matrix regularization penalty. The function formula_25 is typically chosen to be convex and is often selected to enforce sparsity (using formula_27-norms) and/or smoothness (using formula_28-norms). Finally, formula_6 is in the space of matrices formula_30 with Frobenius inner product formula_31.\nGeneral applications.\nMatrix completion.\nIn the problem of matrix completion, the matrix formula_32 takes the form\nwhere formula_34 and formula_35 are the canonical basis in formula_36 and formula_37. In this case the role of the Frobenius inner product is to select individual elements formula_38 from the matrix formula_6. Thus, the output formula_18 is a sampling of entries from the matrix formula_6.\nThe problem of reconstructing formula_6 from a small set of sampled entries is possible only under certain restrictions on the matrix, and these restrictions can be enforced by a regularization function. For example, it might be assumed that formula_6 is low-rank, in which case the regularization penalty can take the form of a nuclear norm.\nwhere formula_45, with formula_8 from formula_9 to formula_48, are the singular values of formula_6.\nMultivariate regression.\nModels used in multivariate regression are parameterized by a matrix of coefficients. In the Frobenius inner product above, each matrix formula_5 is\nsuch that the output of the inner product is the dot product of one row of the input with one column of the coefficient matrix. The familiar form of such models is\nMany of the vector norms used in single variable regression can be extended to the multivariate case. 
One example is the squared Frobenius norm, which can be viewed as an formula_28-norm acting either entrywise, or on the singular values of the matrix:\nIn the multivariate case the effect of regularizing with the Frobenius norm is the same as the vector case; very complex models will have larger norms, and, thus, will be penalized more.\nMulti-task learning.\nThe setup for multi-task learning is almost the same as the setup for multivariate regression. The primary difference is that the input variables are also indexed by task (columns of formula_55). The representation with the Frobenius inner product is then\nThe role of matrix regularization in this setting can be the same as in multivariate regression, but matrix norms can also be used to couple learning problems across tasks. In particular, note that for the optimization problem\nthe solutions corresponding to each column of formula_55 are decoupled. That is, the same solution can be found by solving the joint problem, or by solving an isolated regression problem for each column. The problems can be coupled by adding an additional regularization penalty on the covariance of solutions\nwhere formula_60 models the relationship between tasks. This scheme can be used to both enforce similarity of solutions across tasks, and to learn the specific structure of task similarity by alternating between optimizations of formula_6 and formula_60. When the relationship between tasks is known to lie on a graph, the Laplacian matrix of the graph can be used to couple the learning problems.\nSpectral regularization.\nRegularization by spectral filtering has been used to find stable solutions to problems such as those discussed above by addressing ill-posed matrix inversions (see for example Filter function for Tikhonov regularization). 
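The decoupling noted above is easy to verify numerically: with a squared-Frobenius penalty, solving the joint multivariate ridge problem gives the same coefficients as solving one ridge regression per column. A sketch (the data and penalty value are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, T = 50, 4, 3            # samples, features, tasks
X = rng.normal(size=(n, d))
Y = rng.normal(size=(n, T))
lam = 0.5

# Joint solve: min_W ||Y - XW||_F^2 + lam * ||W||_F^2
W_joint = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)

# Column-wise solve: one independent ridge regression per task
W_cols = np.column_stack([
    np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y[:, t])
    for t in range(T)
])

assert np.allclose(W_joint, W_cols)   # the tasks are decoupled
```

Coupling the tasks, as described above, would add a cross-column term (e.g. a covariance or graph-Laplacian penalty), after which the column-wise solve would no longer match the joint one.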
In many cases the regularization function acts on the input (or kernel) to ensure a bounded inverse by eliminating small singular values, but it can also be useful to have spectral norms that act on the matrix that is to be learned.\nThere are a number of matrix norms that act on the singular values of the matrix. Frequently used examples include the Schatten p-norms, with \"p\" = 1 or 2. For example, matrix regularization with a Schatten 1-norm, also called the nuclear norm, can be used to enforce sparsity in the spectrum of a matrix. This has been used in the context of matrix completion when the matrix in question is believed to have a restricted rank. In this case the optimization problem becomes:\nSpectral Regularization is also used to enforce a reduced rank coefficient matrix in multivariate regression. In this setting, a reduced rank coefficient matrix can be found by keeping just the top formula_10 singular values, but this can be extended to keep any reduced set of singular values and vectors.\nStructured sparsity.\nSparse optimization has become the focus of much research interest as a way to find solutions that depend on a small number of variables (see e.g. the Lasso method). In principle, entry-wise sparsity can be enforced by penalizing the entry-wise formula_66-norm of the matrix, but the formula_66-norm is not convex. In practice this can be implemented by convex relaxation to the formula_27-norm. While entry-wise regularization with an formula_27-norm will find solutions with a small number of nonzero elements, applying an formula_27-norm to different groups of variables can enforce structure in the sparsity of solutions.\nThe most straightforward example of structured sparsity uses the formula_71 norm with formula_72 and formula_73:\nFor example, the formula_75 norm is used in multi-task learning to group features across tasks, such that all the elements in a given row of the coefficient matrix can be forced to zero as a group. 
The grouping effect is achieved by taking the formula_28-norm of each row, and then taking the total penalty to be the sum of these row-wise norms. This regularization results in rows that tend to be either entirely zero or dense. The same type of regularization can be used to enforce sparsity column-wise by taking the formula_28-norms of each column.\nMore generally, the formula_75 norm can be applied to arbitrary groups of variables:\nwhere the index formula_80 is across groups of variables, and formula_81 indicates the cardinality of group formula_80.\nAlgorithms for solving these group sparsity problems extend the better-known Lasso and group Lasso methods by allowing overlapping groups, for example, and have been implemented via matching pursuit and proximal gradient methods. By writing the proximal gradient with respect to a given coefficient, formula_83, it can be seen that this norm enforces a group-wise soft threshold\nwhere formula_85 is the indicator function for group norms formula_86.\nThus, using formula_75 norms it is straightforward to enforce structure in the sparsity of a matrix either row-wise, column-wise, or in arbitrary blocks. By enforcing group norms on blocks in multivariate or multi-task regression, for example, it is possible to find groups of input and output variables, such that defined subsets of output variables (columns in the matrix formula_55) will depend on the same sparse set of input variables.\nMultiple kernel selection.\nThe ideas of structured sparsity and feature selection can be extended to the nonparametric case of multiple kernel learning. This can be useful when there are multiple types of input data (color and texture, for example) with different appropriate kernels for each, or when the appropriate kernel is unknown.
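The group-wise soft threshold described above, together with the singular-value soft-thresholding that implements the nuclear-norm penalty from the spectral regularization section, can be sketched as proximal operators. This is an illustrative implementation, not a specific library's API:

```python
import numpy as np

def prox_nuclear(W, tau):
    """Prox of tau * (nuclear norm): soft-threshold the singular values."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def prox_group_rows(W, tau):
    """Prox of tau * (sum of row 2-norms): row-wise group soft threshold."""
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    scale = np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0)
    return scale * W   # rows with norm <= tau are zeroed as a group

W = np.array([[3.0, 4.0],
              [0.1, 0.1]])
print(prox_group_rows(W, 1.0))   # first row shrunk, second row zeroed as a group
```

Iterating `W <- prox(W - step * gradient)` with either operator gives the proximal gradient methods mentioned in the text.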
If there are two kernels, for example, with feature maps formula_89 and formula_90 that lie in corresponding reproducing kernel Hilbert spaces formula_91, then a larger space, formula_92, can be created as the sum of two spaces:\nassuming linear independence in formula_89 and formula_90. In this case the formula_75-norm is again the sum of norms:\nThus, by choosing a matrix regularization function as this type of norm, it is possible to find a solution that is sparse in terms of which kernels are used, but dense in the coefficients of each kernel that is used. Multiple kernel learning can also be used as a form of nonlinear variable selection, or as a model aggregation technique (e.g. by taking the sum of squared norms and relaxing sparsity constraints). For example, each kernel can be taken to be the Gaussian kernel with a different width.", "Automation-Control": 0.68920362, "Qwen2": "Yes"} {"id": "25055225", "revid": "10362851", "url": "https://en.wikipedia.org/wiki?curid=25055225", "title": "Hanning Elektro-Werke", "text": "Hanning Elektro-Werke GmbH und Co. KG (a limited partnership with a limited liability company as general partner) is a German family-owned company, founded in 1947 by Robert Hanning, and is part of the international HANNING production network. It employs around 1,500 people worldwide.\nCompany history.\nThe company was established in 1947 by Robert Hanning under the name ’Elektrobau Hanning GmbH’ in Lipperreihe near Bielefeld (Germany). In 1977 the company name was changed to ’HANNING ELEKTRO-WERKE GmbH & Co. KG’. Due to HANNING's stable economic growth, the factory as well as the warehouse in Oerlinghausen were expanded in 1980. After German reunification, the company established itself in 1992 in Eggesin, becoming the largest employer in the city. The HANNING production network, to which HANNING ELEKTRO-WERKE belongs, comprises around 1,500 employees at four locations in Germany (Oerlinghausen and Eggesin), Romania and India.
In addition, there are sales partners on all continents.\nProducts.\nProducts of the market division Industrial Applications.\nCustomized synchronous/asynchronous motor systems, electronically controlled drives, encapsulated synchronous/asynchronous drives, synchronous/asynchronous drives with housing, frameless drives, customized inverter systems, encapsulated frequency inverters, inverters with housing, frameless frequency inverters and more\nCustomized pump systems, electronically controlled pumps, encapsulated pumps, pumps with housed or frameless motors and more\nProducts of the market division Appliance Applications.\nCustomized pump systems, electronically controlled pumps, encapsulated pumps, pumps with housed or frameless motors, customized synchronous/asynchronous fan systems, electronically controlled fan drives, encapsulated synchronous/asynchronous fan drives, synchronous/asynchronous fan drives with housing, frameless fan drives and more\nProducts of the market division Linear Actuators.\nCustomized linear actuator systems, electronically controlled linear actuators, encapsulated linear actuators, linear actuators with housing, frameless linear actuators, lifting columns and more\nCustomized synchronous/asynchronous motor systems, electronically controlled drives, encapsulated synchronous/asynchronous drives, synchronous/asynchronous drives with housing, frameless drives and more", "Automation-Control": 0.991343677, "Qwen2": "Yes"} {"id": "1393819", "revid": "7903804", "url": "https://en.wikipedia.org/wiki?curid=1393819", "title": "Diamond turning", "text": "Diamond turning is turning using a cutting tool with a diamond tip. It is a process of mechanical machining of precision elements using lathes or derivative machine tools (e.g., turn-mills, rotary transfers) equipped with natural or synthetic diamond-tipped tool bits. 
The term single-point diamond turning (SPDT) is sometimes applied, although as with other lathe work, the \"single-point\" label is sometimes only nominal (radiused tool noses and contoured form tools being options). The process of diamond turning is widely used to manufacture high-quality aspheric optical elements from crystals, metals, acrylic, and other materials. Plastic optics are frequently molded using diamond turned mold inserts. Optical elements produced by the means of diamond turning are used in optical assemblies in telescopes, video projectors, missile guidance systems, lasers, scientific research instruments, and numerous other systems and devices. Most SPDT today is done with computer numerical control (CNC) machine tools. Diamonds also serve in other machining processes, such as milling, grinding, and honing. Diamond turned surfaces have a high specular brightness and require no additional polishing or buffing, unlike other conventionally machined surfaces.\nProcess.\nDiamond turning is a multi-stage process. Initial stages of machining are carried out using a series of CNC lathes of increasing accuracy. A diamond-tipped lathe tool is used in the final stages of the manufacturing process to achieve sub-nanometer level surface finishes and sub-micrometer form accuracies. The surface finish quality is measured as the peak-to-valley distance of the grooves left by the lathe. The form accuracy is measured as a mean deviation from the ideal target form. Quality of surface finish and form accuracy is monitored throughout the manufacturing process using such equipment as contact and laser profilometers, laser interferometers, optical and electron microscopes. 
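The peak-to-valley groove depth measured above can be estimated before cutting from the feed per revolution and the tool nose radius, using the standard turning approximation h ≈ f²/(8R). A sketch with illustrative numbers (the specific feed and radius are assumptions, not values from the text):

```python
def peak_to_valley(feed_mm, nose_radius_mm):
    """Theoretical groove depth (mm) left by a radiused tool: h = f^2 / (8R)."""
    return feed_mm ** 2 / (8.0 * nose_radius_mm)

# e.g. a 2 um/rev feed with a 0.5 mm nose radius
h_mm = peak_to_valley(0.002, 0.5)
print(f"{h_mm * 1e6:.0f} nm")   # 0.002^2 / (8 * 0.5) mm = 1e-6 mm = 1 nm
```

The quadratic dependence on feed is why the final diamond passes use very fine feeds: halving the feed quarters the theoretical peak-to-valley height.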
Diamond turning is most often used for making infrared optics, because at longer wavelengths optical performance is less sensitive to surface finish quality, and because many of the materials used are difficult to polish with traditional methods.\nTemperature control is crucial, because the surface must be accurate on distance scales shorter than the wavelength of light. Temperature changes of a few degrees during machining can alter the form of the surface enough to have an effect. The main spindle may be cooled with a liquid coolant to prevent temperature deviations.\nThe diamonds that are used in the process are strong in the downhill regime but tool wear is also highly dependent on crystal anisotropy and work material.\nThe machine tool.\nFor best possible quality natural diamonds are used as single-point cutting elements during the final stages of the machining process. A CNC SPDT lathe rests atop a high-quality granite base with micrometer surface finish quality. The granite base is placed on air suspension on a solid foundation, keeping its working surface strictly horizontal. The machine tool components are placed on top of the granite base and can be moved with high degree of accuracy using a high-pressure air cushion or hydraulic suspension. The machined element is attached to an air chuck using negative air pressure and is usually centered manually using a micrometer. The chuck itself is separated from the electric motor that spins it by another air suspension.\nThe cutting tool is moved with sub-micron precision by a combination of electric motors and piezoelectric actuators. As with other CNC machines, the motion of the tool is controlled by a list of coordinates generated by a computer. Typically, the part to be created is first described using a computer aided design (CAD) model, then converted to G-code using a computer aided manufacturing (CAM) program, and the G-code is then executed by the machine control computer to move the cutting tool. 
The final surface is achieved with a series of cutting passes to maintain a ductile cutting regime.\nAlternative methods of diamond machining in practice also include diamond fly cutting and diamond milling. Diamond fly cutting can be used to generate diffraction gratings and other linear patterns with appropriately contoured diamond shapes. Diamond milling can be used to generate aspheric lens arrays by annulus cutting methods with a spherical diamond tool.\nMaterials.\nDiamond turning is specifically useful when cutting materials that are viable as infrared optical components and certain non-linear optical components such as potassium dihydrogen phosphate (KDP). KDP is an ideal candidate for diamond turning: the material is highly desirable for its optical modulating properties, yet it is impossible to make optics from it using conventional methods. KDP is water-soluble, so conventional grinding and polishing techniques are not effective in producing optics. Diamond turning works well to produce optics from KDP.\nGenerally, diamond turning is restricted to certain materials. Materials that are readily machinable include:\nThe most often requested materials that are not readily machinable are:\nFerrous materials are not readily machinable because the carbon in the diamond tool chemically reacts with the substrate, leading to tool damage and dulling after short cut lengths. Several techniques have been investigated to prevent this reaction, but few have been successful for long diamond machining processes at mass production scales. \nTool life improvement is an ongoing concern in diamond turning, as the tools are expensive. Hybrid processes such as laser-assisted machining have emerged in this industry recently.
The laser softens hard and difficult-to-machine materials such as ceramics and semiconductors, making them easier to cut.\nQuality control.\nDespite all the automation involved in the diamond turning process, the human operator still plays the main role in achieving the final result. Quality control is a major part of the diamond turning process and is required after each stage of machining, sometimes after each pass of the cutting tool. If it is not detected immediately, even a minute error during any of the cutting stages results in a defective part. The extremely high requirements for quality of diamond-turned optics leave virtually no room for error.\nThe SPDT manufacturing process produces a relatively high percentage of defective parts, which must be discarded. As a result, the manufacturing costs are high compared to conventional polishing methods. Even with the relatively high volume of optical components manufactured using the SPDT process, this process cannot be classified as mass production, especially when compared with production of polished optics. Each diamond-turned optical element is manufactured on an individual basis with extensive manual labor.", "Automation-Control": 0.9885859489, "Qwen2": "Yes"} {"id": "1395309", "revid": "45201693", "url": "https://en.wikipedia.org/wiki?curid=1395309", "title": "Electrochemical machining", "text": "Electrochemical machining (ECM) is a method of removing metal by an electrochemical process. It is normally used for mass production and is used for working extremely hard materials or materials that are difficult to machine using conventional methods. Its use is limited to electrically conductive materials. ECM can cut small or odd-shaped angles, intricate contours or cavities in hard and exotic metals, such as titanium aluminides, Inconel, Waspaloy, and high nickel, cobalt, and rhenium alloys. 
Both external and internal geometries can be machined.\nECM is often characterized as \"reverse electroplating\", in that it removes material instead of adding it. It is similar in concept to electrical discharge machining (EDM) in that a high current is passed between an electrode and the part, through an electrolytic material removal process having a negatively charged electrode (cathode), a conductive fluid (electrolyte), and a conductive workpiece (anode); however, in ECM there is no tool wear. The ECM cutting tool is guided along the desired path close to the work but without touching the piece. Unlike EDM, however, no sparks are created. High metal removal rates are possible with ECM, with no thermal or mechanical stresses being transferred to the part, and mirror surface finishes can be achieved.\nIn the ECM process, a cathode (tool) is advanced into an anode (workpiece). The pressurized electrolyte is injected at a set temperature to the area being cut. The feed rate is the same as the rate of \"liquefication\" of the material. The gap between the tool and the workpiece varies within 80–800 micrometers (0.003–0.030 in.) As electrons cross the gap, material from the workpiece is dissolved, as the tool forms the desired shape in the workpiece. The electrolytic fluid carries away the metal hydroxide formed in the process.\nElectrochemical machining, as a technological method, originated from the process of electrolytic polishing, proposed as early as 1911 by the Russian chemist E. Shpitalsky.\nAs far back as 1929, an experimental ECM process was developed by W. Gussef, although it was 1959 before a commercial process was established by the Anocut Engineering Company. B.R. and J.I. Lazarenko are also credited with proposing the use of electrolysis for metal removal.\nMuch research was done in the 1960s and 1970s, particularly in the gas turbine industry.
The rise of EDM in the same period slowed ECM research in the west, although work continued behind the Iron Curtain. The original problems of poor dimensional accuracy and environmentally polluting waste have largely been overcome, although the process remains a niche technique.\nThe ECM process is most widely used to produce complicated shapes such as turbine blades with good surface finish in difficult to machine materials. It is also widely and effectively used as a deburring process.\nIn deburring, ECM removes metal projections left from the machining process, and so dulls sharp edges. This process is fast and often more convenient than the conventional methods of deburring by hand or nontraditional machining processes.\nCurrents involved.\nThe needed current is proportional to the desired rate of material removal, and the removal rate in mm/minute is proportional to the amps per square mm.\nTypical currents range from 0.1 amp per square mm to 5 amps per square mm. Thus, for a small plunge cut of a 1 by 1 mm tool with a slow cut, only 0.1 amps would be needed.\nHowever, for a higher feed rate over a larger area, more current would be used, just like any machining process—removing more material faster takes more power.\nThus, if a current density of 4 amps per square millimeter was desired over a 100×100 mm area, it would take 40,000 amps (and much coolant/electrolyte).\nSetup and equipment.\nECM machines come in both vertical and horizontal types. Depending on the work requirements, these machines are built in many different sizes as well. The vertical machine consists of a base, column, table, and spindle head. The spindle head has a servo-mechanism that automatically advances the tool and controls the gap between the cathode (tool) and the workpiece.\nCNC machines of up to six axes are available.\nCopper is often used as the electrode material. 
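The current sizing worked through above is a simple area-times-density calculation; a minimal sketch, using the same figures as the text:

```python
def ecm_current(density_a_per_mm2, width_mm, height_mm):
    """Total current for a plunge cut over a rectangular frontal area."""
    area_mm2 = width_mm * height_mm
    return density_a_per_mm2 * area_mm2

print(ecm_current(0.1, 1, 1))      # small, slow 1x1 mm cut: 0.1 A
print(ecm_current(4.0, 100, 100))  # 4 A/mm^2 over 100x100 mm: 40000.0 A
```

Scaling current with area is also why large-area ECM demands correspondingly large electrolyte flow for cooling.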
Brass, graphite, and copper-tungsten are also often used because they are easily machined, they are conductive materials, and they will not corrode.\nApplications.\nSome of the very basic applications of ECM include:", "Automation-Control": 0.9265869856, "Qwen2": "Yes"} {"id": "45687154", "revid": "35936988", "url": "https://en.wikipedia.org/wiki?curid=45687154", "title": "CNC plunge milling", "text": "CNC plunge milling, also called z-axis milling, is a CNC milling process in which the feed is provided linearly along the tool axis. \nPlunge milling is effective for rough machining of complex or free-form shapes such as impeller parts. In multi-axis plunge milling, optimizing the selection of the plunge cutter cross-section and the tool path generated for the free-form surface is important for improving efficiency and effectiveness.\nIn plunge milling, after each plunge the milling cutter is offset by some value, and the material is then removed in the form of a lunula (a crescent-shaped cross-section). The material removal rate is computed from the area of the lunula and the feed rate. At the entry and exit of the milling cutter, the radial offset has no influence on the surface condition.\nAt the maximum cutting velocity, the surface obtained on entry is clean regardless of the feed rate per tooth, but on exit a high feed rate deteriorates the surface. The surface roughness value always increases with feed rate in plunge milling. The dynamic uncut chip thickness generated in plunge milling can be simulated by tracking the position of the plunge cutter center. This simulation shows the regenerative effect with the variation of phase difference.\nThe uncut chip thickness model and the cutting force coefficients (which account for the cutting edge radius) are then entered into a time-domain model. Finally, the time-domain solution is used to estimate machine stability and vibrations.\nThe cutting parameters play a key role in plunge milling.
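The lunula cross-section removed per plunge can be computed from the cutter radius and the radial offset between successive plunges; the removal rate is then that area times the axial feed rate. A hedged geometric sketch, assuming the crescent is a full circle minus the lens-shaped overlap of two equal circles (the radius, offset, and feed values are illustrative):

```python
import math

def lunula_area(radius, offset):
    """Crescent area swept when a cutter of given radius is shifted by
    `offset` between plunges: circle area minus the two-circle overlap."""
    if offset >= 2 * radius:
        return math.pi * radius ** 2      # no overlap: a full circle is removed
    # lens (intersection) area of two equal circles a distance `offset` apart
    lens = (2 * radius ** 2 * math.acos(offset / (2 * radius))
            - (offset / 2) * math.sqrt(4 * radius ** 2 - offset ** 2))
    return math.pi * radius ** 2 - lens

def removal_rate(radius_mm, offset_mm, feed_mm_per_min):
    """Volumetric removal rate in mm^3/min for an axial (z-axis) feed."""
    return lunula_area(radius_mm, offset_mm) * feed_mm_per_min

# e.g. a 10 mm diameter cutter, 2 mm step-over, 300 mm/min axial feed
print(round(removal_rate(5.0, 2.0, 300.0), 1))
```

Shrinking the offset reduces the crescent area, trading removal rate for lower cutting force per plunge.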
Both the cutting force and machine stability are influenced by the machining parameters. A frequency-domain model can be used to estimate machining stability.\nAdvantages of CNC plunge milling.\nPlunge milling has the following advantages over conventional milling:", "Automation-Control": 1.0000034571, "Qwen2": "Yes"} {"id": "45689616", "revid": "2584239", "url": "https://en.wikipedia.org/wiki?curid=45689616", "title": "Freeform surface machining", "text": "Freeform (or complex) surfaces are widely manufactured nowadays. The industries that most often manufacture free-form surfaces are aerospace, automotive, die and mold making, biomedical, and the power sector (for turbine blade manufacturing). Generally a 3- or 5-axis CNC milling machine is used for this purpose. Machining a free-form surface is not an easy job: tool path generation in present CAM technology is generally based on geometric computation, so the tool paths are not optimal. The geometry may also not be described explicitly, so errors and discontinuities in the solid structure cannot be avoided. Free-form surfaces are machined with the help of different tool path generation methods such as adaptive iso-planar, constant scallop, adaptive iso-parametric, iso-curvature, and isophote methods. The method is chosen based on the parameters that need to be optimized.\nOptimization of free-form surface machining.\nCAM software generally creates a tool path without considering the process mechanics. This creates a risk of tool damage, tool deflection, and errors in surface finish. By minimizing the cutting forces, tool life can be increased. Different optimization methods can be used, considering process parameters such as feed rate, spindle speed, steps, tool diameter, magnitude, and preset maximum force.
The optimization can target minimum machining time, minimum tool travel, minimum production cost, or good surface finish. The efficiency of the machined surface is also assessed by the maximum scallop height and by gouging. Gouging is the main reason for discrepancies from the surface accuracy and texture specification; it also causes damage to the part's surface and the machine tool. The scallop height tolerance helps in measuring the quality of a free-form surface. Selection of the proper topology results in minimum path length. In CAM software, choosing NURBS to create the surface is considered a good method for representing it, as NURBS is accepted by both IGES and STEP files.", "Automation-Control": 0.9897111654, "Qwen2": "Yes"} {"id": "45690256", "revid": "35936988", "url": "https://en.wikipedia.org/wiki?curid=45690256", "title": "CNC riveting", "text": "CNC riveting is a CNC process used for obtaining permanent mechanical fastening of shapes ranging from simple to complex, such as aircraft fuselages. This is done in a shorter duration of time with a high riveting rate. The process is fast, robust, and flexible, which improves its usability and provides reliability to the riveted joint along with final product quality. CNC riveting can be used for a variety of operations, ranging from riveting and fastening belts, skin panels, and shear ties to other internal fuselage components.\nCNC riveting machines generally consist of a solid frame made of welded steel, with aluminum guard frames fitted with polycarbonate panes for protection. The dynamic drive of the coordinate axes is achieved by recirculating ball screws, servo motors, and motion control units that make high-speed movement possible. Solid C-frames are used for mounting the riveting units.
The riveting program can be given various parameters, and these can be changed or altered as required in the CNC programs.\nCNC riveting machine variants.\nCNC duct riveting cell.\nA CNC-controlled automatic riveting work cell with a knee-type drill machine that has a sixty-inch throat depth and four positions on the upper head. This machine can apply sixteen pounds of upset force. It is equipped with dual drill spindles, one for drilling and the other for deburring. It was developed for the fabrication of tubular assemblies, which are fed into the throat of the machine over an eight-inch square lower knee. The CNC-controlled four-axis positioning system presents the part to the machine that carries out the riveting.\nCNC riveting machines with stationary machine table.\nThese CNC machines act as stand-alone workstations for heavy and bigger workpieces. They are simple in design and require work-holding fixtures only, so the clamping and component query devices are much more affordable.\nCNC riveting machines with indexing tables.\nThese special-purpose CNC riveting machines have indexing tables of different sizes that can be composed of different coordinate axes and riveting machines. These machines have different versions for particular applications and are configured accordingly. The coordinate system has linear units and recirculating ball screws, and an index table that is electrically operated with a brake motor. It has two to four fixed indexing stations, and the index tables are NC flexible rotary indexing tables actuated by two hand controls or by a pedal switch. The machine has an automatic tool changer.\nCNC riveting machines with transfer system.\nThese CNC riveting machines are made for use in manufacturing lines.
The fixtures are coded and the interfaces are customized so that several CNC riveting machines can be connected and linked with other manufacturing systems, making it possible to obtain a high degree of automation.\nMultiple axes CNC riveting cell.\nThis is the latest technological development in automated fastening. The technology is versatile and can be used for riveting parts ranging from high-curvature fuselage panels to low-curvature wing panels, bulkheads, floors, etc. Tooling changeover is minimized, so part throughput is maximized.\nAdvantages.\nThe main advantages of this type of CNC riveting machine are that it can use a variable minimum distance between rivets, and rivets of different lengths or heights can be used. Programmable memory gives high flexibility and short changeover times. The machine can process many workpieces, and different rivets can be used in one operation. Picking and placing operations are done in parallel with the primary operation, saving time and money. Menu-based navigation makes programming fluid, and a high acceleration rate is combined with high positioning accuracy. ", "Automation-Control": 0.9990945458, "Qwen2": "Yes"} {"id": "45704362", "revid": "1161198397", "url": "https://en.wikipedia.org/wiki?curid=45704362", "title": "Automatic tool changer", "text": "In machining, an automatic tool changer (ATC) is used in computerized numerical control (CNC) machine tools to improve the production and tool-carrying capacity of the machine. ATCs change tools rapidly, reducing non-productive time. They are generally used to improve the capacity of the machines to work with a number of tools. They are also used to change worn-out or broken tools. They are one more step towards complete automation.\nDescription.\nSimple CNC machines work with a single tool. Turrets can work with a large number of tools. But if even more tools are required, an ATC is needed. The tools are stored in a magazine. 
This allows the machine to work with a large number of tools without operator intervention. \nThe main parts of an automatic tool changer are the base, the gripper arm, the tool holder, the support arm, and the tool magazines. \nAlthough the ATC increases the reliability, speed, and accuracy of a machine, it creates more challenges compared to manual tool change. For example, the tooling used must be easy to center, easy for the changer to grab, and there should be a simple way to provide the tool's self-disengagement. Tools used in an ATC are secured in tool holders specially designed for this purpose.\nTypes of tool changers.\nDepending on the shape of the magazine, an ATC can be of two types: 1) Drum-type changers are used when the number of tools is about 30 or fewer; the tools are stored on the periphery of the drum. \n2) Chain-type changers are used when the number of tools is higher than 30 (the exact threshold depends on the design and manufacturer, but a drum holds fewer tools than a chain). The tool-search speed, however, is lower in a chain-type changer.\nAutomatic tool changer mechanism.\nAfter receiving the tool change command, the tool to be changed assumes a fixed position known as the \"tool change position\". The ATC arm comes to this position and picks up the tool. The arm swivels between the machine turret and the magazine. It has one gripper on each of its two sides, and each gripper can rotate 90° to deliver tools to the front face of the turret. One gripper picks up the old tool from the turret and the other picks up the new tool from the magazine. The arm then rotates 180° and places each tool into its needed position.\nTool changers on sheet metal working machinery.\nATCs were first used on chip-removal machines, such as mills and lathes. Systems for automatic rearrangement of tools have also been used on sheet metal working machinery. 
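The dual-gripper swap sequence described above can be sketched as a small simulation; the function and tool names below are illustrative only, not taken from any real controller API.

```python
# Illustrative simulation of the dual-gripper ATC swap cycle (hypothetical names).

def atc_tool_change(turret_tool, magazine, new_tool):
    """Swap the tool in the turret for `new_tool` from the magazine.

    Mirrors the described sequence: one gripper takes the old tool from the
    turret, the other takes the new tool from the magazine, the arm rotates
    180 degrees, and each tool lands in the other's place.
    """
    if new_tool not in magazine:
        raise ValueError(f"tool {new_tool!r} not found in magazine")
    magazine = [t for t in magazine if t != new_tool]  # gripper 2 empties the slot
    magazine.append(turret_tool)                       # old tool returns to magazine
    return new_tool, magazine                          # new tool is now in the turret

tool, mag = atc_tool_change("end_mill_6mm", ["drill_3mm", "tap_M5"], "tap_M5")
```

A real changer would of course also sequence the spindle orientation, clamping, and arm motion; the sketch only captures the bookkeeping of where each tool ends up.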
Panel benders have an integrated CNC-controlled device that allows punches to be moved according to the size of the part. Automated tool changes on press brakes were long limited to machines integrated into a robotic bending cell; typically, a 6-axis robot used for handling sheet metal blanks is also in charge of changing punches and dies between different batches.\nSince the 2020s, automatic tool changers have appeared on non-robotic press brakes. The most common configuration is a tool rack on the side of the press brake, with a shuttle picking up tools and positioning them where needed. This reduces physical strain on the operator and increases overall productivity.\nFunctions of a tool changer.\nThe use of automatic changers increases productive time and reduces unproductive time. It provides storage for the tools, which are returned automatically to the machine tool after carrying out the required operations; increases the flexibility of the machine tool; makes it easier to change heavy and large tools; and permits the automatic renewal of cutting edges.", "Automation-Control": 0.992184341, "Qwen2": "Yes"} {"id": "185325", "revid": "45431779", "url": "https://en.wikipedia.org/wiki?curid=185325", "title": "Coining (metalworking)", "text": "Coining is a form of precision stamping in which a workpiece is subjected to a sufficiently high stress to induce plastic flow on the surface of the material. A beneficial feature is that in some metals, the plastic flow reduces surface grain size and work-hardens the surface, while the material deeper in the part retains its toughness and ductility. The term comes from the initial use of the process: the manufacturing of coins.\nCoining is used to manufacture parts for all industries and is commonly used when high relief or very fine features are required. 
For example, it is used to produce coins, badges, buttons, precision-energy springs and precision parts with small or polished surface features.\nCoining is a cold working process (similar in other respects to forging, which takes place at elevated temperature); it uses a great deal of force to plastically deform a workpiece, so that it conforms to a die. Coining can be done using a gear-driven press, a mechanical press, or, more commonly, a hydraulically actuated press. Coining typically requires higher-tonnage presses than stamping, because the workpiece is plastically deformed and not actually cut, as in some other forms of stamping. The coining process is therefore preferred when high press tonnage is available. \nCoining in the electronics industry.\nIn the soldering of electronic components, bumps are formed on bonding pads to enhance adhesion; these are then flattened by the coining process. Unlike typical coining applications, in this case the goal of coining is to create a flat, rather than patterned, surface.", "Automation-Control": 0.9984605312, "Qwen2": "Yes"} {"id": "162289", "revid": "33594889", "url": "https://en.wikipedia.org/wiki?curid=162289", "title": "Computer-aided manufacturing", "text": "Computer-aided manufacturing (CAM), also known as computer-aided modeling or computer-aided machining, is the use of software to control machine tools in the manufacturing of workpieces. This is not the only definition of CAM, but it is the most common. It may also refer to the use of a computer to assist in all operations of a manufacturing plant, including planning, management, transportation and storage. 
Its primary purpose is to create a faster production process and components and tooling with more precise dimensions and material consistency, which in some cases uses only the required amount of raw material (thus minimizing waste), while simultaneously reducing energy consumption.\nCAM is a subsequent computer-aided process after computer-aided design (CAD) and sometimes computer-aided engineering (CAE), as the model generated in CAD and verified in CAE can be input into CAM software, which then controls the machine tool. CAM is used in many schools alongside computer-aided design (CAD) to create objects.\nOverview.\nTraditionally, CAM has been a numerical control (NC) programming tool, wherein two-dimensional (2-D) or three-dimensional (3-D) models of components are generated in CAD. As with other \"computer-aided\" technologies, CAM does not eliminate the need for skilled professionals such as manufacturing engineers, NC programmers, or machinists. CAM leverages the value of the most skilled manufacturing professionals through advanced productivity tools, while building the skills of new professionals through visualization, simulation and optimization tools.\nA CAM tool generally converts a model to a language the target machine understands, typically G-code. Numerical control can be applied to machining tools or, more recently, to 3D printers.\nHistory.\nEarly commercial applications of CAM were in large companies in the automotive and aerospace industries; an example is Pierre Bézier's work developing the CAD/CAM application UNISURF in the 1960s for car body design and tooling at Renault. 
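The model-to-G-code conversion mentioned in the overview can be sketched minimally. This is a toy emitter assuming a flat 2-D contour; the feed rate, heights, and coordinates are invented for illustration, and real post-processors handle far more (spindle, coolant, tool changes, controller dialects).

```python
def toolpath_to_gcode(points, feed=200.0, safe_z=5.0, cut_z=-1.0):
    """Emit a trivial G-code program tracing the (x, y) `points` at depth cut_z."""
    x0, y0 = points[0]
    lines = [
        f"G0 Z{safe_z:.3f}",             # rapid move to safe height
        f"G0 X{x0:.3f} Y{y0:.3f}",       # rapid move to the start point
        f"G1 Z{cut_z:.3f} F{feed:.1f}",  # plunge into the material at feed rate
    ]
    for x, y in points[1:]:
        lines.append(f"G1 X{x:.3f} Y{y:.3f} F{feed:.1f}")  # linear cutting moves
    lines.append(f"G0 Z{safe_z:.3f}")    # retract when the contour is done
    return "\n".join(lines)

program = toolpath_to_gcode([(0, 0), (10, 0), (10, 10)])
```

The point of the sketch is only that G-code is, as the text says later, a simple language: a toolpath is essentially a list of G0 (rapid) and G1 (linear feed) moves.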
In 1950, Alexander Hammer at the DeLaval Steam Turbine Company invented a technique to progressively drill turbine blades out of a solid block of metal, with the drill controlled by a punch-card reader.\nHistorically, CAM software was seen to have several shortcomings that necessitated an overly high level of involvement by skilled CNC machinists. CAM software would output code for the least capable machine, as each machine-tool control added on to the standard G-code set for increased flexibility. In some cases, such as improperly set up CAM software or specific tools, the CNC machine required manual editing before the program would run properly. None of these issues were so insurmountable that a thoughtful engineer or skilled machine operator could not overcome them for prototyping or small production runs; G-code is a simple language. In high-production or high-precision shops, a different set of problems were encountered, where an experienced CNC machinist must both hand-code programs and run CAM software.\nThe integration of CAD with other components of the CAD/CAM/CAE product lifecycle management (PLM) environment requires effective CAD data exchange. Usually it has been necessary to force the CAD operator to export the data in one of the common data formats, such as IGES, STL or Parasolid, that are supported by a wide variety of software.\nThe output from the CAM software is usually a simple text file of G-codes/M-codes, sometimes many thousands of commands long, that is then transferred to a machine tool using a direct numerical control (DNC) program or, in modern controllers, a common USB storage device.\nCAM packages could not, and still cannot, reason as a machinist can. They could not optimize toolpaths to the extent required of mass production. Users would select the type of tool, machining process and paths to be used. 
While an engineer may have a working knowledge of G-code programming, small optimization and wear issues compound over time. Mass-produced items that require machining are often initially created through casting or some other non-machine method. This enables hand-written, short, and highly optimized G-code that could not be produced in a CAM package.\nAt least in the United States, there is a shortage of young, skilled machinists entering the workforce able to perform at the extremes of manufacturing: high precision and mass production. As CAM software and machines become more complicated, the skills required of a machinist or machine operator advance to approach those of a computer programmer and engineer, rather than the CNC machinist being eliminated from the workforce.\nOvercoming historical shortcomings.\nOver time, the historical shortcomings of CAM are being attenuated, both by providers of niche solutions and by providers of high-end solutions. This is occurring primarily in three arenas:\nMachining process.\nMost machining progresses through many stages, each of which is implemented by a variety of basic and sophisticated strategies, depending on the part design, material, and software available.", "Automation-Control": 0.8504629135, "Qwen2": "Yes"} {"id": "11462382", "revid": "18872885", "url": "https://en.wikipedia.org/wiki?curid=11462382", "title": "Self-tuning", "text": "In control theory, a self-tuning system is capable of optimizing its own internal running parameters in order to maximize or minimize the fulfilment of an objective function; typically the maximization of efficiency or the minimization of error.\nSelf-tuning and auto-tuning often refer to the same concept. Many software research groups consider auto-tuning the proper nomenclature.\nSelf-tuning systems typically exhibit non-linear adaptive control. 
Self-tuning systems have been a hallmark of the aerospace industry for decades, as this sort of feedback is necessary to generate optimal multi-variable control for non-linear processes. In the telecommunications industry, adaptive communications are often used to dynamically modify operational system parameters to maximize efficiency and robustness.\nExamples.\nExamples of self-tuning systems in computing include:\nPerformance benefits can be substantial. Professor Jack Dongarra, an American computer scientist, claims self-tuning boosts performance, often on the order of 300%.\nDigital self-tuning controllers are an example of self-tuning systems at the hardware level.\nArchitecture.\nSelf-tuning systems are typically composed of four components: expectations, measurement, analysis, and actions. The expectations describe how the system should behave given exogenous conditions.\nMeasurements gather data about the conditions and behaviour. Analysis helps determine whether the expectations are being met, and which subsequent actions should be performed. Common actions are gathering more data and performing dynamic reconfiguration of the system.\nSelf-tuning (self-adapting) systems of automatic control are systems whereby adaptation to randomly changing conditions is performed by means of automatically changing parameters or via automatically determining their optimum configuration. In any non-self-tuning automatic control system there are parameters which have an influence on system stability and control quality and which can be tuned. If these parameters remain constant whilst operating conditions (such as input signals or different characteristics of controlled objects) are substantially varying, control can degrade or even become unstable. Manual tuning is often cumbersome and sometimes impossible. In such cases, not only is using self-tuning systems technically and economically worthwhile, but it could be the only means of robust control. 
Self-tuning systems may operate with or without parameter determination.\nIn systems with parameter determination, the required level of control quality is achieved by automatically searching for an optimum (in some sense) set of parameter values. Control quality is described by a generalised characteristic which is usually a complex and not completely known or stable function of the primary parameters. This characteristic is either measured directly or computed from the primary parameter values. The parameters are then tentatively varied. An analysis of the oscillations of the control-quality characteristic caused by the varying of the parameters makes it possible to determine whether the parameters have optimum values, i.e., whether those values deliver extreme (minimum or maximum) values of the control quality characteristic. If the characteristic values deviate from an extremum, the parameters need to be varied until optimum values are found. Self-tuning systems with parameter determination can reliably operate in environments characterised by wide variations of exogenous conditions.\nIn practice, systems with parameter determination require considerable time to find an optimum tuning, i.e. the time necessary for self-tuning in such systems is bounded from below. Self-tuning systems without parameter determination do not have this disadvantage. In such systems, some characteristic of control quality is used (e.g., the first time derivative of a controlled parameter). Automatic tuning makes sure that this characteristic is kept within given bounds. Different self-tuning systems without parameter determination exist that are based on controlling transitional processes, frequency characteristics, etc. All of these are examples of closed-circuit self-tuning systems, whereby parameters are automatically corrected every time the quality characteristic value falls outside the allowable bounds. 
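The tentative parameter variation described above, for systems with parameter determination, can be sketched as a one-parameter extremum search. The quadratic quality characteristic below is a made-up stand-in for a measured characteristic; in a real system `quality` would be an experiment, not a formula.

```python
# Minimal sketch of self-tuning *with* parameter determination: perturb the
# parameter in both directions, observe the control-quality characteristic,
# and move toward the extremum; halve the step when both neighbours are worse.

def tune(quality, k0, step=0.1, iters=100):
    """Hill-climb `quality` (higher is better) over a single parameter k."""
    k = k0
    for _ in range(iters):
        here = quality(k)
        up, down = quality(k + step), quality(k - step)  # tentative variations
        if up > here and up >= down:
            k += step
        elif down > here:
            k -= step
        else:
            step /= 2  # near the extremum: refine the search
    return k

# Stand-in quality characteristic with its optimum at k = 2.0.
best = tune(lambda k: -(k - 2.0) ** 2, k0=0.0)
```

The slow convergence of such searches is exactly the lower bound on tuning time that the text attributes to systems with parameter determination.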
In contrast, open-circuit self-tuning systems are systems with parametric compensation, whereby the input signal itself is measured and system parameters are changed according to a specified procedure. This type of self-tuning can be close to instantaneous. However, to realise such self-tuning one needs to monitor the environment in which the system operates, and a good enough understanding of how the environment influences the controlled system is required.\nIn practice, self-tuning is done through the use of specialised hardware or adaptive software algorithms. Giving software the ability to self-tune (adapt):", "Automation-Control": 0.9364589453, "Qwen2": "Yes"} {"id": "11463665", "revid": "31339567", "url": "https://en.wikipedia.org/wiki?curid=11463665", "title": "Iteratively reweighted least squares", "text": "The method of iteratively reweighted least squares (IRLS) is used to solve certain optimization problems with objective functions of the form of a \"p\"-norm:\nformula_1\nby an iterative method in which each step involves solving a weighted least squares problem of the form:\nformula_2\nIRLS is used to find the maximum likelihood estimates of a generalized linear model, and in robust regression to find an M-estimator, as a way of mitigating the influence of outliers in an otherwise normally-distributed data set, for example, by minimizing the least absolute errors rather than the least squared errors.\nOne of the advantages of IRLS over linear programming and convex programming is that it can be used with the Gauss–Newton and Levenberg–Marquardt numerical algorithms.\nExamples.\n\"L\"1 minimization for sparse recovery.\nIRLS can be used for \"ℓ\"1 minimization and smoothed \"ℓ\"p minimization, \"p\" < 1, in compressed sensing problems. It has been proved that the algorithm has a linear rate of convergence for the \"ℓ\"1 norm and superlinear for \"ℓ\"\"t\" with \"t\" < 1, under the restricted isometry property, which is generally a sufficient condition for sparse solutions. 
However, in most practical situations, the restricted isometry property is not satisfied. \n\"Lp\" norm linear regression.\nTo find the parameters β = (\"β\"1, …,\"β\"\"k\")T which minimize the \"Lp\" norm for the linear regression problem,\nformula_3\nthe IRLS algorithm at step \"t\" + 1 involves solving the weighted linear least squares problem:\nformula_4\nwhere \"W\"(\"t\") is the diagonal matrix of weights, usually with all elements set initially to:\nformula_5\nand updated after each iteration to:\nformula_6\nIn the case \"p\" = 1, this corresponds to least absolute deviation regression (in this case, the problem would be better approached by use of linear programming methods, so the result would be exact) and the formula is:\nformula_7\nTo avoid dividing by zero, regularization must be done, so in practice the formula is:\nformula_8\nwhere formula_9 is some small value, like 0.0001. Note the use of formula_9 in the weighting function is equivalent to the Huber loss function in robust estimation. ", "Automation-Control": 0.9297071695, "Qwen2": "Yes"} {"id": "11464105", "revid": "2797907", "url": "https://en.wikipedia.org/wiki?curid=11464105", "title": "In-cell charge control", "text": "In-Cell Charge Control or I-C3 is a method for very rapid charging of a Nickel-metal hydride battery, patented by Rayovac. Batteries using this technology are commonly sold as \"15-minute rechargeables\".\nThe charge control consists of a pressure switch built into the cell, which disconnects the charging current when the internal cell pressure rises above a certain limit; usually to . This prevents overcharging and damage to the cell.", "Automation-Control": 0.7198997736, "Qwen2": "Yes"} {"id": "11471157", "revid": "38448542", "url": "https://en.wikipedia.org/wiki?curid=11471157", "title": "Pocketdelta robot", "text": "The PocketDelta Robot is a microrobot based on a parallel structure called \"Delta robot\". 
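Returning to the IRLS scheme for \"Lp\" regression above: the weighted-least-squares update, with the small regularization constant guarding against division by zero, can be sketched with NumPy. The data set, including its single gross outlier, is invented for illustration.

```python
import numpy as np

def irls_lp(X, y, p=1.0, delta=1e-4, iters=50):
    """Sketch of IRLS for Lp-norm linear regression.

    Each step solves a weighted least-squares problem with weights
    w_i = max(|r_i|, delta) ** (p - 2), where delta is the small
    regularization value mentioned in the text, preventing division
    by zero when a residual r_i vanishes.
    """
    beta = np.linalg.lstsq(X, y, rcond=None)[0]    # start from ordinary LS
    for _ in range(iters):
        r = np.abs(y - X @ beta)                   # current absolute residuals
        w = np.maximum(r, delta) ** (p - 2.0)      # IRLS weights
        sw = np.sqrt(w)                            # fold W into the LS solve
        beta = np.linalg.lstsq(sw[:, None] * X, sw * y, rcond=None)[0]
    return beta

# p = 1 (least absolute deviations) shrugs off the single gross outlier:
X = np.c_[np.ones(6), np.arange(6.0)]
y = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 50.0])      # last point is an outlier
beta = irls_lp(X, y, p=1.0)                        # near intercept 0, slope 1
```

Ordinary least squares on the same data would be dragged far off by the outlier; the reweighting progressively discounts it, illustrating the robustness the text describes.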
It has been designed to perform micro-assembly tasks where high speed and high precision are needed in a reduced working space. The robot's size is 120×120×200 mm, offering a cylindrical workspace of up to 150 mm in diameter and 30 mm in height.\nDesign.\nPrerequisites for high speed and high precision are a light but stiff mechanical structure. Thus, stiff and lightweight materials build up the moving part of the robot, whereas the heavy motors are attached to the supporting frame. The arms are driven directly by the motors (direct drive), eliminating backlash, friction and elasticity.\nA high level of integration reduces the size of the whole system. The mechanics, motion drives, control electronics and computing are all integrated into one compact unit. This configuration simplifies use and increases the flexibility of the robot.\nThe controller is based on a PC architecture with interface boards for communication with the robot hardware. The realtime software enables the generation of trajectories in Cartesian space using direct and inverse kinematic models of the robot.\nThe robot integrates a web server which enables easy communication through an Ethernet network using the HTTP, TCP or UDP protocols.\nApplications.\nTypical application domains for the PocketDelta robot are the following:\nExternal links.\nCompanies and universities\nVideos\nPublications and articles\n__notoc__", "Automation-Control": 0.6961052418, "Qwen2": "Yes"} {"id": "11478229", "revid": "12023796", "url": "https://en.wikipedia.org/wiki?curid=11478229", "title": "Fault-tolerant messaging", "text": "Fault-tolerant messaging, in the context of computer systems and networks, refers to a design approach and set of techniques aimed at ensuring reliable and continuous communication between components or nodes even in the presence of errors or failures. 
This concept is especially critical in distributed systems, where components may be geographically dispersed and interconnected through networks, making them susceptible to various potential points of failure.\nThe primary objective of fault-tolerant messaging is to maintain the integrity and availability of information exchange among system components, even when some components or communication channels encounter disruptions or errors. These errors may arise from hardware failures, network outages, software bugs, or other unexpected events.\nKey characteristics and mechanisms commonly employed in fault-tolerant messaging include:\nSeveral common protocols and technologies are employed to provide fault-tolerant messaging in distributed systems. These protocols are designed to ensure reliable communication, error detection and correction, and seamless failover mechanisms. Some of the most widely used protocols for fault-tolerant messaging include:", "Automation-Control": 0.6488583088, "Qwen2": "Yes"} {"id": "49122599", "revid": "57939", "url": "https://en.wikipedia.org/wiki?curid=49122599", "title": "Consumption map", "text": "A consumption map or efficiency map shows the brake-specific fuel consumption, in g per kWh, over mean effective pressure and rotational speed of an internal combustion engine.\nThe x-axis shows the rotational speed range. The y-axis represents the load on the engine. The contour lines show the specific fuel consumption, indicating the areas of the speed/load regime where the engine is more or less efficient.\nThe map covers every possible operating condition, i.e. every combination of rotational speed and mean effective pressure, and shows the resulting specific fuel consumption. A given power output P (linear in formula_1) is reached at several locations on the map, each differing in the amount of fuel consumed. 
Automatic transmissions are designed to keep the engine at the speed with the lowest possible fuel consumption, given the power demand.\nThe map also shows the efficiency of the engine. Depending on the fuel type, diesel and gasoline engines reach values as low as about 210 g/kWh, corresponding to roughly 40% efficiency. Using natural gas, this efficiency is reached at about 200 g/kWh.\nAverage values are 160–180 g/kWh for slow-running two-stroke diesel cargo-ship engines using fuel oil, reaching up to 55% efficiency at 300 rpm; 195–210 g/kWh for intercooled, turbocharged diesel engines in passenger cars; and 195–225 g/kWh for trucks. Naturally aspirated Otto-cycle gasoline engines for passenger cars reach 250–350 g/kWh.", "Automation-Control": 0.6518827081, "Qwen2": "Yes"} {"id": "3132241", "revid": "46393657", "url": "https://en.wikipedia.org/wiki?curid=3132241", "title": "Differential evolution", "text": "In evolutionary computation, differential evolution (DE) is a method that optimizes a problem by iteratively trying to improve a candidate solution with regard to a given measure of quality. Such methods are commonly known as metaheuristics, as they make few or no assumptions about the problem being optimized and can search very large spaces of candidate solutions. However, metaheuristics such as DE do not guarantee that an optimal solution is ever found.\nDE is used for multidimensional real-valued functions but does not use the gradient of the problem being optimized, which means DE does not require the optimization problem to be differentiable, as is required by classic optimization methods such as gradient descent and quasi-Newton methods. 
DE can therefore also be used on optimization problems that are not even continuous, are noisy, change over time, etc.\nDE optimizes a problem by maintaining a population of candidate solutions and creating new candidate solutions by combining existing ones according to its simple formulae, and then keeping whichever candidate solution has the best score or fitness on the optimization problem at hand. In this way, the optimization problem is treated as a black box that merely provides a measure of quality given a candidate solution and the gradient is therefore not needed.\nStorn and Price introduced DE in the 1990s. Books have been published on theoretical and practical aspects of using DE in parallel computing, multiobjective optimization, constrained optimization, and the books also contain surveys of application areas. Surveys on the multi-faceted research aspects of DE can be found in journal articles .\nAlgorithm.\nA basic variant of the DE algorithm works by having a population of candidate solutions (called agents). These agents are moved around in the search-space by using simple mathematical formulae to combine the positions of existing agents from the population. If the new position of an agent is an improvement then it is accepted and forms part of the population, otherwise the new position is simply discarded. The process is repeated and by doing so it is hoped, but not guaranteed, that a satisfactory solution will eventually be discovered.\nFormally, let formula_1 be the fitness function which must be minimized (note that maximization can be performed by considering the function formula_2 instead). The function takes a candidate solution as argument in the form of a vector of real numbers. It produces a real number as output which indicates the fitness of the given candidate solution. The gradient of formula_3 is not known. 
The goal is to find a solution formula_4 for which formula_5 for all formula_6 in the search-space, which means that formula_4 is the global minimum.\nLet formula_8 designate a candidate solution (agent) in the population. The basic DE algorithm can then be described as follows:\nParameter selection.\nThe choice of the DE parameters formula_37, formula_38 and formula_39 can have a large impact on optimization performance. Selecting the DE parameters that yield good performance has therefore been the subject of much research. Rules of thumb for parameter selection were devised by Storn et al. and by Liu and Lampinen. Mathematical convergence analysis regarding parameter selection was done by Zaharie.\nVariants.\nVariants of the DE algorithm are continually being developed in an effort to improve optimization performance. Many different schemes for performing crossover and mutation of agents are possible in the basic algorithm given above; see e.g.", "Automation-Control": 0.7180758119, "Qwen2": "Yes"} {"id": "47085800", "revid": "7903804", "url": "https://en.wikipedia.org/wiki?curid=47085800", "title": "Reference Broadcast Infrastructure Synchronization", "text": "The Reference Broadcast Infrastructure Synchronization (RBIS) protocol is a master/slave synchronization protocol. RBIS, like Reference Broadcast Time Synchronization (RBS), is a receiver/receiver synchronization protocol; as a consequence, the timestamps used for clock regulation are acquired only on the reception of synchronization events. RBIS is specifically tailored for use in IEEE 802.11 Wi-Fi networks configured in infrastructure mode. 
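The basic DE loop described earlier, with a population of agents, mutation via a weighted difference of agents, crossover, and greedy selection, can be sketched as follows. This is the common DE/rand/1/bin scheme; the population size, differential weight F, and crossover probability CR correspond to the three parameters the text calls out, and the sphere function is a stand-in objective.

```python
import random

def differential_evolution(f, bounds, pop_size=20, F=0.8, CR=0.9, gens=200, seed=1):
    """Minimal sketch of the basic DE algorithm (DE/rand/1/bin scheme)."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(gens):
        for i in range(pop_size):
            # Pick three distinct agents a, b, c, all different from agent i.
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jrand = rng.randrange(dim)        # guarantee at least one mutated gene
            trial = list(pop[i])
            for j in range(dim):
                if rng.random() < CR or j == jrand:
                    trial[j] = pop[a][j] + F * (pop[b][j] - pop[c][j])
            # Greedy selection: keep the trial only if it is no worse.
            if f(trial) <= f(pop[i]):
                pop[i] = trial
    return min(pop, key=f)

# Minimize the sphere function; its global minimum is at (0, 0).
best = differential_evolution(lambda x: sum(v * v for v in x),
                              bounds=[(-5, 5), (-5, 5)])
```

Note how the objective is treated purely as a black box, exactly as the text describes: no gradient is ever computed, only fitness comparisons.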
Such networks are based on an access point that coordinates the communication between the wireless nodes (i.e., the STAs), and they are very common.\nThe advantages of RBIS are that it can be used directly with common access points, no modification is required to the STAs (or only minor modifications, to improve synchronization performance), and very little overhead is added to the wireless channel (typically one message per second). Moreover, it allows easy synchronization with an external time source, because it is a master/slave protocol. Its major drawback is that it does not compensate for the propagation delay. This limits the achievable synchronization quality to a couple of microseconds, which is typically enough for the vast majority of applications, especially for home automation. An example is the connection of wireless speakers to a television.", "Automation-Control": 0.8751633167, "Qwen2": "Yes"} {"id": "398786", "revid": "20466351", "url": "https://en.wikipedia.org/wiki?curid=398786", "title": "Multidimensional scaling", "text": "Multidimensional scaling (MDS) is a means of visualizing the level of similarity of individual cases of a dataset. MDS is used to translate \"information about the pairwise 'distances' among a set of formula_1 objects or individuals\" into a configuration of formula_1 points mapped into an abstract Cartesian space.\nMore technically, MDS refers to a set of related ordination techniques used in information visualization, in particular to display the information contained in a distance matrix. It is a form of non-linear dimensionality reduction.\nGiven a distance matrix with the distances between each pair of objects in a set, and a chosen number of dimensions, \"N\", an MDS algorithm places each object into \"N\"-dimensional space (a lower-dimensional representation) such that the between-object distances are preserved as well as possible. 
For \"N\" = 1, 2, and 3, the resulting points can be visualized on a scatter plot.\nCore theoretical contributions to MDS were made by James O. Ramsay of McGill University, who is also regarded as the founder of functional data analysis.\nTypes.\nMDS algorithms fall into a taxonomy, depending on the meaning of the input matrix:\nClassical multidimensional scaling.\nIt is also known as Principal Coordinates Analysis (PCoA), Torgerson Scaling or Torgerson–Gower scaling. It takes an input matrix giving dissimilarities between pairs of items and outputs a coordinate matrix whose configuration minimizes a loss function called \"strain\", which is given by\nformula_3 \nwhere formula_4 denote vectors in \"N\"-dimensional space, formula_5 denotes the scalar product between formula_4 and formula_7, and formula_8 are the elements of the matrix formula_9 defined on step 2 of the following algorithm, which are computed from the distances.\nMetric multidimensional scaling (mMDS).\nIt is a superset of classical MDS that generalizes the optimization procedure to a variety of loss functions and input matrices of known distances with weights and so on. A useful loss function in this context is called \"stress\", which is often minimized using a procedure called stress majorization. Metric MDS minimizes the cost function called “stress” which is a residual sum of squares:formula_33\nMetric scaling uses a power transformation with a user-controlled exponent formula_34: formula_35 and formula_36 for distance. In classical scaling formula_37 Non-metric scaling is defined by the use of isotonic regression to nonparametrically estimate a transformation of the dissimilarities.\nNon-metric multidimensional scaling (NMDS).\nIn contrast to metric MDS, non-metric MDS finds both a non-parametric monotonic relationship between the dissimilarities in the item-item matrix and the Euclidean distances between items, and the location of each item in the low-dimensional space. 
For NMDS, only the rank order of the dissimilarities matters, not their exact values.\nLet formula_38 be the dissimilarity between points formula_39. Let formula_40 be the Euclidean distance between embedded points formula_41.\nNow, for each choice of the embedded points formula_42 and monotonically increasing function formula_43, define the \"stress\" function:\nThe factor of formula_45 in the denominator is necessary to prevent a \"collapse\". Suppose we defined instead formula_46; then it could be trivially minimized by setting formula_47 and collapsing every point to the same point.\nA few variants of this cost function exist. MDS programs automatically minimize stress in order to obtain the MDS solution.\nThe NMDS algorithm is a twofold optimization. The optimal monotonic transformation of the proximities has to be found. The points of a configuration have to be optimally arranged, so that their distances match the scaled proximities as closely as possible. This is usually done iteratively: \nLouis Guttman's smallest space analysis (SSA) is an example of a non-metric MDS procedure.\nGeneralized multidimensional scaling (GMDS).\nAn extension of metric multidimensional scaling, in which the target space is an arbitrary smooth non-Euclidean space. In cases where the dissimilarities are distances on a surface and the target space is another surface, GMDS allows finding the minimum-distortion embedding of one surface into another.\nDetails.\nThe data to be analyzed is a collection of formula_54 objects (colors, faces, stocks, . . .) on which a \"distance function\" is defined,\nThese distances are the entries of the \"dissimilarity matrix\"\nThe goal of MDS is, given formula_59, to find formula_54 vectors \nformula_61 such that\nwhere formula_64 is a vector norm. In classical MDS, this norm is the Euclidean distance, but, in a broader sense, it may be a metric or arbitrary distance function.\nIn other words, MDS attempts to find a mapping from the formula_54 objects into formula_66 such that distances are preserved.
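The matrix of scalar products that classical MDS computes from the distances can be sketched concretely. The helper below (the name `double_center` is an illustrative assumption) performs the double-centering step of classical scaling; for points that actually lie in a Euclidean space, it recovers the Gram matrix of the centered coordinates:

```python
def double_center(D):
    """Classical-MDS step: B = -1/2 * J * D^2 * J with J = I - (1/n) 11^T."""
    n = len(D)
    D2 = [[D[i][j] ** 2 for j in range(n)] for i in range(n)]
    row = [sum(D2[i]) / n for i in range(n)]
    col = [sum(D2[i][j] for i in range(n)) / n for j in range(n)]
    tot = sum(row) / n
    return [[-0.5 * (D2[i][j] - row[i] - col[j] + tot) for j in range(n)]
            for i in range(n)]

# For points on a line, B equals the Gram matrix of the centered coordinates.
x = [0.0, 3.0, 5.0]
D = [[abs(a - b) for b in x] for a in x]
B = double_center(D)
m = sum(x) / len(x)
G = [[(a - m) * (b - m) for b in x] for a in x]  # should match B entrywise
```

Eigendecomposing the resulting matrix and keeping the top \"N\" eigenvectors, scaled by the square roots of their eigenvalues, would then yield the classical-MDS coordinates (that step is omitted here).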
If the dimension formula_67 is chosen to be 2 or 3, we may plot the vectors formula_42 to obtain a visualization of the similarities between the formula_54 objects. Note that the vectors formula_42 are not unique: With the Euclidean distance, they may be arbitrarily translated, rotated, and reflected, since these transformations do not change the pairwise distances formula_71.\nThere are various approaches to determining the vectors formula_42. Usually, MDS is formulated as an optimization problem, where formula_78 is found as a minimizer of some cost function, for example,\nA solution may then be found by numerical optimization techniques. For some particularly chosen cost functions, minimizers can be stated analytically in terms of matrix eigendecompositions.\nProcedure.\nThere are several steps in conducting MDS research:", "Automation-Control": 0.8321403265, "Qwen2": "Yes"} {"id": "64731118", "revid": "1142808657", "url": "https://en.wikipedia.org/wiki?curid=64731118", "title": "List of SysML tools", "text": "This article compares SysML tools. SysML tools are software applications which support some functions of the Systems Modeling Language.", "Automation-Control": 0.992653966, "Qwen2": "Yes"} {"id": "64760108", "revid": "1170380256", "url": "https://en.wikipedia.org/wiki?curid=64760108", "title": "Track technology", "text": "Depending on the supplier, track technology has been variously termed a smart conveyance system, intelligent track system, industrial transport system, independent cart technology, smart carriage technology, linear or extended or flexible transport system, or simply a conveyor or conveyance platform. They are also referred to as linear motors or long stator linear motors, reflecting the underlying technology of the track (stator) and shuttles (platen, equivalent to the rotor in a conventional rotary electric motor).  
Shuttles have also been called carriers, movers, platforms and pallets.\nList of commercially available track systems.\nThe following is a list of commercially available track systems by product name:\nAreas of application.\nTrack technology is – among other technologies like machine vision and robotics – one of the key enablers for the adaptive machine.\nThe concept of the adaptive machine also goes beyond track technology to achieve its high levels of flexibility.  One complementary technology is the industrial robot, which by definition possesses the same programmable flexibility.  Of particular interest is the ability of both robots and track systems to operate safely along with humans in a collaborative environment. This recent development allows for a combination of manual and automated assembly tasks, maintenance and materials replenishment without stopping production.\nMachine vision can play a pivotal role when integrated into an adaptive machine. Vision can identify individual shuttles and their contents in order to guide them to the appropriate workstations. Vision has long been used to automate robot guidance, inspection, orientation and related tasks.\nGiven the adaptive machine's flexibility to respond to consumer demand generation, Internet of Things and e-commerce technologies are complementary, providing the connection between internal production resources and commercial systems in a manufacturer's digital business model.", "Automation-Control": 0.8768179417, "Qwen2": "Yes"} {"id": "544238", "revid": "36643270", "url": "https://en.wikipedia.org/wiki?curid=544238", "title": "Artefaktur", "text": "Artefaktur Component Development Kit (ACDK) is a platform-independent library for generating distributed server-based components and applications.
Services are provided by a C++ framework.\nArtefaktur is free software, distributed under the terms of the GNU Lesser General Public License.", "Automation-Control": 0.8710936904, "Qwen2": "Yes"} {"id": "34624069", "revid": "10951369", "url": "https://en.wikipedia.org/wiki?curid=34624069", "title": "Siconos", "text": "SICONOS is open-source scientific software primarily targeted at\nmodeling and simulating non-smooth dynamical systems (NSDS):\nOther applications are found in Systems and Control (hybrid systems, differential inclusions, optimal control with state constraints), Optimization (complementarity problems and variational inequalities), Biology (gene regulatory networks), Fluid Mechanics, Computer Graphics, etc.\nComponents.\nThe software is based on three main components\nPerformance.\nAccording to peer-reviewed studies published by its developers, Siconos was approximately five times faster than Ngspice or ELDO (a commercial SPICE by Mentor Graphics) and 250 times faster than PLECS when solving a buck converter.", "Automation-Control": 0.7752805948, "Qwen2": "Yes"} {"id": "6487368", "revid": "6908984", "url": "https://en.wikipedia.org/wiki?curid=6487368", "title": "PivotPoint Technology Corporation", "text": "PivotPoint Technology Corporation is a software and systems engineering services company headquartered in Fallbrook, California. PivotPoint was founded in 2003 by Cris Kobryn, a noted expert in visual modeling languages and model-driven development technologies. PivotPoint is best known for its model-driven development consulting and training services, the latter of which feature UML, SysML, BPMN and DoDAF workshops. PivotPoint is a founding member and a major contributor to the SysML Partners, the group of software tool vendors and industry leaders that convened in 2003 to create a UML dialect for systems engineering called SysML (Systems Modeling Language).
In June 2007 the SysML Partners were named a winner in the \"Modeling\" category of the SD Times 100, which recognizes the leaders and innovators of the software development industry. ", "Automation-Control": 0.9601364136, "Qwen2": "Yes"} {"id": "10064136", "revid": "7903804", "url": "https://en.wikipedia.org/wiki?curid=10064136", "title": "Separation principle", "text": "In control theory, a separation principle, more formally known as a principle of separation of estimation and control, states that under some assumptions the problem of designing an optimal feedback controller for a stochastic system can be solved by designing an optimal observer for the state of the system, which feeds into an optimal deterministic controller for the system. Thus the problem can be broken into two separate parts, which facilitates the design.\nThe first instance of such a principle is in the setting of deterministic linear systems, namely that if a stable observer and a stable state feedback are designed for a linear time-invariant system (LTI system hereafter), then the combined observer and feedback is stable. The separation principle does not hold in general for nonlinear systems.\nAnother instance of the separation principle arises in the setting of linear stochastic systems, namely that state estimation (possibly nonlinear) together with an optimal state feedback controller designed to minimize a quadratic cost, is optimal for the stochastic control problem with output measurements. When process and observation noise are Gaussian, the optimal solution separates into a Kalman filter and a linear-quadratic regulator. This is known as linear-quadratic-Gaussian control. 
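A minimal numerical sketch of this separation for a scalar system (the helper name `scalar_riccati` and all constants below are illustrative assumptions): the LQR gain and the Kalman gain come from two independent scalar algebraic Riccati equations, one for control and one for estimation.

```python
import math

def scalar_riccati(a, g, q):
    """Positive root p of the scalar algebraic Riccati equation
    2*a*p - g*p**2 + q = 0, with g = b**2/r (control) or c**2/r_v (filter)."""
    return (a + math.sqrt(a * a + q * g)) / g

# Illustrative scalar LQG problem (all numbers are made-up assumptions):
a, b, c = 1.0, 1.0, 2.0   # dynamics dx = a x dt + b u dt + noise, y = c x + noise
q, r = 1.0, 1.0           # state and control cost weights
q_w, r_v = 1.0, 0.5       # process / measurement noise intensities

p = scalar_riccati(a, b * b / r, q)      # control Riccati -> LQR gain
s = scalar_riccati(a, c * c / r_v, q_w)  # filter Riccati  -> Kalman gain
K = b * p / r   # optimal feedback u = -K * x_hat
L = s * c / r_v  # Kalman correction gain for the state estimate

# The two gains are computed independently of one another:
# that independence is the separation principle at work.
```

Note that neither Riccati equation involves the other's data: the controller design sees only (a, b, q, r), the estimator only (a, c, q_w, r_v).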
More generally, under suitable conditions and when the noise is a martingale (with possible jumps), a separation principle again applies; it is known as the separation principle in stochastic control.\nThe separation principle also holds for high-gain observers used for state estimation of a class of nonlinear systems, and for the control of quantum systems.\nProof of separation principle for deterministic LTI systems.\nConsider a deterministic LTI system:\nwhere\nWe can design an observer of the form\nand state feedback\nDefine the error \"e\":\nThen\nNow we can write the closed-loop dynamics as\nSince this is a triangular matrix, the eigenvalues are just those of \"A\" − \"BK\" together with those of \"A\" − \"LC\". Thus the stability of the observer and feedback are independent.", "Automation-Control": 0.9999528527, "Qwen2": "Yes"} {"id": "730585", "revid": "88026", "url": "https://en.wikipedia.org/wiki?curid=730585", "title": "Hamilton–Jacobi–Bellman equation", "text": "The Hamilton–Jacobi–Bellman (HJB) equation is a nonlinear partial differential equation that provides necessary and sufficient conditions for optimality of a control with respect to a loss function. Its solution is the value function of the optimal control problem which, once known, can be used to obtain the optimal control by taking the maximizer (or minimizer) of the Hamiltonian involved in the HJB equation.\nThe equation is a result of the theory of dynamic programming which was pioneered in the 1950s by Richard Bellman and coworkers. The connection to the Hamilton–Jacobi equation from classical physics was first drawn by Rudolf Kálmán. In discrete-time problems, the analogous difference equation is usually referred to as the Bellman equation.\nWhile classical variational problems, such as the brachistochrone problem, can be solved using the Hamilton–Jacobi–Bellman equation, the method can be applied to a broader spectrum of problems.
Furthermore, it can be generalized to stochastic systems, in which case the HJB equation is a second-order elliptic partial differential equation. A major drawback, however, is that the HJB equation admits classical solutions only for a sufficiently smooth value function, which is not guaranteed in most situations. Instead, the notion of a viscosity solution is required, in which conventional derivatives are replaced by (set-valued) subderivatives.\nOptimal control problems.\nConsider the following problem in deterministic optimal control over the time period formula_1:\nwhere formula_3 is the scalar cost rate function and formula_4 is a function that gives the bequest value at the final state, formula_5 is the system state vector, formula_6 is assumed given, and formula_7 for formula_8 is the control vector that we are trying to find. Thus, formula_9 is the value function.\nThe system must also be subject to\nwhere formula_11 gives the vector determining physical evolution of the state vector over time.\nThe partial differential equation.\nFor this simple system, the Hamilton–Jacobi–Bellman partial differential equation is\nsubject to the terminal condition\nAs before, the unknown scalar function formula_9 in the above partial differential equation is the Bellman value function, which represents the cost incurred from starting in state formula_15 at time formula_16 and controlling the system optimally from then until time formula_17.\nDeriving the equation.\nIntuitively, the HJB equation can be derived as follows. If formula_18 is the optimal cost-to-go function (also called the 'value function'), then by Richard Bellman's principle of optimality, going from time \"t\" to \"t\" + \"dt\", we have\nNote that the Taylor expansion of the first term on the right-hand side is\nwhere formula_21 denotes the terms in the Taylor expansion of higher order than one in little-\"o\" notation.
Then if we subtract formula_18 from both sides, divide by \"dt\", and take the limit as \"dt\" approaches zero, we obtain the HJB equation defined above.\nSolving the equation.\nThe HJB equation is usually solved backwards in time, starting from formula_23 and ending at formula_24.\nWhen it is solved over the whole state space and formula_25 is continuously differentiable, the HJB equation is a necessary and sufficient condition for an optimum when the terminal state is unconstrained. If we can solve for formula_26 then we can find from it a control formula_27 that achieves the minimum cost.\nIn the general case, the HJB equation does not have a classical (smooth) solution. Several notions of generalized solutions have been developed to cover such situations, including viscosity solutions (Pierre-Louis Lions and Michael Crandall), minimax solutions, and others.\nApproximate dynamic programming was introduced by D. P. Bertsekas and J. N. Tsitsiklis, using artificial neural networks (multilayer perceptrons) to approximate the Bellman function. This is an effective mitigation strategy for reducing the impact of dimensionality, replacing the memorization of the complete function mapping over the whole space domain with the memorization of the neural network parameters alone. In particular, for continuous-time systems, an approximate dynamic programming approach that combines policy iteration with neural networks was introduced.
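The backward-in-time structure is easiest to see in discrete time. As a minimal sketch (a scalar linear-quadratic problem; the helper name `backward_riccati` and all constants are illustrative assumptions), the Bellman recursion for a quadratic value function V_t(x) = P_t x^2 is iterated backwards from the terminal condition:

```python
def backward_riccati(a, b, q, r, p_T, steps):
    """Scalar discrete-time Bellman recursion: V_t(x) = P_t * x**2 with
    P_t = q + a*a*P' - (a*b*P')**2 / (r + b*b*P'),  where P' = P_{t+1},
    obtained by minimizing q*x**2 + r*u**2 + P'*(a*x + b*u)**2 over u."""
    p = p_T
    for _ in range(steps):
        p = q + a * a * p - (a * b * p) ** 2 / (r + b * b * p)
    return p

# Terminal condition P_T = 0; step backwards many times.
p_inf = backward_riccati(a=1.0, b=1.0, q=1.0, r=1.0, p_T=0.0, steps=200)
```

For these constants the backward recursion converges to the stationary solution of the discrete algebraic Riccati equation, P = (1 + sqrt(5)) / 2, illustrating the "solve backwards from the terminal condition" recipe in a setting where no PDE machinery is needed.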
In discrete time, an approach combining value iteration and neural networks to solve the HJB equation was introduced.\nAlternatively, it has been shown that sum-of-squares optimization can yield a polynomial approximation of the solution of the Hamilton–Jacobi–Bellman equation that is arbitrarily accurate with respect to the formula_28 norm.\nExtension to stochastic problems.\nThe idea of solving a control problem by applying Bellman's principle of optimality and then working out backwards in time an optimizing strategy can be generalized to stochastic control problems. Consider a problem similar to the one above,\nnow with formula_30 the stochastic process to optimize and formula_31 the steering. By first applying Bellman's principle and then expanding formula_32 with Itô's rule, one finds the stochastic HJB equation\nwhere formula_34 represents the stochastic differentiation operator, and subject to the terminal condition\nNote that the randomness has disappeared. In this case a solution formula_36 of the latter does not necessarily solve the primal problem; it is only a candidate, and a further verification argument is required. This technique is widely used in financial mathematics to determine optimal investment strategies in the market (see for example Merton's portfolio problem).\nApplication to LQG control.\nAs an example, we can look at a system with linear stochastic dynamics and quadratic cost.
If the system dynamics is given by\nand the cost accumulates at rate formula_38, the HJB equation is given by\nwith optimal action given by\nAssuming a quadratic form for the value function, we obtain the usual Riccati equation for its Hessian, as in linear-quadratic-Gaussian control.", "Automation-Control": 0.9309731126, "Qwen2": "Yes"} {"id": "2475896", "revid": "754619", "url": "https://en.wikipedia.org/wiki?curid=2475896", "title": "Control knob", "text": "A control knob is a rotary device used to provide manual input adjustments to a mechanical/electrical system when grasped and turned by a human operator, so that different extents of knob rotation correspond to different desired inputs. Control knobs are a simpler type of input hardware and one of the most common components in control systems, and are found on all sorts of devices from taps and gas stoves to optical microscopes, potentiometers, radio tuners and digital cameras, as well as in aircraft cockpits.\nOperation.\nA control knob works by turning a shaft which connects to the component which produces the actual input. Common control components used include potentiometers, variable capacitors, and rotary switches. An example where the knob does not produce a variation in an electrical signal may be found in many toasters, where the darkness knob moves the thermostat in such a way as to change the temperature at which it opens and releases the cooked toast. Some similar controls produce similar inputs using different geometry; for example, the knob may be replaced by a lever which is moved through an angle. Another example is the sliding controls which frequently replace knobs as level controls in audio equipment.\nFeedback.\nThe use of knobs is an important aspect of the design of user interfaces in these devices. Particular attention needs to be paid to the feedback to the operator from the adjustments being made.
The use of a pointer on the knob in conjunction with a scale assists in producing repeatable settings; in other cases there may be a dial or other indicator which is either mechanically linked to the knob's rotation (as in many older radio tuners) or which reports the behavior being controlled.", "Automation-Control": 0.9581082463, "Qwen2": "Yes"} {"id": "31998221", "revid": "88026", "url": "https://en.wikipedia.org/wiki?curid=31998221", "title": "Wahba's problem", "text": "In applied mathematics, Wahba's problem, first posed by Grace Wahba in 1965, seeks to find a rotation matrix (special orthogonal matrix) between two coordinate systems from a set of (weighted) vector observations. Solutions to Wahba's problem are often used in satellite attitude determination utilising sensors such as magnetometers and multi-antenna GPS receivers. The cost function that Wahba's problem seeks to minimise is as follows:\nwhere formula_3 is the \"k\"-th 3-vector measurement in the reference frame, formula_4 is the corresponding \"k\"-th 3-vector measurement in the body frame and formula_5 is a 3 by 3 rotation matrix between the coordinate frames.\nformula_6 is an optional set of weights for each observation.\nA number of solutions to the problem have appeared in the literature, notably Davenport's q-method, QUEST and methods based on the singular value decomposition (SVD). Several methods for solving Wahba's problem are discussed by Markley and Mortari.\nThis is an alternative formulation of the Orthogonal Procrustes problem (consider all the vectors multiplied by the square-roots of the corresponding weights as columns of two matrices with \"N\" columns to obtain the alternative formulation). An elegant derivation of the solution, taking only one and a half pages, can be found in the literature.\nSolution via SVD.\nOne solution can be found using a singular value decomposition (SVD).\n1. Obtain a matrix formula_7 as follows:\n2. Find the singular value decomposition of formula_7\n3.
The rotation matrix is simply:\nwhere formula_12", "Automation-Control": 0.654245019, "Qwen2": "Yes"} {"id": "39228560", "revid": "8766034", "url": "https://en.wikipedia.org/wiki?curid=39228560", "title": "Moving magnet actuator", "text": "A moving magnet actuator is a type of electromagnetic linear actuator. It typically consists of an arrangement of a mobile permanent magnet and fixed coil, arranged so that currents in the coil generate a pair of equal and opposite forces between the coil and magnet. \nA voice coil actuator, also called a voice coil motor (VCM), is an electromagnetic linear actuator where the magnet is fixed and the coil is mobile. In this configuration the coil is commonly called a voice coil. ", "Automation-Control": 0.9097833037, "Qwen2": "Yes"} {"id": "39251785", "revid": "5042921", "url": "https://en.wikipedia.org/wiki?curid=39251785", "title": "Cross Gramian", "text": "In control theory, the cross Gramian (formula_1, also referred to by formula_2) is a Gramian matrix used to determine how controllable and observable a linear system is.\nFor the stable time-invariant linear system\nthe cross Gramian is defined as:\nand thus also given by the solution to the Sylvester equation:\nThis means the cross Gramian is not strictly a Gramian matrix, since it is generally neither positive semi-definite nor symmetric.\nThe triple formula_7 is controllable and observable, and hence minimal, if and only if the matrix formula_1 is nonsingular (i.e.
formula_1 has full rank, for any formula_10).\nIf the associated system formula_7 is furthermore symmetric, such that there exists a transformation formula_12 with\nthen the absolute values of the eigenvalues of the cross Gramian equal the Hankel singular values:\nThus, direct truncation of the eigendecomposition of the cross Gramian allows model order reduction without a balancing procedure, as opposed to balanced truncation.\nThe cross Gramian also has applications in decentralized control, sensitivity analysis, and the inverse scattering transform.", "Automation-Control": 0.7555446625, "Qwen2": "Yes"} {"id": "20405001", "revid": "7903804", "url": "https://en.wikipedia.org/wiki?curid=20405001", "title": "Controlled invariant subspace", "text": "In control theory, a controlled invariant subspace of the state space representation of some system is a subspace such that, if the state of the system is initially in the subspace, it is possible to control the system so that the state is in the subspace at all times. This concept was introduced by Giuseppe Basile and Giovanni Marro.\nDefinition.\nConsider a linear system described by the differential equation\nHere, x(\"t\") ∈ R\"n\" denotes the state of the system and u(\"t\") ∈ R\"p\" is the input. The matrices \"A\" and \"B\" have size \"n\" × \"n\" and \"n\" × \"p\" respectively.\nA subspace \"V\" ⊂ R\"n\" is a \"controlled invariant subspace\" if for any x(0) ∈ \"V\", there is an input u(\"t\") such that x(\"t\") ∈ \"V\" for all nonnegative \"t\".\nProperties.\nA subspace \"V\" ⊂ R\"n\" is a controlled invariant subspace if and only if \"AV\" ⊂ \"V\" + Im \"B\".
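The condition \"AV\" ⊂ \"V\" + Im \"B\" can be tested numerically on small examples. In the sketch below (the helper names `rank` and `controlled_invariant`, and the example matrices, are illustrative assumptions), a subspace passes the test exactly when appending the images A·v of its spanning vectors does not raise the rank of the column span of [\"V\" | \"B\"]:

```python
def rank(rows, tol=1e-9):
    """Rank of a small matrix (given as a list of row vectors) by
    Gaussian elimination with partial search for a pivot."""
    m = [r[:] for r in rows]
    rk = 0
    for col in range(len(m[0])):
        piv = next((i for i in range(rk, len(m)) if abs(m[i][col]) > tol), None)
        if piv is None:
            continue
        m[rk], m[piv] = m[piv], m[rk]
        for i in range(len(m)):
            if i != rk and abs(m[i][col]) > tol:
                f = m[i][col] / m[rk][col]
                m[i] = [a - f * b for a, b in zip(m[i], m[rk])]
        rk += 1
    return rk

def controlled_invariant(A, V, B):
    """Check A*V ⊆ V + Im B, with V and B given as lists of spanning vectors."""
    n = len(A)
    AV = [[sum(A[i][k] * v[k] for k in range(n)) for i in range(n)] for v in V]
    base = V + B  # spanning vectors of V + Im B
    return rank(base + AV) == rank(base)

# Made-up 2x2 examples: the first subspace satisfies the condition because
# A*(1,0) = (1,1) = (1,0) + (0,1) with (0,1) in Im B; the second fails.
print(controlled_invariant([[1.0, 1.0], [1.0, 1.0]], [[1.0, 0.0]], [[0.0, 1.0]]))  # True
print(controlled_invariant([[0.0, 1.0], [0.0, 0.0]], [[0.0, 1.0]], [[0.0, 1.0]]))  # False
```

In the first example a feedback matrix \"K\" with first column chosen to cancel the (0,1)-component of A·(1,0) would keep the state on the subspace, matching the feedback characterization of controlled invariance.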
If \"V\" is a controlled invariant subspace, then there exists a matrix \"K\" such that the input u(\"t\") = \"K\"x(\"t\") keeps the state within \"V\"; this is a simple feedback control.", "Automation-Control": 0.6235527992, "Qwen2": "Yes"} {"id": "3458672", "revid": "44203633", "url": "https://en.wikipedia.org/wiki?curid=3458672", "title": "Pole–zero plot", "text": "In mathematics, signal processing and control theory, a pole–zero plot is a graphical representation of a rational transfer function in the complex plane which helps to convey certain properties of the system such as:\nA pole-zero plot shows the location in the complex plane of the poles and zeros of the transfer function of a dynamic system, such as a controller, compensator, sensor, equalizer, filter, or communications channel. By convention, the poles of the system are indicated in the plot by an X while the zeros are indicated by a circle or O.\nA pole-zero plot can represent either a continuous-time (CT) or a discrete-time (DT) system. For a CT system, the plane in which the poles and zeros appear is the s plane of the Laplace transform. In this context, the parameter \"s\" represents the complex angular frequency, which is the domain of the CT transfer function. For a DT system, the plane is the z plane, where \"z\" represents the domain of the Z-transform.\nContinuous-time systems.\nIn general, a rational transfer function for a continuous-time LTI system has the form:\nformula_1\nwhere\nEither M or N or both may be zero, but in real systems, it should be the case that formula_9; otherwise the gain would be unbounded at high frequencies.\nRegion of convergence.\nThe region of convergence (ROC) for a given CT transfer function is a half-plane or vertical strip, either of which contains no poles.
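As a minimal numerical sketch of the causal-stability criterion (the helper name `poles_quadratic` and the example denominator are illustrative assumptions): for a causal CT system the ROC is the half-plane to the right of the rightmost pole, so it contains the imaginary axis exactly when every pole has negative real part.

```python
import cmath

def poles_quadratic(a2, a1, a0):
    """Roots of a2*s**2 + a1*s + a0 = 0 via the quadratic formula."""
    disc = cmath.sqrt(a1 * a1 - 4 * a2 * a0)
    return ((-a1 + disc) / (2 * a2), (-a1 - disc) / (2 * a2))

# Example transfer function H(s) = 1 / (s**2 + 3*s + 2): poles at -1 and -2.
p1, p2 = poles_quadratic(1.0, 3.0, 2.0)

# Causal ROC: Re(s) > max real part of the poles. It includes the imaginary
# axis (hence BIBO stability) iff all poles lie in the open left half-plane.
stable = max(p1.real, p2.real) < 0
print(stable)  # True
```

The same check on the z plane would compare pole magnitudes against 1 instead of real parts against 0, mirroring the unit-circle condition for DT systems.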
In general, the ROC is not unique, and the particular ROC in any given case depends on whether the system is causal or anti-causal.\nThe ROC is usually chosen to include the imaginary axis since it is important for most practical systems to have BIBO stability.\nExample.\nformula_14\nThis system has no (finite) zeros and two poles:\nformula_15\nand\nformula_16\nThe pole-zero plot would be:\nNotice that these two poles are complex conjugates, which is the necessary and sufficient condition to have real-valued coefficients in the differential equation representing the system.\nDiscrete-time systems.\nIn general, a rational transfer function for a discrete-time LTI system has the form:\nformula_17\nwhere\nEither M or N or both may be zero.\nRegion of convergence.\nThe region of convergence (ROC) for a given DT transfer function is a disk or annulus which contains no poles. In general, the ROC is not unique, and the particular ROC in any given case depends on whether the system is causal or anti-causal.\nThe ROC is usually chosen to include the unit circle since it is important for most practical systems to have BIBO stability.\nExample.\nIf formula_26 and formula_27 are completely factored, their roots can be easily plotted in the z-plane. For example, given the following transfer function:\nformula_28\nThe only (finite) zero is located at: formula_29, and the two poles are located at: formula_30, where \"j\" is the imaginary unit.\nThe pole–zero plot would be:", "Automation-Control": 0.8314363956, "Qwen2": "Yes"} {"id": "48752218", "revid": "6908984", "url": "https://en.wikipedia.org/wiki?curid=48752218", "title": "Data warehouse automation", "text": "Data warehouse automation (DWA) refers to the process of accelerating and automating the data warehouse development cycles, while assuring quality and consistency. DWA is believed to provide automation of the entire lifecycle of a data warehouse, from source system analysis to testing to documentation.
It helps improve productivity, reduce cost, and improve overall quality.\nGeneral.\nData warehouse automation primarily focuses on automation of every step involved in the lifecycle of a data warehouse, thus reducing the effort required in managing it.\nData warehouse automation works on the principles of design patterns. It comprises a central repository of design patterns, which encapsulate architectural standards as well as best practices for data design, data management, data integration, and data usage.\nIn November 2015, an analyst firm published a guide \"Which Data Warehouse Automation Tool is Right for You?\" covering four of the leading products in the DWA space. In November 2015, an international software and technology services company engaged in developing ‘agile tools’ for the data integration industry was named by CIO Review as one of the 20 most promising productivity tools solution providers of 2015.\nBenefits.\nData warehouse automation can provide advantages like source data exploration, warehouse data models, ETL generation, test automation, metadata management, managed deployment, scheduling, change impact analysis and easier maintenance and modification of the data warehouse.\nMore important than the technical features of DWA tools, however, is the ability to deliver projects faster and with fewer resources.", "Automation-Control": 0.9155998826, "Qwen2": "Yes"} {"id": "4088765", "revid": "41204854", "url": "https://en.wikipedia.org/wiki?curid=4088765", "title": "Comparison function", "text": "In applied mathematics, comparison functions are several classes of continuous functions, which are used in stability theory to characterize the stability properties of control systems, such as Lyapunov stability, uniform asymptotic stability, etc.\nLet formula_1 be a space of continuous functions acting from formula_2 to formula_3.
The most important classes of comparison functions are:\nFunctions of class formula_5 are also called \"positive-definite functions\".\nOne of the most important properties of comparison functions is given by Sontag’s formula_6-Lemma, named after Eduardo Sontag. It says that for each formula_7 and any formula_8 there exist formula_9: \nMany further useful properties of comparison functions can be found in the literature.\nComparison functions are primarily used to obtain quantitative restatements of stability properties such as Lyapunov stability, uniform asymptotic stability, etc. These restatements are often more useful than the qualitative definitions of stability properties given in formula_10 language.\nAs an example, consider an ordinary differential equation \nwhere formula_11 is locally Lipschitz. Then:\nThe comparison-functions formalism is widely used in input-to-state stability theory.", "Automation-Control": 0.6278784871, "Qwen2": "Yes"} {"id": "22717006", "revid": "43863887", "url": "https://en.wikipedia.org/wiki?curid=22717006", "title": "GestureTek", "text": "GestureTek is an American interactive technology company headquartered in Silicon Valley, California, with offices in Toronto and Ottawa, Ontario and Asia.\nFounding.\nFounded in 1986 by Canadians Vincent John Vincent and Francis MacDougall, this privately held company develops and licenses gesture recognition software based on computer vision techniques. The partners invented video gesture control in 1986 and received their base patent in 1996 for the GestPoint video gesture control system. GestPoint technology is a camera-enabled video tracking software system that translates hand and body movement into computer control.
The system enables users to navigate and control interactive multi-media and menu-based content, engage in virtual reality game play, experience immersion in an augmented reality environment or interact with a consumer device (such as a television, mobile phone or set-top box) without using touch-based peripherals. Similar companies include gesture recognition specialist LM3LABS based in Tokyo, Japan.\nTechnology.\nGestureTek's gesture interface applications include multi-touch and 3D camera tracking. GestureTek's multi-touch technology powers the multi-touch table in Melbourne's Eureka Tower. A GestureTek multi-touch table with object recognition is found at the New York City Visitors Center. Telefónica has a multi-touch window with technology from GestureTek. GestureTek's 3D tracking technology is used in a 3D television prototype from Hitachi and various digital signage and display solutions based on 3D interaction.\nPatents.\nGestureTek currently has 8 patents awarded, including: 5,534,917 (Video Gesture Control Motion Detection); 7,058,204 (Multiple Camera Control System, Point to Control Base Patent); 7,421,093 (Multiple Camera Tracking System for Interfacing With an Application); 7,227,526 (Stereo Camera Control, 3D-Vision Image Control System); 7,379,563 (Two-Handed Movement Tracker Tracking Bi-Manual Movements); 7,379,566 (Optical Flow-Based Tilt Sensor For Phone Tilt Control); 7,389,591 (Phone Tilt for Typing & Menus/Orientation-Sensitive Signal Output); 7,430,312 (Five Camera 3D Face Capture).\nGestureTek's software and patents have been licensed by Microsoft for the Xbox 360, Sony for the EyeToy, NTT DoCoMo for their mobile phones and Hasbro for the ION Educational Gaming System.
In addition to software provision, GestureTek also fabricates interactive gesture control display systems with natural user interface for interactive advertising, games and presentations.\nIn addition, GestureTek's natural user interface virtual reality system has been the subject of research by universities and hospitals for its application in both physical therapy and physical rehabilitation.\nIn 2008, GestureTek received the Mobile Innovation Global Award from the GSMA for its software-based, gesture-controlled user interface for mobile games and applications. The technology is used by Java platform integration providers and mobile developers. Katamari Damacy is one example of a gesture control mobile game powered by GestureTek software.\nCompetitors.\nOther companies in the industry of interactive projections for marketing and retail experiences include Po-motion Inc., Touchmagix and LM3LABS.", "Automation-Control": 0.6095750928, "Qwen2": "Yes"} {"id": "2910613", "revid": "7903804", "url": "https://en.wikipedia.org/wiki?curid=2910613", "title": "Polished plate glass", "text": "Polished plate is a type of hand-made glass. It is produced by casting glass onto a table and then subsequently grinding and polishing the glass. This was originally done by hand, and then later by machine. It was an expensive process requiring a large capital investment.\nOther methods of producing hand-blown window glass included: broad sheet, blown plate, crown glass and cylinder blown sheet. These methods of manufacture lasted at least until the end of the 19th century. The early 20th century marks the move away from hand-blown to machine manufactured glass such as rolled plate, machine drawn cylinder sheet, flat drawn sheet, single and twin ground polished plate, and float glass.\nThe Frenchman, Louis Lucas de Nehou, in 1688, in conjunction with Abraham Thevart, succeeded in perfecting the process of casting plate-glass. 
Before this invention, mirror plates had been made from blown \"sheet\" glass, and were consequently very limited in size. De Nehou's process of rolling molten glass poured on an iron table rendered the manufacture of very large plates possible.\nIn 1773 English polished plate (by the French process) was produced at Ravenhead.\nBy 1800 a steam engine was used to carry out the grinding and polishing of the cast glass.", "Automation-Control": 0.8490201235, "Qwen2": "Yes"} {"id": "37651645", "revid": "35622810", "url": "https://en.wikipedia.org/wiki?curid=37651645", "title": "Outline of control engineering", "text": "The following outline is provided as an overview of and topical guide to control engineering:\nControl engineering – engineering discipline that applies control theory to design systems with desired behaviors. The practice uses sensors to measure the output performance of the device being controlled and those measurements can be used to give feedback to the input actuators that can make corrections toward desired performance. When a device is designed to perform without the need of human inputs for correction it is called automatic control (such as cruise control for regulating a car's speed).", "Automation-Control": 0.9974940419, "Qwen2": "Yes"} {"id": "12184173", "revid": "46270674", "url": "https://en.wikipedia.org/wiki?curid=12184173", "title": "Mitsubishi S-AWC", "text": "S-AWC (Super All Wheel Control) is the brand name of an advanced full-time four-wheel drive system developed by Mitsubishi Motors. The technology was developed specifically for the 2007 Lancer Evolution; the 2010 Outlander (if equipped), the 2014 Outlander (if equipped), the Outlander PHEV and the Eclipse Cross also carry an advanced version of Mitsubishi Motors' AWC system. Mitsubishi Motors first exhibited S-AWC integration control technology in the \"Concept-X\" model at the 39th Tokyo Motor Show in 2005.
According to Mitsubishi Motors, \"the ultimate embodiment of the company's AWC philosophy is the S-AWC system, a 4WD-based integrated vehicle dynamics control system\".\nIt integrates management of its \"Active Center Differential\" (ACD), \"Active Yaw Control\" (AYC), \"Active Stability Control\" (ASC), and \"Sports ABS\" components, while adding braking force control to Mitsubishi Motors' own AYC system, allowing regulation of torque and braking force at each wheel. S-AWC employs yaw rate feedback control, a direct yaw moment control technology that affects left-right torque vectoring (this technology forms the core of S-AWC system) and controls cornering maneuvers as desired during acceleration, steady state driving, and deceleration. Mitsubishi Motors claims the result is elevated drive power, cornering performance, and vehicle stability regardless of driving conditions.\nComponents.\nActive Center Differential (ACD).\n\"Active Center Differential\" incorporates an electronically-controlled hydraulic multi-plate clutch. The system optimizes clutch cover clamp load for different driving conditions, regulating the differential limiting action between free and locked states to optimize front/rear wheel torque split and thereby producing the best balance between traction and steering response.\nActive Yaw Control (AYC).\n\"Active Yaw Control\" uses a torque transfer mechanism in the rear differential to control rear wheel torque differential for different driving conditions and so limit the yaw moment that acts on the vehicle body and enhance cornering performance. AYC also acts like a limited slip differential by suppressing rear wheel slip to improve traction. In its latest form, AYC now features yaw rate feedback control using a yaw rate sensor and also gains braking force control. 
Accurately determining the cornering dynamics in real time, the system operates to control vehicle behavior through corners and realize behavior that more closely mirrors driver intent.\nActive Stability Control (ASC).\nActive Stability Control stabilizes vehicle attitude while maintaining optimum traction by regulating engine power and the braking force at each wheel. Taking a step beyond the previous-generation Lancer Evolution, the fitting of a brake pressure sensor at each wheel allows more precise and positive control of braking force. ASC improves traction under acceleration by preventing the driving wheels from spinning on slippery surfaces. It also elevates vehicle stability by suppressing skidding during an emergency evasive maneuver or other sudden steering inputs.\nSport ABS.\nThe Sport ABS system supports braking when entering a corner by controlling braking force at each tire depending on handling characteristics. Braking can be controlled to obtain optimal damping at each tire based on information from four wheel-speed sensors and a steering wheel angle sensor. The addition of yaw rate sensors and brake pressure sensors to the Sport ABS system has improved braking performance through corners compared to the Lancer Evolution IX.\nConcept components for 2007 Lancer Evolution.\nThe prototype system also featured two additional components controlling suspension and steering, which did not make it into the production version of the S-AWC system:\nActive Steering System.\nThe Active Steering System realizes handling with more linear response by adaptively controlling front wheel turn angle according to steering input and vehicle speed. At slower vehicle speeds the system improves response by shifting to a quicker steering gear ratio, while at higher speeds it substantially improves stability by moving to a slower gear ratio.
For rapid steering inputs, S-AWC momentarily increases front wheel turn angle and Super AYC control to realize sharper response. In countersteer situations, S-AWC increases responsiveness further to assist the driver with steering precision.\nRoll Control Suspension (RCS).\nRCS effectively reduces body roll and pitching by hydraulically connecting all the shock absorbers together and regulating their damping pressures as necessary. Able to control both roll and pitching stiffness separately, RCS can operate in a variety of ways. It can, for example, reduce roll only when required during turn in or in other situations while being set up on the soft side to prioritize tire contact and ride comfort. Since the system controls roll stiffness hydraulically, it eliminates the need for stabilizer bars. In the integrated control of its component systems, S-AWC employs information from RCS's hydraulic system to estimate the tire load at each wheel.\nControl system.\nThe use of engine torque and brake pressure information in the regulation of the ACD and AYC components allows the S-AWC system to determine more quickly whether the vehicle is accelerating or decelerating. S-AWC also employs yaw rate feedback for the first time. The system helps the driver follow his chosen line more closely by comparing how the car is running, as determined from data from the yaw rate sensors, and how the driver wants it to behave, as determined from steering inputs, and operates accordingly to correct any divergence. The addition of braking force regulation to AYC's main role of transferring torque between the right and left wheels allows S-AWC to exert more control over vehicle behavior in on-the-limit driving situations. 
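The yaw rate feedback idea described above (compare a target yaw rate derived from vehicle speed and steering angle against the measured yaw rate, then command a corrective yaw moment) can be sketched generically. This is an illustrative sketch based on the linear bicycle model, not Mitsubishi's actual algorithm; the function names, wheelbase, understeer gradient and gain are assumed values:

```python
def target_yaw_rate(speed_mps, steer_angle_rad, wheelbase_m=2.65,
                    understeer_gradient=0.0025):
    """Steady-state yaw-rate target from the linear bicycle model.

    The wheelbase and understeer gradient are illustrative values,
    not a production calibration.
    """
    return (speed_mps * steer_angle_rad
            / (wheelbase_m + understeer_gradient * speed_mps ** 2))


def corrective_yaw_moment(measured_yaw_rate, target, gain_nms=1500.0):
    """Proportional yaw-moment demand in N*m.

    Positive output (measured rate below target, i.e. understeer) asks
    for turn-in assistance, e.g. braking the inside wheel or shifting
    torque outward; negative output counteracts oversteer.
    """
    return gain_nms * (target - measured_yaw_rate)
```

For example, at 20 m/s with 0.05 rad of steering, the target is about 0.27 rad/s; a car yawing at only 0.2 rad/s (understeer) receives a positive turn-in moment, while one yawing at 0.4 rad/s (oversteer) receives a negative, stabilizing moment.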
Increasing braking force on the inside wheel during understeer and on the outside wheel during oversteer, AYC's new braking force control feature works in conjunction with torque transfer regulation to realize higher levels of cornering performance and vehicle stability.\nUsing integrated management of the ASC and ABS systems allows S-AWC to effectively and seamlessly control vehicle dynamics when accelerating, decelerating or cornering under all driving conditions. S-AWC offers three operating modes: \nWhen the driver selects the mode best suited to current road surface conditions, S-AWC operates to control vehicle behavior accordingly and allow the driver to extract the maximum dynamic performance from the vehicle.\nECU integration.\nTwo electronic control units (ECUs) regulate vehicle motion. One is an ECU developed by Mitsubishi Electric to control ACD and AYC. The other is an ECU developed by Continental Automotive Systems of Germany that controls ASC and ABS. The two ECUs can communicate with other ECUs through a CAN, an in-vehicle LAN interface standard. In addition, the two ECUs communicate with each other through a dedicated CAN, enabling vehicle motion to be controlled more quickly. The cable and communication standard for the dedicated CAN are the same as those for the other CANs.\nA longitudinal acceleration sensor, lateral acceleration sensor and yaw rate sensor are installed as one module near the vehicle's center of gravity, which is located between the driver's and passenger's seats. Other sensors, such as the wheel-speed sensors and steering-angle sensor, are installed in different places. However, no vertical acceleration sensor is used.\nAlso, when the vehicle is equipped with Mitsubishi's \"Twin Clutch SST\" transmission, S-AWC analyzes the behavior of the turning vehicle and, if it judges that it is safer not to shift gears, sends a signal telling Twin Clutch SST that the gear must not be changed.
However, S-AWC does not control vehicle motion by using control information from Twin Clutch SST; the co-operation is one-way communication.\nThe vehicle motion control algorithms were developed in-house by Mitsubishi with MATLAB and Simulink, control-system modeling tools. Mitsubishi adopted a model-based method, which combines an algorithm and a physical model of a vehicle to run a simulation. The physical model of the vehicle was constructed with CarSim, simulation software developed by Mechanical Simulation Corporation of the United States. The algorithms were developed for each function, such as ACD and AYC, not for each vehicle type; therefore, the algorithms can be employed in various types of vehicles.\nConcept components for 2010 Outlander.\nThe 2010MY Outlander adopts a new S-AWC (Super All Wheel Control) that adds a refined active front differential, which controls the differential limiting force between the left and right front wheels, to the electronically controlled 4WD that distributes drive force to the rear wheels, and integrates this with Active Stability Control (ASC) and ABS. The result is greater turning performance, stability and drive performance while maintaining fuel economy equal to traditional electronically-controlled 4WD.\nStructure.\nThe S-AWC ECU calculates the amount of control according to driving conditions and vehicle behavior, based on sensor and switch data and ECU operation data. Control instructions are sent to the active front differential and the electronic control couplings.\nActive control differential.\nElectronically-controlled couplings, as used in electronically controlled 4WD, are located in the transfer case to limit the differential action between the front left and right wheels and control drive force distribution on either side.\nElectronic control coupling.\nAn electronic control coupling within the rear differential distributes drive force to the rear wheels according to driving conditions.
This is the same as used for 4WD electronic control in the 2009 model Outlander.\nS-AWC ECU.\nThe optimal amount of drive force control is calculated from sensor information obtained via CAN communication, etc., to control the active front differential and the electronically-controlled coupling. Compared with the 2009 Outlander, microcomputer performance has been enhanced, and calculation speed and accuracy have been improved.\nSensor information.\nCompared with electronically-controlled 4WD, sensor information has been significantly augmented to accurately assess vehicle driving conditions and realize highly-responsive, finely tuned control.\nS-AWC control mode switch.\nS-AWC in the 2010 model Outlander has three selectable modes of control (NORMAL/SNOW/OFFROAD) that have been tuned to suit the road surface. Switching modes according to road surface conditions enables proper control.\nIndicator.\nS-AWC control information is constantly displayed on the upper level of the multi-information display. A dedicated screen has been provided to display S-AWC operation information.
The center displays traction control conditions, while yaw moment control conditions are displayed on either side.\nControl.\nChanges to the 2009 Outlander's electronically-controlled 4WD:\n1) Addition of integrated control with the active front differential\nIn addition to front and rear drive force distribution, enabling integrated control of drive force distribution to both front wheels delivers a higher level of driving on all fronts (turning performance, stability and road performance) compared with the 2009 Outlander.\n2) Introduction of yaw rate feedback control\nVehicle behavior faithful to driver input is realized by precise assessment of vehicle turning motion based on yaw rate sensor data, and by control that brings actual behavior close to the target behavior computed from speed and steering angle.\n3) Evolution of coordinated ASC/ABS control\nProperly controlling the active front differential and the electronically controlled coupling according to the operating status of ASC and ABS improves turning performance and stability.\nConcept components for 2014 Outlander.\nThe following functions have recently been added.\nBrake control.\nUnder understeer conditions, initial turn-in response to steering operation is dramatically improved by adding brake force to the inner wheel.\nIn addition, wheel slip is reduced when pulling away.\nEPS control.\nSuppresses steering wheel movement generated by slippery road surfaces.\nAs a result, traction performance improves because the amount of Active Front Differential (AFD) control can be increased.\nSynchronized with ECO MODE.\nBy selecting ECO MODE, the engine and climate control are operated in an \"ECO mode.\" Likewise, S-AWC control also switches to AWC ECO.\nAs a result of this control, the driver can easily engage \"ECO mode.\"\nControl.\nS-AWC Control Mode\nBy pushing the S-AWC control switch, the control mode can be changed.\nConcept components for Outlander PHEV.\nFail-safe
function.\nFault detection\nThe ECU performs the following checks at the appropriate moments. The ECU determines that a fault has occurred when the fault detection conditions are met. It then stores the diagnosis code and ensures that the vehicle can still be driven. When the failure-recovery conditions are met, the ECU determines that the status is normal and resumes the system. Start-up (initial check immediately after the power supply mode of the electric motor switch is turned on):\n• CPU check\n• Performs the ROM and RAM checks.\nAlways (while the power supply mode of the electric motor switch is turned on, except during the initial check):\n1. CPU check\n• Performs CAN communication and an interactive check between CPUs.\n2. Power supply check\n• Monitors the CPU supply voltage and checks if the voltage is within specifications.\n3. External wire connection check\n• Checks if the input and output of each external wire connection is open or shorted.\n4WD lock switch.\nThe 4WD lock switch is located on the floor console. When the 4WD lock switch is pressed with the electric motor switch ON, \"4WD LOCK\" is toggled on and off. When the 4WD lock switch is turned on with the drive mode at ECO, or the ECO mode switch is turned on with the drive mode at 4WD lock, the drive mode is switched to \"ECO MODE/4WD LOCK\". The driver can obtain better ground-covering ability by choosing between the \"4WD LOCK\" and \"ECO MODE/4WD LOCK\" drive modes. When the ECO mode switch is turned off, the drive mode returns from \"ECO MODE/4WD LOCK\" to \"4WD LOCK.\"\nCornering Performance.\nEnhancement of cornering stability.\nThe torque distribution ratio between the front and rear wheels is optimized when cornering, in order to maintain cornering stability relative to the steering direction on slippery roads.\nEnhancement of vehicle maneuverability.\nThe control value for AYC (Active Yaw Control) braking is optimized in order to enhance vehicle maneuverability.\nTraction performance.\nLaunch performance on icy slopes is enhanced.\nConcept components for Eclipse Cross.\nS-AWC (Super All Wheel Control) is an integration of vehicle dynamics control systems whose design goals include safety and comfort.\nIn the new Eclipse Cross, S-AWC integrates Active Stability Control (ASC) and ABS with an electronically controlled 4WD system that distributes driving torque to the rear wheels and with Active Yaw Control (AYC), which controls drive/braking torque between the right and left wheels. The goal of the design is to prevent loss of control while braking or accelerating hard on slippery roads. On the Eclipse Cross, AYC controls drive/braking torque between the right and left wheels through additional brake force. There are three modes of operation:\n• AUTO This mode achieves adequate 4WD performance in various conditions.\n• SNOW This mode enhances stability on slippery road surfaces.\n• GRAVEL This mode excels at rough-road driving and escaping from stuck conditions.\nElectronically controlled 4WD.\nAn electronically-controlled coupling integrated within the rear differential assembly distributes optimum driving forces between the front and rear axles, thus improving acceleration and driving stability.\nBrake AYC.\nThe AWC-ECU is a computer that uses the inputs from various sensors to assess the state of vehicle stability and, if necessary, compensates for an instability by controlling the braking forces of the left and right wheels to generate a yaw moment.\nAWC-ECU Function.\nThe main functions of the AWC-ECU are as follows:\n1.
Communication function\n• CAN communication with other ECUs (Engine-ECU, CVT-ECU, ASC-ECU, ETACS, EPS-ECU).\n• Communication with the drive mode selector: The signal from the drive mode selector changes the drive mode.\n• Combination meter display: The drive mode is displayed.\n2. Coupling control function\n• Current output: Differential control function of the electronic control coupling according to vehicle conditions.\n3. ECU self-diagnosis function\n• Initial check: ROM check, relay check, etc.\n• Recording of diagnostic trouble codes and freeze frame data in case of failure.\n• If a fault occurs, the system is disabled and a warning icon is displayed.\n• Normal control: Malfunction of the CPU power supply, relay check, open or short circuit of the I/O signals, abnormal CAN communication.\nExternal links.\n\"S-AWC schematics\"\n\"ACD/AYC programming information\"\n'Tangime'", "Automation-Control": 0.8094644547, "Qwen2": "Yes"} {"id": "30027235", "revid": "7611264", "url": "https://en.wikipedia.org/wiki?curid=30027235", "title": "Individual wheel drive", "text": "Individual-wheel drive (IWD) is a drivetrain configuration in which each wheel can receive torque from its own motor, independently of the others. The term was coined to identify those electric vehicles in which each wheel is driven by its own individual electric motor, as opposed to conventional differentials.\nCharacteristics.\nThese vehicles inherently have a range of built-in characteristics that are more commonly attributed to four-wheel drive vehicles or vehicles with extensive control systems. These characteristics can be:\nOther features\nThe motors used in these vehicles are commonly wheel hub motors, since no transmission components are then required. Alternative layouts with inboard motors and drive shafts are also possible.\nHydraulic wheel drive.\nHydraulic wheel drives share many of the same features as electric wheel drives.
They also eliminate the need for a central gearbox, mechanical differentials and drive shafts, and provide on-the-fly switching between front, rear and all-wheel drive. Hydraulic individual wheel drives are standard in various machines, such as zero-turn mowers, multi one lifts / front end loaders, and forklifts. Hydraulic drives are primarily found in machines that benefit from the ability to \"turn on a dime\", i.e. with an exceptionally short turning diameter, and to move between forward and reverse modes without shifting gears, such as lawn mowers and loading equipment.\nOne may hesitate to regard such systems as direct drives, since a motorized pump must power the hydraulic system from a position other than the wheel hub; nonetheless, the drive is delivered directly by the hydraulic rotary motor found in or adjacent to the wheel hub.", "Automation-Control": 0.7808167338, "Qwen2": "Yes"} {"id": "1009552", "revid": "36529075", "url": "https://en.wikipedia.org/wiki?curid=1009552", "title": "GNU Linear Programming Kit", "text": "The GNU Linear Programming Kit (GLPK) is a software package intended for solving large-scale linear programming (LP), mixed integer programming (MIP), and other related problems. It is a set of routines written in ANSI C and organized in the form of a callable library. The package is part of the GNU Project and is released under the GNU General Public License.\nGLPK uses the revised simplex method and the primal-dual interior point method for non-integer problems and the branch-and-bound algorithm together with Gomory's mixed integer cuts for (mixed) integer problems.\nHistory.\nGLPK was developed by Andrew O. Makhorin (Андрей Олегович Махорин) of the Moscow Aviation Institute. The first public release was in October 2000.\nInterfaces and wrappers.\nSince version 4.0, GLPK problems can be modeled using GNU MathProg (GMPL), a subset of the AMPL modeling language used only by GLPK.
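As an illustration of the modeling language, the following is a minimal GMPL sketch with hypothetical data (the variable and constraint names are invented for the example); assuming a standard GLPK installation, a model like this can be solved from the command line with glpsol --math model.mod:

```
/* Minimal GMPL sketch: maximize 3x + 2y subject to two resource limits. */
var x >= 0;
var y >= 0;
maximize obj: 3*x + 2*y;
s.t. capacity: x + y <= 4;
s.t. material: x + 3*y <= 6;
solve;
printf "x = %g, y = %g\n", x, y;
end;
```

The solve statement invokes GLPK's LP solver on the translated model, and printf reports the optimal variable values.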
However, GLPK is most commonly called from other programming languages. Wrappers exist for:", "Automation-Control": 0.8154680133, "Qwen2": "Yes"} {"id": "2001956", "revid": "5229428", "url": "https://en.wikipedia.org/wiki?curid=2001956", "title": "Computer-aided production engineering", "text": "Computer-aided production engineering (CAPE) is a relatively new and significant branch of engineering. Global manufacturing has changed the environment in which goods are produced. Meanwhile, the rapid development of electronics and communication technologies has required design and manufacturing to keep pace.\nDescription of CAPE.\nCAPE is seen as a new type of computer-aided engineering environment which will improve the productivity of manufacturing/industrial engineers. This environment would be used by engineers to design and implement future manufacturing systems and subsystems. Work is currently underway at the United States National Institute of Standards and Technology (NIST) on CAPE systems. The NIST project is aimed at advancing the development of software environments and tools for the design and engineering of manufacturing systems.\nCAPE and the Future of Manufacturing.\nThe future of manufacturing will be determined by the efficiency with which it can incorporate new technologies. The current process in engineering manufacturing systems is often ad hoc, with computerized tools being used on a limited basis. Given the costs and resources involved in the construction and operation of manufacturing systems, the engineering process must be made more efficient. New computing environments for engineering manufacturing systems could help achieve that objective.\nWhy is CAPE important? 
In much the same way that product designers need computer-aided design systems, manufacturing and industrial engineers need sophisticated computing capabilities to solve complex problems and manage the vast data associated with the design of a manufacturing system.\nIn order to solve these complex problems and manage design data, computerized tools must be used in the application of scientific and engineering methods to the problem of the\ndesign and implementation of manufacturing systems. Engineers must address the entire factory as a system and the interactions of that system with its surrounding environment.\nComponents of a factory system include:\nCAPE must not only be concerned with the initial design and engineering of the factory, it must also address enhancements over time. CAPE should support standard engineering methods and problem-solving techniques, automate mundane tasks, and provide reference data to support the decision-making process.\nThe environment should be designed to help engineers become more productive and effective in their work. This would be implemented on personal computers or engineering workstations which have been configured with appropriate peripheral devices. Engineering tool developers will have to integrate the functions and data used by a number of different disciplines, for example:\nMany of the methods, formulas, and data associated with these technical areas currently exist only in engineering handbooks. Although some computerized tools are available, they are often very specialized, difficult to use, and do not share information or work together. Engineering tools built by different vendors must be made compatible through open systems architectures and interface standards.\nWhat CAPE will look like.\nCAPE will be based upon computer systems providing an integrated set of design and engineering tools. These software tools will be used by a company's manufacturing engineers to continuously improve its production systems. 
They will maintain information about manufacturing resources, enhance production capabilities, and develop new facilities and systems. Engineers working on different workstations will share information through a common database.\nUsing CAPE, an engineering team will prepare detailed plans and working models for an entire factory in a matter of days. Alternative solutions to production problems could be quickly developed and evaluated. This would be a significant improvement over current manual methods, which may require weeks or months of intensive activity.\nTo achieve this goal, a new set of engineering tools is needed. Examples of functions which should be supported include:\nThe tools implementing these functions must be highly automated and integrated, and will need to provide quick access to a wide range of data. This data must be maintained in a format that is accessible and usable by the engineering tools. Some examples of the information that might be contained in these electronic libraries include:\nThese on-line libraries would allow engineers to quickly develop solutions based upon the work of others.\nAnother critical aspect of this engineering environment is affordability, which can best be achieved by designing an environment that can be constructed from low-cost \"off-the-shelf\" commercial products, rather than custom-built computer hardware and software. The basic engineering environment must be affordable. For both cost and technical reasons, it must be designed to support incremental upgrades. Incremental upgrades would allow companies to add capabilities as they are needed. Commercial software products must be easy to install and integrate with other software already in use.
These capabilities exist to a limited extent in some general-purpose commercial software today, e.g., word processors, databases, spreadsheets.\nTechnical Concerns.\nMany technical issues must be considered in the design and development of new engineering tools for CAPE. These issues include:\nThere are three critical elements to be addressed: creating a common manufacturing systems information model; using an engineering life cycle approach; and developing a software tool integration framework.\nResolution of these elements will help ensure that independently developed systems will be able to work together. The common information model should identify the elements of the manufacturing system and their relationships to each other; the functions or processes performed by each element; the tools, materials, and information required to perform those functions; and measures of effectiveness for the model and its component elements.\nThere have been many efforts over the years to develop information models for different aspects of manufacturing, but no known existing model fully meets the needs of a CAPE environment. Therefore, a life cycle approach is needed to identify the different processes that a CAPE environment must support, and it must define all phases of a manufacturing system or subsystem's existence. Some of the major phases which may be included in a system life cycle approach are: requirements identification; system design specification; vendor selection; system development and upgrades; installation, testing, and training; and benchmarking of production operations.\nManagement, coordination, and administration functions need to be performed during each phase of the life cycle. Phases may be repeated over time as a system is upgraded or re-engineered to meet changing needs or incorporate new technologies.\nA software tool integration framework should specify how the tools could be independently designed and developed.
The framework would define how CAPE tools would deal with common services, interact with each other and coordinate problem solving activities. Although some existing software products and standards currently address the common services issue, the problem of tool interaction remains largely unsolved. The problem of tool interaction is not limited to the domain of computer-aided manufacturing systems engineering—it is pervasive across the software industry.\nCAPE's current state.\nAn initial CAPE environment has been established from commercial off-the-shelf (COTS) software packages. This new environment is being used to demonstrate commercially available tools to perform CAPE functions, to develop a better understanding and define functional requirements for individual engineering tools and the overall environment, and to identify the integration issues which must be addressed to implement compatible environments in the future.\nSeveral engineering demonstrations using COTS tools are under development. 
These demonstrations are designed to illustrate the various types of functions that must be performed in engineering a manufacturing system.\nFunctions supported by the current COTS environment include: system specification/diagramming, process flowcharting, information modeling, computer-aided design of products, plant layout, material flow analysis, ergonomic workplace design, mathematical modeling, statistical analysis, line balancing, manufacturing simulation, investment analysis, project management, knowledge-based system development, spreadsheets, document preparation, user interface development, document illustration, forms and database management.", "Automation-Control": 0.6116451621, "Qwen2": "Yes"} {"id": "67858994", "revid": "1056575282", "url": "https://en.wikipedia.org/wiki?curid=67858994", "title": "Iterative rational Krylov algorithm", "text": "The iterative rational Krylov algorithm (IRKA) is an iterative algorithm useful for model order reduction (MOR) of single-input single-output (SISO) linear time-invariant dynamical systems. At each iteration, IRKA does an Hermite type interpolation of the original system transfer function. Each interpolation requires solving formula_1 shifted pairs of linear systems, each of size formula_2, where formula_3 is the original system order and formula_1 is the desired reduced model order (usually formula_5).\nThe algorithm was first introduced by Gugercin, Antoulas and Beattie in 2008. It is based on a first order necessary optimality condition, initially investigated by Meier and Luenberger in 1967.
The first convergence proof of IRKA was given by Flagg, Beattie and Gugercin in 2012, for a particular class of systems.\nMOR as an optimization problem.\nConsider a SISO linear time-invariant dynamical system, with input formula_6 and output formula_7:\nApplying the Laplace transform, with zero initial conditions, we obtain the transfer function formula_9, which is a ratio of polynomials:\nAssume formula_9 is stable. Given formula_12, MOR tries to approximate the transfer function formula_9 by a stable rational transfer function formula_14 of order formula_1:\nA possible approximation criterion is to minimize the absolute error in the formula_17 norm:\nThis is known as the formula_17 optimization problem. This problem has been studied extensively, and it is known to be non-convex, which implies that it is usually difficult to find a global minimizer.\nMeier–Luenberger conditions.\nThe following first order necessary optimality condition for the formula_17 problem is of great importance for the IRKA algorithm.\nNote that the poles formula_21 are the eigenvalues of the reduced formula_22 matrix formula_23.\nHermite interpolation.\nAn Hermite interpolant formula_14 of the rational function formula_9, through formula_1 distinct points formula_27, has components:\nwhere the matrices formula_29 and formula_30 may be found by solving formula_1 dual pairs of linear systems, one for each shift [Theorem 1.1]:\nIRKA algorithm.\nAs can be seen from the previous section, finding an Hermite interpolant formula_14 of formula_9 through formula_1 given points is relatively easy. The difficult part is finding the correct interpolation points. IRKA tries to iteratively approximate these \"optimal\" interpolation points.\nFor this, it starts with formula_1 arbitrary interpolation points (closed under conjugation), and then, at each iteration formula_37, it imposes the first order necessary optimality condition of the formula_38 problem:\n1.
find the Hermite interpolant formula_14 of formula_9, through the current formula_1 shift points: formula_42.\n2. update the shifts by using the poles of the new formula_14: formula_44\nThe iteration is stopped when the relative change in the set of shifts of two successive iterations is less than a given tolerance. This condition may be stated as:\nAs already mentioned, each Hermite interpolation requires solving formula_1 shifted pairs of linear systems, each of size formula_2:\nAlso, updating the shifts requires finding the formula_1 poles of the new interpolant formula_14. That is, finding the formula_1 eigenvalues of the reduced formula_22 matrix formula_23.\nPseudocode.\nThe following is pseudocode for the IRKA algorithm [Algorithm 4.1].\n algorithm IRKA\n input: formula_54, formula_55, formula_56 closed under conjugation\n formula_57 % Solve primal systems\n formula_58 % Solve dual systems\n while relative change in {formula_59} > tol\n formula_60 % Reduced order matrix\n formula_61 % Update shifts, using poles of formula_62\n formula_57 % Solve primal systems\n formula_64 % Solve dual systems\n end while\n return formula_65 % Reduced order model\nConvergence.\nA SISO linear system is said to have symmetric state space (SSS) whenever: formula_66. This type of system appears in many important applications, such as in the analysis of RC circuits and in inverse problems involving 3D Maxwell's equations.
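The pseudocode above can be sketched in Python with NumPy. This is a minimal illustration for a SISO system (A, b, c), not the authors' implementation; the initial shifts and the convergence test are simplified assumptions:

```python
import numpy as np

def irka(A, b, c, r, tol=1e-6, max_iter=100):
    """Minimal IRKA sketch for a SISO system x' = Ax + bu, y = c^T x.

    r is the desired reduced order; the shifts are updated with the
    mirrored poles of the reduced matrix until they stagnate.
    """
    n = A.shape[0]
    I = np.eye(n)
    shifts = np.linspace(0.5, 1.5, r).astype(complex)  # arbitrary start, closed under conjugation
    for _ in range(max_iter):
        # Solve the r primal and dual shifted linear systems
        V = np.column_stack([np.linalg.solve(s * I - A, b) for s in shifts])
        W = np.column_stack([np.linalg.solve((s * I - A).T, c) for s in shifts])
        # Reduced-order matrix via the oblique (Petrov-Galerkin) projection
        Ar = np.linalg.solve(W.T @ V, W.T @ A @ V)
        new_shifts = -np.linalg.eigvals(Ar)  # mirror the reduced poles
        change = np.linalg.norm(np.sort_complex(new_shifts) - np.sort_complex(shifts))
        shifts = new_shifts
        if change < tol * np.linalg.norm(shifts):
            break
    # Assemble the reduced model from the final projection bases
    E = W.T @ V
    Ar = np.linalg.solve(E, W.T @ A @ V)
    br = np.linalg.solve(E, W.T @ b)
    cr = V.T @ c
    return Ar, br, cr
```

For a stable symmetric system with b = c (an SSS system), W coincides with V and the projection reduces to a Galerkin projection, which keeps the reduced model stable at every iteration.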
For SSS systems with distinct poles, the following convergence result has been proven: \"IRKA is a locally convergent fixed point iteration to a local minimizer of the formula_17 optimization problem.\"\nAlthough there is no convergence proof for the general case, numerous experiments have shown that IRKA often converges rapidly for different kinds of linear dynamical systems.\nExtensions.\nThe IRKA algorithm has been extended by the original authors to multiple-input multiple-output (MIMO) systems, and also to discrete-time and differential algebraic systems [Remark 4.1].\nSee also.\nModel order reduction", "Automation-Control": 0.9614295959, "Qwen2": "Yes"} {"id": "33886025", "revid": "1150059149", "url": "https://en.wikipedia.org/wiki?curid=33886025", "title": "Stability (learning theory)", "text": "Stability, also known as algorithmic stability, is a notion in computational learning theory of how a machine learning algorithm's output changes with small perturbations to its inputs. A stable learning algorithm is one for which the prediction does not change much when the training data is modified slightly. For instance, consider a machine learning algorithm that is being trained to recognize handwritten letters of the alphabet, using 1000 examples of handwritten letters and their labels (\"A\" to \"Z\") as a training set. One way to modify this training set is to leave out an example, so that only 999 examples of handwritten letters and their labels are available. A stable learning algorithm would produce a similar classifier with both the 1000-element and 999-element training sets.\nStability can be studied for many types of learning problems, from language learning to inverse problems in physics and engineering, as it is a property of the learning process rather than the type of information being learned. The study of stability gained importance in computational learning theory in the 2000s when it was shown to have a connection with generalization.
It was shown that for large classes of learning algorithms, notably empirical risk minimization algorithms, certain types of stability ensure good generalization.\nHistory.\nA central goal in designing a machine learning system is to guarantee that the learning algorithm will generalize, or perform accurately on new examples after being trained on a finite number of them. In the 1990s, milestones were reached in obtaining generalization bounds for supervised learning algorithms. The technique historically used to prove generalization was to show that an algorithm was consistent, using the uniform convergence properties of empirical quantities to their means. This technique was used to obtain generalization bounds for the large class of empirical risk minimization (ERM) algorithms. An ERM algorithm is one that selects a solution from a hypothesis space formula_1 in such a way as to minimize the empirical error on a training set formula_2.\nA general result, proved by Vladimir Vapnik for ERM binary classification algorithms, is that for any target function and input distribution, any hypothesis space formula_1 with VC-dimension formula_4, and formula_5 training examples, the algorithm is consistent and will produce a training error that is at most formula_6 (plus logarithmic factors) from the true error. The result was later extended to almost-ERM algorithms with function classes that do not have unique minimizers.\nVapnik's work, using what became known as VC theory, established a relationship between generalization of a learning algorithm and properties of the hypothesis space formula_1 of functions being learned. However, these results could not be applied to algorithms with hypothesis spaces of unbounded VC-dimension. Put another way, these results could not be applied when the information being learned had a complexity that was too large to measure.
Some of the simplest machine learning algorithms, for instance for regression, have hypothesis spaces with unbounded VC-dimension. Another example is language learning algorithms that can produce sentences of arbitrary length.\nStability analysis was developed in the 2000s for computational learning theory and is an alternative method for obtaining generalization bounds. The stability of an algorithm is a property of the learning process, rather than a direct property of the hypothesis space formula_1, and it can be assessed in algorithms that have hypothesis spaces with unbounded or undefined VC-dimension such as nearest neighbor. A stable learning algorithm is one for which the learned function does not change much when the training set is slightly modified, for instance by leaving out an example. A measure of leave-one-out error is used in a cross-validation leave-one-out (CVloo) procedure to evaluate a learning algorithm's stability with respect to the loss function. As such, stability analysis is the application of sensitivity analysis to machine learning.\nPreliminary definitions.\nWe define several terms related to learning algorithms and their training sets, so that we can then define stability in multiple ways and present theorems from the field.\nA machine learning algorithm, also known as a learning map formula_9, maps a training data set, which is a set of labeled examples formula_11, onto a function formula_12 from formula_13 to formula_14, where formula_13 and formula_14 are in the same space as the training examples. The functions formula_12 are selected from a hypothesis space of functions called formula_1.\nThe training set from which an algorithm learns is defined as\nformula_19\nand is of size formula_20 in formula_21\ndrawn i.i.d. from an unknown distribution D.\nThus, the learning map formula_9 is defined as a mapping from formula_23 into formula_1, mapping a training set formula_2 onto a function formula_26 from formula_13 to formula_14.
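The leave-one-out perturbation used in these definitions can be illustrated empirically. The sketch below measures how often a k-nearest-neighbour prediction changes when a single training example is removed; the classifier, data, and averaging rule are illustrative choices, not part of the formal definitions:

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x, k=3):
    """Plain k-nearest-neighbour prediction with Euclidean distance."""
    d = np.linalg.norm(X_train - x, axis=1)
    idx = np.argsort(d)[:k]
    return Counter(y_train[idx]).most_common(1)[0][0]

def loo_stability_estimate(X, y, k=3):
    """Fraction of training points whose prediction changes when that
    point is left out of the training set -- an empirical proxy for
    the leave-one-out stability of the learning algorithm."""
    m = len(X)
    changes = 0
    for i in range(m):
        mask = np.arange(m) != i
        full = knn_predict(X, y, X[i], k)          # trained on all of S
        loo = knn_predict(X[mask], y[mask], X[i], k)  # trained on S^{\i}
        changes += (full != loo)
    return changes / m
```

On well-separated data the estimate is near zero, matching the intuition that a stable algorithm barely reacts to removing one example.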
Here, we consider only deterministic algorithms where formula_9 is symmetric with respect to formula_2, i.e. it does not depend on the order of the elements in the training set. Furthermore, we assume that all functions are measurable and all sets are countable.\nThe loss formula_31 of a hypothesis formula_12 with respect to an example formula_33 is then defined as formula_34.\nThe empirical error of formula_12 is formula_36.\nThe true error of formula_12 is formula_38.\nGiven a training set S of size m, we will build, for all i = 1, ..., m, modified training sets as follows:\nformula_39\nformula_40\nDefinitions of stability.\nHypothesis Stability.\nAn algorithm formula_9 has hypothesis stability β with respect to the loss function V if the following holds:\nformula_42\nPoint-wise Hypothesis Stability.\nAn algorithm formula_9 has point-wise hypothesis stability β with respect to the loss function V if the following holds:\nformula_44\nError Stability.\nAn algorithm formula_9 has error stability β with respect to the loss function V if the following holds:\nformula_46\nUniform Stability.\nAn algorithm formula_9 has uniform stability β with respect to the loss function V if the following holds:\nformula_48\nA probabilistic version of uniform stability β is:\nformula_49\nAn algorithm is said to be stable when the value of formula_50 decreases as formula_51.\nLeave-one-out cross-validation (CVloo) Stability.\nAn algorithm formula_9 has CVloo stability β with respect to the loss function V if the following holds:\nformula_53\nThe definition of CVloo stability is equivalent to the point-wise hypothesis stability seen earlier.\nExpected-leave-one-out error (formula_54) Stability.\nAn algorithm formula_9 has formula_54 stability if for each n there exists a formula_57 and a formula_58 such that:\nformula_59, with formula_57 and formula_58 going to zero for formula_62\nClassic theorems.\nFrom Bousquet and Elisseeff (02):\nFor symmetric learning algorithms with bounded loss, if
the algorithm has Uniform Stability with the probabilistic definition above, then the algorithm generalizes.\nUniform Stability is a strong condition which is not met by all algorithms, but it is, surprisingly, met by the large and important class of regularization algorithms.\nThe generalization bound is given in the article.\nFrom Mukherjee et al. (06):\nThis is an important result for the foundations of learning theory, because it shows that two previously unrelated properties of an algorithm, stability and consistency, are equivalent for ERM (and certain loss functions).\nThe generalization bound is given in the article.\nAlgorithms that are stable.\nThis is a list of algorithms that have been shown to be stable, and the article where the associated generalization bounds are provided.", "Automation-Control": 0.9673998952, "Qwen2": "Yes"} {"id": "40372902", "revid": "27015025", "url": "https://en.wikipedia.org/wiki?curid=40372902", "title": "AMPPS", "text": "AMPPS is a solution stack of Apache, MySQL, MongoDB, PHP, Perl and Python for Windows NT, Linux and macOS. It comes with 419 PHP web applications, over 1000 PHP classes and 6 versions of PHP. AMPPS is created by Softaculous Ltd., a company founded in 2009 which makes the Softaculous Auto installer. AMPPS is used to develop PHP and MySQL applications such as WordPress, Joomla, and Drupal.\nSoftware list.\nThe software has 419 PHP applications.", "Automation-Control": 0.9985174537, "Qwen2": "Yes"} {"id": "40397682", "revid": "26398660", "url": "https://en.wikipedia.org/wiki?curid=40397682", "title": "Laser machine control", "text": "Laser machine control is an electronic system for automatic operation of land scrapers or excavators. Advanced GPS-based systems have replaced laser-based systems in some countries, but laser machine control is still used in countries such as India.\nSystem Components.\nThere are different types of systems patented by different inventors or companies.
But the generalised system has four essential parts: a rotating laser transmitter, a laser receiver, a control unit, and solenoid valves.\nRotating laser transmitter.\nThe laser transmitter emits a narrow laser beam which rotates horizontally. This creates a horizontal reference plane of laser light. The beam can be spread horizontally but should be sharp and narrow in the vertical plane. The transmitter may use an infrared or red laser, but as a safety measure all manufacturers limit the power to 5 milliwatts. The transmitter uses semiconductor laser diodes as the laser source, powered by batteries.\nThe transmitter corrects its position to obtain a perfectly horizontal plane using acceleration, gyro, or bubble-based level sensors and a microprocessor-based servo mechanism.\nLaser receiver.\nThe laser receiver detects the height of the laser plane with an array of photodiodes, generally a series of photodiodes in a 9\" strip with a gap between diodes equal to the width of the beam. The receiver reads the height with these sensors, converts it to a usable analog or digital signal, and sends it to the machine control unit. Modern systems use the CAN protocol for the output signals, whereas older systems use a 4-, 5- or 6-channel analog current signal.\nControl unit.\nThe control unit is the processing unit which decides how to react to the incoming signals to achieve the desired land level. It adjusts the height of the scraper by activating or deactivating the electro-hydraulic actuators (solenoid valves). Generally these are on-off solenoid valves, but some modern systems use proportional valves. The control unit's dashboard provides keys and indicators as a user interface, through which the user can monitor the operation or operate the machine manually when needed.
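The decision logic of such a control unit can be sketched as a simple dead-band (on/off) controller; the function name, units, and threshold below are illustrative assumptions, not taken from any particular product:

```python
def grade_control(target_height, receiver_height, dead_band=5.0):
    """Decide the solenoid valve action from the laser receiver reading.

    target_height, receiver_height: beam position on the receiver strip
    in mm (illustrative units); dead_band: half-width of the no-action
    zone. Returns 'raise', 'lower' or 'hold' for the hydraulic valves.
    """
    error = receiver_height - target_height
    if error > dead_band:
        return "lower"   # blade too high: open the lowering valve
    if error < -dead_band:
        return "raise"   # blade too low: open the raising valve
    return "hold"        # within the dead band: keep valves closed
```

A proportional-valve system would instead scale the valve command with the error rather than switching between three discrete states.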
\nSolenoid valves.\nSolenoid valves act as an interface between the electrical system and the hydraulic system; they can be on-off or proportional, depending on the control unit.", "Automation-Control": 0.90187186, "Qwen2": "Yes"} {"id": "51813998", "revid": "43778658", "url": "https://en.wikipedia.org/wiki?curid=51813998", "title": "Nozzle and flapper", "text": "The nozzle and flapper mechanism is a displacement-type detector which converts mechanical movement into a pressure signal by covering the opening of a nozzle with a flat plate called the flapper. This restricts fluid flow through the nozzle and generates a pressure signal.\nIt is a widely used mechanical means of creating a high-gain fluid amplifier. In industrial control systems, such mechanisms played an important part in the development of pneumatic PID controllers and are still widely used today in pneumatic and hydraulic control and instrumentation systems.\nOperating principle.\nThe operating principle makes use of the high-gain effect when a flapper plate is placed a small distance from a small pressurized nozzle emitting a fluid.\nThe example shown is pneumatic. At sub-millimeter distances, a small movement of the flapper plate results in a large change in flow. The nozzle is fed from a chamber which is in turn fed by a restriction, so changes of flow result in changes of chamber pressure. The nozzle diameter must be larger than the restriction orifice in order to work. The high gain of the open-loop mechanism can be made linear using a pressure feedback bellows on the flapper to create a force balance system with a linear output.
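The force-balance behaviour just described can be approximated by a clamped proportional relation. The sketch below is a minimal idealisation in which the gain and mid-range bias are illustrative assumptions, with the output held to the standard 0.2-1.0 bar signal range:

```python
def pneumatic_output(setpoint, process_value, gain=4.0,
                     live_zero=0.2, span=0.8):
    """Idealised force-balance proportional amplifier output in bar.

    live_zero and span give the standard 0.2-1.0 bar signal range;
    gain is the proportional gain of the nozzle-flapper stage
    (an illustrative value, not a property of any real device).
    """
    error = setpoint - process_value              # both in fractional units
    out = live_zero + 0.5 * span + gain * error   # mid-range at zero error
    return min(max(out, live_zero), live_zero + span)  # clamp to 0.2-1.0 bar
```

The clamp models saturation of the amplifier: outside the linear region the output pins to the bottom or top of the signal range.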
The \"live\" zero of 0.2 bar or 3 psi is set by the bias spring which ensures that the device is working in its linear region.\nThe industry standard ranges of either 3-15 psi (USA), or 0.2 - 1.0 bar (metric), is normally used in pneumatic PID controllers, valve positioning servomechanisms and force balance transducers.\nApplication.\nThe nozzle and flapper in pneumatic controls is a simple low maintenance device which operates well in a harsh industrial environment, and does not present an explosion risk in hazardous atmospheres. They were the industry controller amplifier for many decades until the advent of practical and reliable electronic high gain amplifiers. However they are still used extensively for field devices such as control valve positioners, and I to P and P to I converters.\nA proportional controller schematic is shown here.\nThe set point is transmitted through the flapper plate via the fulcrum to close the orifice and increase the chamber pressure. The feedback bellows resists and the output signal goes to the control valve which opens with increasing actuator pressure. As the flow increases, the process value bellows counteracts the set point bellows until equilibrium is reached. This will be a value below the set point, as there must always be an error to generate an output. The addition of an integral or \"reset\" bellows would remove this error.\nThe principle is also used in hydraulic systems controls.", "Automation-Control": 0.7807132006, "Qwen2": "Yes"} {"id": "49353098", "revid": "7696790", "url": "https://en.wikipedia.org/wiki?curid=49353098", "title": "Factory Bot (Rails Testing)", "text": "Factory Bot, originally known as Factory Girl, is a software library for the Ruby programming language that provides factory methods to create test fixtures for automated software testing. 
The fixture objects can be created on the fly; they may be plain Ruby objects with a predefined state, ORM objects with existing database records, or mock objects.\nFactory Bot is often used in testing Ruby on Rails applications, where it replaces Rails' built-in fixture mechanism. Rails' default setup uses a pre-populated database as test fixtures, which are global for the complete test suite. Factory Bot, on the other hand, allows developers to define a different setup for each test and thus helps to avoid dependencies within the test suite.\nFactories.\nDefining Factories.\nA factory is defined by a name and its set of attributes. The class of the test object is either determined through the name of the factory or set explicitly.\nFactoryBot.define do\n # Determine class automatically\n factory :user do\n end\n # Specify class explicitly\n factory :superhero, class: User do\n end\nend\nFeatures.\nTraits.\nTraits allow grouping of attributes which can be applied to any factory.\nfactory :status do \n trait :international do\n end\n trait :resident do\n end\n trait :comp_sci do\n end\n trait :electrical do\n end\n factory :comp_sci_international_student, traits: [:international, :comp_sci]\n factory :electrical_resident_student, traits: [:resident, :electrical]\nend\nAlias.\nFactory Bot allows creating aliases for existing factories so that the factories can be reused.\nfactory :user, aliases: [:student, :teacher] do\nend\nfactory :notice do\n teacher\n # Alias \"teacher\" used for \"user\"\nend\nfactory :notification do\n student\n # Alias \"student\" used for \"user\"\nend\nSequences.\nFactory Bot allows creating unique values for a test attribute in a given format.\nFactoryBot.define do\n factory :title do\n sequence(:name) { |n| \"Title #{n}\" } # Title 1, Title 2 and so on...\n end\nend\nInheritance.\nFactories can be inherited while creating a factory for a class. This allows the user to reuse common attributes from parent factories and avoid writing duplicate code for duplicate attributes.
Factories can be written in a nested fashion to leverage inheritance.\nfactory :user do\n name \"Michael\"\n factory :admin do\n admin_rights true\n end\nend\nadmin_user = create(:admin)\nadmin_user.name # Michael\nadmin_user.admin_rights # true\nParent factories can also be specified explicitly.\nfactory :user do\nend\nfactory :admin, parent: :user do\nend\nCallback.\nFactory Bot allows custom code to be injected at four different stages:", "Automation-Control": 0.7118929625, "Qwen2": "Yes"} {"id": "61797927", "revid": "3610", "url": "https://en.wikipedia.org/wiki?curid=61797927", "title": "Distribution-free control chart", "text": "Distribution-free (nonparametric) control charts are one of the most important tools of statistical process monitoring and control. Implementation techniques of distribution-free control charts do not require any knowledge about the underlying process distribution or its parameters. The main advantage of distribution-free control charts is their in-control robustness, in the sense that, irrespective of the nature of the underlying process distributions, the properties of these control charts remain the same when the process is operating smoothly without the presence of any assignable cause.\nEarly research on nonparametric control charts may be found as early as 1981, when P.K. Bhattacharya and D. Frierson introduced a nonparametric control chart for detecting small disorders. However, major growth of nonparametric control charting schemes has taken place only in recent years.\nPopular distribution-free control charts.\nThere are distribution-free control charts for both Phase-I analysis and Phase-II monitoring. \nOne of the most notable distribution-free control charts for Phase-I analysis is the RS/P chart proposed by G. Capizzi and G. Masaratto. RS/P charts separately monitor the location and scale parameters of a univariate process using two separate charts.
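A classic member of this family is the nonparametric sign chart: the number of subgroup observations above a known in-control median follows a Binomial(n, 1/2) distribution regardless of the process distribution, which is exactly the distribution-free property described above. The sketch below uses that fact; the two-sided signalling rule and the alpha level are illustrative choices:

```python
from math import comb

def binom_cdf_half(n, j):
    """P(X <= j) for X ~ Binomial(n, 1/2)."""
    return sum(comb(n, k) for k in range(j + 1)) / 2 ** n

def sign_chart_check(subgroup, median0, alpha=0.01):
    """Distribution-free sign chart for one subgroup.

    Signals if the count of observations above the in-control median
    median0 falls in either binomial tail (two-sided test at level alpha).
    """
    n = len(subgroup)
    t = sum(1 for x in subgroup if x > median0)
    tail = min(binom_cdf_half(n, t), 1 - binom_cdf_half(n, t - 1))
    return "signal" if 2 * tail <= alpha else "in control"
```

Because the in-control distribution of the statistic never depends on the process, the false-alarm rate is the same for any continuous distribution, which is the in-control robustness the article emphasises.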
In 2019, Chenglong Li, Amitava Mukherjee and Qin Su proposed a single distribution-free control chart for Phase-I analysis using the multisample Lepage statistic.\nSome popular Phase-II distribution-free control charts for univariate continuous processes include:", "Automation-Control": 0.9860746264, "Qwen2": "Yes"} {"id": "20879499", "revid": "10289486", "url": "https://en.wikipedia.org/wiki?curid=20879499", "title": "Variable structure system", "text": "A variable structure system, or VSS, is a discontinuous nonlinear system of the form\nwhere formula_2 is the state vector, formula_3 is the time variable, and formula_4 is a \"piecewise continuous\" function. Due to the \"piecewise\" continuity of these systems, they behave like different continuous nonlinear systems in different regions of their state space. At the boundaries of these regions, their dynamics switch abruptly. Hence, their \"structure\" \"varies\" over different parts of their state space.\nThe development of variable structure control depends upon methods of analyzing variable structure systems, which are special cases of hybrid dynamical systems.\nReferences.\n2. Emelyanov, S.V., ed. (1967). Variable Structure Control Systems. Moscow: Nauka.\n3. Emelyanov S, Utkin V, Tarin V, Kostyleva N, Shubladze A, Ezerov V, Dubrovsky E. 1970. Theory of Variable Structure Control Systems (in Russian). Moscow: Nauka.\n4. Variable Structure Systems: From Principles to Implementation. A. Sabanovic, L. Fridman and S. Spurgeon (eds.), IEE, London, 2004, ISBN 0863413501.\n5. Advances in Variable Structure Systems and Sliding Mode Control—Theory and Applications. Li, S., Yu, X., Fridman, L., Man, Z., Wang, X. (Eds.), Studies in Systems, Decision and Control, v.115, Springer, 2017, ISBN 978-3-319-62895-0\n6. Variable-Structure Systems and Sliding-Mode Control. M. Steinberger, M. Horn, L.
Fridman (eds.), Studies in Systems, Decision and Control, v.271, Springer International Publishing, Cham, 2020, ISBN 978-3-030-36620-9.\nFurther reading.\nY. Shtessel, C. Edwards, L. Fridman, A. Levant. Sliding Mode Control and Observation, Series: Control Engineering, Birkhauser: Basel, 2014, ISBN 978-0-81764-8923", "Automation-Control": 0.9999678135, "Qwen2": "Yes"} {"id": "20899875", "revid": "46226506", "url": "https://en.wikipedia.org/wiki?curid=20899875", "title": "System Center Mobile Device Manager", "text": "System Center Mobile Device Manager is a mobile device management (MDM) solution providing over-the-air (OTA) management of Windows Mobile smartphone security, applications and settings. System Center Mobile Device Manager supports devices running the Windows Mobile 6.1 and later operating systems. Earlier, the functions of this product were provided by System Center Configuration Manager.\nImportant: Mainstream support for System Center Mobile Device Manager 2008 ended on July 9, 2013, and extended support ended on July 10, 2018.\nFeatures.\nThrough Active Directory-based policies, the product provides the following functions:\nClient.\nThe System Center Mobile Device Manager client is located in ROM. All device management activities are centrally managed from the server side.\nServer.\nSystem Center Mobile Device Manager server components are deployed on multiple server computers, including a Mobile VPN server, a Windows Update server, and an Active Directory domain controller.
They include:", "Automation-Control": 0.9999474287, "Qwen2": "Yes"} {"id": "32303268", "revid": "13777151", "url": "https://en.wikipedia.org/wiki?curid=32303268", "title": "Monitorix", "text": "Monitorix is a computer network monitoring tool that periodically collects system data and uses a web interface to show the information as graphs. Monitorix allows monitoring of overall system performance, and can help detect bottlenecks, failures, unusually long response times and other anomalies.\nOne part of the tool is a collector, called \"monitorix\". This Perl daemon is started automatically like any other system service. The second program of Monitorix is a CGI script (\"monitorix.cgi\"). Since version 3.0, Monitorix has included its own HTTP server, which makes installing a separate web server unnecessary.\nMonitorix is free software licensed under the terms of the GNU General Public License version 2 (GPLv2) as published by the Free Software Foundation. It uses RRDtool (written by Tobi Oetiker) and is written in Perl.", "Automation-Control": 0.7352768183, "Qwen2": "Yes"} {"id": "43415470", "revid": "1108292", "url": "https://en.wikipedia.org/wiki?curid=43415470", "title": "Markov chain approximation method", "text": "In numerical methods for stochastic differential equations, the Markov chain approximation method (MCAM) is one of several numerical schemes used in stochastic control theory. The simple adaptation of deterministic schemes such as the Runge–Kutta method to stochastic models does not work. The MCAM is a powerful and widely applicable set of ideas for numerical and other approximation problems in stochastic processes.
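The Markov chain approximation idea can be made concrete for a scalar controlled diffusion dx = b dt + sigma dW approximated on a grid of spacing h. The sketch below uses the standard upwind transition probabilities of Kushner's construction; the function name and interface are illustrative:

```python
def mcam_transition_probs(b, sigma, h):
    """Locally consistent transition probabilities for approximating the
    scalar diffusion dx = b dt + sigma dW on a grid of spacing h.

    Returns (p_up, p_down, dt): probabilities of moving to x+h and x-h,
    and the interpolation time step. The first and second moments of the
    chain's increment match b*dt and sigma^2*dt up to O(h), which is the
    local consistency the method requires.
    """
    q = sigma ** 2 + h * abs(b)                       # normalising factor
    p_up = (sigma ** 2 / 2 + h * max(b, 0.0)) / q     # upwind drift term
    p_down = (sigma ** 2 / 2 + h * max(-b, 0.0)) / q
    dt = h ** 2 / q                                   # interpolation interval
    return p_up, p_down, dt
```

A controlled version simply makes b (and possibly sigma) a function of the chosen control, and the optimal cost is then computed by dynamic programming on the resulting finite-state chain.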
They are counterparts of methods from deterministic control theory, such as optimal control theory.\nThe basic idea of the MCAM is to approximate the original controlled process by a chosen controlled Markov process on a finite state space. If necessary, one must also approximate the cost function by one that matches the Markov chain chosen to approximate the original stochastic process.", "Automation-Control": 0.7618254423, "Qwen2": "Yes"} {"id": "17509328", "revid": "8586731", "url": "https://en.wikipedia.org/wiki?curid=17509328", "title": "Discrete event dynamic system", "text": "In control engineering, a discrete-event dynamic system (DEDS) is a discrete-state, event-driven system of which the state evolution depends entirely on the occurrence of asynchronous discrete events over time. Although similar to continuous-variable dynamic systems (CVDS), DEDS consists solely of discrete state spaces and event-driven state transition mechanisms.\nTopics in DEDS include:", "Automation-Control": 0.9978460073, "Qwen2": "Yes"} {"id": "17510593", "revid": "12416903", "url": "https://en.wikipedia.org/wiki?curid=17510593", "title": "Hyperstability", "text": "In stability theory, hyperstability is a property of a system that requires the state vector to remain bounded if the inputs are restricted to belonging to a subset of the set of all possible inputs. \nDefinition: A system is hyperstable if there are two constants formula_1 such that any state trajectory of the system satisfies the inequality:", "Automation-Control": 0.8998438716, "Qwen2": "Yes"} {"id": "861757", "revid": "42342156", "url": "https://en.wikipedia.org/wiki?curid=861757", "title": "IBM 1132", "text": "The IBM 1132 line printer was the normal printer for the IBM 1130 computer system. It printed 120-character lines at 80 lines per minute.
The character set consisted of numbers, upper-case letters and some special characters.\nThe 1965-introduced 1132 was built around a stripped-down IBM 407 printing mechanism. The 407 was IBM's top-of-the-line accounting machine from the 1950s. The 1130 had 120 power transistors, each wired to the print magnet for one printer column. The magnet released a lever that engaged a cam with a spinning clutch shaft. The engaged cam then made one revolution, pushing its print wheel toward the ribbon and paper, thereby printing one character. \nAs the set of 120 print wheels spun, the 1130 received an interrupt as each of the possible 48 characters was about to move into position. The printing driver software had to quickly output a 120-bit vector designating which transistors were to fire so as to drive the print wheel against the ribbon and paper. This put a big performance burden on the CPU, but resulted in an inexpensive (for the time) printer.\nSometimes a printer output line transistor would fail, resulting in a blank print position. Someone who knew their way around inside the 1130 could swap circuit cards so as to move the bad print position near the right end of the printed line. This kept the 1130 usable until the repair person showed up.\nThe 1132 came in two models with the following characteristics:", "Automation-Control": 0.9792288542, "Qwen2": "Yes"} {"id": "3638586", "revid": "575347", "url": "https://en.wikipedia.org/wiki?curid=3638586", "title": "Cryptix General License", "text": "The Cryptix General License is in use by the Cryptix project, well known for their Java Cryptography Extension. It is a modified version of the BSD license, with similarly liberal terms.
The Free Software Foundation states that it is a permissive free software license compatible with the GNU General Public License.", "Automation-Control": 0.8534365296, "Qwen2": "Yes"} {"id": "47341174", "revid": "7903804", "url": "https://en.wikipedia.org/wiki?curid=47341174", "title": "Collaborative diffusion", "text": "Collaborative Diffusion is a type of pathfinding algorithm which uses the concept of \"antiobjects\", objects within a computer program that function opposite to what would be conventionally expected. Collaborative Diffusion is typically used in video games in which multiple agents must path towards a single target agent, such as the ghosts in Pac-Man. Here the background tiles serve as antiobjects, carrying out the calculations needed to create a path while the foreground objects simply react accordingly; conventionally, the foreground objects would be expected to be responsible for their own pathing.\nCollaborative Diffusion is favored for its efficiency over other pathfinding algorithms, such as A*, when handling multiple agents. Also, this method allows elements of competition and teamwork to be easily incorporated between tracking agents. Notably, the time taken to calculate paths remains constant as the number of agents increases.", "Automation-Control": 0.9074557424, "Qwen2": "Yes"} {"id": "65829281", "revid": "41591971", "url": "https://en.wikipedia.org/wiki?curid=65829281", "title": "Micro injection molding", "text": "Micro injection molding is a molding process for the manufacture of plastics components for shot weights of 0.1 to 1 gram with tolerances in the range of 10 to 100 microns. This molding process permits the manufacture of complicated small geometries with maximum possible accuracy and precision.\nBasic concept.\nThe basic concept of the micro injection molding process is quite similar to the regular injection molding process.
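The Collaborative Diffusion scheme described above can be illustrated with a minimal sketch. Everything below (the grid size, the scent constant, the update rule details and all names) is an assumption for illustration, not taken from the article: background tiles repeatedly diffuse a "scent" injected at the target tile, and each pursuing agent simply climbs the resulting scent field, so the tiles rather than the agents do the pathfinding work.

```python
# Minimal Collaborative Diffusion sketch (illustrative assumptions throughout).
GOAL_SCENT = 1000.0   # scent injected at the target tile each update (assumed)

def diffuse(grid, target, walls):
    rows, cols = len(grid), len(grid[0])
    new = [[0.0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            if (r, c) in walls:
                continue              # walls carry no scent
            if (r, c) == target:
                new[r][c] = GOAL_SCENT
                continue
            nbrs = [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
            vals = [grid[nr][nc] for nr, nc in nbrs
                    if 0 <= nr < rows and 0 <= nc < cols and (nr, nc) not in walls]
            new[r][c] = 0.25 * sum(vals)   # quarter-sum of accessible neighbours
    return new

def step_agent(grid, pos):
    r, c = pos
    rows, cols = len(grid), len(grid[0])
    cands = [(nr, nc) for nr, nc in
             [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1), (r, c)]
             if 0 <= nr < rows and 0 <= nc < cols]
    return max(cands, key=lambda p: grid[p[0]][p[1]])   # climb the scent field

grid = [[0.0] * 8 for _ in range(8)]
target, walls = (0, 7), {(0, 3), (1, 3), (2, 3)}
for _ in range(60):                   # let the scent field (nearly) settle
    grid = diffuse(grid, target, walls)
ghost = (7, 0)
for _ in range(64):                   # hill-climbing pursuer reaches the target
    ghost = step_agent(grid, ghost)
```

Because the tiles carry the computation, adding more pursuers only adds cheap `step_agent` calls while the diffusion cost is unchanged, which matches the constant-time claim above.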
In this process, a micro injection unit is integrated in the injection molding machine. For the production of micro components, the machine and process technology mainly depend on the following points:\nCritical factors.\nParting line issue.\nA parting line (PL) is the line of separation on the part where the two halves of the mold meet. Matching the parting line is a significant issue for micro parts. Interlocking features on the mold cavity and core, which ensure precise mating, are used to reduce such issues.\nDegating issue.\nAnother major critical factor of micro injection technology is that the smaller part size causes problems with degating (gate removal).\nSprue and runner size.\nRunner and sprue diameters are another concern. The total volume of the feed system (sprue, runners and gates) can exceed the volume of the parts by a factor of 100 or more.\nMaterials and applications for micro injection molding.\nThe most common polymers used in micro injection molding are reported in the table below: \nMachines used for micro injection molding.\nIn the 1980s, micro injection molding techniques utilized traditional injection molding, but no dedicated machines were available until the mid-1990s. Currently, commercial micro molding systems are produced by Milacron, Arburg, and Sumitomo Demag as micro injection units for regular machines. At the same time, Wittmann Battenfeld, Babyplast and Desma are manufacturers of dedicated micro injection molding machines.\nMilacron developed two types of micro injection units: \nArburg developed a micro injection molding machine with an 8 mm injection screw to ensure a high degree of dosing precision.
This type of machine is combined with a second screw, which is responsible for melting and homogeneous mixing of the material.\nSumitomo Demag developed a customized micro molding injection unit suitable for micro parts weighing from 0.1 g to 5 g.\nApplications.\nMicro injection molding is widely applied for parts and devices in the medical, pharmaceutical, electronics, automotive, optical and other industries. In general, the medical micro injection molding market is the leading one, due to an increase in the usage of sophisticated micro components for endoscopic surgery, minimally invasive treatments, point-of-care testing and other advanced technology developments. Applications in other fields include parts for electric motors, micron-tolerance door components, thin-wall containers, etc.\nMarket prospects.\nThe miniaturization of automotive, medical, electronics, and telecommunications devices is driving the need for micro molding of smaller components. The global polymer and thermoplastic micro molding market covering medical, automotive, electronics and telecommunications was valued at $308m in 2012. The micro injection molding plastic market was valued at $1,145.85 million in 2022 and is anticipated to reach $2,640 million by 2030, growing at a compound annual growth rate (CAGR) of 11.0% from 2023 to 2030. ", "Automation-Control": 0.9932353497, "Qwen2": "Yes"} {"id": "65374549", "revid": "35936988", "url": "https://en.wikipedia.org/wiki?curid=65374549", "title": "Monique Chyba", "text": "Monique Chyba (born 1969) is a control theorist who works as a professor of mathematics at the University of Hawaiʻi at Mānoa. Her work on control theory has involved the theory of singular trajectories, and applications in the control of autonomous underwater vehicles.
More recently, she has also applied control theory to the prediction and modeling of the spread of COVID-19 in Hawaii.\nEducation and career.\nChyba's parents Mirek and Jana Chyba were Czech, but settled in Geneva, Switzerland. Chyba earned a Ph.D. through the University of Burgundy in Dijon, France, in 1997, while working as a teaching assistant at the University of Geneva. Her dissertation, "Le Cas Martinet en Geometrie Sous-Riemannienne [the Martinet case in sub-Riemannian geometry]", was supervised by Bernard Bonnard.\nAfter postdoctoral research at Pierre and Marie Curie University, Harvard University, INRIA Sophia Antipolis, Princeton University, and the University of California, Santa Cruz, she joined the University of Hawaiʻi faculty in 2002, and was promoted to full professor in 2012.\nBook.\nChyba is an author of the book "Singular Trajectories and their Role in Control Theory" (with Bernard Bonnard, Springer, 2003).\nRecognition.\nIn 2014, Chiba University in Japan gave Chyba their Science and Lectureship Award.", "Automation-Control": 0.6489708424, "Qwen2": "Yes"} {"id": "53959215", "revid": "1010306059", "url": "https://en.wikipedia.org/wiki?curid=53959215", "title": "FIR transfer function", "text": "A transfer function filter utilizes the transfer function and the convolution theorem to produce a filter. In this article, an example of such a filter using a finite impulse response is discussed, and an application of the filter to real-world data is shown.\nFIR (Finite Impulse Response) Linear filters.\nIn digital processing, an FIR filter is a time-invariant filter: it does not depend on the specific point in time, but rather on the time duration. The specification of this filter uses a transfer function whose frequency response passes only the desired frequencies of the input.
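The frequency-selective FIR filtering just described can be sketched numerically. The example below is illustrative only: the 5-tap moving-average impulse response and the two-tone test signal are assumptions, not the filter or data used in the article.

```python
import numpy as np

# Direct-form FIR filtering: y[n] = sum_k h[k] * x[n - k].
# A 5-tap moving average serves as an illustrative low-pass impulse response.
h = np.ones(5) / 5.0

n = np.arange(200)
x = (np.sin(2 * np.pi * 0.01 * n)          # desired low-frequency component
     + 0.5 * np.sin(2 * np.pi * 0.4 * n))  # high-frequency interference

# Non-recursive: the output is a weighted sum of input samples only.
y = np.convolve(x, h, mode="same")

# Compare spectra: bin 2 holds the 0.01 cycles/sample tone and bin 80 the
# 0.4 cycles/sample tone (200-sample record, so bin = frequency * 200).
X = np.abs(np.fft.rfft(x))
Y = np.abs(np.fft.rfft(y))
```

The moving average happens to place a spectral null near 0.4 cycles/sample, so the interference is strongly attenuated while the low-frequency tone passes almost unchanged.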
This type of filter is non-recursive, which means that the output can be derived entirely from a combination of the input values, without any recursive use of previous output values. There is no feedback loop feeding previous output values back into the new output. This is an advantage over recursive filters such as the IIR (infinite impulse response) filter in applications that require a linear phase response, because the input is passed without phase distortion.\nMathematical model.\nLet the output function be formula_1 and the input be formula_2. The convolution of the input with a transfer function formula_3 provides a filtered output. The mathematical model of this type of filter is:\nHere h(formula_5) is the transfer function of an impulse response to the input. The convolution allows the filter to be activated only when the input records a signal at the same time value. The filter returns the input values (x(t)) if k falls into the support region of the function h; this is why the filter is called finite-response. If k is outside the support region, the impulse response is zero, which makes the output zero. The h(formula_5) function can be thought of as a quotient of two functions.\nAccording to Huang (1981), using this mathematical model there are four methods of designing non-recursive linear filters with various concurrent filter designs:\nSingle-sided Linear Filter.\nInput function.\nDefine the input signal:\nformula_8 adds a random number from 1 to 200 to the sinusoidal function, which serves to distort the data.\nSingle-sided filter.\nUse an exponential function as the impulse response for the support region of positive values.\nThe frequency response of this filter resembles that of a low-pass filter, passing the lower frequencies.\nDouble-sided filter.\nLet the input signal be the same as in the single-sided case.\nUse an exponential function as the impulse response for the support region of positive values as before.
In this double-sided filter, another exponential function is also implemented. The opposite signs of the exponents keep the results finite when the exponential functions are computed.\nformula_10\nExamining this filter in the frequency domain, we see that the magnitude response follows the same trend as for the single-sided filter. However, the frequencies that can be passed are lower than those of the single-sided filter, which results in a smoother output. The significance of this is that double-sided linear filters perform better as filters.\nFIR Transfer Function Linear Filter Application.\nA linear filter performs better when it is double-sided. This requires the data to be known in advance, which makes it a challenge for these filters to function well in situations where signals cannot be known ahead of time, such as radio signal processing. On the other hand, this makes linear filters extremely useful for filtering pre-loaded data. In addition, because of their non-recursive nature, which preserves the phase angles of the input, linear filters are usually used in image processing, video processing, data processing or pattern detection. Some examples are image enhancement, restoration and pre-whitening for spectral analysis. Additionally, linear non-recursive filters are always stable and usually produce a purely real output, which makes them more favorable. They are also computationally simple, which is usually a significant advantage of the FIR linear filter.", "Automation-Control": 0.7428646088, "Qwen2": "Yes"} {"id": "56685825", "revid": "7903804", "url": "https://en.wikipedia.org/wiki?curid=56685825", "title": "Unscented optimal control", "text": "In mathematics, unscented optimal control combines the notion of the unscented transform with deterministic optimal control to address a class of uncertain optimal control problems.
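A minimal sketch of the unscented transform that underlies this combination may help. The symmetric sigma-point set and the weighting below are one standard construction; the spread parameter kappa and all numerical values (the mean, covariance and the linear dynamics used as "sigma-copies") are illustrative assumptions, not from the article.

```python
import numpy as np

# Symmetric sigma-point set for an uncertain state with mean m and covariance P:
# {m, m +/- columns of sqrt((n + kappa) P)}, with 2n + 1 points in total.

def sigma_points(m, P, kappa):
    n = len(m)
    S = np.linalg.cholesky((n + kappa) * P)   # a matrix square root of (n+kappa)P
    pts = [m]
    for i in range(n):
        pts.append(m + S[:, i])
        pts.append(m - S[:, i])
    return np.array(pts)

def ut_weights(n, kappa):
    w = np.full(2 * n + 1, 1.0 / (2.0 * (n + kappa)))
    w[0] = kappa / (n + kappa)
    return w

m = np.array([1.0, 0.0])
P = np.diag([0.1, 0.2])
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])                    # linear "sigma-copy" dynamics

kappa = 1.0
pts = sigma_points(m, P, kappa)
w = ut_weights(len(m), kappa)

prop = pts @ A.T                              # propagate each sigma point
mean_out = w @ prop                           # unscented mean estimate
d = prop - mean_out
cov_out = d.T @ (w[:, None] * d)              # unscented covariance estimate
```

For a linear map the unscented estimates are exact (mean A m, covariance A P Aᵀ); the value of the construction is that the same sigma-copies can be propagated through nonlinear dynamics, as in the ensemble described below.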
It is a specific application of Riemann–Stieltjes optimal control theory, a concept introduced by Ross and his coworkers.\nMathematical description.\nSuppose that the initial state formula_1 of a dynamical system,\nformula_2\nis an uncertain quantity. Let formula_3 be the sigma points. Then sigma-copies of the dynamical system are given by\nformula_4\nApplying standard deterministic optimal control principles to this ensemble generates an unscented optimal control. Unscented optimal control is a special case of tychastic optimal control theory. According to Aubin and Ross, tychastic processes differ from stochastic processes in that a tychastic process is conditionally deterministic.\nApplications.\nUnscented optimal control theory has been applied to UAV guidance, spacecraft attitude control, air-traffic control and low-thrust trajectory optimization.", "Automation-Control": 1.0000014305, "Qwen2": "Yes"} {"id": "41761", "revid": "41865877", "url": "https://en.wikipedia.org/wiki?curid=41761", "title": "Supervisory program", "text": "A supervisory program or supervisor is a computer program, usually part of an operating system, that controls the execution of other routines and regulates work scheduling, input/output operations, error actions, and similar functions, as well as the flow of work in a data processing system. \nIt can also refer to a program that allocates computer component space and schedules computer events by task queuing and system interrupts. Control of the system is returned to the supervisory program frequently enough to ensure that demands on the system are met.\nHistorically, this term was essentially associated with IBM's line of mainframe operating systems starting with OS/360. In other operating systems, the supervisor is generally called the kernel.\nIn the 1970s, IBM further abstracted the supervisor state from the hardware, resulting in a hypervisor that enabled full virtualization, i.e.
the capacity to run multiple operating systems on the same machine totally independently of each other. Hence, the first such system was called "Virtual Machine" or "VM".", "Automation-Control": 0.7895527482, "Qwen2": "Yes"} {"id": "21067297", "revid": "237572", "url": "https://en.wikipedia.org/wiki?curid=21067297", "title": "Impact extrusion", "text": "Impact extrusion is a manufacturing process similar to extrusion and drawing by which products are made with a metal slug. The slug is pressed at a high velocity with extreme force into a die or mold by a punch.\nProcess.\nThe punch is attached to a mechanical or hydraulic press. These machines reciprocate in a cycle 20 to 60 times per minute. A cold slug is placed below the punch and over the die. The punch makes contact with the slug, forcing it around the circumference of the punch and into the die. The metal slug deforms to fit the punch on the inside and the die on the outside. Lubricants are added to ease the punch-out. It only takes one impact for the finished shape to form from the slug. Once the slug has been contoured to the desired shape, a counter-punch ejector removes the work piece from within the die.\nSome Characteristics of the Process.\nThe wall thickness of the work piece is directly correlated with the clearance between the punch and die.\nThe thinner the wall of the work piece, the tighter its tolerances.\nThe end product has a better surface finish than the starting piece and the grain of the material is reformed to its new shape. This adds strength to the new form compared to cutting into the grain, as in a machining process.\nEffects on Work Material Properties.\nAfter this process, the properties of the material are altered. Its hardness and yield strength are increased, its cross-sectional area is decreased, some residual surface stresses will be present, and micro cracks may appear.
Physical and chemical properties are only influenced slightly.\nDie Style.\nFour major types of dies (tools) can be used. They are: forward, backward/reverse, combined, and hydrostatic extrusion. Forward extrusion pushes the slug into the die. Backward/reverse extrusion pushes the slug around the punch. Combined extrusion forces the slug both into the die and around the punch. Hydrostatic extrusion is used on brittle materials (e.g. molybdenum, beryllium, and tungsten) by applying pressure gradually to force the brittle material through the die. This is generally accomplished by the same method as forward extrusion.\nTypical Workpiece Materials.\nTypical materials for this process are: aluminium, brass, tin, mild steel, stainless steel, magnesium, titanium, and zinc.\nTool Materials.\nTypical tool steels used in extruding aluminum:\nTool Geometry.\nIn backward impact extrusion, an angle is put on the punch to decrease the amount of pressure applied to it. This decreases the chance of creating a dead zone, which is an area of no pressure. By contrast, forward impact extrusion uses a radius on the punch to keep the workpiece material flowing.", "Automation-Control": 0.97601372, "Qwen2": "Yes"} {"id": "63526503", "revid": "44857423", "url": "https://en.wikipedia.org/wiki?curid=63526503", "title": "PDE-constrained optimization", "text": "PDE-constrained optimization is a subset of mathematical optimization where at least one of the constraints may be expressed as a partial differential equation. Typical domains where these problems arise include aerodynamics, computational fluid dynamics, image segmentation, and inverse problems. A standard formulation of PDE-constrained optimization encountered in a number of disciplines is given by:formula_1where formula_2 is the control variable and formula_3 denotes the squared Euclidean norm (which is not itself a norm).
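The abstract formulation above can be made concrete with a small discretize-then-optimize sketch. The 1-D Poisson-constrained tracking problem below is an assumed toy example, not from the article: minimize ½‖y − y_d‖² + (β/2)‖u‖² subject to −y″ = u on (0, 1) with zero Dirichlet boundary conditions, discretized with finite differences and solved through the reduced linear optimality system.

```python
import numpy as np

# Toy PDE-constrained problem (assumed example):
#   minimize 0.5*||y - yd||^2 + 0.5*beta*||u||^2
#   subject to -y'' = u on (0, 1), y(0) = y(1) = 0.

N = 99                               # interior grid points (assumed resolution)
hgrid = 1.0 / (N + 1)
xg = np.linspace(hgrid, 1.0 - hgrid, N)

# Second-order finite-difference Laplacian A, so (A y)_i approximates -y''(x_i).
A = (np.diag(2.0 * np.ones(N))
     - np.diag(np.ones(N - 1), 1)
     - np.diag(np.ones(N - 1), -1)) / hgrid**2

yd = np.sin(np.pi * xg)              # desired state (assumed target)
beta = 1e-4                          # regularization weight (assumed)

# Reduced form: y(u) = A^{-1} u =: K u, so the first-order optimality
# condition is the linear system (K^T K + beta I) u = K^T yd.
K = np.linalg.inv(A)
u = np.linalg.solve(K.T @ K + beta * np.eye(N), K.T @ yd)
y = K @ u                            # optimal state tracks yd for small beta
```

Eliminating the state via K is only practical for small discretizations; the point of the sketch is that, once discretized, the PDE constraint turns the problem into a (here, linear-quadratic) finite-dimensional optimization.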
Closed-form solutions are generally unavailable for PDE-constrained optimization problems, necessitating the development of numerical methods.\nApplications.\nOptimal control of a bacterial chemotaxis system.\nThe following example comes from pp. 20–21 of Pearson. Chemotaxis is the movement of an organism in response to an external chemical stimulus. One problem of particular interest is managing the spatial dynamics of bacteria subject to chemotaxis to achieve some desired result. For a cell density formula_4 and concentration density formula_5 of a chemoattractant, it is possible to formulate a boundary control problem:formula_6where formula_7 is the ideal cell density, formula_8 is the ideal concentration density, and formula_2 is the control variable. This objective function is subject to the dynamics:formula_10where formula_11 is the Laplace operator.", "Automation-Control": 0.6704672575, "Qwen2": "Yes"} {"id": "1145733", "revid": "1148560674", "url": "https://en.wikipedia.org/wiki?curid=1145733", "title": "BIBO stability", "text": "In signal processing, specifically control theory, bounded-input, bounded-output (BIBO) stability is a form of stability for signals and systems that take inputs.
If a system is BIBO stable, then the output will be bounded for every input to the system that is bounded.\nA signal is bounded if there is a finite value formula_1 such that the signal magnitude never exceeds formula_2, that is\nTime-domain condition for linear time-invariant systems.\nContinuous-time necessary and sufficient condition.\nFor a continuous-time linear time-invariant (LTI) system, the condition for BIBO stability is that the impulse response, formula_5, be absolutely integrable, i.e., its L1 norm exists.\nDiscrete-time sufficient condition.\nFor a discrete-time LTI system, the condition for BIBO stability is that the impulse response be absolutely summable, i.e., its formula_7 norm exists.\nProof of sufficiency.\nGiven a discrete-time LTI system with impulse response formula_9, the relationship between the input formula_10 and the output formula_11 is\nwhere formula_13 denotes convolution. Then it follows by the definition of convolution\nLet formula_15 be the maximum value of formula_16, i.e., the formula_17-norm.\nIf formula_21 is absolutely summable, then formula_22 and\nSo if formula_21 is absolutely summable and formula_25 is bounded, then formula_26 is bounded as well, because formula_27.\nThe proof for the continuous-time case follows the same arguments.\nFrequency-domain condition for linear time-invariant systems.\nContinuous-time signals.\nFor a rational and continuous-time system, the condition for stability is that the region of convergence (ROC) of the Laplace transform includes the imaginary axis. When the system is causal, the ROC is the open region to the right of a vertical line whose abscissa is the real part of the \"largest pole\", or the pole that has the greatest real part of any pole in the system. The real part of the largest pole defining the ROC is called the abscissa of convergence.
Therefore, all poles of the system must be in the strict left half of the s-plane for BIBO stability.\nThis stability condition can be derived from the above time-domain condition as follows:\nwhere formula_29 and formula_30\nThe region of convergence must therefore include the imaginary axis.\nDiscrete-time signals.\nFor a rational and discrete-time system, the condition for stability is that the region of convergence (ROC) of the z-transform includes the unit circle. When the system is causal, the ROC is the open region outside a circle whose radius is the magnitude of the pole with largest magnitude. Therefore, all poles of the system must be inside the unit circle in the z-plane for BIBO stability.\nThis stability condition can be derived in a similar fashion to the continuous-time derivation:\nwhere formula_32 and formula_33.\nThe region of convergence must therefore include the unit circle.", "Automation-Control": 0.9999432564, "Qwen2": "Yes"} {"id": "36231779", "revid": "10289486", "url": "https://en.wikipedia.org/wiki?curid=36231779", "title": "Ross' π lemma", "text": "Ross' π lemma, named after I. Michael Ross, is a result in computational optimal control. Based on generating Carathéodory-π solutions for feedback control, Ross' π-lemma states that there is a fundamental time constant within which a control solution must be computed for controllability and stability. This time constant, known as Ross' time constant, is proportional to the inverse of the Lipschitz constant of the vector field that governs the dynamics of a nonlinear control system.\nTheoretical implications.\nThe proportionality factor in the definition of Ross' time constant is dependent upon the magnitude of the disturbance on the plant and the specifications for feedback control. When there are no disturbances, Ross' π-lemma shows that the open-loop optimal solution is the same as the closed-loop one.
In the presence of disturbances, the proportionality factor can be written in terms of the Lambert W-function.\nPractical applications.\nIn practical applications, Ross' time constant can be found by numerical experimentation using DIDO. Ross "et al" showed that this time constant is connected to the practical implementation of a Carathéodory-π solution. That is, Ross "et al" showed that if feedback solutions are obtained by zero-order holds only, then a significantly faster sampling rate is needed to achieve controllability and stability. On the other hand, if a feedback solution is implemented by way of a Carathéodory-π technique, then a larger sampling period can be accommodated. This implies that the computational burden of generating feedback solutions is significantly less than that of the standard implementations. These concepts have been used to generate collision-avoidance maneuvers in robotics in the presence of uncertain and incomplete information about the static and dynamic obstacles.", "Automation-Control": 0.9983292222, "Qwen2": "Yes"} {"id": "43034358", "revid": "39540292", "url": "https://en.wikipedia.org/wiki?curid=43034358", "title": "IBM Z System Automation", "text": "IBM Z System Automation (SA z/OS) is a policy-based automation solution to ensure the availability of applications and system resources. It runs within IBM Z NetView, and uses its capabilities to interact with z/OS.\nFunctionality.\nThe primary objective of this software is to keep Resources on the z/OS systems in a desired (or goal) state of Available or Unavailable. Each resource can be individually controlled via commands to set the goal state by means of Requests (or Votes). This goal state is stored, so should a resource fail or the system be shut down, it can be brought back to its previous Desired state quickly and easily. A Resource is any Application or System Resource which can be monitored and controlled.
Resources have relationships defined between them, which ensures they are started and stopped in the correct order.\nThe resources are defined in a policy database (PDB), which enables all the definitions for an entire enterprise to be defined just once. A Build process is used to extract the relevant resources for each system into Automation Control Files (ACF). As a minimum, a resource definition contains: jobname, start commands, stop commands, status messages, relationships.\nSA z/OS is IBM Parallel Sysplex compliant, and can manage business applications whether they run within a monoplex, between multiple sysplexes, or in distributed Linux, UNIX or Windows platforms. The Service Management Unite Automation dashboard provides a single point of control to monitor and operate the environment.\nSA z/OS is required for Geographically Dispersed Parallel Sysplex (GDPS).\nSupported Operating Systems.\nThis software is available on the following Operating Systems:", "Automation-Control": 0.9220294952, "Qwen2": "Yes"} {"id": "932345", "revid": "16861812", "url": "https://en.wikipedia.org/wiki?curid=932345", "title": "Protocol analyzer", "text": "A protocol analyzer is a tool (hardware or software) used to capture and analyze signals and data traffic over a communication channel. Such a channel varies from a local computer bus to a satellite link that provides a means of communication using a standard communication protocol (networked or point-to-point). Each type of communication protocol has a different tool to collect and analyze signals and data.
\nSpecific types of protocol analyzers include:", "Automation-Control": 0.6129360199, "Qwen2": "Yes"} {"id": "22653770", "revid": "38053341", "url": "https://en.wikipedia.org/wiki?curid=22653770", "title": "FreeNATS", "text": "FreeNATS (the Free Network Automatic Testing System) is an open-source network monitoring software application developed by David Cutting under the banner of PurplePixie Systems.\nFreeNATS is free software licensed under the terms of the GNU General Public License version 3 as published by the Free Software Foundation.", "Automation-Control": 0.7221870422, "Qwen2": "Yes"} {"id": "48662306", "revid": "7903804", "url": "https://en.wikipedia.org/wiki?curid=48662306", "title": "A. Stephen Morse", "text": "A. Stephen Morse (born June 18, 1939) is the Dudley Professor of distributed control and adaptive control in electrical engineering at Yale University.\nEarly life and education.\nMorse was born in Mt. Vernon, New York. He received his B.S. from Cornell University, his M.S. from the University of Arizona, and his Ph.D. from Purdue University.\nAwards.\nMorse received the IEEE Control Systems Award and the Richard E. Bellman Control Heritage Award in 1999 and 2013, respectively. Morse was elected a member of the National Academy of Engineering in 2002 for contributions to geometric control theory, adaptive control, and the stability of hybrid systems.", "Automation-Control": 0.7311952114, "Qwen2": "Yes"} {"id": "1261170", "revid": "1169522726", "url": "https://en.wikipedia.org/wiki?curid=1261170", "title": "State observer", "text": "In control theory, a state observer or state estimator is a system that provides an estimate of the internal state of a given real system, from measurements of the input and output of the real system. 
It is typically computer-implemented, and provides the basis of many practical applications.\nKnowing the system state is necessary to solve many control theory problems; for example, stabilizing a system using state feedback. In most practical cases, the physical state of the system cannot be determined by direct observation. Instead, indirect effects of the internal state are observed by way of the system outputs. A simple example is that of vehicles in a tunnel: the rates and velocities at which vehicles enter and leave the tunnel can be observed directly, but the exact state inside the tunnel can only be estimated. If a system is observable, it is possible to fully reconstruct the system state from its output measurements using the state observer.\nTypical observer model.\nLinear, delayed, sliding mode, high gain, Tau, homogeneity-based, extended and cubic observers are among several observer structures used for state estimation of linear and nonlinear systems. A linear observer structure is described in the following sections.\nDiscrete-time case.\nThe state of a linear, time-invariant discrete-time system is assumed to satisfy\nwhere, at time formula_3, formula_4 is the plant's state; formula_5 is its inputs; and formula_6 is its outputs. These equations simply say that the plant's current outputs and its future state are both determined solely by its current states and the current inputs. (Although these equations are expressed in terms of discrete time steps, very similar equations hold for continuous systems). If this system is observable then the output of the plant, formula_6, can be used to steer the state of the state observer.\nThe observer model of the physical system is then typically derived from the above equations. Additional terms may be included in order to ensure that, on receiving successive measured values of the plant's inputs and outputs, the model's state converges to that of the plant. 
In particular, the output of the observer may be subtracted from the output of the plant and then multiplied by a matrix formula_8; this is then added to the equations for the state of the observer to produce a so-called \"Luenberger observer\", defined by the equations below. Note that the variables of a state observer are commonly denoted by a \"hat\": formula_9 and formula_10 to distinguish them from the variables of the equations satisfied by the physical system.\nThe observer is called asymptotically stable if the observer error formula_13 converges to zero when formula_14. For a Luenberger observer, the observer error satisfies formula_15. The Luenberger observer for this discrete-time system is therefore asymptotically stable when the matrix formula_16 has all the eigenvalues inside the unit circle.\nFor control purposes the output of the observer system is fed back to the input of both the observer and the plant through the gains matrix formula_17.\nThe observer equations then become:\nor, more simply,\nDue to the separation principle we know that we can choose formula_17 and formula_8 independently without harm to the overall stability of the systems. As a rule of thumb, the poles of the observer formula_25 are usually chosen to converge 10 times faster than the poles of the system formula_26.\nContinuous-time case.\nThe previous example was for an observer implemented in a discrete-time LTI system. However, the process is similar for the continuous-time case; the observer gains formula_8 are chosen to make the continuous-time error dynamics converge to zero asymptotically (i.e., when formula_25 is a Hurwitz matrix).\nFor a continuous-time linear system\nwhere formula_31, the observer looks similar to discrete-time case described above:\nThe observer error formula_34 satisfies the equation\nThe eigenvalues of the matrix formula_25 can be chosen arbitrarily by appropriate choice of the observer gain formula_8 when the pair formula_38 is observable, i.e. 
observability condition holds. In particular, it can be made Hurwitz, so the observer error formula_39 when formula_40.\nPeaking and other observer methods.\nWhen the observer gain formula_8 is high, the linear Luenberger observer converges to the system states very quickly. However, high observer gain leads to a peaking phenomenon in which initial estimator error can be prohibitively large (i.e., impractical or unsafe to use). As a consequence, nonlinear high-gain observer methods are available that converge quickly without the peaking phenomenon. For example, sliding mode control can be used to design an observer that brings one estimated state's error to zero in finite time even in the presence of measurement error; the other states have error that behaves similarly to the error in a Luenberger observer after peaking has subsided. Sliding mode observers also have attractive noise resilience properties that are similar to those of a Kalman filter.\nAnother approach is to apply a multi-observer, which significantly improves transients and reduces observer overshoot. The multi-observer can be adapted to every system where a high-gain observer is applicable.\nState observers for nonlinear systems.\nHigh gain, sliding mode and extended observers are the most common observers for nonlinear systems. \nTo illustrate the application of sliding mode observers for nonlinear systems, first consider the no-input non-linear system:\nwhere formula_43. Also assume that there is a measurable output formula_44 given by\nThere are several non-approximate approaches for designing an observer. The two observers given below also apply to the case when the system has an input.
That is,\nLinearizable error dynamics.\nOne suggestion, by Krener and Isidori and by Krener and Respondek, can be applied in a situation when there exists a linearizing transformation (i.e., a diffeomorphism, like the one used in feedback linearization) formula_48 such that in the new variables the system equations read\nThe Luenberger observer is then designed as\nThe observer error for the transformed variable formula_52 satisfies the same equation as in the classical linear case.\nAs shown by Gauthier, Hammouri, and Othman\nand Hammouri and Kinnaert, if there exists a transformation formula_48 such that the system can be transformed into the form\nthen the observer is designed as\nwhere formula_58 is a time-varying observer gain.\nCiccarella, Dalla Mora, and Germani obtained more advanced and general results, removing the need for a nonlinear transform and proving global asymptotic convergence of the estimated state to the true state using only simple assumptions on regularity.\nSwitched observers.\nAs discussed for the linear case above, the peaking phenomenon present in Luenberger observers justifies the use of switched observers. A switched observer encompasses a relay or binary switch that acts upon detecting minute changes in the measured output. Some common types of switched observers include the sliding mode observer, nonlinear extended state observer, fixed-time observer, switched high-gain observer and uniting observer. The sliding mode observer uses non-linear high-gain feedback to drive estimated states to a hypersurface where there is no difference between the estimated output and the measured output. The non-linear gain used in the observer is typically implemented with a scaled switching function, like the signum (i.e., sgn) of the estimated – measured output error.
Hence, due to this high-gain feedback, the vector field of the observer has a crease in it so that observer trajectories \"slide along\" a curve where the estimated output matches the measured output exactly. So, if the system is observable from its output, the observer states will all be driven to the actual system states. Additionally, by using the sign of the error to drive the sliding mode observer, the observer trajectories become insensitive to many forms of noise. Hence, some sliding mode observers have attractive properties similar to the Kalman filter but with simpler implementation.\nAs suggested by Drakunov, a sliding mode observer can also be designed for a class of non-linear systems. Such an observer can be written in terms of the original variable estimate formula_59 and has the form\nwhere:\nThe idea can be briefly explained as follows. According to the theory of sliding modes, in order to describe the system behavior, once sliding mode starts, the function formula_85 should be replaced by its equivalent values (see \"equivalent control\" in the theory of sliding modes). In practice, it switches (chatters) at high frequency, with its slow component equal to the equivalent value. Applying an appropriate low-pass filter to remove the high-frequency component, one can obtain the value of the equivalent control, which contains more information about the state of the estimated system. The observer described above uses this method several times to obtain the state of the nonlinear system ideally in finite time.\nThe modified observation error can be written in the transformed states formula_86. In particular,\nand so\nSo:\nSo, for sufficiently large formula_106 gains, all observer estimated states reach the actual states in finite time. In fact, increasing formula_106 allows for convergence in any desired finite time so long as each formula_108 function can be bounded with certainty.
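The equivalent-control idea described above can be sketched numerically. The following minimal simulation assumes an illustrative harmonic-oscillator plant with only its first state measured; the switching gain, filter constant, and step size are assumptions, not values from any specific design:

```python
import numpy as np

# Sliding-mode observer sketch for a plant x1' = x2, x2' = -x1 with only
# y = x1 measured. Gains M, tau, and dt are illustrative assumptions.
def simulate(dt=1e-4, T=2.0, M=5.0, tau=0.01):
    x = np.array([1.0, 0.0])   # true state
    x1_hat = 0.0               # sliding-mode estimate of the measured state
    x2_eq = 0.0                # low-pass filtered switching term ~ equivalent control
    for _ in range(int(T / dt)):
        v = M * np.sign(x[0] - x1_hat)        # high-frequency switching injection
        x1_hat += dt * v
        x2_eq += dt * (v - x2_eq) / tau       # keep only the slow component of v
        x = x + dt * np.array([x[1], -x[0]])  # Euler step of the true plant
    return x, x1_hat, x2_eq
```

Once sliding starts, the measured-state error is driven to zero in finite time, and the filtered injection approximates the unmeasured state (the equivalent-control value), as described in the text.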
Hence, the requirement that the map formula_109 is a diffeomorphism (i.e., that its Jacobian linearization is invertible) asserts that convergence of the estimated output implies convergence of the estimated state. That is, the requirement is an observability condition.\nIn the case of the sliding mode observer for the system with the input, additional conditions are needed for the observation error to be independent of the input. For example, that\ndoes not depend on time. The observer is then\nMulti-observer.\nThe multi-observer extends the high-gain observer structure from a single observer to multiple observers, with many models working simultaneously. This has two layers: the first consists of multiple high-gain observers with different estimation states, and the second determines the importance weights of the first-layer observers. The algorithm is simple to implement and does not contain any risky operations like differentiation. The idea of multiple models was previously applied to obtain information in adaptive control.\nAssuming that the number of high-gain observers equals formula_112,\nwhere formula_115 is the observer index. The first-layer observers share the same gain formula_116 but differ in their initial state formula_117. In the second layer, all formula_118 from the formula_119 observers are combined into one to obtain a single state-vector estimate\nwhere formula_121 are weight factors. These factors are adjusted to provide the estimation in the second layer and to improve the observation process.\nLet us assume that\nand\nwhere formula_124 is some vector that depends on the formula_125-th observer error formula_126.\nSome transformations yield the linear regression problem\nThis formula makes it possible to estimate formula_128. To construct the manifold, we need a mapping formula_129 between formula_130 and assurance that formula_131 can be calculated from measurable signals.
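The two-layer structure just described can be sketched as follows; the plant matrices, the shared first-layer gain, and the inverse-error weighting rule are illustrative assumptions rather than the scheme of any particular paper:

```python
import numpy as np

# Two-layer multi-observer sketch: several Luenberger observers with the same
# gain but different initial states (first layer), fused by importance weights
# (second layer). All numbers here are illustrative assumptions.
A = np.array([[1.0, 0.1],
              [0.0, 0.9]])
C = np.array([[1.0, 0.0]])
L = np.array([[0.8],
              [0.5]])          # shared first-layer gain; A - L C is stable

def multi_observer(steps=50):
    x = np.array([1.0, -1.0])                               # true plant state
    x_hats = [np.array([5.0, 5.0]), np.array([-3.0, 2.0])]  # differing initial states
    weights = np.array([0.5, 0.5])
    for _ in range(steps):
        y = C @ x
        errs = []
        for i, xh in enumerate(x_hats):
            e = y - C @ xh
            x_hats[i] = A @ xh + L @ e                      # Luenberger update
            errs.append(abs(e[0]))
        inv = 1.0 / (np.array(errs) + 1e-9)                 # second layer: favor
        weights = inv / inv.sum()                           # low-error observers
        x = A @ x                                           # plant step (no input)
    return x, weights @ np.array(x_hats)                    # fused estimate
```

The fused second-layer estimate converges even though the first-layer observers start far from the true state.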
\nThe first step is to eliminate the peaking phenomenon for formula_132 from the observer error\nCalculating the formula_134-th derivative of formula_135 to find the mapping leads to formula_136 defined as\n\\begin{bmatrix}\n1 & 0 & 0 & \\cdots & 0 \\\\\nCL & 1 & 0 & \\cdots & 0 \\\\\nCAL & CL & 1 & \\cdots & 0 \\\\\nCA^{2}L & CAL & CL & \\cdots & 0 \\\\\n\\vdots & \\vdots & \\vdots & \\ddots & \\vdots \\\\\nCA^{n-2}L & CA^{n-3}L & CA^{n-4}L & \\cdots & 1\n\\end{bmatrix}\n\\begin{bmatrix}\n\\int\\limits^t_{t-t_d} ", "Automation-Control": 0.9999065995, "Qwen2": "Yes"} {"id": "506682", "revid": "7034620", "url": "https://en.wikipedia.org/wiki?curid=506682", "title": "Fluid power", "text": "Fluid power is the use of fluids under pressure to generate, control, and transmit power. Fluid power is conventionally subdivided into hydraulics (using a liquid such as mineral oil or water) and pneumatics (using a gas such as compressed air or other gases). Although steam is also a fluid, steam power is usually classified separately from fluid power (implying hydraulics or pneumatics). Compressed-air and water-pressure systems were once used to transmit power from a central source to industrial users over extended geographic areas; fluid power systems today are usually within a single building or mobile machine.\nFluid power systems perform work by a pressurized fluid bearing directly on a piston in a cylinder or in a fluid motor. A fluid cylinder produces a force resulting in linear motion, whereas a fluid motor produces torque resulting in rotary motion. Within a fluid power system, cylinders and motors (also called actuators) do the desired work.
Control components such as valves regulate the system.\nElements.\nA fluid power system has a pump driven by a prime mover (such as an electric motor or internal combustion engine) that converts mechanical energy into fluid energy. Pressurized fluid is controlled and directed by valves into an actuator device such as a hydraulic cylinder or pneumatic cylinder, to provide linear motion, or a hydraulic motor or pneumatic motor, to provide rotary motion or torque. Rotary motion may be continuous or confined to less than one revolution.\nHydraulic pumps.\nDynamic (non-positive-displacement) pumps\nThis type is generally used for low-pressure, high-volume flow applications. Since they are not capable of withstanding high pressures, they see little use in the fluid power field. Their maximum pressure is limited to 250-300 psi (1.7-2.0 MPa). This type of pump is primarily used for transporting fluids from one location to another. Centrifugal and axial-flow propeller pumps are the two most common types of dynamic pumps.\nPositive-displacement pumps\nThis type is universally used for fluid power systems. With this pump, a fixed amount of fluid is ejected into the hydraulic system per revolution of pump shaft rotation. These pumps are capable of overcoming the pressure resulting from the mechanical loads on the system as well as the resistance to flow due to friction. These two features are highly desirable in fluid power pumps. These pumps also have the following advantages over non-positive-displacement pumps:\nCharacteristics.\nFluid power systems can produce high power and high forces in small volumes, compared with electrically-driven systems. The forces that are exerted can be easily monitored within a system by gauges and meters. In comparison to systems that provide force through electricity or fuel, fluid power systems are known to have long service lives if maintained properly.
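The sizing relations implicit in the description above (force from pressure acting on a piston, and the fixed volume ejected per revolution of a positive-displacement pump) can be sketched numerically; the bore, pressure, and pump figures are illustrative examples, not values from the text:

```python
import math

# Illustrative fluid-power relations; all numbers are example assumptions.
def cylinder_force(pressure_pa, bore_m):
    """Linear actuator: pressurized fluid bearing on a piston gives F = p * A."""
    area = math.pi * (bore_m / 2.0) ** 2
    return pressure_pa * area

def pump_flow(displacement_m3_per_rev, shaft_rpm):
    """Positive-displacement pump: a fixed volume is ejected per shaft revolution."""
    return displacement_m3_per_rev * shaft_rpm / 60.0  # m^3/s

force_n = cylinder_force(20.0e6, 0.05)   # 20 MPa acting on a 50 mm bore
flow_m3s = pump_flow(1.0e-5, 1800)       # 10 cm^3/rev pump at 1800 rpm
```

This illustrates why fluid power yields high forces in small volumes: a modest 50 mm bore at 20 MPa develops tens of kilonewtons.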
The working fluid passing through a fluid motor inherently provides cooling of the motor, which must be separately arranged for an electric motor. Fluid motors normally produce no sparks, which are a source of ignition or explosions in hazardous areas containing flammable gases or vapors.\nFluid power systems are susceptible to pressure and flow losses within pipes and control devices. Fluid power systems are equipped with filters and other measures to preserve the cleanliness of the working fluid. Any dirt in the system can cause wear of seals and leakage, or can obstruct control valves and cause erratic operation. The hydraulic fluid itself is sensitive to temperature and pressure, as well as being somewhat compressible. These sensitivities can keep systems from running properly and can lead to cavitation and aeration.\nApplication.\nMobile applications of fluid power are widespread. Nearly every self-propelled wheeled vehicle has either hydraulically-operated or pneumatically-operated brakes. Earthmoving equipment such as bulldozers, backhoes and others use powerful hydraulic systems for digging and also for propulsion. A very compact fluid power system is the automatic transmission found in many vehicles, which includes a hydraulic torque converter.\nFluid power is also used in automated systems, where tools or work pieces are moved or held using fluid power. Variable-flow control valves and position sensors may be included in a servomechanism system for precision machine tools. Below is a more detailed list of applications and categories that fluid power is used for:\nCommon hydraulic circuit application.\nSynchronizing.\nThis circuit works by synchronization. As a cylinder reaches a certain point, another will be activated, either by a hydraulic limit switch valve or by the build-up of pressure in the cylinder. These circuits are used in manufacturing. An example of this would be on an assembly line, where a hydraulic arm is activated to grab an object.
It will then reach a point of extension or retraction, at which the other cylinder is activated to screw a cap or top onto the object. Hence the term \"synchronizing\".\nRegenerative.\nIn a regenerative circuit, a double-acting cylinder is used, supplied by a pump with a fixed output. The use of a regenerative circuit permits use of a smaller-sized pump for any given application. This works by re-routing the fluid leaving the rod end to the cap end of the cylinder instead of back to the tank. For example, in a drilling process a regenerative circuit will allow drilling at a consistent speed, and retraction at a much faster speed. This gives the operator faster and more precise production.\nElectrical control.\nCombinations of electrical control of fluid power elements are widespread in automated systems. A wide variety of measuring, sensing, or control elements are available in electrical form. These can be used to operate solenoid valves or servo valves that control the fluid power element. Electrical control may be used to allow, for example, remote control of a fluid power system without running long control lines to a remotely located manual control valve.", "Automation-Control": 0.9685548544, "Qwen2": "Yes"} {"id": "26149056", "revid": "44838831", "url": "https://en.wikipedia.org/wiki?curid=26149056", "title": "IEC 62351", "text": "IEC 62351 is a standard developed by WG15 of IEC TC57. It was developed to handle the security of the TC 57 series of protocols, including the IEC 60870-5 series, IEC 60870-6 series, IEC 61850 series, IEC 61970 series and IEC 61968 series.
The different security objectives include authentication of data transfer through digital signatures, ensuring only authenticated access, prevention of eavesdropping, prevention of playback and spoofing, and intrusion detection.", "Automation-Control": 0.9102392793, "Qwen2": "Yes"} {"id": "25527301", "revid": "27015025", "url": "https://en.wikipedia.org/wiki?curid=25527301", "title": "Capacitive displacement sensor", "text": "Capacitive displacement sensors \"are non-contact devices capable of high-resolution measurement of the position and/or change of position of any conductive target\". They are also able to measure the thickness or density of non-conductive materials. Capacitive displacement sensors are used in a wide variety of applications including semiconductor processing, assembly of precision equipment such as disk drives, precision thickness measurements, machine tool metrology and assembly line testing. These types of sensors can be found in machining and manufacturing facilities around the world.\nBasic capacitive theory.\nCapacitance is an electrical property which is created by applying an electrical charge to two conductive objects with a gap between them. A simple demonstration is two parallel conductive plates of the same profile with a gap between them and a charge applied to them. In this situation, the capacitance can be expressed by the equation:\nwhere \"C\" is the capacitance, ε0 is the permittivity of free space constant, \"K\" is the dielectric constant of the material in the gap, \"A\" is the area of the plates, and \"d\" is the distance between the plates.\nThere are two general types of capacitive displacement sensing systems. One type is used to measure thicknesses of conductive materials.
The other type measures thicknesses of non-conductive materials or the level of a fluid.\nA capacitive sensing system for conductive materials uses a model similar to the one described above, but in place of one of the conductive plates is the sensor, and in place of the other is the conductive target to be measured. Since the area of the probe and target remain constant, and the dielectric of the material in the gap (usually air) also remains constant, \"any change in capacitance is a result of a change in the distance between the probe and the target.\" Therefore, the equation above can be simplified to:\nWhere α indicates a proportional relationship.\nDue to this proportional relationship, a capacitive sensing system is able to measure changes in capacitance and translate these changes into distance measurements.\nThe operation of the sensor for measuring thickness of non-conductive materials can be thought of as two capacitors in series, with each having a different dielectric (and dielectric constant). The sum of the thicknesses of the two dielectric materials remains constant but the thickness of each can vary. The thickness of the material to be measured displaces the other dielectric. The gap is often an air gap (dielectric constant = 1) and the material has a higher dielectric. As the material gets thicker, the capacitance increases and is sensed by the system.\nA sensor for measuring fluid levels works as two capacitors in parallel with constant total area. Again the difference in the dielectric constant of the fluid and the dielectric constant of air results in detectable changes in the capacitance between the conductive probes or plates.\nApplications.\nPrecision positioning.\nOne of the more common applications of capacitive sensors is for precision positioning. Capacitive displacement sensors can be used to measure the position of objects down to the nanometer level.
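The parallel-plate relation given earlier, and its inversion from a capacitance reading back to a distance, can be sketched numerically; the probe area, dielectric, and gap below are illustrative values:

```python
# Parallel-plate model from the equation above: C = eps0 * K * A / d.
# The probe geometry (1 cm^2 probe, 100 um air gap) is an illustrative assumption.
EPS0 = 8.854e-12  # permittivity of free space, F/m

def capacitance(k_dielectric, area_m2, gap_m):
    return EPS0 * k_dielectric * area_m2 / gap_m

def gap_from_capacitance(c_farads, k_dielectric, area_m2):
    """With K and A held constant, a capacitance reading maps back to distance."""
    return EPS0 * k_dielectric * area_m2 / c_farads

c = capacitance(1.0, 1.0e-4, 100e-6)      # air gap of 100 micrometres
d = gap_from_capacitance(c, 1.0, 1.0e-4)  # recovers the gap from the reading
```

This inversion, with the resolution figures quoted later in the article, is what allows gap changes at the nanometer level to be resolved.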
This type of precise positioning is used in the semiconductor industry where silicon wafers need to be positioned for exposure. Capacitive sensors are also used to pre-focus the electron microscopes used in testing and examining the wafers.\nDisc drive industry.\nIn the disc drive industry, capacitive displacement sensors are used to measure the runout (a measure of how much the axis of rotation deviates from an ideal fixed line) of disc drive spindles. By knowing the exact runout of these spindles, disc drive manufacturers are able to determine the maximum amount of data that can be placed onto the drives. Capacitive sensors are also used to ensure that disc drive platters are orthogonal to the spindle before data is written to them.\nPrecision thickness measurements.\nCapacitive displacement sensors can be used to make very precise thickness measurements. Capacitive displacement sensors operate by measuring changes in position. If the position of a reference part of known thickness is measured, other parts can be subsequently measured and the differences in position can be used to determine the thickness of these parts. In order for this to be effective using a single probe, the parts must be completely flat and measured on a perfectly flat surface. If the part to be measured has any curvature or deformity, or simply does not rest firmly against the flat surface, the distance between the part to be measured and the surface it is placed upon will be erroneously included in the thickness measurement. This error can be eliminated by using two capacitive sensors to measure a single part. Capacitive sensors are placed on either side of the part to be measured. By measuring the parts from both sides, curvature and deformities are taken into account in the measurement and their effects are not included in the thickness readings.\nThe thickness of plastic materials can be measured with the material placed between two electrodes a set distance apart. 
These form a type of capacitor. The plastic, when placed between the electrodes, acts as a dielectric and displaces air (which has a dielectric constant of 1, different from the plastic). Consequently, the capacitance between the electrodes changes. The capacitance changes can then be measured and correlated with the material's thickness.\nCapacitive sensor circuits can be constructed that are able to detect changes in capacitance on the order of 10−5 picofarads (10 attofarads).\nNon-conductive targets.\nWhile capacitive displacement sensors are most often used to sense changes in position of conductive targets, they can also be used to sense the thickness and/or density of non-conductive targets. A non-conductive object placed in between the probe and conductive target will have a different dielectric constant than the air in the gap and will therefore change the capacitance between probe and target (see the first equation above). By analyzing this change in capacitance, the thickness and density of the non-conductor can be determined.\nMachine tool metrology.\nCapacitive displacement sensors are often used in metrology applications. In many cases, sensors are used “to measure shape errors in the part being produced. But they also can measure the errors arising in the equipment used to manufacture the part, a practice known as machine tool metrology”. In many cases, the sensors are used to analyze and optimize the rotation of spindles in various machine tools, examples include surface grinders, lathes, milling machines, and air bearing spindles. By measuring errors in the machines themselves, rather than simply measuring errors in the final products, problems can be dealt with and fixed earlier in the manufacturing process.\nAssembly line testing.\nCapacitive displacement sensors are often used in assembly line testing. Sometimes they are used to test assembled parts for uniformity, thickness or other design features.
At other times, they are used to simply look for the presence or absence of a certain component, such as glue. Using capacitive sensors to test assembly line parts can help to prevent quality concerns further along in the production process.\nComparison to eddy current displacement sensors.\nCapacitive displacement sensors share many similarities with eddy current (or inductive) displacement sensors; however, capacitive sensors use an electric field as opposed to the magnetic field used by eddy current sensors. This leads to a variety of differences between the two sensing technologies, with the most notable differences being that capacitive sensors are generally capable of higher-resolution measurements, and eddy current sensors work in dirty environments while capacitive sensors do not.", "Automation-Control": 0.8453508019, "Qwen2": "Yes"} {"id": "53078439", "revid": "12604737", "url": "https://en.wikipedia.org/wiki?curid=53078439", "title": "IEC/IEEE 61850-9-3", "text": "IEC/IEEE 61850-9-3 (Power Utility Profile) or PUP is an international standard for precise time distribution and clock synchronization in electrical grids with an accuracy of 1 μs.\nIt supports precise time stamping of voltage and current measurement for differential protection, wide area monitoring and protection, busbar protection and event recording.\nIt can be used to ensure deterministic operation of critical functions in the automation system.\nIt belongs to the IEC 61850 standard suite for communication networks and systems for power utility automation.
\nIEC/IEEE 61850-9-3 is a profile (subset) of IEEE Std 1588 Precision Time Protocol (PTP) when clocks are singly attached.\nIEC/IEEE 61850-9-3 provides seamless fault tolerance by attaching clocks to duplicated network paths and by support of simultaneously active redundant master clocks.\nFor this case, the extensions to PTP defined in IEC 62439-3 Annex A apply.\nMain features.\nIEC/IEEE 61850-9-3 uses the following IEEE Std 1588 options:\nPerformance.\nIEC/IEEE 61850-9-3 aims at an accuracy of better than 1 μs after crossing 15 bridges with transparent clocks.\nIt assumes that all network elements (bridges, routers, media converters, links) support PTP with a given performance:\nBy relying on these guaranteed values, the network engineer can calculate the time inaccuracy at different nodes of the network and place the clocks, especially the grandmaster clocks, suitably. \nIEC TR 61850-90-4 (Network engineering guidelines) gives advice on the use of IEC/IEEE 61850-9-3.\nIEEE 1588 settings.\nIEC/IEEE 61850-9-3 restricts the parameters of IEEE Std 1588 to the following values:\nLocal time distribution.\nFor applications that do not use the corresponding function in IEC 61850, the grandmaster may distribute local time (e.g. for human display) using the ALTERNATE_TIME_OFFSET_INDICATOR TLV as specified in IEEE Std 1588 §16.3.\nStandard owners.\nThis protocol was developed from 2012 to 2014 by IEC SC65C WG15 in the framework of IEC 62439-3, which applies to all IEC industrial networks, as the PTP profile L2P2P (Layer 2, peer-to-peer). \nTo avoid parallel standards from IEC and IEEE in the field of grid automation, this work has been placed under the umbrella of the IEC & IEEE Joint Development 61850-9-3.
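The budget calculation a network engineer performs with the guaranteed per-element values, as described under Performance above, can be sketched as follows; the particular nanosecond figures are illustrative assumptions chosen so that 15 bridges stay within the 1 μs aim:

```python
# Worst-case synchronization budget across a chain of PTP transparent clocks.
# The 250 ns grandmaster and 50 ns per-bridge figures are illustrative
# assumptions, not values quoted from the text above.
def time_inaccuracy_ns(grandmaster_ns, per_bridge_ns, n_bridges):
    """Accumulated worst-case inaccuracy from the grandmaster through n bridges."""
    return grandmaster_ns + per_bridge_ns * n_bridges

budget_ns = time_inaccuracy_ns(250, 50, 15)  # within the 1 us (1000 ns) aim
```

With such per-element guarantees, the inaccuracy grows linearly with the number of bridges, which is what makes grandmaster placement a simple arithmetic exercise.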
\nTechnical responsibility rests with IEC SC65C WG15, which is committed to keeping the IEC 62439-3 profile L2P2P and IEC/IEEE 61850-9-3 aligned.", "Automation-Control": 0.6747214794, "Qwen2": "Yes"} {"id": "2711317", "revid": "28481209", "url": "https://en.wikipedia.org/wiki?curid=2711317", "title": "Intelligent agent", "text": "In artificial intelligence, an intelligent agent (IA) is an agent acting in an intelligent manner: it perceives its environment, takes actions autonomously in order to achieve goals, and may improve its performance by learning or acquiring knowledge. An intelligent agent may be simple or complex: a thermostat or other control system is considered an example of an intelligent agent, as is a human being, as is any system that meets the definition, such as a firm, a state, or a biome.\nLeading AI textbooks define \"artificial intelligence\" as the \"study and design of intelligent agents\", a definition that considers goal-directed behavior to be the essence of intelligence. Goal-directed agents are also described using a term borrowed from economics, \"rational agent\".\nAn agent has an \"objective function\" that encapsulates all the IA's goals. Such an agent is designed to create and execute whatever plan will, upon completion, maximize the expected value of the objective function. For example, a reinforcement learning agent has a \"reward function\" that allows the programmers to shape the IA's desired behavior, and an evolutionary algorithm's behavior is shaped by a \"fitness function\". \nIntelligent agents in artificial intelligence are closely related to agents in economics, and versions of the intelligent agent paradigm are studied in cognitive science, ethics, the philosophy of practical reason, as well as in many interdisciplinary socio-cognitive modeling and computer social simulations. \nIntelligent agents are often described schematically as an abstract functional system similar to a computer program.
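The thermostat cited above as a simple intelligent agent can itself be sketched as such a percept-to-action function; the setpoint, hysteresis band, and action names are illustrative:

```python
# A thermostat as a minimal agent: it perceives its environment (a temperature)
# and acts to pursue its goal (holding a setpoint). All values are illustrative.
def thermostat_agent(percept_temp_c, setpoint_c=20.0, hysteresis_c=0.5):
    if percept_temp_c < setpoint_c - hysteresis_c:
        return "heat_on"    # too cold: act to raise the temperature
    if percept_temp_c > setpoint_c + hysteresis_c:
        return "heat_off"   # too warm: act to stop heating
    return "no_op"          # within the band: goal satisfied, do nothing
```

Even this trivial agent exhibits the pattern the definitions below formalize: a mapping from percepts to actions in the service of a goal.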
Abstract descriptions of intelligent agents are called abstract intelligent agents (AIA) to distinguish them from their real-world implementations. An autonomous intelligent agent is designed to function in the absence of human intervention. Intelligent agents are also closely related to software agents (an autonomous computer program that carries out tasks on behalf of users).\nAs a definition of artificial intelligence.\n\"\" defines an \"agent\" as \nAnything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators,\ndefines a \"rational agent\" as:\nAn agent that acts so as to maximize the expected value of a performance measure based on past experience and knowledge.\nand defines the field of \"artificial intelligence\" research as:\nThe study and design of rational agents\nPadgham & Winikoff (2005) agree that an intelligent agent is situated in an environment and responds in a timely (though not necessarily real-time) manner to environment changes. However, intelligent agents must also proactively pursue goals in a flexible and robust way. Optional desiderata include that the agent be rational, and that the agent be capable of belief-desire-intention analysis.\nKaplan and Haenlein define artificial intelligence as \"A system's ability to correctly interpret external data, to learn from such data, and to use those learnings to achieve specific goals and tasks through flexible adaptation.\" This definition is closely related to that of an intelligent agent.\nAdvantages.\nPhilosophically, this definition of artificial intelligence avoids several lines of criticism. Unlike the Turing test, it does not refer to human intelligence in any way. 
Thus, there is no need to discuss if it is \"real\" vs \"simulated\" intelligence (i.e., \"synthetic\" vs \"artificial\" intelligence) and does not indicate that such a machine has a mind, consciousness or true understanding (i.e., it does not imply John Searle's \"strong AI hypothesis\"). It also doesn't attempt to draw a sharp dividing line between behaviors that are \"intelligent\" and behaviors that are \"unintelligent\"—programs need only be measured in terms of their objective function.\nMore importantly, it has a number of practical advantages that have helped move AI research forward. It provides a reliable and scientific way to test programs; researchers can directly compare or even combine different approaches to isolated problems, by asking which agent is best at maximizing a given \"goal function\". It also gives them a common language to communicate with other fields—such as mathematical optimization (which is defined in terms of \"goals\") or economics (which uses the same definition of a \"rational agent\").\nObjective function.\nAn agent that is assigned an explicit \"goal function\" is considered more intelligent if it consistently takes actions that successfully maximize its programmed goal function. The goal can be simple (\"1 if the IA wins a game of Go, 0 otherwise\") or complex (\"Perform actions mathematically similar to ones that succeeded in the past\"). The \"goal function\" encapsulates all of the goals the agent is driven to act on; in the case of rational agents, the function also encapsulates the acceptable trade-offs between accomplishing conflicting goals. (Terminology varies; for example, some agents seek to maximize or minimize a \"utility function\", \"objective function\", or \"loss function\".)\nGoals can be explicitly defined or induced. If the AI is programmed for \"reinforcement learning\", it has a \"reward function\" that encourages some types of behavior and punishes others. 
Alternatively, an evolutionary system can induce goals by using a \"fitness function\" to mutate and preferentially replicate high-scoring AI systems, similar to how animals evolved to innately desire certain goals such as finding food. Some AI systems, such as nearest-neighbor, instead reason by analogy; these systems are not generally given goals, except to the degree that goals are implicit in their training data. Such systems can still be benchmarked if the non-goal system is framed as a system whose \"goal\" is to accomplish its narrow classification task.\nSystems that are not traditionally considered agents, such as knowledge-representation systems, are sometimes subsumed into the paradigm by framing them as agents that have a goal of (for example) answering questions as accurately as possible; the concept of an \"action\" is here extended to encompass the \"act\" of giving an answer to a question. As an additional extension, mimicry-driven systems can be framed as agents who are optimizing a \"goal function\" based on how closely the IA succeeds in mimicking the desired behavior. In the generative adversarial networks of the 2010s, an \"encoder\"/\"generator\" component attempts to mimic and improvise human text composition. The generator is attempting to maximize a function encapsulating how well it can fool an antagonistic \"predictor\"/\"discriminator\" component.\nWhile symbolic AI systems often accept an explicit goal function, the paradigm can also be applied to neural networks and to evolutionary computing. Reinforcement learning can generate intelligent agents that appear to act in ways intended to maximize a \"reward function\". Sometimes, rather than setting the reward function to be directly equal to the desired benchmark evaluation function, machine learning programmers will use reward shaping to initially give the machine rewards for incremental progress in learning.
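One standard form of reward shaping, potential-based shaping, can be sketched as follows; the one-dimensional distance-to-goal potential is an illustrative choice:

```python
# Potential-based reward shaping: add gamma*phi(s') - phi(s) to the base
# reward, so the agent is rewarded for incremental progress toward the goal.
# The distance potential and state encoding here are illustrative assumptions.
def shaped_reward(base_reward, state, next_state, goal, gamma=0.99):
    def phi(s):
        return -abs(goal - s)   # closer to the goal => higher potential
    return base_reward + gamma * phi(next_state) - phi(state)
```

A step toward the goal earns a positive shaping bonus and a step away earns a penalty, giving the learner a gradient of intermediate rewards even before the benchmark reward is reached.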
Yann LeCun stated in 2018 that \"Most of the learning algorithms that people have come up with essentially consist of minimizing some objective function.\" AlphaZero chess had a simple objective function; each win counted as +1 point, and each loss counted as -1 point. An objective function for a self-driving car would have to be more complicated. Evolutionary computing can evolve intelligent agents that appear to act in ways intended to maximize a \"fitness function\" that influences how many descendants each agent is allowed to leave.\nThe theoretical and uncomputable AIXI design is a maximally intelligent agent in this paradigm; however, in the real world, the IA is constrained by finite time and hardware resources, and scientists compete to produce algorithms that can achieve progressively higher scores on benchmark tests with real-world hardware.\nClasses of intelligent agents.\nRussell and Norvig's classification.\nRussell and Norvig group agents into five classes based on their degree of perceived intelligence and capability:\nSimple reflex agents.\nSimple reflex agents act only on the basis of the current percept, ignoring the rest of the percept history. The agent function is based on the \"condition-action rule\": \"if condition, then action\".\nThis agent function only succeeds when the environment is fully observable. Some reflex agents can also contain information on their current state which allows them to disregard conditions whose actuators are already triggered.\nInfinite loops are often unavoidable for simple reflex agents operating in partially observable environments. If the agent can randomize its actions, it may be possible to escape from infinite loops.\nModel-based reflex agents.\nA model-based agent can handle partially observable environments. Its current state is stored inside the agent, which maintains some kind of structure describing the part of the world that cannot be seen.
This knowledge about \"how the world works\" is called a model of the world, hence the name \"model-based agent\".\nA model-based reflex agent should maintain some sort of internal model that depends on the percept history and thereby reflects at least some of the unobserved aspects of the current state. The percept history and the impact of actions on the environment can be determined using the internal model. It then chooses an action in the same way as the reflex agent.\nAn agent may also use models to describe and predict the behaviors of other agents in the environment.\nGoal-based agents.\nGoal-based agents further expand on the capabilities of the model-based agents, by using \"goal\" information. Goal information describes situations that are desirable. This gives the agent a way to choose among multiple possibilities, selecting the one which reaches a goal state. Search and planning are the subfields of artificial intelligence devoted to finding action sequences that achieve the agent's goals.\nUtility-based agents.\nGoal-based agents only distinguish between goal states and non-goal states. It is also possible to define a measure of how desirable a particular state is. This measure can be obtained through the use of a \"utility function\" which maps a state to a measure of the utility of the state. A more general performance measure should allow a comparison of different world states according to how well they satisfy the agent's goals. The term utility can be used to describe how \"happy\" the agent is.\nA rational utility-based agent chooses the action that maximizes the expected utility of the action outcomes; that is, what the agent expects to derive, on average, given the probabilities and utilities of each outcome.
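The expected-utility rule can be sketched directly; the actions, probabilities, and utilities below are purely illustrative.

```python
# A minimal sketch of expected-utility action selection as described above:
# weight the utility of each possible outcome of an action by its
# probability, then pick the action with the highest expectation. The
# actions, probabilities, and utilities are purely illustrative.

def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one action."""
    return sum(p * u for p, u in outcomes)

def choose_action(action_models):
    """Pick the action whose outcome distribution maximizes expected utility."""
    return max(action_models, key=lambda a: expected_utility(action_models[a]))

action_models = {
    "take_highway": [(0.8, 10.0), (0.2, -5.0)],   # usually fast, small risk of a jam
    "take_backroad": [(1.0, 6.0)],                # reliable but slower
}
best = choose_action(action_models)
```

Here the highway's expectation (0.8 × 10 + 0.2 × −5 = 7) exceeds the backroad's (6), so a rational utility-based agent takes the highway despite the risk.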
A utility-based agent has to model and keep track of its environment, tasks that have involved a great deal of research on perception, representation, reasoning, and learning.\nLearning agents.\nLearning has the advantage that it allows agents to initially operate in unknown environments and to become more competent than their initial knowledge alone might allow. The most important distinction is between the \"learning element\", which is responsible for making improvements, and the \"performance element\", which is responsible for selecting external actions.\nThe learning element uses feedback from the \"critic\" on how the agent is doing and determines how the performance element, or \"actor\", should be modified to do better in the future. The performance element is what we have previously considered to be the entire agent: it takes in percepts and decides on actions.\nThe last component of the learning agent is the \"problem generator\". It is responsible for suggesting actions that will lead to new and informative experiences.\nWeiss's classification.\nWeiss defines four classes of agents:\nOther.\nIn 2013, Alexander Wissner-Gross published a theory pertaining to freedom and intelligence for intelligent agents.\nHierarchies of agents.\nTo actively perform their functions, intelligent agents today are normally gathered in a hierarchical structure containing many “sub-agents”. Intelligent sub-agents process and perform lower-level functions. Taken together, the intelligent agent and sub-agents create a complete system that can accomplish difficult tasks or goals with behaviors and responses that display a form of intelligence.\nGenerally, an agent can be constructed by separating the body into the sensors and actuators, so that it operates with a complex perception system that takes the description of the world as input for a controller and outputs commands to the actuator.
However, a hierarchy of controller layers is often necessary to balance the immediate reaction desired for low-level tasks and the slow reasoning about complex, high-level goals.\nAgent function.\nA simple agent program can be defined mathematically as a function f (called the \"agent function\") which maps every possible percept sequence to a possible action the agent can perform or to a coefficient, feedback element, function or constant that affects eventual actions:\nThe agent function is an abstract concept, as it could incorporate various principles of decision making such as calculation of the utility of individual options, deduction over logic rules, fuzzy logic, etc.\nThe program agent, instead, maps every possible percept to an action.\nWe use the term percept to refer to the agent's perceptional inputs at any given instant. In the following figures, an agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators.\nApplications.\nIntelligent agents are applied as automated online assistants, where they perceive the needs of customers in order to provide individualized customer service. Such an agent may basically consist of a dialog system, an avatar, as well as an expert system to provide specific expertise to the user. They can also be used to optimize coordination of human groups online. Hallerbach et al. discussed the application of agent-based approaches for the development and validation of automated driving systems via a digital twin of the vehicle-under-test and microscopic traffic simulation based on independent agents. Waymo has created a multi-agent simulation environment, Carcraft, to test algorithms for self-driving cars. It simulates traffic interactions between human drivers, pedestrians and automated vehicles. People's behavior is imitated by artificial agents based on data of real human behavior.
The basic idea of using agent-based modeling to understand self-driving cars was discussed as early as 2003.\nAlternative definitions and uses.\n\"Intelligent agent\" is also often used as a vague marketing term, sometimes synonymous with \"virtual personal assistant\". Some 20th-century definitions characterize an agent as a program that aids a user or that acts on behalf of a user. These examples are known as software agents, and sometimes an \"intelligent software agent\" (that is, a software agent with intelligence) is referred to as an \"intelligent agent\".\nAccording to Nikola Kasabov, IA systems should exhibit the following characteristics:", "Automation-Control": 0.8065060377, "Qwen2": "Yes"} {"id": "21255948", "revid": "43558034", "url": "https://en.wikipedia.org/wiki?curid=21255948", "title": "Data Discovery and Query Builder", "text": "Data Discovery and Query Builder (DDQB) is a data abstraction technology, developed by IBM, that allows users to retrieve information from a data warehouse, in terms of the user's specific area of expertise instead of SQL.\nDDQB serves the user through a web based graphical user interface and configurable data abstraction model (DAM), which contains both an understanding of the user knowledge domain and the database below it.\nDDQB uses a set of Eclipse-based customization tooling and can be deployed as a set of Web Services.", "Automation-Control": 0.7769463062, "Qwen2": "Yes"} {"id": "21290040", "revid": "1133966642", "url": "https://en.wikipedia.org/wiki?curid=21290040", "title": "Arbor milling", "text": "Arbor milling is a cutting process which removes material via a multi-toothed cutter. An arbor mill is a type of milling machine characterized by its ability to rapidly remove material from a variety of materials. 
This milling process is not only rapid but also versatile.\nProcess Schematic.\nThis process progressively cuts a surface to the user's specifications: either the material is moved against the milling tool, or the workpiece stays stationary while the arbor milling cutter moves across it to produce the desired shape. There are two types of milling distinguished by the directional movement of the workpiece: conventional and climb. If the workpiece moves in the direction opposite the tool rotation, this is called conventional milling. If the workpiece moves in the same direction as the tool rotation, this is called climb milling.\n\nSetup and Equipment.\nArbor milling is commonly performed on a horizontal milling machine. The tool is mounted on an arbor/mandrel (like an axle) that is suspended between the spindle and arbor support. This type of machine allows the tool to be placed in numerous positions in relation to the workpiece.\nWorkpiece Materials.\nThe workpiece involved in arbor milling can be a flat material or a shaped material: either one can be worked with desirable results. The materials milled should ideally be no harder than Rockwell C25 (Rockwell scale), though harder workpieces can still be milled successfully. Materials with good or excellent machinability include aluminum, brass, mild steel, cast iron, and thermoset plastics. Though initially ductile, stainless steel tends to work harden and thus has only fair compatibility with this milling process (though it is in the feasible range).\nTooling Materials.\nAlthough high speed tool steel has been used in the past, it is quickly being replaced by carbide, ceramic, or diamond tooling. Because carbide inserts are long-lasting and easily replaced, they lend themselves to high production. Ceramic tools are brittle but can withstand high temperatures, which makes high speed machining possible.
Diamond tools are used to achieve a superior surface finish (though they can only be used on non-ferrous materials).\nTolerances and Surface Finish.\nIn most applications, tolerances can be held within ±0.005 in. For precision applications, tolerances can be held within ±0.001 in. It is possible to have a surface finish range of 32 to 500 microinches, but typically the range is 63 to 200 microinches. Finish cuts will generate surfaces near 32 to 63 microinches, roughing cuts near 200 microinches.\nTool Styles and Possibilities.\nThe most common tool styles used in arbor milling are double angle, form relieved, plane, and staggered tooth, among many other tool styles. The double angle milling cutter can make a wide variety of V shaped cuts with straight surfaces in the material. A form relieved milling cutter, unlike the double angle cutter, can produce U shaped cuts with curved surfaces in the material. A plane milling cutter can produce surfaces similar to a planer but can make varying contours across the material. A staggered tooth milling cutter can produce a rectangular groove in the material at varying depths and widths. The cutters can be stacked to mill combined profiles. The typical width of cuts made by arbor milling ranges from 0.25 in to 6 in, and the typical depths range from 0.02 in to 0.05 in.\nEffects on Work Material Properties.\nMechanical properties of the workpiece may be affected by a built-up edge or a dull tool. Arbor milling can create an untempered martensitic layer on the surface of heat-treated alloy steels, about 0.001 in thick. Other materials are affected very little by arbor milling.\nProcess Conditions.\nShown are the suggested ranges for cutting speeds and feed rates using high speed tool steel under dry cutting conditions at a 0.015 in depth of cut. Generally, cutting speeds are lower for hard materials and higher for soft materials.
Both cutting speeds and feed rates can be substantially increased when coolants are used and carbide tooling is substituted for steel tooling.\nTypical Speeds and Feeds\nLubrication and Cooling.\nDue to high cutting speeds, a cutting fluid is required to lubricate and cool the tool and workpiece. The fluids can increase tool life, cutting speeds, and the quality of the finished surface. There are three common cutting fluids: mineral, synthetic, and water-soluble oils. These fluids can be applied by spraying, misting, or flooding the workpiece.", "Automation-Control": 0.9866781831, "Qwen2": "Yes"} {"id": "180855", "revid": "7903804", "url": "https://en.wikipedia.org/wiki?curid=180855", "title": "Kalman filter", "text": "In statistics and control theory, Kalman filtering, also known as linear quadratic estimation (LQE), is an algorithm that uses a series of measurements observed over time, including statistical noise and other inaccuracies, and produces estimates of unknown variables that tend to be more accurate than those based on a single measurement alone, by estimating a joint probability distribution over the variables for each timeframe. The filter is named after Rudolf E. Kálmán, who was one of the primary developers of its theory.\nThis digital filter is sometimes termed the \"Stratonovich–Kalman–Bucy filter\" because it is a special case of a more general, nonlinear filter developed somewhat earlier by the Soviet mathematician Ruslan Stratonovich. In fact, some of the special case linear filter's equations appeared in papers by Stratonovich that were published before summer 1961, when Kalman met with Stratonovich during a conference in Moscow.\nKalman filtering has numerous technological applications. A common application is for guidance, navigation, and control of vehicles, particularly aircraft, spacecraft, and dynamically positioned ships.
Furthermore, Kalman filtering is a concept much applied in time series analysis used for topics such as signal processing and econometrics. Kalman filtering is also one of the main topics of robotic motion planning and control and can be used for trajectory optimization. Kalman filtering also works for modeling the central nervous system's control of movement. Due to the time delay between issuing motor commands and receiving sensory feedback, the use of Kalman filters provides a realistic model for making estimates of the current state of a motor system and issuing updated commands.\nThe algorithm works by a two-phase process. For the prediction phase, the Kalman filter produces estimates of the current state variables, along with their uncertainties. Once the outcome of the next measurement (necessarily corrupted with some error, including random noise) is observed, these estimates are updated using a weighted average, with more weight being given to estimates with greater certainty. The algorithm is recursive. It can operate in real time, using only the present input measurements and the state calculated previously and its uncertainty matrix; no additional past information is required.\nOptimality of Kalman filtering assumes that errors have a normal (Gaussian) distribution. In the words of Rudolf E. Kálmán: \"In summary, the following assumptions are made about random processes: Physical random phenomena may be thought of as due to primary random sources exciting dynamic systems. 
The primary sources are assumed to be independent gaussian random processes with zero mean; the dynamic systems will be linear.\" Regardless of Gaussianity, though, if the process and measurement covariances are known, the Kalman filter is the best possible \"linear\" estimator in the minimum mean-square-error sense.\nIt is a common misconception (perpetuated in the literature) that the Kalman filter cannot be rigorously applied unless all noise processes are assumed to be Gaussian.\nExtensions and generalizations of the method have also been developed, such as the extended Kalman filter and the unscented Kalman filter which work on nonlinear systems. The basis is a hidden Markov model such that the state space of the latent variables is continuous and all latent and observed variables have Gaussian distributions. Kalman filtering has been used successfully in multi-sensor fusion and distributed sensor networks to develop distributed or consensus Kalman filtering.\nHistory.\nThe filtering method is named for Hungarian émigré Rudolf E. Kálmán, although Thorvald Nicolai Thiele and Peter Swerling developed a similar algorithm earlier. Richard S. Bucy of the Johns Hopkins Applied Physics Laboratory contributed to the theory, causing it to be known sometimes as Kalman–Bucy filtering.\nStanley F. Schmidt is generally credited with developing the first implementation of a Kalman filter. He realized that the filter could be divided into two distinct parts, with one part for time periods between sensor outputs and another part for incorporating measurements.
It was during a visit by Kálmán to the NASA Ames Research Center that Schmidt saw the applicability of Kálmán's ideas to the nonlinear problem of trajectory estimation for the Apollo program resulting in its incorporation in the Apollo navigation computer.\nThis Kalman filtering was first described and developed partially in technical papers by Swerling (1958), Kalman (1960) and Kalman and Bucy (1961).\nKalman filters have been vital in the implementation of the navigation systems of U.S. Navy nuclear ballistic missile submarines, and in the guidance and navigation systems of cruise missiles such as the U.S. Navy's Tomahawk missile and the U.S. Air Force's Air Launched Cruise Missile. They are also used in the guidance and navigation systems of reusable launch vehicles and the attitude control and navigation systems of spacecraft which dock at the International Space Station.\nOverview of the calculation.\nKalman filtering uses a system's dynamic model (e.g., physical laws of motion), known control inputs to that system, and multiple sequential measurements (such as from sensors) to form an estimate of the system's varying quantities (its state) that is better than the estimate obtained by using only one measurement alone. As such, it is a common sensor fusion and data fusion algorithm.\nNoisy sensor data, approximations in the equations that describe the system evolution, and external factors that are not accounted for, all limit how well it is possible to determine the system's state. The Kalman filter deals effectively with the uncertainty due to noisy sensor data and, to some extent, with random external factors. The Kalman filter produces an estimate of the state of the system as an average of the system's predicted state and of the new measurement using a weighted average. The purpose of the weights is that values with better (i.e., smaller) estimated uncertainty are \"trusted\" more. 
The weights are calculated from the covariance, a measure of the estimated uncertainty of the prediction of the system's state. The result of the weighted average is a new state estimate that lies between the predicted and measured state, and has a better estimated uncertainty than either alone. This process is repeated at every time step, with the new estimate and its covariance informing the prediction used in the following iteration. This means that the Kalman filter works recursively and requires only the last \"best guess\", rather than the entire history, of a system's state to calculate a new state.\nThe relative certainty of the measurements and of the current state estimate is an important consideration. It is common to discuss the filter's response in terms of the Kalman filter's \"gain\". The Kalman gain is the relative weight given to the measurements versus the current state estimate, and can be \"tuned\" to achieve a particular performance. With a high gain, the filter places more weight on the most recent measurements, and thus conforms to them more responsively. With a low gain, the filter conforms to the model predictions more closely. At the extremes, a high gain close to one will result in a more jumpy estimated trajectory, while a low gain close to zero will smooth out noise but decrease the responsiveness.\nWhen performing the actual calculations for the filter (as discussed below), the state estimate and covariances are coded into matrices because of the multiple dimensions involved in a single set of calculations. This allows for a representation of linear relationships between different state variables (such as position, velocity, and acceleration) in any of the transition models or covariances.\nExample application.\nAs an example application, consider the problem of determining the precise location of a truck. The truck can be equipped with a GPS unit that provides an estimate of the position within a few meters.
The GPS estimate is likely to be noisy; readings 'jump around' rapidly, though remaining within a few meters of the real position. In addition, since the truck is expected to follow the laws of physics, its position can also be estimated by integrating its velocity over time, determined by keeping track of wheel revolutions and the angle of the steering wheel. This is a technique known as dead reckoning. Typically, the dead reckoning will provide a very smooth estimate of the truck's position, but it will drift over time as small errors accumulate.\nFor this example, the Kalman filter can be thought of as operating in two distinct phases: predict and update. In the prediction phase, the truck's old position will be modified according to the physical laws of motion (the dynamic or \"state transition\" model). Not only will a new position estimate be calculated, but a new covariance will be calculated as well. Perhaps the covariance is proportional to the speed of the truck because we are more uncertain about the accuracy of the dead reckoning position estimate at high speeds but very certain about the position estimate at low speeds. Next, in the update phase, a measurement of the truck's position is taken from the GPS unit. Along with this measurement comes some amount of uncertainty, and its covariance relative to that of the prediction from the previous phase determines how much the new measurement will affect the updated prediction. Ideally, as the dead reckoning estimates tend to drift away from the real position, the GPS measurement should pull the position estimate back toward the real position but not disturb it to the point of becoming noisy and rapidly jumping.\nTechnical description and context.\nThe Kalman filter is an efficient recursive filter estimating the internal state of a linear dynamic system from a series of noisy measurements.
It is used in a wide range of engineering and econometric applications from radar and computer vision to estimation of structural macroeconomic models, and is an important topic in control theory and control systems engineering. Together with the linear-quadratic regulator (LQR), the Kalman filter solves the linear–quadratic–Gaussian control problem (LQG). The Kalman filter, the linear-quadratic regulator, and the linear–quadratic–Gaussian controller are solutions to what arguably are the most fundamental problems of control theory.\nIn most applications, the internal state is much larger (has more degrees of freedom) than the few \"observable\" parameters which are measured. However, by combining a series of measurements, the Kalman filter can estimate the entire internal state.\nIn the Dempster–Shafer theory, each state equation or observation is considered a special case of a linear belief function, and Kalman filtering is a special case of combining linear belief functions on a join-tree or Markov tree. Additional methods include belief filtering, which uses Bayes or evidential updates to the state equations.\nA wide variety of Kalman filters now exists: Kalman's original formulation (now termed the \"simple\" Kalman filter), the Kalman–Bucy filter, Schmidt's \"extended\" filter, the information filter, and a variety of \"square-root\" filters that were developed by Bierman, Thornton, and many others. Perhaps the most commonly used type of very simple Kalman filter is the phase-locked loop, which is now ubiquitous in radios, especially frequency modulation (FM) radios, television sets, satellite communications receivers, outer space communications systems, and nearly any other electronic communications equipment.\nUnderlying dynamic system model.\nKalman filtering is based on linear dynamic systems discretized in the time domain. They are modeled on a Markov chain built on linear operators perturbed by errors that may include Gaussian noise.
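Such a linear dynamic system can be sketched in a few lines; the transition and observation matrices and the noise levels below are illustrative choices, not part of any particular application.

```python
import numpy as np

# A sketch of the linear dynamic system just described: a hidden state is
# advanced by a linear operator with Gaussian noise mixed in, and each
# observation is another linear map of the state plus noise. The matrices
# and noise levels below are illustrative assumptions.

rng = np.random.default_rng(0)
F = np.array([[1.0, 1.0], [0.0, 1.0]])  # linear operator advancing the state
H = np.array([[1.0, 0.0]])              # linear operator producing the observation
q_std, r_std = 0.1, 1.0                 # process and measurement noise levels

x = np.zeros(2)                         # the true ("hidden") state
states, observations = [], []
for _ in range(100):
    x = F @ x + q_std * rng.standard_normal(2)   # new state, noise mixed in
    z = H @ x + r_std * rng.standard_normal(1)   # measurable output
    states.append(x.copy())
    observations.append(z.copy())
```

Only the `observations` list would be available to a filter; the `states` list is the hidden ground truth it tries to recover.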
The state of the target system refers to the ground truth (yet hidden) system configuration of interest, which is represented as a vector of real numbers. At each discrete time increment, a linear operator is applied to the state to generate the new state, with some noise mixed in, and optionally some information from the controls on the system if they are known. Then, another linear operator mixed with more noise generates the measurable outputs (i.e., observation) from the true (\"hidden\") state. The Kalman filter may be regarded as analogous to the hidden Markov model, with the difference that the hidden state variables have values in a continuous space as opposed to a discrete state space as for the hidden Markov model. There is a strong analogy between the equations of a Kalman filter and those of the hidden Markov model. A review of this and other models is given in Roweis and Ghahramani (1999) and Hamilton (1994), Chapter 13.\nIn order to use the Kalman filter to estimate the internal state of a process given only a sequence of noisy observations, one must model the process in accordance with the following framework. This means specifying the following matrices for each time-step \"k\":\nThe Kalman filter model assumes the true state at time \"k\" is evolved from the state at (\"k\" − 1) according to\nx\"k\" = F\"k\" x\"k\"−1 + B\"k\" u\"k\" + w\"k\"\nwhere\nAt time \"k\" an observation (or measurement) z\"k\" of the true state x\"k\" is made according to\nz\"k\" = H\"k\" x\"k\" + v\"k\"\nwhere\nThe initial state, and the noise vectors at each step {x0, w1, ..., w\"k\", v1, ... ,v\"k\"} are all assumed to be mutually independent.\nMany real-time dynamic systems do not exactly conform to this model. In fact, unmodeled dynamics can seriously degrade the filter performance, even when the filter was supposed to work with unknown stochastic signals as inputs. The reason for this is that the effect of unmodeled dynamics depends on the input and, therefore, can drive the estimation algorithm to instability (divergence).
On the other hand, independent white noise signals will not make the algorithm diverge. The problem of distinguishing between measurement noise and unmodeled dynamics is a difficult one and is treated as a problem of control theory using robust control.\nDetails.\nThe Kalman filter is a recursive estimator. This means that only the estimated state from the previous time step and the current measurement are needed to compute the estimate for the current state. In contrast to batch estimation techniques, no history of observations and/or estimates is required. In what follows, the notation formula_6 represents the estimate of formula_7 at time \"n\" given observations up to and including time \"m\" ≤ \"n\".\nThe state of the filter is represented by two variables:\nThe algorithm structure of the Kalman filter resembles that of the alpha beta filter. The Kalman filter can be written as a single equation; however, it is most often conceptualized as two distinct phases: \"Predict\" and \"Update\". The predict phase uses the state estimate from the previous timestep to produce an estimate of the state at the current timestep. This predicted state estimate is also known as the \"a priori\" state estimate because, although it is an estimate of the state at the current timestep, it does not include observation information from the current timestep. In the update phase, the innovation (the pre-fit residual), i.e. the difference between the current \"a priori\" prediction and the current observation information, is multiplied by the optimal Kalman gain and combined with the previous state estimate to refine the state estimate. This improved estimate based on the current observation is termed the \"a posteriori\" state estimate.\nTypically, the two phases alternate, with the prediction advancing the state until the next scheduled observation, and the update incorporating the observation.
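The two phases can be sketched using the conventional matrix names (F for the state transition, Q for the process noise covariance, H for the observation model, R for the measurement noise covariance); this is a generic textbook sketch, not code from any particular source.

```python
import numpy as np

# A generic sketch of the two phases: predict propagates the model, update
# folds in an observation. F, Q, H, R follow the conventional naming; this
# is an illustration of the structure, not a specific implementation.

def predict(x, P, F, Q):
    """A priori estimate: propagate the state and covariance through the model."""
    x = F @ x
    P = F @ P @ F.T + Q
    return x, P

def update(x, P, z, H, R):
    """A posteriori estimate: correct the prediction with an observation z."""
    y = z - H @ x                        # innovation (pre-fit residual)
    S = H @ P @ H.T + R                  # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)       # optimal Kalman gain
    x = x + K @ y
    P = (np.eye(len(x)) - K @ H) @ P     # simplified covariance update
    return x, P
```

In the typical loop the two functions are called alternately, one pair per time step.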
However, this is not necessary; if an observation is unavailable for some reason, the update may be skipped and multiple prediction procedures performed. Likewise, if multiple independent observations are available at the same time, multiple update procedures may be performed (typically with different observation matrices H\"k\").\nUpdate.\nThe formula for the updated (\"a posteriori\") estimate covariance above is valid for the optimal Kk gain that minimizes the residual error, in which form it is most widely used in applications. Proof of the formulae is found in the \"derivations\" section, where the formula valid for any Kk is also shown.\nA more intuitive way to express the updated state estimate (formula_10) is:\nThis expression is reminiscent of linear interpolation, formula_12 for formula_13 in [0,1].\nIn our case:\nThis expression also resembles the alpha beta filter update step.\nInvariants.\nIf the model is accurate, and the values for formula_20 and formula_21 accurately reflect the distribution of the initial state values, then the following invariants are preserved:\nwhere formula_23 is the expected value of formula_24. That is, all estimates have a mean error of zero.\nAlso:\nso covariance matrices accurately reflect the covariance of estimates.\nEstimation of the noise covariances Q\"k\" and R\"k\".\nPractical implementation of a Kalman filter is often difficult because it is hard to obtain a good estimate of the noise covariance matrices Q\"k\" and R\"k\". Extensive research has been done to estimate these covariances from data. One practical method of doing this is the \"autocovariance least-squares (ALS)\" technique, which uses the time-lagged autocovariances of routine operating data to estimate the covariances. The GNU Octave and Matlab code used to calculate the noise covariance matrices using the ALS technique is available online under the GNU General Public License.
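Once covariances have been chosen, one simple diagnostic is to check that the filter's innovations look like white noise. A sketch on a scalar random-walk model (all noise levels are illustrative assumptions):

```python
import numpy as np

# A sketch of the innovation-whiteness diagnostic: simulate a scalar
# random walk observed in noise, run the matching filter, and check that
# the innovations have negligible lag-1 autocorrelation. The model and
# noise variances are illustrative assumptions.

rng = np.random.default_rng(1)
q, r = 0.01, 0.25                        # process / measurement variances
xs, zs = [0.0], []
for _ in range(2000):                    # x_k = x_(k-1) + w_k,  z_k = x_k + v_k
    xs.append(xs[-1] + np.sqrt(q) * rng.standard_normal())
    zs.append(xs[-1] + np.sqrt(r) * rng.standard_normal())

x, P = 0.0, 1.0
innovations = []
for z in zs:
    P = P + q                            # predict (F = 1)
    y = z - x                            # innovation
    innovations.append(y)
    K = P / (P + r)                      # gain (H = 1)
    x, P = x + K * y, (1 - K) * P        # update

inn = np.array(innovations)[100:]        # discard the initial transient
lag1 = float(np.corrcoef(inn[:-1], inn[1:])[0, 1])
```

A well-tuned filter leaves `lag1` near zero; substantial autocorrelation in the innovations suggests the assumed covariances are misspecified.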
The Field Kalman Filter (FKF), a Bayesian algorithm that allows simultaneous estimation of the state, parameters, and noise covariance, has been proposed. The FKF algorithm has a recursive formulation, good observed convergence, and relatively low complexity, suggesting that it may be a worthwhile alternative to autocovariance least-squares methods.\nOptimality and performance.\nIt follows from theory that the Kalman filter provides an optimal state estimation in cases where a) the model matches the real system perfectly, b) the entering noise is \"white\" (uncorrelated) and c) the covariances of the noise are known exactly. Correlated noise can also be treated using Kalman filters. \nSeveral methods for the noise covariance estimation have been proposed during past decades, including ALS, mentioned in the section above. After the covariances are estimated, it is useful to evaluate the performance of the filter; i.e., whether it is possible to improve the state estimation quality. If the Kalman filter works optimally, the innovation sequence (the output prediction error) is white noise, so the whiteness of the innovations is a measure of filter performance. Several different methods can be used for this purpose. If the noise terms are distributed in a non-Gaussian manner, methods for assessing the performance of the filter estimate, which use probability inequalities or large-sample theory, are known in the literature.\nExample application, technical.\nConsider a truck on frictionless, straight rails. Initially, the truck is stationary at position 0, but it is buffeted this way and that by random uncontrolled forces. We measure the position of the truck every Δ\"t\" seconds, but these measurements are imprecise; we want to maintain a model of the truck's position and velocity.
We show here how we derive the model from which we create our Kalman filter.\nSince formula_26 are constant, their time indices are dropped.\nThe position and velocity of the truck are described by the linear state space\nwhere formula_28 is the velocity, that is, the derivative of position with respect to time.\nWe assume that between the (\"k\" − 1)th and \"k\"th timesteps, uncontrolled forces cause a constant acceleration of \"a\"\"k\" that is normally distributed with mean 0 and standard deviation \"σ\"\"a\". From Newton's laws of motion we conclude that\n(there is no formula_30 term since there are no known control inputs. Instead, \"a\"\"k\" is the effect of an unknown input and formula_31 applies that effect to the state vector) where\nso that\nwhere\nThe matrix formula_35 is not full rank (it is of rank one if formula_36). Hence, the distribution formula_37 is not absolutely continuous and has no probability density function. Another way to express this, avoiding explicit degenerate distributions, is given by\nAt each time step, a noisy measurement of the true position of the truck is made. Let us suppose the measurement noise \"v\"\"k\" is also distributed normally, with mean 0 and standard deviation \"σ\"\"z\".\nwhere\nand\nWe know the initial starting state of the truck with perfect precision, so we initialize\nand to tell the filter that we know the exact position and velocity, we give it a zero covariance matrix:\nIf the initial position and velocity are not known perfectly, the covariance matrix should be initialized with suitable variances on its diagonal:\nThe filter will then prefer the information from the first measurements over the information already in the model.\nAsymptotic form.\nFor simplicity, assume that the control input formula_45. Then the Kalman filter may be written:\nA similar equation holds if we include a non-zero control input. Gain matrices formula_15 evolve independently of the measurements formula_48.
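Because the gain sequence does not depend on the data, it can be precomputed offline by iterating the covariance recursion to its fixed point. The constant-velocity model and noise levels below are illustrative assumptions.

```python
import numpy as np

# Sketch of computing the asymptotic gain offline: the gain depends only on
# the model, so we iterate the covariance recursion until it reaches a
# fixed point and read off the gain. The constant-velocity model and noise
# levels are illustrative assumptions.

F = np.array([[1.0, 1.0], [0.0, 1.0]])   # state transition
H = np.array([[1.0, 0.0]])               # position-only observation
Q = 0.01 * np.eye(2)                     # process noise covariance
R = np.array([[1.0]])                    # measurement noise covariance

P = np.eye(2)
for _ in range(10000):                   # fixed-point iteration of the recursion
    P_pred = F @ P @ F.T + Q             # a priori covariance
    S = H @ P_pred @ H.T + R             # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)  # gain implied by this covariance
    P_next = (np.eye(2) - K @ H) @ P_pred
    if np.allclose(P_next, P, atol=1e-12):
        break                            # converged: K is the asymptotic gain
    P = P_next
K_inf = K
```

Once `K_inf` is in hand, the filter can be run as a fixed-gain, linear time-invariant recursion with no per-step gain computation.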
From above, the four equations needed for updating the Kalman gain are as follows:
Since the gain matrices depend only on the model, and not the measurements, they may be computed offline. Convergence of the gain matrices formula_15 to an asymptotic matrix formula_51 applies for conditions established in Walrand and Dimakis. Simulations establish the number of steps to convergence. For the moving truck example described above, with formula_52 and formula_53, simulation shows convergence in formula_54 iterations.
Using the asymptotic gain, and assuming formula_55 and formula_56 are independent of formula_57, the Kalman filter becomes a linear time-invariant filter:
The asymptotic gain formula_51, if it exists, can be computed by first solving the following discrete Riccati equation for the asymptotic state covariance formula_60:
The asymptotic gain is then computed as before.
Additionally, a form of the asymptotic Kalman filter more commonly used in control theory is given by
where
This leads to an estimator of the form
Derivations.
The Kalman filter can be derived as a generalized least squares method operating on previous data.
Deriving the a posteriori estimate covariance matrix.
Starting with our invariant on the error covariance Pk | k as above
substitute in the definition of formula_67
and substitute formula_69
and formula_48
and by collecting the error vectors we get
Since the measurement error vk is uncorrelated with the other terms, this becomes
by the properties of vector covariance this becomes
which, using our invariant on Pk | k−1 and the definition of Rk, becomes
This formula (sometimes known as the Joseph form of the covariance update equation) is valid for any value of Kk. It turns out that if Kk is the optimal Kalman gain, this can be simplified further as shown below.
Kalman gain derivation.
The Kalman filter is a minimum mean-square error estimator.
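As noted in the asymptotic-form discussion above, the gain sequence depends only on the model and can therefore be precomputed offline. A sketch, reusing the hypothetical truck matrices (Δt = 1, σa = 0.5, σz = 10 are assumed values), that iterates the covariance recursion until the gain stops changing:

```python
import numpy as np

dt, sigma_a, sigma_z = 1.0, 0.5, 10.0      # assumed illustrative values
F = np.array([[1.0, dt], [0.0, 1.0]])
G = np.array([[0.5 * dt**2], [dt]])
Q = G @ G.T * sigma_a**2
H = np.array([[1.0, 0.0]])
R = np.array([[sigma_z**2]])

def offline_gains(P0, n_steps):
    """Compute the gain sequence K_k without any measurements."""
    P, gains = P0, []
    for _ in range(n_steps):
        P_pred = F @ P @ F.T + Q                 # predict covariance
        S = H @ P_pred @ H.T + R                 # innovation covariance
        K = P_pred @ H.T @ np.linalg.inv(S)      # gain, model-dependent only
        P = (np.eye(2) - K @ H) @ P_pred         # update covariance
        gains.append(K)
    return gains

gains = offline_gains(np.zeros((2, 2)), 200)
K_inf = gains[-1]   # numerically converged asymptotic gain
```

Iterating this recursion to a fixed point is one way to solve the discrete Riccati equation mentioned above for the asymptotic covariance, from which the asymptotic gain follows.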
The error in the a posteriori state estimation is
We seek to minimize the expected value of the square of the magnitude of this vector, formula_78. This is equivalent to minimizing the trace of the a posteriori estimate covariance matrix formula_79. By expanding out the terms in the equation above and collecting, we get:
The trace is minimized when its matrix derivative with respect to the gain matrix is zero. Using the gradient matrix rules and the symmetry of the matrices involved, we find that
Solving this for Kk yields the Kalman gain:
This gain, known as the "optimal Kalman gain", is the one that yields minimum mean-square-error (MMSE) estimates when used.
Simplification of the a posteriori error covariance formula.
The formula used to calculate the a posteriori error covariance can be simplified when the Kalman gain equals the optimal value derived above. Multiplying both sides of our Kalman gain formula on the right by SkKkT, it follows that
Referring back to our expanded formula for the a posteriori error covariance,
we find the last two terms cancel out, giving
This formula is computationally cheaper and thus nearly always used in practice, but it is only correct for the optimal gain. If arithmetic precision is unusually low, causing problems with numerical stability, or if a non-optimal Kalman gain is deliberately used, this simplification cannot be applied; the a posteriori error covariance formula as derived above (the Joseph form) must be used.
Sensitivity analysis.
The Kalman filtering equations provide an estimate of the state formula_10 and its error covariance formula_9 recursively. The estimate and its quality depend on the system parameters and the noise statistics fed as inputs to the estimator. This section analyzes the effect of uncertainties in the statistical inputs to the filter.
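The relationship between the Joseph form and its simplification, discussed above, can be checked numerically. The matrices below are arbitrary illustrative values (not from the article): the two forms agree exactly for the optimal gain, while the Joseph form stays symmetric positive-semidefinite even for a deliberately perturbed gain.

```python
import numpy as np

# Arbitrary 2-state, 1-measurement model (illustrative values only)
P_pred = np.array([[2.0, 0.3], [0.3, 1.0]])    # a priori covariance
H = np.array([[1.0, 0.5]])
R = np.array([[0.4]])
I = np.eye(2)

S = H @ P_pred @ H.T + R
K_opt = P_pred @ H.T @ np.linalg.inv(S)        # optimal Kalman gain

def joseph(P, K):
    """Joseph-form covariance update: valid for ANY gain K."""
    A = I - K @ H
    return A @ P @ A.T + K @ R @ K.T

P_joseph = joseph(P_pred, K_opt)
P_simple = (I - K_opt @ H) @ P_pred            # valid only for the optimal gain

# A suboptimal gain: the two forms now differ, but the Joseph
# form still yields a symmetric positive-semidefinite matrix.
K_bad = K_opt + 0.2
P_bad = joseph(P_pred, K_bad)
```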
In the absence of reliable statistics or the true values of the noise covariance matrices formula_88 and formula_89, the expression
no longer provides the actual error covariance. In other words, formula_91. In most real-time applications, the covariance matrices that are used in designing the Kalman filter differ from the actual (true) noise covariance matrices. This sensitivity analysis describes the behavior of the estimation error covariance when the noise covariances, as well as the system matrices formula_56 and formula_55, that are fed as inputs to the filter are incorrect. Thus, the sensitivity analysis describes the robustness (or sensitivity) of the estimator to misspecified statistical and parametric inputs.
This discussion is limited to the error sensitivity analysis for the case of statistical uncertainties. Here the actual noise covariances are denoted by formula_94 and formula_95 respectively, whereas the design values used in the estimator are formula_96 and formula_89 respectively. The actual error covariance is denoted by formula_98, and formula_99 as computed by the Kalman filter is referred to as the Riccati variable. When formula_100 and formula_101, this means that formula_102. While computing the actual error covariance using formula_103, substituting for formula_104 and using the fact that formula_105 and formula_106 results in the following recursive equations for formula_98:
and
While computing formula_99, by design the filter implicitly assumes that formula_111 and formula_112. The recursive expressions for formula_98 and formula_99 are identical except for the presence of formula_115 and formula_116 in place of the design values formula_96 and formula_89 respectively. Research has been done to analyze the robustness of Kalman filter systems.
Square root form.
One problem with the Kalman filter is its numerical stability.
If the process noise covariance Qk is small, round-off error often causes a small positive eigenvalue of the state covariance matrix P to be computed as a negative number. This renders the numerical representation of P indefinite, while its true form is positive-definite.
A positive-definite matrix has a triangular square root P = S·ST. This can be computed efficiently using the Cholesky factorization algorithm, and, more importantly, if the covariance is kept in this form, it can never have a negative diagonal or become asymmetric. An equivalent form, which avoids many of the square root operations required by the matrix square root yet preserves the desirable numerical properties, is the U-D decomposition form, P = U·D·UT, where U is a unit triangular matrix (with unit diagonal) and D is a diagonal matrix.
Between the two, the U-D factorization uses the same amount of storage and somewhat less computation, and it is the most commonly used square root form. (Early literature on the relative efficiency is somewhat misleading, as it assumed that square roots were much more time-consuming than divisions, while on 21st-century computers they are only slightly more expensive.)
Efficient algorithms for the Kalman prediction and update steps in the square root form were developed by G. J. Bierman and C. L. Thornton.
The L·D·LT decomposition of the innovation covariance matrix Sk is the basis for another type of numerically efficient and robust square root filter. The algorithm starts with the LU decomposition as implemented in the Linear Algebra PACKage (LAPACK). These results are further factored into the L·D·LT structure with methods given by Golub and Van Loan (algorithm 4.1.2) for a symmetric nonsingular matrix. Any singular covariance matrix is pivoted so that the first diagonal partition is nonsingular and well-conditioned.
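Returning to the U-D decomposition described above: the following is a textbook-style sketch of factoring a covariance matrix as P = U·D·UT (the example matrix is an assumed illustrative value; real square-root filters propagate U and D directly rather than refactorizing at each step):

```python
import numpy as np

def ud_factorize(P):
    """U-D decomposition P = U @ D @ U.T with U unit upper triangular, D diagonal.
    A teaching sketch, not an optimized production routine."""
    n = P.shape[0]
    U = np.eye(n)
    d = np.zeros(n)
    for j in range(n - 1, -1, -1):
        # Diagonal entry: remove contributions of already-processed columns
        d[j] = P[j, j] - np.sum(d[j+1:] * U[j, j+1:] ** 2)
        for i in range(j):
            U[i, j] = (P[i, j] - np.sum(d[j+1:] * U[i, j+1:] * U[j, j+1:])) / d[j]
    return U, np.diag(d)

# Illustrative positive-definite covariance matrix
P = np.array([[4.0, 2.0, 0.6],
              [2.0, 3.0, 0.4],
              [0.6, 0.4, 1.0]])
U, D = ud_factorize(P)
```

Because D is stored explicitly and U has a unit diagonal, the represented covariance can never acquire a negative diagonal or become asymmetric, which is the numerical advantage discussed above.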
The pivoting algorithm must retain any portion of the innovation covariance matrix directly corresponding to observed state variables Hk·xk|k-1 that are associated with auxiliary observations in yk. The L·D·LT square-root filter requires orthogonalization of the observation vector. This may be done with the inverse square root of the covariance matrix for the auxiliary variables using Method 2 in Higham (2002, p. 263).
Parallel form.
The Kalman filter is efficient for sequential data processing on central processing units (CPUs), but in its original form it is inefficient on parallel architectures such as graphics processing units (GPUs). It is however possible to express the filter-update routine in terms of an associative operator using the formulation in Särkkä (2021). The filter solution can then be retrieved by the use of a prefix sum algorithm, which can be efficiently implemented on a GPU. This reduces the computational complexity from formula_119 in the number of time steps to formula_120.
Relationship to recursive Bayesian estimation.
The Kalman filter can be presented as one of the simplest dynamic Bayesian networks. The Kalman filter calculates estimates of the true values of states recursively over time using incoming measurements and a mathematical process model.
Similarly, recursive Bayesian estimation calculates estimates of an unknown probability density function (PDF) recursively over time using incoming measurements and a mathematical process model.\nIn recursive Bayesian estimation, the true state is assumed to be an unobserved Markov process, and the measurements are the observed states of a hidden Markov model (HMM).\nBecause of the Markov assumption, the true state is conditionally independent of all earlier states given the immediately previous state.\nSimilarly, the measurement at the \"k\"-th timestep is dependent only upon the current state and is conditionally independent of all other states given the current state.\nUsing these assumptions the probability distribution over all states of the hidden Markov model can be written simply as:\nHowever, when a Kalman filter is used to estimate the state x, the probability distribution of interest is that associated with the current states conditioned on the measurements up to the current timestep. This is achieved by marginalizing out the previous states and dividing by the probability of the measurement set.\nThis results in the \"predict\" and \"update\" phases of the Kalman filter written probabilistically. The probability distribution associated with the predicted state is the sum (integral) of the products of the probability distribution associated with the transition from the (\"k\" − 1)-th timestep to the \"k\"-th and the probability distribution associated with the previous state, over all possible formula_124.\nThe measurement set up to time \"t\" is\nThe probability distribution of the update is proportional to the product of the measurement likelihood and the predicted state.\nThe denominator\nis a normalization term.\nThe remaining probability density functions are\nThe PDF at the previous timestep is assumed inductively to be the estimated state and covariance. 
This is justified because, as an optimal estimator, the Kalman filter makes best use of the measurements, therefore the PDF for formula_130 given the measurements formula_131 is the Kalman filter estimate.\nMarginal likelihood.\nRelated to the recursive Bayesian interpretation described above, the Kalman filter can be viewed as a generative model, i.e., a process for \"generating\" a stream of random observations z = (z0, z1, z2, ...). Specifically, the process is\nThis process has identical structure to the hidden Markov model, except that the discrete state and observations are replaced with continuous variables sampled from Gaussian distributions.\nIn some applications, it is useful to compute the \"probability\" that a Kalman filter with a given set of parameters (prior distribution, transition and observation models, and control inputs) would generate a particular observed signal. This probability is known as the marginal likelihood because it integrates over (\"marginalizes out\") the values of the hidden state variables, so it can be computed using only the observed signal. The marginal likelihood can be useful to evaluate different parameter choices, or to compare the Kalman filter against other models using Bayesian model comparison.\nIt is straightforward to compute the marginal likelihood as a side effect of the recursive filtering computation. By the chain rule, the likelihood can be factored as the product of the probability of each observation given previous observations,\nand because the Kalman filter describes a Markov process, all relevant information from previous observations is contained in the current state estimate formula_142 Thus the marginal likelihood is given by\ni.e., a product of Gaussian densities, each corresponding to the density of one observation z\"k\" under the current filtering distribution formula_144. 
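The product of Gaussian innovation densities above can be accumulated as a side effect of filtering. A sketch, again using the hypothetical truck model with assumed parameters (Δt = 1, σa = 0.5, σz = 10) and synthetic measurements:

```python
import numpy as np

dt, sigma_a, sigma_z = 1.0, 0.5, 10.0       # assumed illustrative values
F = np.array([[1.0, dt], [0.0, 1.0]])
G = np.array([[0.5 * dt**2], [dt]])
Q = G @ G.T * sigma_a**2
H = np.array([[1.0, 0.0]])
R = np.array([[sigma_z**2]])

def filter_loglik(zs, x, P):
    """Run the Kalman filter and accumulate log p(z_1, ..., z_T)."""
    loglik = 0.0
    d = H.shape[0]                          # measurement dimension
    for z in zs:
        x = F @ x                           # predict
        P = F @ P @ F.T + Q
        y = z - H @ x                       # innovation
        S = H @ P @ H.T + R                 # innovation covariance
        # log N(y; 0, S): density of this observation under the filtering distribution
        quad = (y.T @ np.linalg.inv(S) @ y)[0, 0]
        loglik += -0.5 * (d * np.log(2 * np.pi) + np.log(np.linalg.det(S)) + quad)
        K = P @ H.T @ np.linalg.inv(S)      # update
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P
    return loglik

rng = np.random.default_rng(1)
zs = [np.array([[v]]) for v in rng.normal(0.0, 5.0, size=20)]
ll = filter_loglik(zs, np.zeros((2, 1)), np.eye(2))
```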
This can easily be computed as a simple recursive update; however, to avoid numeric underflow, in a practical implementation it is usually desirable to compute the log marginal likelihood formula_145 instead. Adopting the convention formula_146, this can be done via the recursive update rule
where formula_148 is the dimension of the measurement vector.
An important application where such a (log) likelihood of the observations (given the filter parameters) is used is multi-target tracking. For example, consider an object-tracking scenario where a stream of observations is the input, but it is unknown how many objects are in the scene (or the number of objects is known but greater than one). In such a scenario, it can be unknown a priori which observations/measurements were generated by which object. A multiple hypothesis tracker (MHT) typically will form different track association hypotheses, where each hypothesis can be considered as a Kalman filter (for the linear Gaussian case) with a specific set of parameters associated with the hypothesized object. Thus, it is important to compute the likelihood of the observations for the different hypotheses under consideration, so that the most likely one can be found.
Information filter.
In cases where the dimension of the observation vector y is bigger than the dimension of the state space vector x, the information filter can avoid the inversion of a bigger matrix in the Kalman gain calculation at the price of inverting a smaller matrix in the prediction step, thus saving computing time. In the information filter, or inverse covariance filter, the estimated covariance and estimated state are replaced by the information matrix and information vector respectively.
These are defined as:\nSimilarly the predicted covariance and state have equivalent information forms, defined as:\nas have the measurement covariance and measurement vector, which are defined as:\nThe information update now becomes a trivial sum.\nThe main advantage of the information filter is that \"N\" measurements can be filtered at each time step simply by summing their information matrices and vectors.\nTo predict the information filter the information matrix and vector can be converted back to their state space equivalents, or alternatively the information space prediction can be used.\nFixed-lag smoother.\nThe optimal fixed-lag smoother provides the optimal estimate of formula_155 for a given fixed-lag formula_156 using the measurements from formula_157 to formula_48. It can be derived using the previous theory via an augmented state, and the main equation of the filter is the following:\nwhere:\nIf the estimation error covariance is defined so that\nthen we have that the improvement on the estimation of formula_170 is given by:\nFixed-interval smoothers.\nThe optimal fixed-interval smoother provides the optimal estimate of formula_172 (formula_173) using the measurements from a fixed interval formula_157 to formula_175. This is also called \"Kalman Smoothing\". There are several smoothing algorithms in common use.\nRauch–Tung–Striebel.\nThe Rauch–Tung–Striebel (RTS) smoother is an efficient two-pass algorithm for fixed interval smoothing.\nThe forward pass is the same as the regular Kalman filter algorithm. These \"filtered\" a-priori and a-posteriori state estimates formula_176, formula_67 and covariances formula_178, formula_99 are saved for use in the backward pass (for retrodiction).\nIn the backward pass, we compute the \"smoothed\" state estimates formula_180 and covariances formula_181. 
We start at the last time step and proceed backward in time using the following recursive equations:\nwhere\nformula_184 is the a-posteriori state estimate of timestep formula_57 and formula_186 is the a-priori state estimate of timestep formula_187. The same notation applies to the covariance.\nModified Bryson–Frazier smoother.\nAn alternative to the RTS algorithm is the modified Bryson–Frazier (MBF) fixed interval smoother developed by Bierman. This also uses a backward pass that processes data saved from the Kalman filter forward pass. The equations for the backward pass involve the recursive\ncomputation of data which are used at each observation time to compute the smoothed state and covariance.\nThe recursive equations are\nwhere formula_189 is the residual covariance and formula_190. The smoothed state and covariance can then be found by substitution in the equations\nor\nAn important advantage of the MBF is that it does not require finding the inverse of the covariance matrix.\nMinimum-variance smoother.\nThe minimum-variance smoother can attain the best-possible error performance, provided that the models are linear, their parameters and the noise statistics are known precisely. This smoother is a time-varying state-space generalization of the optimal non-causal Wiener filter.\nThe smoother calculations are done in two passes. The forward calculations involve a one-step-ahead predictor and are given by\nThe above system is known as the inverse Wiener-Hopf factor. The backward recursion is the adjoint of the above forward system. The result of the backward pass formula_194 may be calculated by operating the forward equations on the time-reversed formula_195 and time reversing the result. In the case of output estimation, the smoothed estimate is given by\nTaking the causal part of this minimum-variance smoother yields\nwhich is identical to the minimum-variance Kalman filter. The above solutions minimize the variance of the output estimation error. 
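The RTS forward and backward passes described above can be sketched as follows, reusing the hypothetical truck model with assumed parameters (Δt = 1, σa = 0.5, σz = 10):

```python
import numpy as np

dt, sigma_a, sigma_z = 1.0, 0.5, 10.0       # assumed illustrative values
F = np.array([[1.0, dt], [0.0, 1.0]])
G = np.array([[0.5 * dt**2], [dt]])
Q = G @ G.T * sigma_a**2
H = np.array([[1.0, 0.0]])
R = np.array([[sigma_z**2]])

def rts_smooth(zs, x0, P0):
    # Forward pass: store a-priori (pred) and a-posteriori (filt) quantities
    xp, Pp, xf, Pf = [], [], [], []
    x, P = x0, P0
    for z in zs:
        x_pred, P_pred = F @ x, F @ P @ F.T + Q
        S = H @ P_pred @ H.T + R
        K = P_pred @ H.T @ np.linalg.inv(S)
        x = x_pred + K @ (z - H @ x_pred)
        P = (np.eye(2) - K @ H) @ P_pred
        xp.append(x_pred); Pp.append(P_pred); xf.append(x); Pf.append(P)
    # Backward pass (retrodiction): start at the last step and recurse backward
    xs, Ps = [xf[-1]], [Pf[-1]]
    for k in range(len(zs) - 2, -1, -1):
        C = Pf[k] @ F.T @ np.linalg.inv(Pp[k + 1])     # smoother gain
        xs.insert(0, xf[k] + C @ (xs[0] - xp[k + 1]))
        Ps.insert(0, Pf[k] + C @ (Ps[0] - Pp[k + 1]) @ C.T)
    return xs, Ps, xf, Pf

rng = np.random.default_rng(2)
zs = [np.array([[v]]) for v in rng.normal(0.0, 5.0, size=30)]
xs, Ps, xf, Pf = rts_smooth(zs, np.zeros((2, 1)), np.eye(2))
```

As expected, the smoothed estimate at the final step coincides with the filtered one, and smoothing never increases the estimate covariance.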
Note that the Rauch–Tung–Striebel smoother derivation assumes that the underlying distributions are Gaussian, whereas the minimum-variance solutions do not. Optimal smoothers for state estimation and input estimation can be constructed similarly.
A continuous-time version of the above smoother is described in the literature.
Expectation–maximization algorithms may be employed to calculate approximate maximum likelihood estimates of unknown state-space parameters within minimum-variance filters and smoothers. Often uncertainties remain within problem assumptions. A smoother that accommodates uncertainties can be designed by adding a positive-definite term to the Riccati equation.
In cases where the models are nonlinear, step-wise linearizations may be used within the minimum-variance filter and smoother recursions (extended Kalman filtering).
Frequency-weighted Kalman filters.
Pioneering research on the perception of sounds at different frequencies was conducted by Fletcher and Munson in the 1930s. Their work led to a standard way of weighting measured sound levels within investigations of industrial noise and hearing loss. Frequency weightings have since been used within filter and controller designs to manage performance within bands of interest.
Typically, a frequency-shaping function is used to weight the average power of the error spectral density in a specified frequency band. Let formula_198 denote the output estimation error exhibited by a conventional Kalman filter. Also, let formula_199 denote a causal frequency-weighting transfer function. The optimum solution which minimizes the variance of formula_200 arises by simply constructing formula_201.
The design of formula_199 remains an open question. One way of proceeding is to identify a system which generates the estimation error and set formula_199 equal to the inverse of that system. This procedure may be iterated to obtain mean-square error improvement at the cost of increased filter order.
The same technique can be applied to smoothers.
Nonlinear filters.
The basic Kalman filter is limited to a linear assumption. More complex systems, however, can be nonlinear. The nonlinearity can be associated with the process model, with the observation model, or with both.
The most common variants of Kalman filters for nonlinear systems are the extended Kalman filter and the unscented Kalman filter. Which filter is most suitable depends on the nonlinearity indices of the process and observation models.
Extended Kalman filter.
In the extended Kalman filter (EKF), the state transition and observation models need not be linear functions of the state but may instead be nonlinear functions. These functions must be differentiable.
The function f can be used to compute the predicted state from the previous estimate, and similarly the function h can be used to compute the predicted measurement from the predicted state. However, f and h cannot be applied to the covariance directly. Instead, a matrix of partial derivatives (the Jacobian) is computed.
At each timestep the Jacobian is evaluated with the current predicted states. These matrices can be used in the Kalman filter equations. This process essentially linearizes the nonlinear function around the current estimate.
Unscented Kalman filter.
When the state transition and observation models, that is, the predict and update functions formula_205 and formula_206, are highly nonlinear, the extended Kalman filter can give particularly poor performance. This is because the covariance is propagated through linearization of the underlying nonlinear model. The unscented Kalman filter (UKF) uses a deterministic sampling technique known as the unscented transformation (UT) to pick a minimal set of sample points (called sigma points) around the mean. The sigma points are then propagated through the nonlinear functions, from which a new mean and covariance estimate are then formed.
The resulting filter depends on how the transformed statistics of the UT are calculated and which set of sigma points are used. It should be remarked that it is always possible to construct new UKFs in a consistent way. For certain systems, the resulting UKF more accurately estimates the true mean and covariance. This can be verified with Monte Carlo sampling or Taylor series expansion of the posterior statistics. In addition, this technique removes the requirement to explicitly calculate Jacobians, which for complex functions can be a difficult task in itself (i.e., requiring complicated derivatives if done analytically or being computationally costly if done numerically), if not impossible (if those functions are not differentiable).\nSigma points.\nFor a random vector formula_207, sigma points are any set of vectors\nattributed with\nA simple choice of sigma points and weights for formula_216 in the UKF algorithm is\nwhere formula_218 is the mean estimate of formula_216. The vector formula_220 is the \"j\"th column of formula_221 where formula_222. Typically, formula_221 is obtained via Cholesky decomposition of formula_224. With some care the filter equations can be expressed in such a way that formula_221 is evaluated directly without intermediate calculations of formula_224. This is referred to as the \"square-root unscented Kalman filter\".\nThe weight of the mean value, formula_227, can be chosen arbitrarily.\nAnother popular parameterization (which generalizes the above) is \nformula_229 and formula_230 control the spread of the sigma points. formula_231 is related to the distribution of formula_232.\nAppropriate values depend on the problem at hand, but a typical recommendation is formula_233, formula_234, and formula_235. However, a larger value of formula_229 (e.g., formula_237) may be beneficial in order to better capture the spread of the distribution and possible nonlinearities. 
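The sigma-point construction and weights described above can be sketched as follows, using the typical recommendation mentioned in the text (α = 10⁻³, β = 2, κ = 0) and an arbitrary illustrative mean and covariance; by construction, the weighted empirical moments of the sigma points recover the original mean and covariance exactly:

```python
import numpy as np

def sigma_points(mu, P, alpha=1e-3, beta=2.0, kappa=0.0):
    """Generate 2n+1 sigma points and their first/second-order weights."""
    n = mu.size
    lam = alpha**2 * (n + kappa) - n
    L = np.linalg.cholesky((n + lam) * P)      # scaled matrix square root
    pts = [mu] + [mu + L[:, j] for j in range(n)] + [mu - L[:, j] for j in range(n)]
    Wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))   # first-order weights
    Wc = Wm.copy()                                   # second-order weights
    Wm[0] = lam / (n + lam)
    Wc[0] = Wm[0] + 1 - alpha**2 + beta
    return np.array(pts), Wm, Wc

# Arbitrary illustrative mean and covariance
mu = np.array([1.0, -2.0])
P = np.array([[2.0, 0.4], [0.4, 1.0]])
pts, Wm, Wc = sigma_points(mu, P)

# Weighted empirical moments reproduce the inputs (no sampling error)
mean = Wm @ pts
cov = sum(w * np.outer(p - mean, p - mean) for w, p in zip(Wc, pts))
```

In a UKF, the points would next be pushed through the nonlinear transition or measurement function and the same weighted moments formed from the transformed points.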
If the true distribution of formula_232 is Gaussian, formula_235 is optimal.\nPredict.\nAs with the EKF, the UKF prediction can be used independently from the UKF update, in combination with a linear (or indeed EKF) update, or vice versa.\nGiven estimates of the mean and covariance, formula_240 and formula_224, one obtains formula_242 sigma points as described in the section above. The sigma points are propagated through the transition function \"f\".\nThe propagated sigma points are weighed to produce the predicted mean and covariance. \nwhere formula_245 are the first-order weights of the original sigma points, and formula_246 are the second-order weights. The matrix formula_247 is the covariance of the transition noise, formula_248.\nUpdate.\nGiven prediction estimates formula_176 and formula_178, a new set of formula_251 sigma points formula_252 with corresponding first-order weights formula_253 and second-order weights formula_254 is calculated. These sigma points are transformed through the measurement function formula_206.\nThen the empirical mean and covariance of the transformed points are calculated.\nwhere formula_89 is the covariance matrix of the observation noise, formula_259. Additionally, the cross covariance matrix is also needed\nThe Kalman gain is\nThe updated mean and covariance estimates are\nDiscriminative Kalman filter.\nWhen the observation model formula_263 is highly non-linear and/or non-Gaussian, it may prove advantageous to apply Bayes' rule and estimate\nwhere formula_265 for nonlinear functions formula_266. This replaces the generative specification of the standard Kalman filter with a discriminative model for the latent states given observations.\nUnder a stationary state model\nwhere formula_268, if \nthen given a new observation formula_48, it follows that\nwhere\nNote that this approximation requires formula_273 to be positive-definite; in the case that it is not, \nis used instead. 
Such an approach proves particularly useful when the dimensionality of the observations is much greater than that of the latent states, and it can be used to build filters that are particularly robust to nonstationarities in the observation model.
Adaptive Kalman filter.
Adaptive Kalman filters allow adaptation to process dynamics which are not modeled in the process model formula_275, which happens, for example, in the context of a maneuvering target when a constant-velocity (reduced order) Kalman filter is employed for tracking.
Kalman–Bucy filter.
Kalman–Bucy filtering (named for Richard Snowden Bucy) is a continuous-time version of Kalman filtering.
It is based on the state space model
where formula_277 and formula_278 represent the intensities (or, more accurately, the power spectral density (PSD) matrices) of the two white-noise terms formula_279 and formula_280, respectively.
The filter consists of two differential equations, one for the state estimate and one for the covariance:
where the Kalman gain is given by
Note that in this expression for formula_283 the covariance of the observation noise formula_278 represents at the same time the covariance of the prediction error (or innovation) formula_285; these covariances are equal only in the case of continuous time.
The distinction between the prediction and update steps of discrete-time Kalman filtering does not exist in continuous time.
The second differential equation, for the covariance, is an example of a Riccati equation. Nonlinear generalizations to Kalman–Bucy filters include the continuous-time extended Kalman filter.
Hybrid Kalman filter.
Most physical systems are represented as continuous-time models, while discrete-time measurements are made frequently for state estimation via a digital processor.
Therefore, the system model and measurement model are given by
where
Predict.
The prediction equations are derived from those of the continuous-time Kalman filter without update from measurements, i.e., formula_290. The predicted state and covariance are calculated respectively by solving a set of differential equations with the initial value equal to the estimate at the previous step.
For the case of linear time-invariant systems, the continuous-time dynamics can be exactly discretized into a discrete-time system using matrix exponentials.
Update.
The update equations are identical to those of the discrete-time Kalman filter.
Variants for the recovery of sparse signals.
The traditional Kalman filter has also been employed for the recovery of sparse, possibly dynamic, signals from noisy observations. Recent works utilize notions from the theory of compressed sensing/sampling, such as the restricted isometry property and related probabilistic recovery arguments, for sequentially estimating the sparse state in intrinsically low-dimensional systems.
Relation to Gaussian processes.
Since linear Gaussian state-space models lead to Gaussian processes, Kalman filters can be viewed as sequential solvers for Gaussian process regression.
Domain adaptation.
Domain adaptation is a field associated with machine learning and transfer learning. This scenario arises when we aim to learn, from a source data distribution, a well-performing model on a different (but related) target data distribution. For instance, one of the tasks of the common spam filtering problem consists of adapting a model from one user (the source distribution) to a new user who receives significantly different emails (the target distribution).
Domain adaptation has also been shown to be beneficial for learning from unrelated sources.
Note that when more than one source distribution is available, the problem is referred to as multi-source domain adaptation.
Overview.
Domain adaptation is the ability to apply an algorithm trained in one or more "source domains" to a different (but related) "target domain". Domain adaptation is a subcategory of transfer learning. In domain adaptation, the source and target domains all have the same feature space (but different distributions); in contrast, transfer learning includes cases where the target domain's feature space is different from the source feature space or spaces.
Domain shift.
A domain shift, or distributional shift, is a change in the data distribution between an algorithm's training dataset and a dataset it encounters when deployed. These domain shifts are common in practical applications of artificial intelligence. Conventional machine-learning algorithms often adapt poorly to domain shifts. The modern machine-learning community has many different strategies to attempt to gain better domain adaptation.
Examples.
Other applications include Wi-Fi localization detection and many aspects of computer vision.
Formalization.
Let formula_1 be the input space (or description space) and let formula_2 be the output space (or label space). The objective of a machine learning algorithm is to learn a mathematical model (a hypothesis) formula_3 able to attach a label from formula_2 to an example from formula_1. This model is learned from a learning sample formula_6.
Usually in supervised learning (without domain adaptation), we suppose that the examples formula_7 are drawn i.i.d. from a distribution formula_8 of support formula_9 (unknown and fixed).
The objective is then to learn formula_10 (from formula_11) such that it commits the least error possible for labelling new examples coming from the distribution formula_8.\nThe main difference between supervised learning and domain adaptation is that in the latter situation we study two different (but related) distributions formula_8 and formula_14 on formula_9. The domain adaptation task then consists of the transfer of knowledge from the source domain formula_8 to the target one formula_14. The goal is then to learn formula_10 (from labeled or unlabelled samples coming from the two domains) such that it commits as little error as possible on the target domain formula_14.\nThe major issue is the following: if a model is learned from a source domain, what is its capacity to correctly label data coming from the target domain?\nThe different types of domain adaptation.\nThere are several contexts of domain adaptation. They differ in the information considered for the target task.\nFour algorithmic principles.\nReweighting algorithms.\nThe objective is to reweight the source labeled sample such that it \"looks like\" the target sample (in terms of the error measure considered).\nIterative algorithms.\nA method for adapting consists in iteratively \"auto-labeling\" the target examples. The principle is simple:\nNote that there exist other iterative approaches, but they usually need target labeled examples.\nSearch of a common representation space.\nThe goal is to find or construct a common representation space for the two domains. 
The objective is to obtain a space in which the domains are close to each other while maintaining good performance on the source labeling task.\nThis can be achieved through the use of adversarial machine learning techniques, where feature representations from samples in different domains are encouraged to be indistinguishable.\nHierarchical Bayesian Model.\nThe goal is to construct a Bayesian hierarchical model formula_22, which is essentially a factorization model for counts formula_23, to derive domain-dependent latent representations allowing both domain-specific and globally shared latent factors.\nSoftware.\nSeveral compilations of domain adaptation and transfer learning algorithms have been implemented over the past decades:", "Automation-Control": 0.7800170779, "Qwen2": "Yes"} {"id": "14439274", "revid": "40498013", "url": "https://en.wikipedia.org/wiki?curid=14439274", "title": "Petri Net Markup Language", "text": "Petri Net Markup Language (PNML) is an interchange format aimed at enabling Petri net tools to exchange Petri net models. PNML is an XML-based syntax for high-level Petri nets, which is being designed as a standard interchange format for Petri net tools.\nIt is intended to become the second part of the ISO standard ISO/IEC 15909.\nThe PNML grammar is publicly available on its reference site.\nThe first part of this international standard provides the mathematical definitions for high-level Petri nets.\nThese definitions are called the semantic model.\nIt also provides the definition of the graphical form, known as the High-level Petri Net Graph (HLPNG), and its mapping to the semantic model.\nAs of December 2004, the first part is an international standard.", "Automation-Control": 0.8093780875, "Qwen2": "Yes"} {"id": "3152055", "revid": "16809467", "url": "https://en.wikipedia.org/wiki?curid=3152055", "title": "Frank–Wolfe algorithm", "text": "The Frank–Wolfe algorithm is an iterative first-order optimization algorithm for constrained convex optimization.
Also known as the conditional gradient method, reduced gradient algorithm and the convex combination algorithm, the method was originally proposed by Marguerite Frank and Philip Wolfe in 1956. In each iteration, the Frank–Wolfe algorithm considers a linear approximation of the objective function, and moves towards a minimizer of this linear function (taken over the same domain).\nProblem statement.\nSuppose formula_1 is a compact convex set in a vector space and formula_2 is a convex, differentiable real-valued function. The Frank–Wolfe algorithm solves the optimization problem\nProperties.\nWhile competing methods such as gradient descent for constrained optimization require a projection step back to the feasible set in each iteration, the Frank–Wolfe algorithm only needs the solution of a linear problem over the same set in each iteration, and automatically stays in the feasible set.\nThe convergence of the Frank–Wolfe algorithm is sublinear in general: the error in the objective function to the optimum is formula_20 after \"k\" iterations, so long as the gradient is Lipschitz continuous with respect to some norm. 
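The linearize-and-step iteration described above can be sketched in a few lines. The following is a minimal illustrative sketch, not a reference implementation: the function name `frank_wolfe`, the choice of the probability simplex as the feasible set, and the quadratic test objective are all assumptions for the example; the step size 2/(k+2) is the standard diminishing rule.

```python
import numpy as np

def frank_wolfe(grad, lmo, x0, iters=2000):
    # Generic Frank-Wolfe loop: linearize the objective at x, ask the
    # linear minimization oracle (LMO) for a minimizer s of that linear
    # approximation over the feasible set, then step toward s.
    x = np.array(x0, dtype=float)
    gap = np.inf
    for k in range(iters):
        g = grad(x)
        s = lmo(g)                      # argmin of <g, s> over the feasible set
        gap = float(g @ (x - s))        # duality gap: quality certificate
        x += (2.0 / (k + 2)) * (s - x)  # convex combination stays feasible
    return x, gap

# Illustration: minimize ||x - b||^2 over the probability simplex.
# The LMO over the simplex is cheap: the vertex e_i with smallest g_i.
b = np.array([0.2, 0.5, 0.3])
x_opt, gap = frank_wolfe(
    grad=lambda x: 2.0 * (x - b),
    lmo=lambda g: np.eye(len(g))[np.argmin(g)],
    x0=np.array([1.0, 0.0, 0.0]),
)
```

Because every update forms a convex combination with a point of the feasible set, no projection step is ever needed, and the quantity `gap` computed along the way is a nonnegative upper bound on the suboptimality of the current iterate.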
The same convergence rate can also be shown if the sub-problems are only solved approximately.\nThe iterates of the algorithm can always be represented as a sparse convex combination of the extreme points of the feasible set, which has contributed to the popularity of the algorithm for sparse greedy optimization in machine learning and signal processing problems, as well as, for example, the optimization of minimum-cost flows in transportation networks.\nIf the feasible set is given by a set of linear constraints, then the subproblem to be solved in each iteration becomes a linear program.\nWhile the worst-case convergence rate with formula_20 cannot be improved in general, faster convergence can be obtained for special problem classes, such as some strongly convex problems.\nLower bounds on the solution value, and primal-dual analysis.\nSince formula_11 is convex, for any two points formula_23 we have:\nThis also holds for the (unknown) optimal solution formula_25. That is, formula_26. The best lower bound with respect to a given point formula_27 is given by\nThe latter optimization problem is solved in every iteration of the Frank–Wolfe algorithm; therefore, the solution formula_8 of the direction-finding subproblem of the formula_30-th iteration can be used to determine increasing lower bounds formula_31 during each iteration by setting formula_32 and\nSuch lower bounds on the unknown optimal value are important in practice because they can be used as a stopping criterion, and they give an efficient certificate of the approximation quality in every iteration, since always formula_34.\nIt has been shown that this corresponding duality gap, that is, the difference between formula_35 and the lower bound formula_31, decreases with the same convergence rate, i.e.,\nformula_37", "Automation-Control": 0.9335990548, "Qwen2": "Yes"} {"id": "554087", "revid": "7903804", "url": "https://en.wikipedia.org/wiki?curid=554087", "title": "Causal system", "text": "In control theory, a causal
system (also known as a physical or nonanticipative system) is a system where the output depends on past and current inputs but not future inputs—i.e., the output formula_1 depends only on the input formula_2 for values of formula_3.\nThe idea that the output of a function at any time depends only on past and present values of input is defined by the property commonly referred to as causality. A system that has \"some\" dependence on input values from the future (in addition to possible dependence on past or current input values) is termed a non-causal or acausal system, and a system that depends \"solely\" on future input values is an anticausal system. Note that some authors have defined an anticausal system as one that depends solely on future \"and present\" input values or, more simply, as a system that does not depend on past input values.\nClassically, nature or physical reality has been considered to be a causal system. Physics involving special relativity or general relativity requires more careful definitions of causality, as described elaborately in Causality (physics).\nThe causality of systems also plays an important role in digital signal processing, where filters are constructed so that they are causal, sometimes by altering a non-causal formulation to remove the lack of causality so that it is realizable. For more information, see causal filter.\nFor a causal system, the impulse response of the system must use only the present and past values of the input to determine the output. This requirement is a necessary and sufficient condition for a system to be causal, regardless of linearity. Note that similar rules apply to either discrete or continuous cases.
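For discrete-time LTI systems, the impulse-response condition just stated is straightforward to verify numerically: the system is causal exactly when the impulse response is zero at every negative time index. A minimal sketch, with an assumed helper name and illustrative example systems:

```python
import numpy as np

def is_causal(h, n):
    # Discrete-time causality test: the impulse response h must
    # vanish at every time index n < 0.
    h, n = np.asarray(h), np.asarray(n)
    return bool(np.all(h[n < 0] == 0))

n = np.arange(-3, 4)                 # time indices -3 .. 3
delay = (n == 1).astype(float)       # y[k] = x[k-1]: depends only on the past
predictor = (n == -1).astype(float)  # y[k] = x[k+1]: needs a future input
```

Here `delay` passes the test while `predictor` fails it, matching the definition above: a unit delay uses only past input, whereas a one-step predictor would need the input one sample ahead of the present.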
By this definition of requiring no future input values, systems must be causal to process signals in real time.\nMathematical definitions.\nDefinition 1: A system mapping formula_4 to formula_5 is causal if and only if, for any pair of input signals formula_6, formula_7 and any choice of formula_8, such that\nthe corresponding outputs satisfy\nDefinition 2: Suppose formula_11 is the impulse response of any system formula_12 described by a linear constant coefficient differential equation. The system formula_12 is causal if and only if\notherwise it is non-causal.\nExamples.\nThe following examples are for systems with an input formula_4 and output formula_5.", "Automation-Control": 0.9945821762, "Qwen2": "Yes"} {"id": "1632831", "revid": "46136", "url": "https://en.wikipedia.org/wiki?curid=1632831", "title": "Hurwitz matrix", "text": "In mathematics, a Hurwitz matrix, or Routh–Hurwitz matrix (in engineering, a stability matrix), is a structured real square matrix constructed from the coefficients of a real polynomial.\nHurwitz matrix and the Hurwitz stability criterion.\nNamely, given a real polynomial\nthe formula_2 square matrix\nis called the Hurwitz matrix corresponding to the polynomial formula_4. It was established by Adolf Hurwitz in 1895 that a real polynomial with formula_5 is stable (that is, all its roots have strictly negative real part) if and only if all the leading principal minors of the matrix formula_6 are positive:\nand so on. The minors formula_8 are called the Hurwitz determinants. Similarly, if formula_9, then the polynomial is stable if and only if the principal minors have alternating signs starting with a negative one.\nHurwitz stable matrices.\nIn engineering and stability theory, a square matrix formula_10 is called a stable matrix (or sometimes a Hurwitz matrix) if every eigenvalue of formula_10 has strictly negative real part, that is,\nfor each eigenvalue formula_13.
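The eigenvalue condition just stated translates directly into a numerical check. A minimal sketch, with assumed function and matrix names (the example matrices are illustrative, not from the article):

```python
import numpy as np

def is_hurwitz_stable(A):
    # Hurwitz stable iff every eigenvalue has strictly negative real part.
    return bool(np.all(np.linalg.eigvals(A).real < 0))

# Damped oscillator x'' + 0.5 x' + x = 0 in companion form:
# eigenvalues -0.25 +/- 0.97i, so the matrix is Hurwitz.
A_damped = np.array([[0.0, 1.0], [-1.0, -0.5]])
# Flipping the sign of the damping term moves the eigenvalues to
# real part +0.25, so this matrix is not Hurwitz.
A_unstable = np.array([[0.0, 1.0], [-1.0, 0.5]])
```

For symbolic coefficients one would instead apply the Routh–Hurwitz criterion to the characteristic polynomial, which is exactly what the leading principal minors described above encode.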
formula_10 is also called a stability matrix, because then the differential equation\nis asymptotically stable, that is, formula_16 as formula_17\nIf formula_18 is a (matrix-valued) transfer function, then formula_19 is called Hurwitz if the poles of all elements of formula_19 have negative real part. Note that it is not necessary that formula_21 for a specific argument formula_22 be a Hurwitz matrix — it need not even be square. The connection is that if formula_10 is a Hurwitz matrix, then the dynamical system\nhas a Hurwitz transfer function.\nAny hyperbolic fixed point (or equilibrium point) of a continuous dynamical system is locally asymptotically stable if and only if the Jacobian of the dynamical system is Hurwitz stable at the fixed point.\nThe Hurwitz stability matrix is a crucial part of control theory. A system is \"stable\" if its control matrix is a Hurwitz matrix. The negative real components of the eigenvalues of the matrix represent negative feedback. Similarly, a system is inherently \"unstable\" if any of the eigenvalues have positive real components, representing positive feedback.", "Automation-Control": 0.9959613681, "Qwen2": "Yes"} {"id": "1949447", "revid": "1161861911", "url": "https://en.wikipedia.org/wiki?curid=1949447", "title": "Motion control", "text": "Motion control is a sub-field of automation, encompassing the systems or sub-systems involved in moving parts of machines in a controlled manner. Motion control systems are extensively used in a variety of fields for automation purposes, including precision engineering, micromanufacturing, biotechnology, and nanotechnology. The main components involved typically include a motion controller, an energy amplifier, and one or more prime movers or actuators. Motion control may be open loop or closed loop. In open loop systems, the controller sends a command through the amplifier to the prime mover or actuator, and does not know if the desired motion was actually achieved. 
Typical systems include stepper motor or fan control. For tighter control with more precision, a measuring device may be added to the system (usually near the end motion). When the measurement is converted to a signal that is sent back to the controller, and the controller compensates for any error, the system becomes a closed-loop system.\nTypically, the position or velocity of a machine is controlled using some type of device such as a hydraulic pump, linear actuator, or electric motor, generally a servo. Motion control is an important part of robotics and CNC machine tools; however, in these instances it is more complex than when used with specialized machines, where the kinematics are usually simpler. The latter is often called General Motion Control (GMC). Motion control is widely used in the packaging, printing, textile, semiconductor production, and assembly industries.\nMotion control encompasses every technology related to the movement of objects. It covers every motion system, from micro-sized systems such as silicon-type micro induction actuators to large-sized systems such as a space platform. Today, however, the focus of motion control is the control technology of motion systems with electric actuators such as DC/AC servo motors. Control of robotic manipulators is also included in the field of motion control because most robotic manipulators are driven by electric servo motors and the key objective is the control of motion.\nOverview.\nThe basic architecture of a motion control system contains:\nThe interface between the motion controller and the drives it controls is critical when coordinated motion is required, as it must provide tight synchronization. Historically, the only open interface was an analog signal, until open interfaces were developed that satisfied the requirements of coordinated motion control, the first being SERCOS in 1991, which has since been enhanced to SERCOS III.
Later interfaces capable of motion control include Ethernet/IP, Profinet IRT, Ethernet Powerlink, and EtherCAT.\nCommon control functions include:", "Automation-Control": 0.9983317256, "Qwen2": "Yes"} {"id": "31062931", "revid": "7903804", "url": "https://en.wikipedia.org/wiki?curid=31062931", "title": "Chebyshev pseudospectral method", "text": "The Chebyshev pseudospectral method for optimal control problems is based on Chebyshev polynomials of the first kind. It is part of the larger theory of pseudospectral optimal control, a term coined by Ross. Unlike the Legendre pseudospectral method, the Chebyshev pseudospectral (PS) method does not immediately offer high-accuracy quadrature solutions. Consequently, two different versions of the method have been proposed: one by Elnagar et al., and another by Fahroo and Ross. The two versions differ in their quadrature techniques. The Fahroo–Ross method is more commonly used today due to the ease in implementation of the Clenshaw–Curtis quadrature technique (in contrast to Elnagar–Kazemi's cell-averaging method). In 2008, Trefethen showed that the Clenshaw–Curtis method was nearly as accurate as Gauss quadrature.\n This breakthrough result opened the door for a covector mapping theorem for Chebyshev PS methods. A complete mathematical theory for Chebyshev PS methods was finally developed in 2009 by Gong, Ross and Fahroo.\nOther Chebyshev methods.\nThe Chebyshev PS method is frequently confused with other Chebyshev methods. 
Prior to the advent of PS methods, many authors proposed using Chebyshev polynomials to solve optimal control problems; however, none of these methods belong to the class of pseudospectral methods.", "Automation-Control": 0.7640796304, "Qwen2": "Yes"} {"id": "18207705", "revid": "28481209", "url": "https://en.wikipedia.org/wiki?curid=18207705", "title": "Aizerman's conjecture", "text": "In nonlinear control, Aizerman's conjecture or the Aizerman problem states that a linear system in feedback with a sector nonlinearity would be stable if the linear system is stable for any linear gain of the sector. This conjecture was proven false but led to (valid) sufficient criteria for absolute stability.\nMathematical statement of Aizerman's conjecture (Aizerman problem).\n\"Consider a system with one scalar nonlinearity\"\n\"where P is a constant n×n-matrix, q, r are constant n-dimensional vectors, ∗ denotes transposition, f(e) is a scalar function, and f(0)=0. Suppose that the nonlinearity f is sector bounded, meaning that for some real\" formula_2 and formula_3 with formula_4, the function formula_5 satisfies\n\"Then Aizerman's conjecture is that the system is stable in the large (i.e. the unique stationary point is a global attractor) if all linear systems with f(e)=ke, k ∈(k1,k2) are asymptotically stable.\"\nThere are counterexamples to Aizerman's conjecture in which the nonlinearity belongs to the sector of linear stability and a unique stable equilibrium coexists with a stable periodic solution, i.e. a hidden oscillation.
However, under stronger assumptions on the system, such as positivity, Aizerman's conjecture is known to hold true.", "Automation-Control": 0.9907389879, "Qwen2": "Yes"} {"id": "12635200", "revid": "43264365", "url": "https://en.wikipedia.org/wiki?curid=12635200", "title": "Lazy linear hybrid automaton", "text": "Lazy linear hybrid automata model the discrete-time behavior of control systems containing finite-precision sensors and actuators interacting with their environment under bounded inertial delays. The model permits only linear flow constraints, but the invariants and guards can be any computable function.\nThis computational model was proposed by Manindra Agrawal and P. S. Thiagarajan. It is more realistic, and also more computationally amenable, than the currently popular modeling paradigm of the linear hybrid automaton.", "Automation-Control": 0.9132928252, "Qwen2": "Yes"} {"id": "29821539", "revid": "28481209", "url": "https://en.wikipedia.org/wiki?curid=29821539", "title": "Cladding (metalworking)", "text": "Cladding is the bonding together of dissimilar metals. It is different from fusion welding or gluing as a method to fasten the metals together. Cladding is often achieved by extruding two metals through a die, as well as by pressing or rolling sheets together under high pressure.\nThe United States Mint uses cladding to manufacture coins from different metals. This allows a cheaper metal to be used as a filler. For example, dimes and quarters struck since 1965 have cores made from pure copper, with a clad layer consisting of 75% copper and 25% nickel added during production. Half dollars struck from 1965 to 1969 for circulation and in 1970 for collectors also incorporated cladding, albeit in the case of those coins, the core was a mixture of 20.9% silver and 79.1% copper, and its clad layer was 80% silver and 20% copper.
Half dollars struck since 1971 are produced identically to the dimes and quarters.\nLaser cladding is an additive manufacturing approach for metal coatings or precise piece restorations using a high-power multi-mode optical fiber laser.\nRoll bonding.\nIn roll bonding, two or more layers of different metals are thoroughly cleaned and passed through a pair of rollers under sufficient pressure to bond the layers. The pressure is high enough to deform the metals and reduce the combined thickness of the clad material. Heat may be applied, especially when metals are not ductile enough. As an example of application, bonding of the sheets can be controlled by painting a pattern on one sheet; only the bare metal surfaces bond, and the un-bonded portion can be inflated if the sheet is heated and the coating vaporizes. This is used to make heat exchangers for refrigeration equipment.\nExplosive welding.\nIn explosive welding, the pressure to bond the two layers is provided by detonation of a sheet of chemical explosive. No heat-affected zone is produced in the bond between metals. The explosion propagates across the sheet, which tends to expel impurities and oxides from between the sheets. Pieces up to 4 × 16 metres can be manufactured. The process is useful for cladding metal sheets with a corrosion-resistant layer.\nLaser cladding.\n\"Laser cladding\" is a method of depositing material by which a powdered or wire feedstock material is melted and consolidated by use of a laser in order to coat part of a substrate or fabricate a near-net-shape part (additive manufacturing technology).\nIt is often used to improve mechanical properties or increase corrosion resistance, repair worn-out parts, and fabricate metal matrix composites. Surface material may be laser cladded directly onto a highly stressed component, e.g. to make a self-lubricating surface.
However, such a modification requires further industrialization of the cladding process to adapt it for efficient mass production. The detailed effects of surface topography, the composition of the laser-cladded material, and the composition of the additive package in the lubricant on tribological properties and performance are best studied with tribometric testing.\nProcess.\nA laser is used to melt metallic powder dropped on a substrate to be coated. The melted metal forms a pool on the substrate; moving the substrate allows the melt pool to solidify into a track of solid metal. Some processes instead move the laser and powder nozzle assembly over a stationary substrate to produce solidified tracks. The motion of the substrate is guided by a CAM system which interpolates solid objects into a set of tracks, thus producing the desired part at the end of the trajectory.\nAutomatic laser cladding machines are the subject of ongoing research and development. Many of the process parameters must be set manually, such as laser power, laser focal point, substrate velocity, and powder injection rate, and thus require the attention of a specialized technician to ensure proper results. By using sensors to monitor the deposited track height and width, metallurgical properties, and temperature, constant observation by a technician is no longer required to produce a final product.
Further research has been directed to forward processing where system parameters are developed around specific metallurgical properties for user defined applications (such as microstructure, internal stresses, dilution zone gradients, and clad contact angle).", "Automation-Control": 0.9779828787, "Qwen2": "Yes"} {"id": "29837448", "revid": "7903804", "url": "https://en.wikipedia.org/wiki?curid=29837448", "title": "Fluid dynamic gauge", "text": "A fluid dynamic gauge (FDG) is a measurement technique used to study the behaviour of soft deposit layers in a liquid environment. It employs fluid mechanics to determine the thickness of the layer, and can also be used to obtain a measure of its strength. It was inspired by the technique of pneumatic gauging, which relies on a flow of air rather than the process liquid. Fluid dynamic gauging can be conducted as an in-line measuring technique, but is more commonly used as a research tool. \nThe technique was originally developed to measure the buildup or removal of the fouling layers commonly encountered in the process industry (such as in the heat treatment of dairy products). More recently, it has been applied to study cake buildup on porous membrane surfaces. 
Scanning versions can determine the topology of a solid/soft-solid surface immersed in a liquid environment, in an analogous manner to an atomic force microscope, but exploiting the principles of fluid mechanics.\nKey features of the technique are that it can study soft deposit layers without touching them, relies on relatively simple operating principles, can be used in a completely opaque liquid, and does not rely on knowledge of the fluid or deposit properties.", "Automation-Control": 0.7337864637, "Qwen2": "Yes"} {"id": "7968480", "revid": "7611264", "url": "https://en.wikipedia.org/wiki?curid=7968480", "title": "Clutch control", "text": "Clutch control refers to the act of controlling the speed of a vehicle with a manual transmission by partially engaging the clutch plate, using the clutch pedal instead of (or in conjunction with) the accelerator pedal. The purpose of a clutch is in part to allow such control; in particular, a clutch provides transfer of torque between shafts spinning at different speeds. In the extreme, clutch control is used in performance driving, such as starting from a dead stop with the engine producing maximum torque at high RPM.\nOverview.\nWith the clutch pedal completely pressed or a motorcycle's lever pulled entirely towards the driver, there is no direct link between the engine and the driveshaft, so no power can pass from the engine to the driveshaft and wheels. With the pedal entirely released, there is full contact between the engine and the driveshaft, via the clutch plate, which means that the engine can apply power directly to the driveshaft. However, it is possible to have the clutch plate partially engaged, allowing the clutch to slip. 
As a result, only a fraction of the power from the engine reaches the driveshaft, which is commonly known as half clutch.\nBenefits.\nThere are benefits to the use of clutch control in specific circumstances:\nLow gear and low speed.\nWhen a car is in first gear, small variations in engine speed translate to large changes in acceleration and engine braking. However, with a combination of clutch control and careful use of engine speed, a much smoother ride can be achieved by allowing the clutch to slip. Variations in engine revs are not immediately translated into changes in drive shaft rotation speed, but rather the friction on the clutch plate allows the drive shaft to gradually equalize with the speed of the engine.\nMoving off from a standstill\nAt a certain point while gently lifting the clutch, the car will begin to move as the clutch starts to slip, referred to as the biting point. Here, the accelerator pedal should be gently depressed to slowly increase the car's speed. Once the car reaches a suitable speed, the clutch can be fully engaged and speed can then be controlled either by varying the engine speed or by partially disengaging the clutch again if necessary.\nThis particular use of clutch control is frequently taught to learning drivers as a way to control acceleration when pulling away from a complete stop or when driving at very slow speeds while minimizing the chance of stalling the engine.\nCreeping\nCreeping generally refers to moving slowly, and is generally analogous to a parking situation or very slow moving traffic. Creeping is usually done in either reverse or first gear, like when reversing out of a parking space or pulling into a driveway. While moving at low speeds like these, it is often not necessary to use the accelerator pedal as an engine's idle speed should provide enough power to do so, given a driver is skilled with clutch control. 
Revving the engine above about 2000 RPM while moving at low speed with the clutch not fully engaged causes severe wear to the clutch material, greatly reducing its usable lifespan. This is mitigated in most motorcycles by the use of a wet clutch.\nUphill start.\nWhen pulling away on an uphill slope, the chance of stalling the engine is greater, and so it can be beneficial to engage the clutch more slowly than normal while revving higher than normal.\nAdverse road conditions.\nIn adverse road conditions, notably snow or ice, it is recommended to pull away in as high a gear as possible (usually second) to minimize torque on the wheels and thereby maintain traction with the road. Pulling away requires progressively slower engagement of the clutch as the gear increases, and in a high gear it is necessary to engage the clutch slowly to avoid the increased risk of stalling the engine or, in the case of adverse weather conditions, spinning the wheels.\nBalancing the clutch.\nNormally, when a vehicle is stationary on an uphill slope it is necessary to use the handbrake in conjunction with clutch control to prevent the vehicle from rolling backwards when pulling away. However, in situations where the vehicle must be stopped briefly, for example in slow-moving traffic, the clutch can be used to balance the uphill force from the engine with the downhill force of gravity. In a few instances this may be useful, but it should generally be avoided, as doing so habitually causes excessive wear on the clutch.\nDeceleration.\nTypically with motorcycles and in motor sport, the clutch is often used to exploit the resistance of the engine spinning at high speed to decelerate the vehicle more quickly, often accompanied by normal braking. This can be achieved by placing the vehicle in a gear that would ordinarily be too low for the current speed and momentum of the vehicle and by partly engaging the clutch.
When this happens, momentum energy from the inertia of the vehicle is taken away to spin the engine as close as possible to its maximum capability. As the vehicle is decelerating, the clutch can be further released to transfer more energy to keep the engine spinning as quickly as possible. This method causes excessive clutch wear, however, and it could cause severe engine damage or wheel lockup if the clutch were to be released suddenly.\nA better method is to downshift to a lower gear that would spin the engine within its RPM limit and use the throttle to \"rev match\" the engine to the road speed before releasing the clutch fully. Effective engine braking is still achieved with little or no excessive clutch wear.\nOnce the clutch is entirely released and the vehicle has decelerated some, this cycle proceeds downwards through the gears to further assist deceleration. If the clutch is controlled improperly while this is being attempted, damage or extra wear to the engine and gears is possible, as well as the risk of wheels locking up and a subsequent loss of proper vehicle control.\nProblems.\nEven normal use of clutch control increases the wear (and decreases the lifespan) of the clutch. Excessive use of clutch control or \"riding the clutch\" will cause further damage.\nProlonged use.\nWhile the use of clutch control at low speed can be used to obtain greater control of acceleration and engine braking, once a car has picked up sufficient speed the clutch should be fully engaged (pedal released).\nExcessive engine revolutions.\nExcessively revving the engine while using clutch control, or keeping the clutch partially engaged while accelerating with the gas pedal, can cause unnecessary damage to the clutch.\nSlipping the clutch.\nSlipping the clutch (sometimes referred to as feathering the clutch) is a term used by automotive enthusiasts to describe when the driver alternately applies and releases the clutch to achieve some movement of the car. 
It's called \"slipping\" because the clutch plate will slip against the flywheel surface when such an action is performed. Slipping the clutch is known to be hard on the clutch surface due to the sliding friction created.\nDrivers can frequently be observed slipping the clutch when they are trying to stay stationary on a hill without using neutral and the brake. They apply the clutch to climb a bit, then release to roll back, then apply again, etc. so that the car stays in about the same place. With enough practice, alternating is no longer needed. Applying the correct amount of clutch pressure and throttle causes just enough force from the engine to counter gravity and keep the vehicle stationary (see balancing the clutch). The alternative to this technique of staying stationary on a hill would be to put the vehicle in neutral and apply the brake.\nSlipping the clutch is a popular term in drag racing culture and is done when launching a car, usually in a drag race. Some contend that slipping the clutch is the best way to launch a front-wheel drive (FWD) car as it prevents torque steering that many FWD cars experience when too much power is put to the front wheels.\nRiding the clutch.\nIn a vehicle with a manual transmission, riding the clutch refers to the practice of needlessly keeping the clutch partially disengaged. This results in the clutch being unable to fully engage with the flywheel and so causes premature wear on the disc and flywheel.\nA common example of riding the clutch is to keep slight continual pressure on the clutch pedal whilst driving, as when a driver habitually rests his/her foot on the clutch pedal instead of on the floorboard or dead pedal. Although this slight pressure is not enough to allow the clutch disc itself to slip, it is enough to keep the release bearing against the release springs. 
This causes the bearing to remain spinning, which leads to premature bearing failure.\nWhen shifting properly, the driver \"shifts\" to another gear and then releases pressure on the clutch pedal to re-engage the engine to the driveshaft. If the pedal is released quickly, a definite lurch can be felt as the engine and driveshaft re-engage and their speeds equalize. However, if the clutch is released slowly the clutch disc will \"slip\" against the flywheel; this friction permits the engine a smoother transition to its new rotation speed. Such routine slippage causes wear on the clutch analogous to the wear-and-tear on a brake pad when stopping. Some amount of wear is unavoidable, but with better clutching/shifting technique it can be minimized by releasing the clutch as close to the correct engine speed for the gear and vehicle speed as possible. When upshifting, this will involve allowing the engine speed to fall. Conversely, when downshifting, increasing the engine speed with the accelerator prior to releasing clutch will result in a smoother transition and minimal clutch wear.\nRiding the clutch occurs when the driver does not fully release the clutch pedal. This results in the clutch disc slipping against the flywheel and some engine power not being transferred to the drive train and wheels. While inefficient, most drivers routinely use this technique effectively when driving in reverse (as fully engaging the reverse gear results in velocity too great for the short distance traveled) or in stop-and-go traffic (as it is easier to control the throttle and acceleration at very slow speeds).\nRiding the clutch should not be confused with \"freewheeling\" or \"coasting\", where the clutch is pressed down fully allowing the car to roll either downhill or from inertia. While this is not damaging to the car, it can be considered a dangerous way to drive since one forgoes the ability to quickly accelerate if needed. 
It is, however, a common practice to roll into a parking space or over speed bumps via momentum.", "Automation-Control": 0.6176228523, "Qwen2": "Yes"} {"id": "4539079", "revid": "7611264", "url": "https://en.wikipedia.org/wiki?curid=4539079", "title": "Underactuation", "text": "Underactuation is a technical term used in robotics and control theory to describe mechanical systems that cannot be commanded to follow arbitrary trajectories in configuration space. This condition can occur for a number of reasons, the simplest of which is when the system has a lower number of actuators than degrees of freedom. In this case, the system is said to be \"trivially underactuated\".\nThe class of underactuated mechanical systems is very rich and includes such diverse members as automobiles, airplanes, and even animals.\nDefinition.\nTo understand the mathematical conditions which lead to underactuation, one must examine the dynamics that govern the systems in question. Newton's laws of motion dictate that the dynamics of mechanical systems are inherently second order. In general, these dynamics can be described by a second order differential equation:\nformula_1\nwhere formula_2 is the position state vector, formula_3 is the vector of control inputs, and formula_4 is time.\nFurthermore, in many cases the dynamics for these systems can be rewritten to be affine in the control inputs:\nformula_5\nWhen expressed in this form, the system is said to be underactuated if:\nformula_6\nWhen this condition is met, there are acceleration directions that cannot be produced no matter what the control vector is.\nNote that formula_7 does not explicitly represent the number of actuators present in the system. Indeed, there may be more actuators than degrees of freedom and the system may still be underactuated. Also worth noting is the dependence of formula_7 on the state formula_9.
That is, there may exist states in which an otherwise fully actuated system becomes underactuated.\nExamples.\nThe classic inverted pendulum is an example of a trivially underactuated system: it has two degrees of freedom (one for its support's motion in the horizontal plane, and one for the angular motion of the pendulum), but only one of them (the cart position) is actuated, and the other is only indirectly controlled. Although naturally extremely unstable, this underactuated system is still controllable.\nA standard automobile is underactuated due to the nonholonomic constraints imposed by the wheels. That is, a car cannot accelerate in a direction perpendicular to the direction the wheels are facing. A similar argument can be made for boats, planes and most other vehicles.", "Automation-Control": 0.7938579321, "Qwen2": "Yes"} {"id": "847122", "revid": "1161046308", "url": "https://en.wikipedia.org/wiki?curid=847122", "title": "OPC Foundation", "text": "The OPC Foundation (Open Platform Communications, formerly Object Linking and Embedding for Process Control) is an industry consortium that creates and maintains standards for open connectivity of industrial automation devices and systems, such as industrial control systems and process control generally. The OPC standards specify the communication of industrial process data, alarms and events, historical data and batch process data between sensors, instruments, controllers, software systems, and notification devices.\nThe OPC Foundation started in 1994, as a task force comprising five industrial automation vendors (Fisher-Rosemount, Rockwell Automation, Opto 22, Intellution, and Intuitive Technology), with the purpose of creating a basic OLE for Process Control specification. OLE is a technology developed by Microsoft Corporation for the MS Windows operating system. The task force released the OPC standard in August 1996. 
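The rank condition for underactuation given in the definition above can be checked numerically. The following is a minimal sketch in Python with NumPy, using the inverted pendulum on a cart from the Examples section as the trivially underactuated case; the specific matrix entries are illustrative assumptions, not values from the article.

```python
import numpy as np

def is_underactuated(G: np.ndarray) -> bool:
    """A system q'' = f(q, q', t) + G(q, q', t) u is underactuated at a
    given state when rank(G) is lower than the number of degrees of
    freedom (the number of rows of G)."""
    dof = G.shape[0]
    return bool(np.linalg.matrix_rank(G) < dof)

# Cart-pole: two degrees of freedom (cart position, pendulum angle) but
# only one actuator (a force on the cart), so G is 2x1 and its rank is
# at most 1 < 2 -- trivially underactuated. Entries are illustrative.
G_cartpole = np.array([[1.0],
                       [0.5]])
print(is_underactuated(G_cartpole))  # True

# A hypothetical fully actuated two-joint arm with a motor at each
# joint: G is 2x2 and (generically) full rank.
G_arm = np.array([[1.0, 0.0],
                  [0.0, 1.0]])
print(is_underactuated(G_arm))  # False
```

Note that, as the article points out, the check is state-dependent: for some systems `G` loses rank only at particular configurations.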
The OPC Foundation was chartered to continue development of interoperability specifications and includes manufacturers and users of devices, instruments, controllers, software, and enterprise systems. \nThe OPC Foundation cooperates with other organizations, such as MTConnect, that share similar missions.\nOPC standards and specification groups.\nThe OPC Foundation enhanced the scope of certification for OPC products to meet the increasing demand for reliable functionality and assured interoperability. Self-certification using the ComplianceTestTool (CTT) and participation in an Interoperability Workshop (IOP) are now supplemented by certification at an independent test facility. Under the test specification, not only is the OPC Data Access (DA2/3) interface tested, but the overall behavior of the product in a real-world environment is also verified. OPC Certification
It automates the calculation of the ideal distribution of cutting patterns to avoid waste. The process involves analyzing the parts (shapes) to be produced at a particular time. Using algorithms, it then determines how to lay these parts out in such a way as to produce the required quantities of parts, while minimizing the amount of raw material (or space) wasted. \nOff-the-shelf nesting software packages address the optimization needs. While some cater only to rectangular nesting, others offer profile or shape nesting, where the parts required can be any odd shape. These irregular parts can be created using popular computer-aided design (CAD) tools. Here, the nesting software may be utilized as the connection between CAD drawings and the cut output. \nMost profile nesting software can read IGES or DXF profile files automatically, while a few work with built-in converters. An important consideration in shape nesting is to verify that the software in question actually performs true profile nesting and not just block nesting (rectangular). In block nesting, an imaginary rectangle is drawn around each shape and the rectangles are then laid side by side, which is not true profile nesting and leaves scope for further waste reduction.\nNesting software must take into account the limitations and features of the material and machining technology in use, such as:\nNesting software may also have to take into account material characteristics, such as:\nMany machine manufacturers offer their own custom nesting software designed to offer ease of use and take full advantage of the features of their specific machines.\nIf a fabricator operates machines from more than one vendor, they may prefer to use an off-the-shelf nesting software package from a third-party vendor.
They then have the potential to run jobs on any available machine, and their staff should not have to learn several different software packages.\nSee also.\nMaterial may be cut using off-line blanking dies, lasers, plasma, punches, shear blades, ultrasonic knives and water jet cutters.", "Automation-Control": 0.7982701063, "Qwen2": "Yes"} {"id": "37551375", "revid": "5662528", "url": "https://en.wikipedia.org/wiki?curid=37551375", "title": "Lowell Corporation", "text": "Lowell Corporation is a manufacturing company based in West Boylston, Massachusetts. The company was originally based in Worcester, Massachusetts and called the Lowell Wrench Company.\nLowell Corporation produces ratchet wrenches and other hand tools used for High Line and Pipeline Utility installation and repair. Lowell also makes ratchets as handles and clutches for inclusion in original industrial and commercial equipment. Through its Porter-Ferguson division, Lowell Corp. also produces portable hydraulic units and repair clamps for the automotive body and frame repair industry.", "Automation-Control": 0.9659776688, "Qwen2": "Yes"} {"id": "365765", "revid": "46064651", "url": "https://en.wikipedia.org/wiki?curid=365765", "title": "Machining", "text": "Machining is a process in which a material (often metal) is cut to a desired final shape and size by a controlled material-removal process. The methods that have this common theme are collectively called subtractive manufacturing, which utilizes machine tools, in contrast to \"additive manufacturing\" (3D printing), which uses controlled addition of material.\nMachining is a part of the manufacture of many metal products, but it can also be used on other materials such as wood, plastic, ceramic, and composite material. A person who specializes in machining is called a machinist. A room, building, or company where machining is done is called a machine shop. 
Much of modern-day machining is carried out by computer numerical control (CNC), in which computers are used to control the movement and operation of mills, lathes, and other cutting machines. This increases efficiency, as the CNC machine runs unmanned, reducing labor costs for machine shops.\nHistory and terminology.\nThe precise meaning of the term \"machining\" has evolved over the past one and a half centuries as technology has advanced. In the 18th century, the word \"machinist\" meant a person who built or repaired machines. This person's work was primarily done by hand, using processes such as the carving of wood and the hand-forging and hand-filing of metal. At the time, millwrights and builders of new kinds of \"engines\" (meaning, more or less, machines of any kind), such as James Watt or John Wilkinson, would fit the definition. The noun \"machine tool\" and the verb \"to machine\" (\"machined, machining\") did not yet exist.\nAround the middle of the 19th century, the latter words were coined as the concepts they described evolved into widespread existence. Therefore, during the Machine Age, \"machining\" referred to (what we today might call) the \"traditional\" machining processes, such as turning, boring, drilling, milling, broaching, sawing, shaping, planing, abrasive cutting, reaming, and tapping. In these \"traditional\" or \"conventional\" machining processes, machine tools, such as lathes, milling machines, drill presses, or others, are used with a sharp cutting tool to remove material to achieve a desired geometry.\nSince the advent of new technologies in the post–World War II era, such as electrical discharge machining, electrochemical machining, electron beam machining, photochemical machining, and ultrasonic machining, the retronym \"conventional machining\" can be used to differentiate those classic technologies from the newer ones.
Currently, \"machining\" without qualification usually implies the traditional machining processes.\nIn the decades of the 2000s and 2010s, as additive manufacturing (AM) evolved beyond its earlier laboratory and rapid prototyping contexts and began to become standard throughout all phases of manufacturing, the term \"subtractive manufacturing\" became common retronymously in logical contrast with AM, covering essentially any removal processes also previously covered by the term \"machining\". The two terms are effectively synonymous, although the long-established usage of the term \"machining\" continues. This is comparable to the idea that evolved because of the proliferation of ways to contact someone (telephone, email, IM, SMS, and so on) but did not entirely replace the earlier terms such as \"call\", \"talk to\", or \"write to\".\nMachining operations.\nThe three principal machining processes are classified as turning, drilling and milling. Other operations falling into miscellaneous categories include shaping, planing, boring, broaching, and sawing.\nAn unfinished workpiece requiring machining must have some material cut away to create a finished product. A finished product would be a workpiece that meets the specifications set out for that workpiece by engineering drawings or blueprints. For example, a workpiece may require a specific outside diameter. A lathe is a machine tool that can create that diameter by rotating a metal workpiece so that a cutting tool can cut metal away, creating a smooth, round surface matching the required diameter and surface finish. A drill can remove the metal in the shape of a cylindrical hole. Other tools that may be used for metal removal are milling machines, saws, and grinding machines. Many of these same techniques are used in woodworking.\nAs a commercial venture, machining is generally performed in a machine shop, which consists of one or more workrooms containing primary machine tools. 
Although a machine shop can be a stand-alone operation, many businesses maintain internal machine shops that support the business's specialized needs.\nMachining requires attention to many details for a workpiece to meet the specifications in the engineering drawings or blueprints. Besides the obvious problems related to correct dimensions, there is the problem of achieving the right finish or surface smoothness on the workpiece. The inferior finish found on the machined surface of a workpiece may be caused by incorrect clamping, a dull tool, or inappropriate presentation of the tool. Frequently, this poor surface finish, known as chatter, is evidenced by an undulating or irregular finish and waves on the machined surfaces of the workpiece.\nOverview of machining technology.\nMachining is any process in which a cutting tool removes small chips of material from the workpiece (the workpiece is often called the \"work\"). Relative motion is required between the tool and the work to perform the operation. This relative motion is achieved in most machining operations using a primary motion called \"cutting speed\" and a secondary motion called \"feed\". The shape of the tool and its penetration into the work surface, combined with these motions, produce the desired shape of the resulting work surface.\nMachining operations.\nThere are many kinds of machining operations, each of which is capable of generating a specific part geometry and surface texture.\nIn turning, a cutting tool with a single cutting edge removes material from a rotating workpiece to generate a cylindrical shape. The primary motion is provided by rotating the workpiece, and the feed motion is achieved by moving the cutting tool slowly in a direction parallel to the workpiece's rotation axis.\nDrilling is used to create a round hole. It is accomplished by a rotating tool that typically has two or four helical cutting edges.
The tool is fed in a direction parallel to its axis of rotation into the workpiece to form the round hole.\nIn boring, a tool with a single bent pointed tip is advanced into a roughly made hole in a spinning workpiece to slightly enlarge the hole and improve its accuracy. It is a fine-finishing operation used in the final stages of product manufacture.\nReaming is one of the sizing operations that removes a small amount of metal from a drilled hole.\nIn milling, a rotating tool with multiple cutting edges is moved slowly relative to the material to generate a plane or straight surface. The direction of the feed motion is perpendicular to the tool's axis of rotation. The rotating milling cutter provides the speed motion. The two primary forms of milling are:\nOther conventional machining operations include shaping, planing, broaching, and sawing. Also, grinding and similar abrasive operations are often included within the category of machining.\nCutting tool.\nA cutting tool has one or more sharp cutting edges and is made of a harder material than the work material. The cutting edge serves to separate the chip from the parent work material. Connected to the cutting edge are the two surfaces of the tool:\nThe rake face, which directs the flow of the newly formed chip, is oriented at a certain angle, called the rake angle \"α.\" It is measured relative to the plane perpendicular to the work surface. The rake angle can be positive or negative. The flank of the tool provides a clearance between the tool and the newly formed work surface, thus protecting the surface from abrasion, which would degrade the finish. This angle between the work and flank surfaces is called the relief angle. There are two basic types of cutting tools:\nA single-point tool has one cutting edge for turning, boring, and planing. During machining, the tool's point penetrates below the work part's original work surface.
The point is sometimes rounded to a certain radius, called the nose radius.\nMultiple cutting-edge tools have more than one cutting edge and usually achieve their motion relative to the work part by rotating. Drilling and milling use rotating multiple-cutting-edge tools. Although the shapes of these tools are different from those of a single-point tool, many elements of tool geometry are similar.\nCutting conditions.\nRelative motion is required between the tool and work to perform a machining operation. The primary motion is performed at a certain cutting speed. In addition, the tool must be moved laterally across the work. This is a much slower motion called the feed. The remaining dimension of the cut is the penetration of the cutting tool below the original work surface, called the depth of cut. Speed, feed, and depth of cut are called the cutting conditions. They form the three dimensions of the machining process, and for certain operations, their product can be used to obtain the material removal rate for the process:\nwhere\nStages in metal cutting.\nMachining operations usually divide into two categories, distinguished by purpose and cutting conditions:\nRoughing cuts are used to remove a large amount of material from the starting work part as rapidly as possible, i.e., with a significant Material Removal Rate (MRR), to produce a shape close to the desired form but leaving some material on the piece for a subsequent finishing operation.\nFinishing cuts complete the part and achieve the final dimension, tolerances, and surface finish. In production machining jobs, one or more roughing cuts are usually performed on the work, followed by one or two finishing cuts. Roughing operations are done at high feeds and depths – feeds of 0.4–1.25 mm/rev (0.015–0.050 in/rev) and depths of 2.5–20 mm (0.100–0.750 in) are typical, but actual values depend on the workpiece materials.
Finishing operations are carried out at low feeds and depths – feeds of 0.0125–0.04 mm/rev (0.0005–0.0015 in/rev) and depths of 0.75–2.0 mm (0.030–0.075 in) are typical. Cutting speeds are lower in roughing than in finishing.\nA cutting fluid is often applied to the machining operation to cool and lubricate the cutting tool. Determining whether a cutting fluid should be used and, if so, choosing the proper cutting fluid is usually included within the scope of the cutting condition.\nToday other forms of metal cutting are becoming increasingly popular. An example of this is water jet cutting. Water jet cutting involves pressurized water over 620 MPa (90 000 psi) and can cut metal to a finished product. This process is called cold cutting, which eliminates the damage caused by a heat-affected zone, as opposed to laser and plasma cutting.\nRelationship of subtractive and additive techniques.\nWith the recent proliferation of additive manufacturing technologies, conventional machining has been retronymously classified, in thought and language, as a subtractive manufacturing method. In narrow contexts, additive and subtractive methods may compete with each other. In the broad context of entire industries, their relationship is complementary. Each method has its advantages over the other. While additive manufacturing methods can produce very intricate prototype designs impossible to replicate by machining, strength and material selection may be limited.", "Automation-Control": 0.9366420507, "Qwen2": "Yes"} {"id": "367107", "revid": "46293081", "url": "https://en.wikipedia.org/wiki?curid=367107", "title": "List of CAx companies", "text": "This is a list of computer-aided technologies (CAx) companies and their software products. Software using computer-aided technologies (CAx) has been produced since the 1970s for a variety of computer platforms.
This software may include applications for computer-aided design (CAD), computer-aided engineering (CAE), computer-aided manufacturing (CAM) and product data management (PDM).\nThe list is far from complete or representative as the CAD business landscape is very dynamic: almost every month new companies appear, old companies go out of business, and companies split and merge. Sometimes some names disappear and reappear again.\nPast CAD Brands.\nAcquired, orphaned, failed or rebranded.\nIn-house CAD software.\nDeveloped by companies for their own use. Some are no longer used as the organizations are now using commercial systems.", "Automation-Control": 0.9941271544, "Qwen2": "Yes"} {"id": "19714020", "revid": "9676078", "url": "https://en.wikipedia.org/wiki?curid=19714020", "title": "Digital test controller", "text": "Digital test controllers are devices (usually computer based) that provide motion control by processing digital signals. Typically a controller has inputs connected to sensors on the device it controls, which measure its current state (for example, the current position) as feedback. The controller processes this signal to produce an output for a hydraulic, electrical, or other type of servomechanism that drives the controlled device, with the aim of matching a control signal.\nA good example is an elevator. The control signal comes from the button with which the passenger selects the desired floor. The elevator's controller compares the floor the elevator is currently on (current position) with the floor selected by the button, and from this comparison derives a signal to control a servo (either hydraulic or electric) that moves the elevator until the right floor is reached.\nIn the past, test controllers were usually analog, but with the rapid developments in digital signal processing and computer technology, test controllers are almost exclusively digital devices.
This offers many advantages, because it allows the user to execute all kinds of additional operations on the digital signals, beyond the standard PID controller. Digital test controllers offered by Moog provide such advantages for this type of system control.", "Automation-Control": 0.603902638, "Qwen2": "Yes"} {"id": "20500066", "revid": "10289486", "url": "https://en.wikipedia.org/wiki?curid=20500066", "title": "Variable structure control", "text": "Variable structure control (VSC) is a form of discontinuous nonlinear control. The method alters the dynamics of a nonlinear system by application of a high-frequency \"switching control\". The state-feedback control law is \"not\" a continuous function of time; it \"switches\" from one smooth condition to another. So the \"structure\" of the control law \"varies\" based on the position of the state trajectory; the method switches from one smooth control law to another, possibly at very high speed (e.g., a countably infinite number of times in a finite time interval). VSC and the associated sliding mode behaviour were first investigated in the early 1950s in the Soviet Union by Emelyanov and several co-researchers.\nThe main mode of VSC operation is sliding mode control (SMC).
The strengths of SMC include:\nThe weaknesses of SMC include:\nHowever, the evolution of VSC is an active area of research.", "Automation-Control": 0.8374348283, "Qwen2": "Yes"} {"id": "71002788", "revid": "36055642", "url": "https://en.wikipedia.org/wiki?curid=71002788", "title": "2022–23 UEFA Europa Conference League qualifying phase and play-off round (Main Path)", "text": "This page summarises the Main Path matches of the 2022–23 UEFA Europa Conference League qualifying phase and play-off round.\nTimes are CEST , as listed by UEFA (local times, if different, are in parentheses).\nFirst qualifying round.\nSummary.\n\n\nMatches.\n\"Ħamrun Spartans won 4–2 on aggregate.\"\n\"Lechia Gdańsk won 6–2 on aggregate.\"\n\"Drita won 3–1 on aggregate.\"\n\"4–4 on aggregate. Paide Linnameeskond won 6–5 on penalties.\"\n\"Milsami Orhei won 2–0 on aggregate.\"\n\"Laçi won 1–0 on aggregate.\"\n\"Liepāja won 3–2 on aggregate.\"\n\"Mura won 4–2 on aggregate.\"\n\"KuPS won 2–0 on aggregate.\"\n\"Ružomberok won 2–0 on aggregate.\"\n\"Budućnost Podgorica won 4–2 on aggregate.\"\n\"Gżira United won 2–1 on aggregate.\"\n\"3–3 on aggregate. B36 Tórshavn won 4–3 on penalties.\"\n\"Olimpija Ljubljana won 3–2 on aggregate.\"\n\"St Joseph's won 1–0 on aggregate.\"\n\"Breiðablik won 5–1 on aggregate.\"\n\"DAC Dunajská Streda won 5–1 on aggregate.\"\n\"Víkingur won 3–1 on aggregate.\"\n\"2–2 on aggregate. Sligo Rovers won 4–3 on penalties.\"\n\"Tre Fiori won 4–1 on aggregate.\"\n\"Dinamo Minsk won 3–2 on aggregate.\"\n\"Tuzla City won 8–0 on aggregate.\"\n\"1–1 on aggregate. Saburtalo Tbilisi won 5–4 on penalties.\"\n\"Shkëndija won 4–2 on aggregate.\"\n\"Petrocub Hîncești won 1–0 on aggregate.\"\n\"Pogoń Szczecin won 4–2 on aggregate.\"\n\"2–2 on aggregate. Newtown won 4–2 on penalties.\"\n\"Crusaders won 4–3 on aggregate.\"\n\"SJK won 4–3 on aggregate.\"\n\"Riga won 4–0 on aggregate.\"\nSecond qualifying round.\nSummary.\n\n\n\nMatches.\n\"5–5 on aggregate. 
Gżira United won 3–1 on penalties.\"\n\"Aris won 7–2 on aggregate.\"\n\"APOEL won 2–0 on aggregate.\"\n\"Fehérvár won 5–3 on aggregate.\"\n\"İstanbul Başakşehir won 2–1 on aggregate.\"\n\"Neftçi Baku won 3–2 on aggregate.\"\n\"Ħamrun Spartans won 2–0 on aggregate.\"\n\"FCSB won 4–3 on aggregate.\"\n\"CSKA Sofia won 4–0 on aggregate.\"\n\"Hapoel Be'er Sheva won 3–1 on aggregate.\"\n\"Maccabi Tel Aviv won 3–0 on aggregate.\"\n\"Universitatea Craiova won 4–1 on aggregate.\"\n\"0–0 on aggregate. Paide Linnameeskond won 5–3 on penalties.\"\n\"Kisvárda won 2–0 on aggregate.\"\n\"Konyaspor won 5–0 on aggregate.\"\n\"3–3 on aggregate. Sepsi Sfântu Gheorghe won 4–2 on penalties.\"\n\"Kyzylzhar won 3–2 on aggregate.\"\n\"Young Boys won 4–0 on aggregate.\"\n\"Rapid Wien won 2–1 on aggregate.\"\n\"Lillestrøm won 6–2 on aggregate.\"\n\"Breiðablik won 3–2 on aggregate.\"\n\"1–1 on aggregate. St Patrick's Athletic won 6–5 on penalties.\"\n\"Slavia Prague won 11–0 on aggregate.\"\n\"Spartak Trnava won 6–2 on aggregate.\"\n\"Viborg won 2–0 on aggregate.\"\n\"DAC Dunajská Streda won 4–0 on aggregate.\"\n\"Brøndby won 5–1 on aggregate.\"\n\"AZ won 5–0 on aggregate.\"\n\"Sligo Rovers won 3–0 on aggregate.\"\n\"Molde won 6–2 on aggregate.\" \n\"Vaduz won 2–1 on aggregate.\"\n\"B36 Tórshavn won 1–0 on aggregate.\"\n\"Riga won 5–1 on aggregate.\"\n\"Basel won 3–1 on aggregate.\"\n\"Antwerp won 2–0 on aggregate.\"\n\"Petrocub Hîncești won 4–1 on aggregate.\"\n\"Čukarički won 8–1 on aggregate.\"\n\"Levski Sofia won 3–1 on aggregate.\"\n\"Vitória de Guimarães won 3–0 on aggregate.\"\n\"Djurgårdens IF won 4–1 on aggregate.\"\n\"AIK won 4–3 on aggregate.\"\n\"Shkëndija won 5–2 on aggregate.\"\n\"Raków Częstochowa won 6–0 on aggregate.\"\n\"KuPS won 6–3 on aggregate.\"\n\"Viking won 2–1 on aggregate.\"\nThird qualifying round.\nSummary.\n\n\n\nMatches.\n\"Raków Częstochowa won 3–0 on aggregate.\"\n\"2–2 on aggregate. 
AIK won 3–2 on penalties.\"\n\"Viking won 5–2 on aggregate.\"\n\"İstanbul Başakşehir won 6–1 on aggregate.\"\n\"Young Boys won 5–0 on aggregate.\"\n\"Anderlecht won 5–0 on aggregate.\"\n\"Viborg won 5–1 on aggregate.\"\n\"Hajduk Split won 3–2 on aggregate.\"\n\"2–2 on aggregate. Basel won 3–1 on penalties.\"\n\"Antwerp won 5–1 on aggregate.\"\n\"CSKA Sofia won 2–1 on aggregate.\"\n\"AZ won 7–1 on aggregate.\"\n\"APOEL won 1–0 on aggregate.\"\n\"FCSB won 2–0 on aggregate.\"\n\"Gil Vicente won 5–1 on aggregate.\"\n\"Wolfsberger AC won 4–0 on aggregate.\"\n\"Maccabi Tel Aviv won 3–2 on aggregate.\"\n\"Molde won 4–2 on aggregate.\"\n\"Rapid Wien won 3–2 on aggregate.\"\n\"Hapoel Be'er Sheva won 5–1 on aggregate.\"\n\"2–2 on aggregate. Ħamrun Spartans won 4–1 on penalties.\"\n\"Twente won 7–2 on aggregate.\"\n\"Universitatea Craiova won 3–1 on aggregate.\"\n\"Vaduz won 5–3 on aggregate.\"\n\"Djurgårdens IF won 6–2 on aggregate.\"\n\"Fehérvár won 7–1 on aggregate.\"\n\"Slavia Prague won 3–1 on aggregate.\"\nPlay-off round.\nSummary.\n\n\n\nMatches.\n\"Basel won 2–1 on aggregate.\"\n\"Vaduz won 2–1 on aggregate.\"\n\"Slavia Prague won 3–2 on aggregate.\"\n\"Djurgårdens IF won 5–3 on aggregate.\"\n\"Nice won 2–1 on aggregate.\"\n\"2–2 on aggregate. Hapoel Be'er Sheva won 4–3 on penalties.\"\n\"İstanbul Başakşehir won 4–2 on aggregate.\"\n\"FCSB won 4–3 on aggregate.\"\n\"Partizan won 7–4 on aggregate.\"\n\"Fiorentina won 2–1 on aggregate.\"\n\"Villarreal won 6–2 on aggregate.\"\n\"1. FC Köln won 4–2 on aggregate.\"\n\"West Ham United won 6–1 on aggregate.\"\n\"1–1 on aggregate. 
Anderlecht won 3–1 on penalties.\"\n\"Slovácko won 4–0 on aggregate.\"\n\"Molde won 4–1 on aggregate.\"\n\"AZ won 6–1 on aggregate.\"", "Automation-Control": 0.7279177904, "Qwen2": "Yes"} {"id": "8980593", "revid": "20543045", "url": "https://en.wikipedia.org/wiki?curid=8980593", "title": "Nonlinear conjugate gradient method", "text": "In numerical optimization, the nonlinear conjugate gradient method generalizes the conjugate gradient method to nonlinear optimization. For a quadratic function formula_1\nthe minimum of formula_3 is obtained when the gradient is 0:\nWhereas linear conjugate gradient seeks a solution to the linear equation \nformula_5, the nonlinear conjugate gradient method is generally \nused to find the local minimum of a nonlinear function \nusing its gradient formula_6 alone. It works when the function is approximately quadratic near the minimum, which is the case when the function is twice differentiable at the minimum and the second derivative is non-singular there.\nGiven a function formula_1 of formula_8 variables to minimize, its gradient formula_6 indicates the direction of maximum increase.\nOne simply starts in the opposite (steepest descent) direction:\nwith an adjustable step length formula_11 and performs a line search in this direction until it reaches the minimum of formula_12:\nAfter this first iteration in the steepest direction formula_15, the following steps constitute one iteration of moving along a subsequent conjugate direction formula_16, where formula_17:\nWith a pure quadratic function the minimum is reached within \"N\" iterations (excepting roundoff error), but a non-quadratic function will make slower progress. Subsequent search directions lose conjugacy requiring the search direction to be reset to the steepest descent direction at least every \"N\" iterations, or sooner if progress stops. However, resetting every iteration turns the method into steepest descent. 
The algorithm stops when it finds the minimum, determined when no progress is made after a direction reset (i.e. in the steepest descent direction), or when some tolerance criterion is reached.\nWithin a linear approximation, the parameters formula_11 and formula_24 are the same as in the\nlinear conjugate gradient method but have been obtained with line searches.\nThe conjugate gradient method can follow narrow (ill-conditioned) valleys, where the steepest descent method slows down and follows a criss-cross pattern.\nFour of the best known formulas for formula_19 are named after their developers:\nThese formulas are equivalent for a quadratic function, but for nonlinear optimization the preferred formula is a matter of heuristics or taste. A popular choice is formula_30, which provides a direction reset automatically.\nAlgorithms based on Newton's method potentially converge much faster. There, both step direction and length are computed from the gradient as the solution of a linear system of equations, with the coefficient matrix being the exact Hessian matrix (for Newton's method proper) or an estimate thereof (in the quasi-Newton methods, where the observed change in the gradient during the iterations is used to update the Hessian estimate). For high-dimensional problems, the exact computation of the Hessian is usually prohibitively expensive, and even its storage can be problematic, requiring formula_31 memory (but see the limited-memory L-BFGS quasi-Newton method).\nThe conjugate gradient method can also be derived using optimal control theory. 
In this accelerated optimization theory, the conjugate gradient method falls out as a nonlinear optimal feedback controller,\nformula_32 for the double integrator system,\nformula_33\nThe quantities formula_34 and formula_35 are variable feedback gains.", "Automation-Control": 0.9460254908, "Qwen2": "Yes"} {"id": "8984619", "revid": "7903804", "url": "https://en.wikipedia.org/wiki?curid=8984619", "title": "Feature-oriented scanning", "text": "Feature-oriented scanning (FOS) is a method of precision measurement of surface topography with a scanning probe microscope in which surface features (objects) are used as reference points for microscope probe attachment. With the FOS method, by passing from one surface feature to another located nearby, the relative distance between the features and the feature neighborhood topographies are measured. This approach makes it possible to scan an intended area of a surface piece by piece and then reconstruct the whole image from the obtained fragments. The method is also known under another name: object-oriented scanning (OOS).\nTopography.\nAny topography element that, in a wide sense, looks like a hill or a pit may be taken as a surface feature. Examples of surface features (objects) are: atoms, interstices, molecules, grains, nanoparticles, clusters, crystallites, quantum dots, nanoislets, pillars, pores, short nanowires, short nanorods, short nanotubes, viruses, bacteria, organelles, cells, etc.\nFOS is designed for high-precision measurement of surface topography (see Fig.) as well as other surface properties and characteristics. Moreover, in comparison with conventional scanning, FOS achieves a higher spatial resolution.
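The scan-by-parts reconstruction idea can be sketched in a few lines of Python. This is an illustrative sketch only; the names and data layout are assumptions, not part of any FOS implementation. Given the measured feature-to-feature displacements, absolute feature positions are recovered by chaining them, and each locally measured neighborhood fragment is then anchored at its feature's position.

```python
# Hypothetical sketch of feature-oriented stitching: each scan fragment is
# recorded relative to a reference feature, and the full map is rebuilt by
# chaining the measured feature-to-feature displacements.

def reconstruct_positions(displacements):
    """Given relative displacements (dx, dy) between consecutive features,
    return absolute feature coordinates with the first feature at (0, 0)."""
    x, y = 0.0, 0.0
    positions = [(x, y)]
    for dx, dy in displacements:
        x, y = x + dx, y + dy
        positions.append((x, y))
    return positions

def place_fragments(positions, fragments):
    """Attach each locally measured topography fragment to its feature's
    absolute position, yielding a globally consistent composite."""
    return [
        {"origin": pos, "heights": frag}
        for pos, frag in zip(positions, fragments)
    ]
```

Because every fragment is measured relative to a nearby feature rather than in absolute scanner coordinates, slow drifts largely cancel out of the relative displacements, which is the intuition behind the drift elimination mentioned below.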
Thanks to a number of techniques embedded in FOS, the distortions caused by thermal drifts and creeps are practically eliminated.\nApplications.\nFOS has the following fields of application: surface metrology, precise probe positioning, automatic surface characterization, automatic surface modification/stimulation, automatic manipulation of nanoobjects, nanotechnological processes of “bottom-up” assembly, coordinated control of analytical and technological probes in multiprobe instruments, control of atomic/molecular assemblers, control of probe nanolithographs, etc.", "Automation-Control": 0.8683314919, "Qwen2": "Yes"} {"id": "44031786", "revid": "44560121", "url": "https://en.wikipedia.org/wiki?curid=44031786", "title": "Classical control theory", "text": "Classical control theory is a branch of control theory that deals with the behavior of dynamical systems with inputs, and how their behavior is modified by feedback, using the Laplace transform as a basic tool to model such systems.\nThe usual objective of control theory is to control a system, often called the \"plant\", so its output follows a desired control signal, called the \"reference\", which may be a fixed or changing value. To do this a \"controller\" is designed, which monitors the output and compares it with the reference. The difference between actual and desired output, called the \"error\" signal, is applied as feedback to the input of the system, to bring the actual output closer to the reference.\nClassical control theory deals with linear time-invariant (LTI) single-input single-output (SISO) systems. The Laplace transform of the input and output signal of such systems can be calculated.
The transfer function relates the Laplace transform of the input and the output.\nFeedback.\nTo overcome the limitations of the open-loop controller, classical control theory introduces feedback. A closed-loop controller uses feedback to control states or outputs of a dynamical system. Its name comes from the information path in the system: process inputs (e.g., voltage applied to an electric motor) have an effect on the process outputs (e.g., speed or torque of the motor), which is measured with sensors and processed by the controller; the result (the control signal) is \"fed back\" as input to the process, closing the loop.\nClosed-loop controllers have the following advantages over open-loop controllers:\nIn some systems, closed-loop and open-loop control are used simultaneously. In such systems, the open-loop control is termed feedforward and serves to further improve reference tracking performance.\nA common closed-loop controller architecture is the PID controller.\nClassical vs modern.\nA physical system can be modeled in the \"time domain\", where the response of a given system is a function of the various inputs, the previous system values, and time. As time progresses, the state of the system and its response change. However, time-domain models of systems frequently involve high-order differential equations, which can become prohibitively difficult for humans to solve by hand and which, in some cases, even modern computer systems cannot solve efficiently.\nTo counteract this problem, classical control theory uses the Laplace transform to change an Ordinary Differential Equation (ODE) in the time domain into a regular algebraic polynomial in the frequency domain.
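As a standard illustration of this change of domain (a textbook first-order example, not taken from this article's figures), consider a first-order ODE with zero initial conditions:

```latex
% First-order system: tau * y'(t) + y(t) = u(t), zero initial conditions.
\tau \dot{y}(t) + y(t) = u(t)
\quad \xrightarrow{\;\mathcal{L}\;} \quad
(\tau s + 1)\, Y(s) = U(s)
\quad \Rightarrow \quad
\frac{Y(s)}{U(s)} = \frac{1}{\tau s + 1}
```

The differential operator d/dt has become multiplication by "s", so the system can be manipulated as an algebraic expression.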
Once a given system has been converted into the frequency domain, it can be manipulated with greater ease.\nModern control theory, instead of changing domains to avoid the complexities of time-domain ODE mathematics, converts the differential equations into a system of lower-order time domain equations called state equations, which can then be manipulated using techniques from linear algebra.\nLaplace transform.\nClassical control theory uses the Laplace transform to model the systems and signals. The Laplace transform is a frequency-domain approach for continuous time signals irrespective of whether the system is stable or unstable. The Laplace transform of a function \"f\"(\"t\"), defined for all real numbers \"t\" ≥ 0, is the function \"F\"(\"s\"), which is a unilateral transform defined by\nF(s) = ∫₀^∞ f(t) e^(−st) dt,\nwhere \"s\" is a complex number frequency parameter.\nClosed-loop transfer function.\nA common feedback control architecture is the servo loop, in which the output of the system \"y(t)\" is measured using a sensor \"F\" and subtracted from the reference value \"r(t)\" to form the servo error \"e\". The controller \"C\" then uses the servo error \"e\" to adjust the input \"u\" to the plant (system being controlled) \"P\" in order to drive the output of the plant toward the reference. This is shown in the block diagram below. This kind of controller is a closed-loop controller or feedback controller.\nThis is called a single-input-single-output (\"SISO\") control system; \"MIMO\" (i.e., Multi-Input-Multi-Output) systems, with more than one input/output, are common. In such cases variables are represented through vectors instead of simple scalar values.
For some distributed parameter systems the vectors may be infinite-dimensional (typically functions).\nIf we assume the controller \"C\", the plant \"P\", and the sensor \"F\" are linear and time-invariant (i.e., the elements of their transfer functions \"C(s)\", \"P(s)\", and \"F(s)\" do not depend on time), the systems above can be analysed using the Laplace transform on the variables. This gives the following relations:\nSolving for \"Y\"(\"s\") in terms of \"R\"(\"s\") gives\nThe expression formula_7 is referred to as the \"closed-loop transfer function\" of the system. The numerator is the forward (open-loop) gain from formula_8 to formula_9, and the denominator is one plus the gain in going around the feedback loop, the so-called loop gain. If formula_10, i.e., it has a large norm for each value of \"s\", and if formula_11, then formula_12 is approximately equal to formula_13 and the output closely tracks the reference input.\nformula_14\nPID controller.\nThe PID controller is probably the most-used feedback control design (alongside the much cruder bang-bang control). \"PID\" is an initialism for \"Proportional-Integral-Derivative\", referring to the three terms operating on the error signal to produce a control signal. If \"u\"(\"t\") is the control signal sent to the system, formula_15 is the measured output, formula_16 is the desired output, and formula_17 is the tracking error, a PID controller has the general form\nThe desired closed-loop dynamics is obtained by adjusting the three parameters formula_19, formula_20 and formula_21, often iteratively by \"tuning\" and without specific knowledge of a plant model. Stability can often be ensured using only the proportional term. The integral term permits the rejection of a step disturbance (often a striking specification in process control). The derivative term is used to provide damping or shaping of the response.
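The three-term law described above can be sketched as a minimal discrete-time implementation. This is an illustrative sketch, not a production controller: the class name and interface are assumptions, the integral term uses a forward-Euler sum, and the derivative term uses a backward difference (a practical design would filter the derivative, for the reasons discussed in this section).

```python
# Illustrative discrete-time PID controller: u = Kp*e + Ki*integral(e) + Kd*de/dt.
# Names (PIDController, update) are hypothetical, not from any library.

class PIDController:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.dt = dt                  # sampling period
        self.integral = 0.0           # running integral of the error
        self.prev_error = 0.0         # previous error, for the derivative term

    def update(self, error):
        """Return the control signal for the current tracking error e = r - y."""
        self.integral += error * self.dt                    # forward-Euler integral
        derivative = (error - self.prev_error) / self.dt    # backward difference
        self.prev_error = error
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * derivative)
```

With the integral and derivative gains set to zero this reduces to a pure proportional controller; it is the accumulated integral state that removes the steady-state error left by a step disturbance.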
PID controllers are the most well-established class of control systems; however, they cannot be used in several more complicated cases, especially when multiple-input multiple-output (MIMO) systems are considered.\nApplying the Laplace transformation results in the transformed PID controller equation\nwith the PID controller transfer function\nThere is an instructive example of the closed-loop system discussed above. If we take:\na PID controller transfer function in series form,\na first-order filter in the feedback loop,\na linear actuator with filtered input,\nand insert all of this into the expression for the closed-loop transfer function formula_29, then tuning is very easy: simply put\nand get formula_31 identically.\nFor practical PID controllers, a pure differentiator is neither physically realisable nor desirable due to amplification of noise and resonant modes in the system. Therefore, a phase-lead compensator type approach is used instead, or a differentiator with low-pass roll-off.\nTools.\nClassical control theory uses an array of tools to analyze systems and design controllers for such systems. Tools include the root locus, the Nyquist stability criterion, the Bode plot, the gain margin and phase margin. More advanced tools include Bode integrals to assess performance limitations and trade-offs, and describing functions to analyze nonlinearities in the frequency domain.
During its lifetime, the project aimed at delivering to the robotics field the scientific breakthrough of a new methodology for automatic control design.\nTo facilitate this knowledge transfer, EICASLAB was equipped with an “automated algorithm and code generation” software engine that allows obtaining a control algorithm even without deep knowledge of the theory and methodology otherwise required by traditional control design methodologies.\nEICASLAB has been, and currently is, adopted in other European research projects dealing with robotics (ARFLEX IST-NMP2-016880 and PISA Project NMP2-CT-2006-026697) and automotive (HI-CEPS Project TIP5-CT-2006-031373 and ERSEC Project FP7 247955). EICASLAB is used in European industries, research institutes and academia to design control systems and time-series forecasting applications documented in the scientific and technical literature.\nEICASLAB includes tools for modelling plants, designing and testing embedded control systems, and assisting the phases of the control strategy design process, from system concept to generation of the control software code for the final target.\nSoftware organisation.\nEICASLAB is a software suite composed of a main program, called MASTER, able to assist and manage all the control design steps by means of a set of tools:\nFeatures to support the control design phases.\nSupport to system concept.\nEICASLAB includes the following features to support the system concept:\nHardware architectures including multi-processors and software architectures including multi-level hierarchical control are considered. The control software is subdivided into functions allocated by the designer to the different processors.
Each control function has its own sampling frequency and a time window for its execution, which are scheduled by the designer by means of the EICASLAB scheduler.\nData can be exchanged among the control functions allocated to the same processor and among the different processors belonging to the plant control system. The delay time in the data transmission is considered.\nThe final “application software” generated in C is subdivided into files, each related to a specific processor.\nSupport to system simulation.\nEICASLAB includes specific working areas for developing, optimizing and testing algorithms and software related to the “plant controller”, including both the “automatic control” and the “trajectory generation”, and the \"disturbances\" acting on the plant. To perform this task, three different working areas are available. \nSupport to control algorithm design.\nEICASLAB includes the following tools and features to support the control algorithm design:\nThe Automatic Algorithm Generation tool generates the control algorithm starting from the “plant simplified model” and from the \"required control performance\". On the basis of the plant design data, the applied control design methodology allows the design of controllers with guaranteed performance without requiring any in-field tuning, in spite of the unavoidable uncertainty that always exists between any mathematical model built on the basis of plant design data and the actual plant performance (for fundamentals on control in the presence of uncertainty see ).\nThe designer can choose among three basic control schemes and, for each, has the option of selecting control algorithms at different levels of complexity. \nIn summary, the automatically generated control is performed by the resultant of three actions:\nThe plant's state observer task may be extended to estimate and predict the disturbance acting on the plant.
The plant disturbance prediction and compensation is an original control feature, which allows a significant reduction of the control error. \nModel Parameter Identification is a tool which identifies the most appropriate values of the simplified model parameters from recorded experimental data or from simulated trials performed using the “plant fine model”. The parameter's \"true\" value does not exist: the model is an approximate description of the plant, and so the parameter's \"best\" value depends on the cost function adopted to evaluate the difference between model and plant. The identification method estimates the best values of the simplified model parameters from the point of view of the closed-loop control design.\nControl Parameter Optimization is a tool which performs control parameter tuning in a simulated environment. The optimization is performed numerically over a predefined simulated trial, that is, for a given mission (host command sequence, disturbance acting on the plant, and any other potential event related to the plant performance) and for a given functional cost associated with the plant control performance.\nSupport to code generation for the final target.\nThe EICASLAB Automatic Code Generation tool provides the ANSI C source code related to the control algorithm developed.\nThe final result of the designer's work is the “application software” in ANSI C, debugged and tested, ready to be compiled and linked in the plant control processors. The “application software” includes the software related to the “automatic control” and the “trajectory generation” functions.
The simulated control functions are exactly the same ones that the designer can transfer to the actual plant controller in the field.\nSupport to control tuning.\nEICASLAB includes the following tools to support the control tuning:\nSlow Motion View is a tool to be used in the setup phase of the plant control, providing a variable-by-variable analysis of the control software performance during experimental trials performed by means of the actual plant.\nThe plant input and output and the host commands sent to the controller are recorded during experimental trials and can then be processed by EICASLAB as follows. The recorded plant input and output variables are used in the Plant Area instead of the input and output variables obtained by the plant simulation. The recorded host commands are used in the Control Mission area instead of the host commands generated by the Control Mission function.\nThen, when a simulated trial is performed, the control function receives the recorded outputs of the actual plant and the related recorded host commands instead of the simulated ones.
Because the control function running in EICASLAB is exactly the same as the one running in the actual plant controller, the commands produced by the simulated control function and sent to the simulated plant should be exactly the same as the recorded plant inputs (apart from numerical errors due to differences between the processor on which EICASLAB is running and the one used in the actual plant controller; experience has shown that the effects of such differences are negligible).\nThe recorded experimental trial performed by the actual plant controller is thus completely repeated in EICASLAB, with the difference that the process can now be performed in slow motion and, if useful, step by step using a debugger program.\nThe Automatic Code Generation tool can be used to insert the controller code in a Linux real-time operating system (RTOS) (two versions are available, namely Linux RTAI and Linux RT with kernel preemption), in order to test the control algorithm in the PC environment instead of on the final target hardware, performing Rapid Control Prototyping (RCP) tests.
EICASLAB RCP includes a real-time scheduler based on multithreading programming techniques and able to run on a multi-core processor.\nThe Automatic Code Generation tool can be used to insert the controller code in the final hardware target.\nOnce this operation is performed, Hardware In the Loop (HIL) tests may be carried out, in which the final hardware target pilots, instead of the actual plant, the plant simulated in EICASLAB and running on a PC, suitably configured and connected to the target through the necessary hardware interfaces.", "Automation-Control": 0.8740437627, "Qwen2": "Yes"} {"id": "15504805", "revid": "7903804", "url": "https://en.wikipedia.org/wiki?curid=15504805", "title": "Indexing (motion)", "text": "Indexing in reference to motion is moving (or being moved) into a new position or location quickly and easily but also precisely. When indexing a machine part, its new location is known to within a few hundredths of a millimeter (thousandths of an inch), or often even to within a few thousandths of a millimeter (ten-thousandths of an inch), despite the fact that no elaborate measuring or layout was needed to establish that location. In reference to multi-edge cutting inserts, indexing is the process of exposing a new cutting edge for use. Indexing is a necessary kind of motion in many areas of mechanical engineering and machining. An object that indexes, or can be indexed, is said to be indexable.\nUsually when the word \"indexing\" is used, it refers specifically to rotation. That is, indexing is most often the quick and easy but precise rotation of a machine part through a certain known number of degrees.
For example, \"Machinery's Handbook\", 25th edition, in its section on milling machine indexing, says, \"Positioning a workpiece at a precise angle or interval of rotation for a machining operation is called indexing.\" In addition to that most classic sense of the word, the swapping of one part for another, or other controlled movements, are also sometimes referred to as \"indexing\", even if rotation is not the focus.\nExamples from everyday life.\nThere are various examples of indexing that laypersons (non-engineers and non-machinists) can find in everyday life. These motions are not always called by the name \"indexing\", but the idea is essentially similar:\nManufacturing applications.\nIndexing is vital in manufacturing, especially mass production, where a well-defined cycle of motions must be repeated quickly and easily—but precisely—for each interchangeable part that is made. Without indexing capability, all manufacturing would have to be done on a craft basis, and interchangeable parts would have very high unit cost because of the time and skill needed to produce each unit. In fact, the evolution of modern technologies depended on the shift in methods from crafts (in which toolpath is controlled via operator skill) to indexing-capable toolpath control. A prime example of this theme was the development of the turret lathe, whose turret indexes tool positions, one after another, to allow successive tools to move into place, take precisely placed cuts, then make way for the next tool.\nHow indexing is achieved in manufacturing.\nIndexing capability is provided in two fundamental ways: with or without Information technology (IT).\nNon-IT-assisted physical guidance.\nNon-IT-assisted physical guidance was the first means of providing indexing capability, via purely mechanical means. It allowed the Industrial Revolution to progress into the Machine Age. 
It is achieved by jigs, fixtures, and machine tool parts and accessories, which control toolpath by the very nature of their shape, physically limiting the path for motion. Some archetypal examples, developed to perfection before the advent of the IT era, are drill jigs, the turrets on manual turret lathes, indexing heads for manual milling machines, rotary tables, and various indexing fixtures and blocks that are simpler and less expensive than indexing heads, and serve quite well for most indexing needs in small shops. Although indexing heads of the pre-CNC era are now mostly obsolete in commercial manufacturing, the principle of purely mechanical indexing is still a vital part of current technology, in concert with IT, even as it has been extended to newer uses, such as the indexing of CNC milling machine toolholders or of indexable cutter inserts, whose precisely controlled size and shape allows them to be rotated or replaced quickly and easily without changing overall tool geometry.\nIT-assisted physical guidance.\nIT-assisted physical guidance (for example, via NC, CNC, or robotics) has been developed since the World War II era and uses electromechanical and electrohydraulic servomechanisms to translate digital information into position control. These systems also ultimately physically limit the path for motion, as jigs and other purely mechanical means do; but they do it not simply through their own shape, but rather using changeable information.", "Automation-Control": 0.6596595645, "Qwen2": "Yes"} {"id": "4052453", "revid": "27015025", "url": "https://en.wikipedia.org/wiki?curid=4052453", "title": "Behavioral modeling", "text": "The behavioral approach to systems theory and control theory was initiated in the late-1970s by J. C. Willems as a result of resolving inconsistencies present in classical approaches based on state-space, transfer function, and convolution representations. 
This approach is also motivated by the aim of obtaining a general framework for system analysis and control that respects the underlying physics.\nThe main object in the behavioral setting is the behavior – the set of all signals compatible with the system. An important feature of the behavioral approach is that it does not distinguish a priori between input and output variables. Apart from putting system theory and control on a rigorous basis, the behavioral approach unified the existing approaches and brought new results on controllability for nD systems, control via interconnection, and system identification.\nDynamical system as a set of signals.\nIn the behavioral setting, a dynamical system is a triple \nwhere\nformula_8 means that formula_9 is a trajectory of the system, while formula_10 means that the laws of the system forbid the trajectory formula_9 from happening. Before the phenomenon is modeled, every signal in formula_5 is deemed possible, while after modeling, only the outcomes in formula_13 remain as possibilities.\nSpecial cases:\nLinear time-invariant differential systems.\nSystem properties are defined in terms of the behavior. The system formula_1 is said to be \nwhere formula_24 denotes the formula_25-shift, defined by \nIn these definitions, linearity articulates the superposition law, while time-invariance articulates that the time-shift of a legal trajectory is in its turn a legal trajectory.\nA \"linear time-invariant differential system\" is a dynamical system formula_27 whose behavior formula_13 is the solution set of a system of constant-coefficient linear ordinary differential equations formula_29, where formula_30 is a matrix of polynomials with real coefficients. The coefficients of formula_30 are the parameters of the model. In order to define the corresponding behavior, we need to specify when we consider a signal formula_32 to be a solution of formula_29. For ease of exposition, infinitely differentiable solutions are often considered.
There are other possibilities, such as taking distributional solutions, or solutions in formula_34, with the ordinary differential equations interpreted in the sense of distributions. The behavior defined is\nThis particular way of representing the system is called a \"kernel representation\" of the corresponding dynamical system. There are many other useful representations of the same behavior, including transfer function, state space, and convolution.\nFor accessible sources regarding the behavioral approach, see \nObservability of latent variables.\nA key question of the behavioral approach is whether a quantity w1 can be deduced given an observed quantity w2 and a model. If w1 can be deduced given w2 and the model, w1 is said to be observable from w2. In terms of mathematical modeling, the to-be-deduced quantity or variable is often referred to as the latent variable, and the observed variable is the manifest variable. Such a system is then called an observable (latent variable) system.", "Automation-Control": 0.8867452145, "Qwen2": "Yes"} {"id": "19148519", "revid": "35498457", "url": "https://en.wikipedia.org/wiki?curid=19148519", "title": "Glossary of robotics", "text": "Robotics is the branch of technology that deals with the design, construction, operation, structural disposition, manufacture and application of robots. Robotics is related to the sciences of electronics, engineering, mechanics, and software.\nThe following is a list of common definitions related to the Robotics field.\nExternal links.\nOnline Robotics glossary repositories:", "Automation-Control": 0.772313118, "Qwen2": "Yes"} {"id": "36711208", "revid": "14013403", "url": "https://en.wikipedia.org/wiki?curid=36711208", "title": "Vehicle fire suppression system", "text": "A vehicle fire suppression system is a pre-engineered fire suppression system safety accessory permanently mounted on any type of vehicle.
These systems are especially prevalent in the mobile heavy equipment segment and are designed to protect equipment assets from fire damage and related losses. Vehicle fire suppression systems have become a vital safety feature for several industries and are most commonly used in the mining, forestry, landfill, and mass transit industries.\nParts of a Typical System.\nA typical vehicle fire suppression system has five key components:\nTo mitigate a fire as soon as it happens, fire-detecting linear wire or sensors are strategically placed around the machine. When the high heat of a fire penetrates the linear wire or is detected by the sensors, a signal is sent to the control panel in the vehicle cab.\nThe control panel alarms and alerts the driver to quickly evacuate the machine. At the same time, the panel automatically initiates the actuator, which discharges the fire-fighting agent inside the onboard tanks and sends it through a distribution network composed of stainless steel tubing and/or hydraulic hosing. An actuator can also activate the system when pressed manually by the operator. \nAt the end of the distribution network, the agent is dispersed into the equipment’s protected areas via nozzles aimed at the machine's high-hazard components, like turbochargers, starters, fuel filters, batteries, alternators, and transmissions, to extinguish the fire quickly and efficiently.", "Automation-Control": 0.7880394459, "Qwen2": "Yes"} {"id": "30234468", "revid": "4034676", "url": "https://en.wikipedia.org/wiki?curid=30234468", "title": "Formability", "text": "Formability is the ability of a given metal workpiece to undergo plastic deformation without being damaged.
The plastic deformation capacity of metallic materials is, however, limited; beyond a certain extent, the material may experience tearing or fracture (breakage).\nProcesses affected by the formability of a material include: rolling, extrusion, forging, rollforming, stamping, and hydroforming.\nFracture strain.\nA general parameter that indicates the formability and ductility of a material is the fracture strain, which is determined by a uniaxial tensile test (see also fracture toughness). The strain identified by this test is defined by elongation with respect to a reference length. For example, a length of is used for the standardized uniaxial test of flat specimens, pursuant to EN 10002. It is important to note that deformation is homogeneous up to uniform elongation. Strain subsequently localizes until fracture occurs. Fracture strain is not an engineering strain, since the distribution of the deformation is inhomogeneous within the reference length. Fracture strain is nevertheless a rough indicator of the formability of a material. Typical values of the fracture strain are 7% for ultra-high-strength material and over 50% for mild-strength steel.\nForming limits for sheet forming.\nOne main failure mode is caused by tearing of the material. This is typical for sheet-forming applications.\nA neck may appear at a certain forming stage. This is an indication of localized plastic deformation. Whereas more or less homogeneous deformation takes place in and around the subsequent neck location in the early stable deformation stage, almost all deformation is concentrated in the neck zone during the quasi-stable and unstable deformation phase. This leads to material failure manifested by tearing. Forming-limit curves depict the extreme, but still possible, deformation which a sheet material may undergo during any stage of the stamping process. These limits depend on the deformation mode and the ratio of the surface strains.
The major surface strain has a minimum value when plane strain deformation occurs, which means that the corresponding minor surface strain is zero. Forming limits are a specific material property. Typical plane strain values range from 10% for high-strength grades to 50% or above for mild-strength materials and those with very good formability.\nForming limit diagrams are often used to graphically or mathematically represent formability. It is recognized by many authors that the nature of fracture, and therefore the forming limit diagrams, are intrinsically non-deterministic, since large variations might be observed even within a single experimental campaign.\nDeep drawability.\nA classic form of sheet forming is deep drawing, in which a sheet is drawn by means of a punch tool pressing on the inner region of the sheet, while the side material held by a blankholder can be drawn toward the center. It has been observed that materials with outstanding deep drawability behave anisotropically (see: anisotropy): plastic deformation in the surface is much more pronounced than in the thickness. The Lankford coefficient (r) is a specific material property indicating the ratio between width deformation and thickness deformation in the uniaxial tensile test. Materials with very good deep drawability have an \"r\" value of 2 or above. The positive aspect of formability with respect to the forming limit curve (forming limit diagram) is seen in the deformation paths of the material that are concentrated in the extreme left of the diagram, where the forming limits become very large.\nDuctility.\nAnother failure mode that may occur without any tearing is ductile fracture after plastic deformation (ductility). This may occur as a result of bending or shear deformation (in-plane or through the thickness). The failure mechanism may be due to void nucleation and expansion on a microscopic level.
Microcracks and subsequent macrocracks may appear when deformation of the material between the voids has exceeded the limit. Extensive research has focused in recent years on understanding and modeling ductile fracture. The approach has been to identify ductile forming limits using various small-scale tests that show different strain ratios or stress triaxialities. An effective measure of this type of forming limit is the minimum radius in roll-forming applications (half the sheet thickness for materials with good formability and three times the sheet thickness for materials with low formability).\nUse of formability parameters.\nKnowledge of the material formability is very important to the layout and design of any industrial forming process. Simulations using the finite-element method and use of formability criteria such as the forming limit curve (forming limit diagram) enhance and, in some cases, are indispensable to certain tool design processes (also see: Sheet metal forming simulation and Sheet metal forming analysis).\nIDDRG.\nOne major objective of the International Deep Drawing Research Group (IDDRG, from 1957) is the investigation, exchange and dissemination of knowledge and experience about the formability of sheet materials.", "Automation-Control": 0.7750146985, "Qwen2": "Yes"} {"id": "66200866", "revid": "829949", "url": "https://en.wikipedia.org/wiki?curid=66200866", "title": "Micromax IN Note 1", "text": "The Micromax In Note 1 is an Android smartphone developed by the Indian smartphone manufacturer Micromax Informatics. Announced on November 3, 2020, and released on November 24, 2020, the In Note 1 marks the re-entry of the company into the Indian smartphone market.\nSpecifications.\nHardware.\nThe In Note 1 is powered by a Mediatek Helio G85 SoC comprising an octa-core 2.0 GHz CPU and an ARM G52 MC2 GPU.
Internal storage is 64 GB or 128 GB.\nThe In Note 1 features a 6.67-inch IPS LCD punch-hole display with a 1080 × 2400 (FHD+) resolution and a pixel density of 395 ppi. The rear features a quad-camera setup of 48 MP + 5 MP + 2 MP + 2 MP sensors. Camera modes include night mode, HDR, panorama, AI scene detection, beauty, pro, GIF, time lapse, slow motion and portrait mode. The front camera features a 16 MP sensor with an f/2.0 aperture.\nSoftware.\nThe In Note 1 originally shipped with Android 10 and runs stock Android. Micromax officially promised a software upgrade, but in the roughly one and a half years after launch only security patches were delivered.\nCurrent security patch: 05/07/22.\nThe promised Android version update, announced by Micromax co-founder Rahul Sharma, has not materialized.", "Automation-Control": 1.0000007153, "Qwen2": "Yes"} {"id": "32366841", "revid": "7903804", "url": "https://en.wikipedia.org/wiki?curid=32366841", "title": "Parallel parking problem", "text": "The parallel parking problem is a motion planning problem in control theory and mechanics to determine the path a car must take to parallel park into a parking space. The front wheels of a car are permitted to turn, but the rear wheels must stay aligned. When a car is initially adjacent to a parking space, to move into the space it would need to move in a direction perpendicular to the allowed path of motion of the rear wheels. The admissible motions of the car in its configuration space are an example of a nonholonomic system.", "Automation-Control": 0.6963716149, "Qwen2": "Yes"} {"id": "24972551", "revid": "18126639", "url": "https://en.wikipedia.org/wiki?curid=24972551", "title": "Comparison of structured storage software", "text": "Structured storage is computer storage for structured data, often in the form of a distributed database.
Computer software formally known as structured storage systems include Apache Cassandra, Google's Bigtable and Apache HBase.\nComparison.\nThe following is a comparison of notable structured storage systems.", "Automation-Control": 0.9824936986, "Qwen2": "Yes"} {"id": "54361643", "revid": "7903804", "url": "https://en.wikipedia.org/wiki?curid=54361643", "title": "Hyperparameter optimization", "text": "In machine learning, hyperparameter optimization or tuning is the problem of choosing a set of optimal hyperparameters for a learning algorithm. A hyperparameter is a parameter whose value is used to control the learning process. By contrast, the values of other parameters (typically node weights) are learned.\nThe same kind of machine learning model can require different constraints, weights or learning rates to generalize different data patterns. These measures are called hyperparameters, and have to be tuned so that the model can optimally solve the machine learning problem. Hyperparameter optimization finds a tuple of hyperparameters that yields an optimal model which minimizes a predefined loss function on given independent data. The objective function takes a tuple of hyperparameters and returns the associated loss. Cross-validation is often used to estimate this generalization performance, and therefore choose the set of values for hyperparameters that maximize it.\nApproaches.\nGrid search.\nThe traditional way of performing hyperparameter optimization has been \"grid search\", or a \"parameter sweep\", which is simply an exhaustive searching through a manually specified subset of the hyperparameter space of a learning algorithm. 
A grid search algorithm must be guided by some performance metric, typically measured by cross-validation on the training set\nor evaluation on a hold-out validation set.\nSince the parameter space of a machine learner may include real-valued or unbounded value spaces for certain parameters, manually set bounds and discretization may be necessary before applying grid search.\nFor example, a typical soft-margin SVM classifier equipped with an RBF kernel has at least two hyperparameters that need to be tuned for good performance on unseen data: a regularization constant \"C\" and a kernel hyperparameter γ. Both parameters are continuous, so to perform grid search, one selects a finite set of \"reasonable\" values for each, say\nGrid search then trains an SVM with each pair (\"C\", γ) in the Cartesian product of these two sets and evaluates their performance on a held-out validation set (or by internal cross-validation on the training set, in which case multiple SVMs are trained per pair). Finally, the grid search algorithm outputs the settings that achieved the highest score in the validation procedure.\nGrid search suffers from the curse of dimensionality, but is often embarrassingly parallel because the hyperparameter settings it evaluates are typically independent of each other.\nRandom search.\nRandom Search replaces the exhaustive enumeration of all combinations by selecting them randomly. This can be simply applied to the discrete setting described above, but also generalizes to continuous and mixed spaces. It can outperform Grid search, especially when only a small number of hyperparameters affects the final performance of the machine learning algorithm. In this case, the optimization problem is said to have a low intrinsic dimensionality. Random Search is also embarrassingly parallel, and additionally allows the inclusion of prior knowledge by specifying the distribution from which to sample. 
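The grid search and random search procedures described above can be sketched in a few lines. The `validation_score` function below is a hypothetical stand-in (an assumption for illustration) for training a model with a given (C, γ) pair and scoring it on a held-out validation set:

```python
import itertools
import random

# Hypothetical validation score for an SVM-like model with hyperparameters
# C and gamma; a real implementation would train a model and evaluate it on
# a held-out validation set. This stand-in peaks at C = 10, gamma = 0.1.
def validation_score(C, gamma):
    return -((C - 10) ** 2 / 100 + (gamma - 0.1) ** 2 * 50)

# Grid search: exhaustively evaluate the Cartesian product of finite sets of
# "reasonable" candidate values for each hyperparameter.
C_values = [0.1, 1, 10, 100, 1000]
gamma_values = [0.001, 0.01, 0.1, 1]
best_grid = max(itertools.product(C_values, gamma_values),
                key=lambda cg: validation_score(*cg))

# Random search: instead of enumerating a fixed grid, sample configurations
# from chosen prior distributions (here, log-uniform over the same ranges);
# this generalizes directly to continuous and mixed spaces.
random.seed(0)
samples = [(10 ** random.uniform(-1, 3), 10 ** random.uniform(-3, 0))
           for _ in range(20)]
best_random = max(samples, key=lambda cg: validation_score(*cg))

print("grid search best (C, gamma):", best_grid)
print("random search best (C, gamma):", best_random)
```

Because each (C, γ) evaluation is independent, both loops parallelize trivially, which is the "embarrassingly parallel" property noted above.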
Despite its simplicity, random search remains one of the important base-lines against which to compare the performance of new hyperparameter optimization methods.\nBayesian optimization.\nBayesian optimization is a global optimization method for noisy black-box functions. Applied to hyperparameter optimization, Bayesian optimization builds a probabilistic model of the function mapping from hyperparameter values to the objective evaluated on a validation set. By iteratively evaluating a promising hyperparameter configuration based on the current model, and then updating it, Bayesian optimization aims to gather observations revealing as much information as possible about this function and, in particular, the location of the optimum. It tries to balance exploration (hyperparameters for which the outcome is most uncertain) and exploitation (hyperparameters expected close to the optimum). In practice, Bayesian optimization has been shown to obtain better results in fewer evaluations compared to grid search and random search, due to the ability to reason about the quality of experiments before they are run.\nGradient-based optimization.\nFor specific learning algorithms, it is possible to compute the gradient with respect to hyperparameters and then optimize the hyperparameters using gradient descent. The first usage of these techniques was focused on neural networks. Since then, these methods have been extended to other models such as support vector machines or logistic regression.\nA different approach in order to obtain a gradient with respect to hyperparameters consists in differentiating the steps of an iterative optimization algorithm using automatic differentiation. A more recent work along this direction uses the implicit function theorem to calculate hypergradients and proposes a stable approximation of the inverse Hessian. 
The method scales to millions of hyperparameters and requires constant memory.\nIn a different approach, a hypernetwork is trained to approximate the best response function. One of the advantages of this method is that it can handle discrete hyperparameters as well. Self-tuning networks offer a memory efficient version of this approach by choosing a compact representation for the hypernetwork. More recently, Δ-STN has improved this method further by a slight reparameterization of the hypernetwork which speeds up training. Δ-STN also yields a better approximation of the best-response Jacobian by linearizing the network in the weights, hence removing unnecessary nonlinear effects of large changes in the weights.\nApart from hypernetwork approaches, gradient-based methods can be used to optimize discrete hyperparameters also by adopting a continuous relaxation of the parameters. Such methods have been extensively used for the optimization of architecture hyperparameters in neural architecture search.\nEvolutionary optimization.\nEvolutionary optimization is a methodology for the global optimization of noisy black-box functions. In hyperparameter optimization, evolutionary optimization uses evolutionary algorithms to search the space of hyperparameters for a given algorithm. Evolutionary hyperparameter optimization follows a process inspired by the biological concept of evolution:\nEvolutionary optimization has been used in hyperparameter optimization for statistical machine learning algorithms, automated machine learning, typical neural network and deep neural network architecture search, as well as training of the weights in deep neural networks.\nPopulation-based.\nPopulation Based Training (PBT) learns both hyperparameter values and network weights. Multiple learning processes operate independently, using different hyperparameters. 
As with evolutionary methods, poorly performing models are iteratively replaced with models that adopt modified hyperparameter values and weights based on the better performers. This replacement model warm starting is the primary differentiator between PBT and other evolutionary methods. PBT thus allows the hyperparameters to evolve and eliminates the need for manual hypertuning. The process makes no assumptions regarding model architecture, loss functions or training procedures.\nPBT and its variants are adaptive methods: they update hyperparameters during the training of the models. On the contrary, non-adaptive methods have the sub-optimal strategy to assign a constant set of hyperparameters for the whole training.\nEarly stopping-based.\nA class of early stopping-based hyperparameter optimization algorithms is purpose built for large search spaces of continuous and discrete hyperparameters, particularly when the computational cost to evaluate the performance of a set of hyperparameters is high. Irace implements the iterated racing algorithm, that focuses the search around the most promising configurations, using statistical tests to discard the ones that perform poorly.\nAnother early stopping hyperparameter optimization algorithm is successive halving (SHA), which begins as a random search but periodically prunes low-performing models, thereby focusing computational resources on more promising models. Asynchronous successive halving (ASHA) further improves upon SHA's resource utilization profile by removing the need to synchronously evaluate and prune low-performing models. 
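The successive halving procedure described above can be sketched as follows. The configurations and the noisy `evaluate` function are hypothetical stand-ins for real training runs; the assumption is only that spending more budget (e.g. epochs) on a configuration yields a less noisy performance estimate:

```python
import random

# Hypothetical: each configuration has a latent "true" quality score.
random.seed(1)
configs = {f"cfg{i}": random.random() for i in range(16)}

def evaluate(quality, budget):
    # Noisy score whose noise shrinks as more budget is spent on training.
    return quality + random.gauss(0, 0.5 / budget)

def successive_halving(configs, min_budget=1, eta=2):
    survivors = dict(configs)
    budget = min_budget
    while len(survivors) > 1:
        # Evaluate every surviving configuration at the current budget.
        scores = {name: evaluate(q, budget) for name, q in survivors.items()}
        # Prune: keep only the top 1/eta fraction...
        keep = sorted(scores, key=scores.get,
                      reverse=True)[:max(1, len(scores) // eta)]
        survivors = {name: survivors[name] for name in keep}
        # ...then grow the per-configuration budget for the next round.
        budget *= eta
    return next(iter(survivors))

winner = successive_halving(configs)
print("selected configuration:", winner)
```

ASHA follows the same keep-the-top-fraction logic but promotes configurations asynchronously, so workers never wait for a full round of evaluations before pruning.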
Hyperband is a higher level early stopping-based algorithm that invokes SHA or ASHA multiple times with varying levels of pruning aggressiveness, in order to be more widely applicable and with fewer required inputs.\nOthers.\nRBF and spectral approaches have also been developed.\nIssues with hyperparameter optimization.\nWhen hyperparameter optimization is done, the set of hyperparameters are often fitted on a training set and selected based on the generalization performance, or score, of a validation set. However, this procedure is at risk of overfitting the hyperparameters to the validation set. Therefore, the generalization performance score of the validation set (which can be several sets in the case of a cross-validation procedure) cannot be used to simultaneously estimate the generalization performance of the final model. In order to do so, the generalization performance has to be evaluated on a set independent (which has no intersection) of the set (or sets) used for the optimization of the hyperparameters, otherwise the performance might give a value which is too optimistic (too large). This can be done on a second test set, or through an outer cross-validation procedure called nested cross-validation, which allows an unbiased estimation of the generalization performance of the model, taking into account the bias due to the hyperparameter optimization.", "Automation-Control": 0.9356612563, "Qwen2": "Yes"} {"id": "531911", "revid": "6908984", "url": "https://en.wikipedia.org/wiki?curid=531911", "title": "Laser cutting", "text": "Laser cutting is a technology that uses a laser to vaporize materials, resulting in a cut edge. While typically used for industrial manufacturing applications, it is now used by schools, small businesses, architecture, and hobbyists. Laser cutting works by directing the output of a high-power laser most commonly through optics. The laser optics and CNC (computer numerical control) are used to direct the laser beam to the material. 
A commercial laser for cutting materials uses a motion control system to follow a CNC or G-code of the pattern to be cut onto the material. The focused laser beam is directed at the material, which then either melts, burns, vaporizes away, or is blown away by a jet of gas, leaving an edge with a high-quality surface finish.\nHistory.\nIn 1965, the first production laser cutting machine was used to drill holes in diamond dies. This machine was made by the Western Electric Engineering Research Center. In 1967, the British pioneered laser-assisted oxygen jet cutting for metals. In the early 1970s, this technology was put into production to cut titanium for aerospace applications. At the same time, CO2 lasers were adapted to cut non-metals, such as textiles, because, at the time, CO2 lasers were not powerful enough to overcome the thermal conductivity of metals.\nProcess.\nThe laser beam is generally focused using a high-quality lens on the work zone. The quality of the beam has a direct impact on the focused spot size. The narrowest part of the focused beam is generally less than in diameter. Depending upon the material thickness, kerf widths as small as are possible. In order to be able to start cutting from somewhere other than the edge, a pierce is done before every cut. Piercing usually involves a high-power pulsed laser beam which slowly makes a hole in the material, taking around 5–15 seconds for stainless steel, for example.\nThe parallel rays of coherent light from the laser source often fall in the range between in diameter. This beam is normally focused and intensified by a lens or a mirror to a very small spot of about to create a very intense laser beam. In order to achieve the smoothest possible finish during contour cutting, the direction of the beam polarization must be rotated as it goes around the periphery of a contoured workpiece. 
For sheet metal cutting, the focal length is usually .\nAdvantages of laser cutting over mechanical cutting include easier work holding and reduced contamination of workpiece (since there is no cutting edge which can become contaminated by the material or contaminate the material). Precision may be better since the laser beam does not wear during the process. There is also a reduced chance of warping the material that is being cut, as laser systems have a small heat-affected zone. Some materials are also very difficult or impossible to cut by more traditional means.\nLaser cutting for metals has the advantage over plasma cutting of being more precise and using less energy when cutting sheet metal; however, most industrial lasers cannot cut through the greater metal thickness that plasma can. Newer laser machines operating at higher power (6000 watts, as contrasted with early laser cutting machines' 1500-watt ratings) are approaching plasma machines in their ability to cut through thick materials, but the capital cost of such machines is much higher than that of plasma cutting machines capable of cutting thick materials like steel plate.\nTypes.\nThere are three main types of lasers used in laser cutting. The laser is suited for cutting, boring, and engraving. The neodymium (Nd) and neodymium yttrium-aluminium-garnet lasers are identical in style and differ only in the application. Nd is used for boring and where high energy but low repetition are required. The Nd:YAG laser is used where very high power is needed and for boring and engraving. Both and Nd/Nd:YAG lasers can be used for welding.\n lasers are commonly \"pumped\" by passing a current through the gas mix (DC-excited) or using radio frequency energy (RF-excited). The RF method is newer and has become more popular. Since DC designs require electrodes inside the cavity, they can encounter electrode erosion and plating of electrode material on glassware and optics. 
Since RF resonators have external electrodes they are not prone to those problems.\n lasers are used for the industrial cutting of many materials including titanium, stainless steel, mild steel, aluminium, plastic, wood, engineered wood, wax, fabrics, and paper. YAG lasers are primarily used for cutting and scribing metals and ceramics.\nIn addition to the power source, the type of gas flow can affect performance as well. Common variants of lasers include fast axial flow, slow axial flow, transverse flow, and slab. In a fast axial flow resonator, the mixture of carbon dioxide, helium, and nitrogen is circulated at high velocity by a turbine or blower. Transverse flow lasers circulate the gas mix at a lower velocity, requiring a simpler blower. Slab or diffusion-cooled resonators have a static gas field that requires no pressurization or glassware, leading to savings on replacement turbines and glassware.\nThe laser generator and external optics (including the focus lens) require cooling. Depending on system size and configuration, waste heat may be transferred by a coolant or directly to air. Water is a commonly used coolant, usually circulated through a chiller or heat transfer system.\nA \"laser microjet\" is a water-jet-guided laser in which a pulsed laser beam is coupled into a low-pressure water jet. This is used to perform laser cutting functions while using the water jet to guide the laser beam, much like an optical fiber, through total internal reflection. The advantages of this are that the water also removes debris and cools the material. Additional advantages over traditional \"dry\" laser cutting are high dicing speeds, parallel kerf, and omnidirectional cutting.\nFiber lasers are a type of solid-state laser that is rapidly growing within the metal cutting industry. Unlike CO2, Fiber technology utilizes a solid gain medium, as opposed to a gas or liquid. The “seed laser” produces the laser beam and is then amplified within a glass fiber. 
With a wavelength of only 1064 nanometers, fiber lasers produce an extremely small spot size (up to 100 times smaller than that of a CO2 laser), making them well suited to cutting reflective metals. This is one of the main advantages of fiber lasers compared to CO2 lasers.\nFibre laser cutter benefits include:\nMethods.\nThere are many different methods of cutting using lasers, with different types used to cut different materials. Some of the methods are vaporization, melt and blow, melt blow and burn, thermal stress cracking, scribing, cold cutting, and burning stabilized laser cutting.\nVaporization cutting.\nIn vaporization cutting, the focused beam heats the surface of the material to a flashpoint and generates a keyhole. The keyhole leads to a sudden increase in absorptivity, quickly deepening the hole. As the hole deepens and the material boils, the vapor generated erodes the molten walls, blowing ejecta out and further enlarging the hole. Nonmelting materials such as wood, carbon, and thermoset plastics are usually cut by this method.\nMelt and blow.\nMelt and blow, or fusion cutting, uses high-pressure gas to blow molten material from the cutting area, greatly decreasing the power requirement. First, the material is heated to its melting point; a gas jet then blows the molten material out of the kerf, avoiding the need to raise the temperature of the material any further. Materials cut with this process are usually metals.\nThermal stress cracking.\nBrittle materials are particularly sensitive to thermal fracture, a feature exploited in thermal stress cracking. A beam is focused on the surface, causing localized heating and thermal expansion. This results in a crack that can then be guided by moving the beam. The crack can be moved in order of m/s. 
It is usually used in the cutting of glass.\nStealth dicing of silicon wafers.\nThe separation of microelectronic chips as prepared in semiconductor device fabrication from silicon wafers may be performed by the so-called stealth dicing process, which operates with a pulsed , the wavelength of which (1064 nm) is well adapted to the electronic band gap of silicon (1.11 eV or 1117 nm).\nReactive cutting.\nReactive cutting is also called \"burning stabilized laser gas cutting\" and \"flame cutting\". Reactive cutting is like oxygen torch cutting but with a laser beam as the ignition source. Mostly used for cutting carbon steel in thicknesses over 1 mm. This process can be used to cut very thick steel plates with relatively little laser power.\nTolerances and surface finish.\nLaser cutters have a positioning accuracy of 10 micrometers and repeatability of 5 micrometers.\nStandard roughness Rz increases with the sheet thickness, but decreases with laser power and cutting speed. When cutting low carbon steel with laser power of 800 W, standard roughness Rz is 10 μm for sheet thickness of 1 mm, 20 μm for 3 mm, and 25 μm for 6 mm.\nformula_1\nWhere: formula_2 steel sheet thickness in mm; formula_3 laser power in kW (some new laser cutters have laser power of 4 kW); formula_4 cutting speed in meters per minute.\nThis process is capable of holding quite close tolerances, often to within 0.001 inch (0.025 mm). Part geometry and the mechanical soundness of the machine have much to do with tolerance capabilities. The typical surface finish resulting from laser beam cutting may range from 125 to 250 micro-inches (0.003 mm to 0.006 mm).\nMachine configurations.\nThere are generally three different configurations of industrial laser cutting machines: moving material, hybrid, and flying optics systems. These refer to the way that the laser beam is moved over the material to be cut or processed. For all of these, the axes of motion are typically designated X and Y axis. 
If the cutting head may be controlled, it is designated as the Z-axis.\nMoving material lasers have a stationary cutting head and move the material under it. This method provides a constant distance from the laser generator to the workpiece and a single point from which to remove cutting effluent. It requires fewer optics but requires moving the workpiece. This style of machine tends to have the fewest beam delivery optics but also tends to be the slowest.\nHybrid lasers provide a table that moves in one axis (usually the X-axis) and moves the head along the shorter (Y) axis. This results in a more constant beam delivery path length than a flying optic machine and may permit a simpler beam delivery system. This can result in reduced power loss in the delivery system and more capacity per watt than flying optics machines.\nFlying optics lasers feature a stationary table and a cutting head (with a laser beam) that moves over the workpiece in both of the horizontal dimensions. Flying optics cutters keep the workpiece stationary during processing and often do not require material clamping. The moving mass is constant, so dynamics are not affected by varying the size of the workpiece. Flying optics machines are the fastest type, which is advantageous when cutting thinner workpieces.\nFlying optic machines must use some method to take into account the changing beam length from the near field (close to the resonator) cutting to the far field (far away from the resonator) cutting. Common methods for controlling this include collimation, adaptive optics, or the use of a constant beam length axis.\nFive and six-axis machines also permit cutting formed workpieces. 
In addition, there are various methods of orienting the laser beam to a shaped workpiece, maintaining a proper focus distance and nozzle standoff, etc.\nPulsing.\nPulsed lasers which provide a high-power burst of energy for a short period are very effective in some laser cutting processes, particularly for piercing, or when very small holes or very low cutting speeds are required, since if a constant laser beam were used, the heat could reach the point of melting the whole piece being cut.\nMost industrial lasers have the ability to pulse or cut CW (continuous wave) under NC (numerical control) program control.\nDouble pulse lasers use a series of pulse pairs to improve material removal rate and hole quality. Essentially, the first pulse removes material from the surface and the second prevents the ejecta from adhering to the side of the hole or cut.\nPower consumption.\nThe main disadvantage of laser cutting is the high power consumption. Industrial laser efficiency may range from 5% to 45%. The power consumption and efficiency of any particular laser will vary depending on output power and operating parameters. This will depend on the type of laser and how well the laser is matched to the work at hand. The amount of laser cutting power required, known as \"heat input\", for a particular job depends on the material type, thickness, process (reactive/inert) used, and desired cutting rate.\nProduction and cutting rates.\nThe maximum cutting rate (production rate) is limited by a number of factors including laser power, material thickness, process type (reactive or inert), and material properties. Common industrial systems (≥1 kW) will cut carbon steel metal from in thickness. 
For many purposes, a laser can be up to thirty times faster than standard sawing.", "Automation-Control": 0.7052506208, "Qwen2": "Yes"} {"id": "15875500", "revid": "29057650", "url": "https://en.wikipedia.org/wiki?curid=15875500", "title": "Algorithmic mechanism design", "text": "Algorithmic mechanism design (AMD) lies at the intersection of economic game theory, optimization, and computer science. The prototypical problem in mechanism design is to design a system for multiple self-interested participants, such that the participants' self-interested actions at equilibrium lead to good system performance. Typical objectives studied include revenue maximization and social welfare maximization. Algorithmic mechanism design differs from classical economic mechanism design in several respects. It typically employs the analytic tools of theoretical computer science, such as worst case analysis and approximation ratios, in contrast to classical mechanism design in economics which often makes distributional assumptions about the agents. It also considers computational constraints to be of central importance: mechanisms that cannot be efficiently implemented in polynomial time are not considered to be viable solutions to a mechanism design problem. This often, for example, rules out the classic economic mechanism, the Vickrey–Clarke–Groves auction.\nHistory.\nNoam Nisan and Amir Ronen first coined \"Algorithmic mechanism design\" in a research paper published in 1999.", "Automation-Control": 0.9940508604, "Qwen2": "Yes"} {"id": "43475196", "revid": "1640548", "url": "https://en.wikipedia.org/wiki?curid=43475196", "title": "Variable rate application", "text": "In precision agriculture, variable rate application (VRA) refers to the application of a material, such that the rate of application is based on the precise location, or qualities of the area that the material is being applied to. 
This is different from uniform application, and can be used to save money (by using less product) and to lessen the environmental impact. \nVariable rate application can be either map-based or sensor-based.\nApplications of VRA.\nIn precision agriculture, VRA is known to be used in the following areas. \nSeeding.\nPlanters and drills can be adapted for variable rate application by attaching a motor or gearbox, which allows the seeding rate to be varied. The seeding rate can also be coordinated with the application of agrochemicals.\nWeed control.\nVariable rate weed control requires both a task computer and a system to physically change the flow rate of the agrochemicals.\nFertilizer.\nCrops do not always require a uniform application, as some areas will have different nutrient requirements due to their location (soil properties, sunlight). Variable rate fertilizer spreaders can increase or decrease the fertilizer application rate using a global positioning system (GPS). They can also use \"on-the-go\" sensors, or a combination of the two.", "Automation-Control": 0.8009153008, "Qwen2": "Yes"} {"id": "7599559", "revid": "44277652", "url": "https://en.wikipedia.org/wiki?curid=7599559", "title": "Gear manufacturing", "text": "Gear manufacturing refers to the making of gears. Gears can be manufactured by a variety of processes, including casting, forging, extrusion, powder metallurgy, and blanking. As a general rule, however, machining is applied to achieve the final dimensions, shape and surface finish in the gear.
The initial operations that produce a semifinished part ready for gear machining are referred to as blanking operations; the starting product in gear machining is called a gear blank.\nSelection of materials.\nThe gear material should have the following properties:\nGear manufacturing processes.\nThere are multiple ways in which gear blanks can be shaped through the cutting and finishing processes.\nGear forming.\nIn gear form cutting, the cutting edge of the cutting tool has a shape identical with the shape of the space between the gear teeth. Two machining operations, milling and broaching, can be employed to form cut gear teeth.\nForm milling.\nIn form milling, the cutter, called a form cutter, travels axially along the length of the gear tooth at the appropriate depth to produce the gear tooth. After each tooth is cut, the cutter is withdrawn, the gear blank is rotated, and the cutter proceeds to cut another tooth. The process continues until all teeth are cut.\nBroaching.\nBroaching can also be used to produce gear teeth and is particularly applicable to internal teeth. The process is rapid and produces a fine surface finish with high dimensional accuracy. However, because broaches are expensive and a separate broach is required for each size of gear, this method is suitable mainly for high-quantity production.\nGear generation.\nIn gear generation, the tooth flanks are obtained as an outline of the subsequent positions of the cutter, which resembles in shape the mating gear in the gear pair. Two machining processes are employed: shaping and milling. There are several modifications of these processes for the different cutting tools used.\nGear hobbing.\nGear hobbing is a machining process in which gear teeth are progressively generated by a series of cuts with a helical cutting tool.
All motions in hobbing are rotary, and the hob and gear blank rotate continuously, as in two gears meshing, until all teeth are cut.\nFinishing operations.\nAs produced by any of the processes described, the surface finish and dimensional accuracy may not be adequate for certain applications. Several finishing operations are available, including the conventional process of shaving and a number of abrasive operations, including grinding, honing, and lapping.", "Automation-Control": 0.9892514944, "Qwen2": "Yes"} {"id": "15831300", "revid": "1883085", "url": "https://en.wikipedia.org/wiki?curid=15831300", "title": "Tellegen's theorem", "text": "Tellegen's theorem is one of the most powerful theorems in network theory. Most of the energy distribution theorems and extremum principles in network theory can be derived from it. It was published in 1952 by Bernard Tellegen. Fundamentally, Tellegen's theorem gives a simple relation between magnitudes that satisfy Kirchhoff's laws of electrical circuit theory.\nThe Tellegen theorem is applicable to a multitude of network systems. The basic assumptions for the systems are the conservation of flow of extensive quantities (Kirchhoff's current law, KCL) and the uniqueness of the potentials at the network nodes (Kirchhoff's voltage law, KVL). The Tellegen theorem provides a useful tool to analyze complex network systems including electrical circuits, biological and metabolic networks, pipeline transport networks, and chemical process networks.\nThe theorem.\nConsider an arbitrary lumped network that has formula_1 branches and formula_2 nodes. In an electrical network, the branches are two-terminal components and the nodes are points of interconnection. Suppose that to each branch we assign arbitrarily a branch potential difference formula_3 and a branch current formula_4 for formula_5, and suppose that they are measured with respect to arbitrarily picked \"associated\" reference directions.
If the branch potential differences formula_6 satisfy all the constraints imposed by KVL and if the branch currents formula_7 satisfy all the constraints imposed by KCL, then\nTellegen's theorem is extremely general; it is valid for any lumped network that contains any elements, \"linear or nonlinear\", \"passive or active\", \"time-varying or time-invariant\". The generality is extended when formula_3 and formula_4 are linear operations on the set of potential differences and on the set of branch currents (respectively), since linear operations do not affect KVL and KCL. For instance, the linear operation may be the average or the Laplace transform. More generally, operators that preserve KVL are called Kirchhoff voltage operators, operators that preserve KCL are called Kirchhoff current operators, and operators that preserve both are simply called Kirchhoff operators. These operators need not necessarily be linear for Tellegen's theorem to hold.\nThe set of currents can also be sampled at a different time from the set of potential differences, since KVL and KCL are true at all instants of time. Another extension is when the set of potential differences formula_3 is from one network and the set of currents formula_4 is from an entirely different network; so long as the two networks have the same topology (same incidence matrix), Tellegen's theorem remains true. This extension of Tellegen's theorem leads to many theorems relating to two-port networks.\nDefinitions.\nWe need to introduce a few necessary network definitions to provide a compact proof.\nIncidence matrix:\nThe formula_13 matrix formula_14 is called the node-to-branch incidence matrix, with the matrix elements formula_15 being\nA reference or datum node formula_17 is introduced to represent the environment and connected to all dynamic nodes and terminals.
The formula_18 matrix formula_19, where the row that contains the elements formula_20 of the reference node formula_21 is eliminated, is called the reduced incidence matrix.\nThe conservation laws (KCL) in vector-matrix form:\nThe uniqueness condition for the potentials (KVL) in vector-matrix form:\nwhere formula_24 are the absolute potentials at the nodes relative to the reference node formula_21.\nProof.\nUsing KVL:\nbecause formula_27 by KCL. So:\nApplications.\nNetwork analogs have been constructed for a wide variety of physical systems, and have proven extremely useful in analyzing their dynamic behavior. The classical application area for network theory and Tellegen's theorem is electrical circuit theory. It is mainly used to design filters in signal processing applications.\nA more recent application of Tellegen's theorem is in the area of chemical and biological processes. The assumptions for electrical circuits (Kirchhoff laws) are generalized for dynamic systems obeying the laws of irreversible thermodynamics. The topology and structure of reaction networks (reaction mechanisms, metabolic networks) can be analyzed using the Tellegen theorem.\nAnother application of Tellegen's theorem is to determine the stability and optimality of complex process systems such as chemical plants or oil production systems.
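The proof above can be checked numerically: pick arbitrary node potentials (so KVL holds by construction) and arbitrary branch currents that satisfy KCL, and the branch powers must sum to zero. A minimal sketch on an assumed 4-node, 5-branch topology (all numbers are illustrative; no element laws such as Ohm's law are used):

```python
# Numerical check of Tellegen's theorem on a small illustrative network.
# Branch voltages come from arbitrary node potentials, so KVL holds by
# construction; branch currents are chosen to satisfy KCL at every node.

branches = [(0, 1), (1, 2), (2, 0), (1, 3), (3, 2)]   # (from_node, to_node)
potentials = [0.0, 5.0, -2.0, 7.0]                    # arbitrary node potentials
currents = [2.5, 2.0, 2.5, 0.5, 0.5]                  # chosen to satisfy KCL

# Associated reference directions: v_k = e_from - e_to.
voltages = [potentials[a] - potentials[b] for a, b in branches]

# Verify KCL: the net current leaving each node is zero.
for node in range(len(potentials)):
    net = sum(i for (a, _), i in zip(branches, currents) if a == node) \
        - sum(i for (_, b), i in zip(branches, currents) if b == node)
    assert abs(net) < 1e-12

# Tellegen: the branch powers sum to zero regardless of the branch elements.
print(sum(v * i for v, i in zip(voltages, currents)))  # prints 0.0
```

Changing the potentials or the (KCL-consistent) currents leaves the sum at zero, which is exactly the generality the theorem claims.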
The Tellegen theorem can be formulated for process systems using process nodes, terminals, and flow connections, and by allowing sinks and sources for the production or destruction of extensive quantities.\nA formulation of Tellegen's theorem for process systems:\nwhere formula_30 are the production terms, formula_31 are the terminal connections, and formula_32 are the dynamic storage terms for the extensive variables.", "Automation-Control": 0.745385766, "Qwen2": "Yes"} {"id": "56843179", "revid": "11308236", "url": "https://en.wikipedia.org/wiki?curid=56843179", "title": "ApertusVR", "text": "ApertusVR is an embeddable, open-source (MIT), framework-independent, platform-independent, network-topology-independent, distributed, augmented reality/virtual reality/mixed reality engine.\nIt is written in C++, with JavaScript and HTTP REST API (in Node.js). ApertusVR creates a new abstraction layer over the hardware in order to integrate virtual and augmented reality technologies into any development or product.", "Automation-Control": 0.9819077253, "Qwen2": "Yes"} {"id": "14559354", "revid": "7903804", "url": "https://en.wikipedia.org/wiki?curid=14559354", "title": "Artstein's theorem", "text": "Artstein's theorem states that a nonlinear dynamical system in the control-affine form\nformula_1\nhas a differentiable control-Lyapunov function if and only if it admits a regular stabilizing feedback \"u\"(\"x\"), that is, a locally Lipschitz function on Rn\\{0}.\nThe original 1983 proof by Zvi Artstein proceeds by a nonconstructive argument. In 1989 Eduardo D. Sontag provided a constructive version of this theorem explicitly exhibiting the feedback.", "Automation-Control": 0.7947800159, "Qwen2": "Yes"} {"id": "14569194", "revid": "3827604", "url": "https://en.wikipedia.org/wiki?curid=14569194", "title": "Sprog (software)", "text": "Sprog is a graphical tool to build Perl programs by plugging parts (called \"gears\" in Sprog terminology) together.
Given that the available gears are mostly for reading and processing data, this program can be classified as an ETL (Extract-Transform-Load) tool.", "Automation-Control": 0.8532174826, "Qwen2": "Yes"} {"id": "43628228", "revid": "3016749", "url": "https://en.wikipedia.org/wiki?curid=43628228", "title": "LINDO", "text": "LINDO (Linear, Interactive, and Discrete Optimizer) is a software package for linear programming, integer programming, nonlinear programming, stochastic programming and global optimization.\nLindo also creates \"What'sBest!\", an add-in for linear, integer and nonlinear optimization, first released for Lotus 1-2-3 and later also for Microsoft Excel.", "Automation-Control": 0.994322598, "Qwen2": "Yes"} {"id": "38229281", "revid": "20841863", "url": "https://en.wikipedia.org/wiki?curid=38229281", "title": "Active valve lift system", "text": "The I-Active Valve Lift System (i stands for intelligence) or i-AVLS is a valvetrain technology implemented by Subaru in the 2.5L naturally aspirated SOHC engines to improve emissions, efficiency and performance. Note that AVLS is different from AVCS used on other Subaru engines. AVLS improves performance and efficiency by changing which camshaft is operating which of the two intake valves. The camshafts on all AVLS Subaru engines have specially designed lobes for intake valves. They feature two different cam profiles: a low/mid lift profile or a high lift profile. The two intake valves in each cylinder are operated by a rocker arm with its own cam lobe. The cam utilized is selected by the Engine Control Unit (ECU). To select different valve lift modes, oil pressure generated by the engine moves a pin which locks the two lobes together. At low engine speeds the low/mid lift profile increases the speed of air rushing into the engine, thereby increasing torque and efficiency. At higher engine speeds the high lift profile fully opens the intake valves, reducing resistance to incoming air and improving power.
AVLS only operates one of the intake valves in each cylinder as the other is always open to promote swirl.", "Automation-Control": 0.8439772129, "Qwen2": "Yes"} {"id": "49648894", "revid": "46059149", "url": "https://en.wikipedia.org/wiki?curid=49648894", "title": "Simulation-based optimization", "text": "Simulation-based optimization (also known as simply simulation optimization) integrates optimization techniques into simulation modeling and analysis. Because of the complexity of the simulation, the objective function may become difficult and expensive to evaluate. Usually, the underlying simulation model is stochastic, so that the objective function must be estimated using statistical estimation techniques (called output analysis in simulation methodology).\nOnce a system is mathematically modeled, computer-based simulations provide information about its behavior. Parametric simulation methods can be used to improve the performance of a system. In this method, the input of each variable is varied with other parameters remaining constant and the effect on the design objective is observed. This is a time-consuming method and improves the performance only partially. To obtain the optimal solution with minimum computation and time, the problem is solved iteratively, with the solution moving closer to the optimum at each iteration. Such methods are known as ‘numerical optimization’ or ‘simulation-based optimization’; the term ‘simulation-based multi-objective optimization’ is used when more than one objective is involved.\nIn a simulation experiment, the goal is to evaluate the effect of different values of the input variables on a system. However, the interest is sometimes in finding the optimal values of the input variables in terms of the system outcomes. One approach would be to run simulation experiments for all possible values of the input variables.
However, this approach is not always practical, since running an experiment for every scenario can be intractable. For example, there might be too many possible values for the input variables, or the simulation model might be too complicated and expensive to run for a large set of input variable values. In these cases, the goal is to iteratively find optimal values for the input variables rather than trying all possible values. This process is called simulation optimization.\nSpecific simulation-based optimization methods can be chosen according to Figure 1 based on the decision variable types.\nOptimization exists in two main branches of operations research:\n"Optimization parametric (static)" – The objective is to find the values of the parameters, which are “static” for all states, with the goal of maximizing or minimizing a function. In this case, one can use mathematical programming, such as linear programming. In this scenario, simulation helps when the parameters contain noise or the evaluation of the problem would demand excessive computer time, due to its complexity.\n"Optimization control (dynamic)" – This is used largely in computer science and electrical engineering. The optimal control is computed per state, and the results change across states. One can use mathematical programming, as well as dynamic programming. In this scenario, simulation can generate random samples and solve complex and large-scale problems.\nSimulation-based optimization methods.\nSome important approaches in simulation optimization are discussed below. \nStatistical ranking and selection methods (R/S).\nRanking and selection methods are designed for problems where the alternatives are fixed and known, and simulation is used to estimate the system performance.
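The ranking-and-selection idea just described, estimate each alternative's performance from replicated simulation runs and pick the best estimate, can be sketched as follows. The configuration names, true means, and noise model are illustrative assumptions, not from any real system:

```python
# Minimal ranking-and-selection sketch: three hypothetical system
# configurations with unknown true mean costs; each simulation run
# returns one noisy observation of the chosen configuration's cost.
import random

random.seed(0)
true_means = {"config_A": 10.0, "config_B": 8.5, "config_C": 9.2}  # hidden truth

def simulate(config):
    """One stochastic simulation run of the given configuration."""
    return true_means[config] + random.gauss(0.0, 1.0)

def select_best(configs, replications=200):
    """Average replicated runs per configuration and pick the lowest-cost one."""
    estimates = {c: sum(simulate(c) for _ in range(replications)) / replications
                 for c in configs}
    return min(estimates, key=estimates.get)

print(select_best(list(true_means)))  # selects the configuration with lowest estimated mean cost
```

With enough replications the estimation error shrinks, which is exactly the trade-off that indifference-zone and budget-allocation procedures manage more carefully.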
\nIn the simulation optimization setting, applicable methods include indifference zone approaches, optimal computing budget allocation, and knowledge gradient algorithms.\nResponse surface methodology (RSM).\nIn response surface methodology, the objective is to find the relationship between the input variables and the response variables. The process starts by trying to fit a linear regression model. If the linear model fits poorly (for example, a lack-of-fit test yields a low P-value), a higher-degree polynomial regression, which is usually quadratic, will be implemented. The process of finding a good relationship between input and response variables is carried out for each simulation test. In simulation optimization, the response surface method can be used to find the input variables that produce the desired outcomes in terms of the response variables.\nHeuristic methods.\nHeuristic methods trade accuracy for speed. Their goal is to find a good solution faster than traditional methods, when those are too slow or fail to solve the problem. Usually they find a local optimum instead of the global optimum; however, the values found are considered close enough to the final solution. Examples of these kinds of methods include tabu search and genetic algorithms.\nMetamodels enable researchers to obtain reliable approximate model outputs without running expensive and time-consuming computer simulations. Therefore, the process of model optimization can take less computation time and cost.\nStochastic approximation.\nStochastic approximation is used when the function cannot be computed directly, only estimated via noisy observations. In these scenarios, this method (or family of methods) looks for the extrema of this function. The objective function would be:\nDerivative-free optimization methods.\nDerivative-free optimization is a subject of mathematical optimization. This method is applied to a certain optimization problem when its derivatives are unavailable or unreliable.
Derivative-free methods establish a model based on sample function values or directly draw a sample set of function values without exploiting a detailed model. Since they use no derivatives, they cannot be directly compared with derivative-based methods.\nFor unconstrained optimization problems, it has the form:\nThe limitations of derivative-free optimization:\n1. Some methods cannot handle optimization problems with more than a few variables; the results are usually not very accurate. However, there are numerous practical cases where derivative-free methods have been successful in non-trivial simulation optimization problems that include randomness manifesting as \"noise\" in the objective function. See, for example, the following.\n2. When confronted with minimizing non-convex functions, these methods show their limitations.\n3. Derivative-free optimization methods are relatively simple and easy, but, like most optimization methods, some care is required in practical implementation (e.g., in choosing the algorithm parameters).\nDynamic programming and neuro-dynamic programming.\nDynamic programming.\nDynamic programming deals with situations where decisions are made in stages. The key to this kind of problem is to trade off the present and future costs.\nOne basic dynamic model has two features:\n1) It has a discrete-time dynamic system.\n2) The cost function is additive over time.\nFor discrete features, dynamic programming has the form:\nFor the cost function, it has the form:\nformula_14 is the cost at the end of the process.\nAs the cost cannot be optimized meaningfully, the expected value can be used instead:\nNeuro-dynamic programming.\nNeuro-dynamic programming is the same as dynamic programming except that the former has the concept of approximation architectures. It combines artificial intelligence, simulation-based algorithms, and function approximation techniques. “Neuro” in this term originates from the artificial intelligence community.
It means learning how to make improved decisions for the future via a built-in mechanism based on the current behavior. The most important part of neuro-dynamic programming is to build a trained neural network for the optimization problem.\nLimitations.\nSimulation-based optimization has some limitations, such as the difficulty of creating a model that imitates the dynamic behavior of a system in a way that is considered good enough for its representation. Another problem is the complexity of determining the uncontrollable parameters of both the real-world system and the simulation. Moreover, only a statistical estimate of the real values can be obtained. It is not easy to determine the objective function, since it is a result of measurements, and measurement errors can degrade the solutions.", "Automation-Control": 0.6707652807, "Qwen2": "Yes"} {"id": "49651909", "revid": "1461430", "url": "https://en.wikipedia.org/wiki?curid=49651909", "title": "Balanced clustering", "text": "Balanced clustering is a special case of clustering where, in the strictest sense, cluster sizes are constrained to formula_1 or formula_2, where formula_3 is the number of points and formula_4 is the number of clusters. A typical algorithm is balanced k-means, which minimizes mean square error (MSE). Another type of balanced clustering called balance-driven clustering has a two-objective cost function that minimizes both the imbalance and the MSE. Typical cost functions are ratio cut and Ncut. Balanced clustering can be used, for example, in scenarios where freight has to be delivered to formula_3 locations with formula_4 cars.
It is then preferred that each car delivers to an equal number of locations.\nSoftware.\nImplementations exist for balanced k-means and Ncut.", "Automation-Control": 0.6951589584, "Qwen2": "Yes"} {"id": "14469299", "revid": "31691822", "url": "https://en.wikipedia.org/wiki?curid=14469299", "title": "Autonomous convergence theorem", "text": "In mathematics, an autonomous convergence theorem is one of a family of related theorems which specify conditions guaranteeing global asymptotic stability of a continuous autonomous dynamical system.\nHistory.\nThe Markus–Yamabe conjecture was formulated as an attempt to give conditions for global stability of continuous dynamical systems in two dimensions. However, the Markus–Yamabe conjecture does not hold for dimensions higher than two, a problem which autonomous convergence theorems attempt to address. The first autonomous convergence theorem was constructed by Russell Smith. This theorem was later refined by Michael Li and James Muldowney.\nAn example autonomous convergence theorem.\nA comparatively simple autonomous convergence theorem is as follows:\nThis autonomous convergence theorem is very closely related to the Banach fixed-point theorem.\nHow autonomous convergence works.\nNote: this is an intuitive description of how autonomous convergence theorems guarantee stability, not a strictly mathematical description.\nThe key point in the example theorem given above is the existence of a negative logarithmic norm, which is derived from a vector norm. The vector norm effectively measures the distance between points in the vector space on which the differential equation is defined, and the negative logarithmic norm means that distances between points, as measured by the corresponding vector norm, are decreasing with time under the action of formula_5.
So long as the trajectories of all points in the phase space are bounded, all trajectories must therefore eventually converge to the same point.\nThe autonomous convergence theorems by Russell Smith, Michael Li and James Muldowney work in a similar manner, but they rely on showing that the area of two-dimensional shapes in phase space decreases with time. This means that no periodic orbits can exist, as all closed loops must shrink to a point. If the system is bounded, then according to Pugh's closing lemma there can be no chaotic behaviour either, so all trajectories must eventually reach an equilibrium.\nMichael Li has also developed an extended autonomous convergence theorem which is applicable to dynamical systems containing an invariant manifold.", "Automation-Control": 0.6924055219, "Qwen2": "Yes"} {"id": "41370976", "revid": "1167556094", "url": "https://en.wikipedia.org/wiki?curid=41370976", "title": "Kernel embedding of distributions", "text": "In machine learning, the kernel embedding of distributions (also called the kernel mean or mean map) comprises a class of nonparametric methods in which a probability distribution is represented as an element of a reproducing kernel Hilbert space (RKHS). A generalization of the individual data-point feature mapping done in classical kernel methods, the embedding of distributions into infinite-dimensional feature spaces can preserve all of the statistical features of arbitrary distributions, while allowing one to compare and manipulate distributions using Hilbert space operations such as inner products, distances, projections, linear transformations, and spectral analysis. This learning framework is very general and can be applied to distributions over any space formula_1 on which a sensible kernel function (measuring similarity between elements of formula_1) may be defined.
For example, various kernels have been proposed for learning from data that are vectors in formula_3, discrete classes/categories, strings, graphs/networks, images, time series, manifolds, dynamical systems, and other structured objects. The theory behind kernel embeddings of distributions has been primarily developed by Alex Smola, Le Song, Arthur Gretton, and Bernhard Schölkopf. A review of recent work on kernel embedding of distributions can be found in the literature.\nThe analysis of distributions is fundamental in machine learning and statistics, and many algorithms in these fields rely on information theoretic approaches such as entropy, mutual information, or Kullback–Leibler divergence. However, to estimate these quantities, one must first either perform density estimation, or employ sophisticated space-partitioning/bias-correction strategies which are typically infeasible for high-dimensional data. Commonly, methods for modeling complex distributions rely on parametric assumptions that may be unfounded or computationally challenging (e.g.
Gaussian mixture models), while nonparametric methods like kernel density estimation (Note: the smoothing kernels in this context have a different interpretation than the kernels discussed here) or characteristic function representation (via the Fourier transform of the distribution) break down in high-dimensional settings.\nMethods based on the kernel embedding of distributions sidestep these problems and also possess the following advantages: \nThus, learning via the kernel embedding of distributions offers a principled drop-in replacement for information theoretic approaches and is a framework which not only subsumes many popular methods in machine learning and statistics as special cases, but also can lead to entirely new learning algorithms.\nDefinitions.\nLet formula_4 denote a random variable with domain formula_5 and distribution formula_6 Given a kernel formula_7 on formula_8 the Moore–Aronszajn theorem asserts the existence of a RKHS formula_9 (a Hilbert space of functions formula_10 equipped with an inner product formula_11 and a norm formula_12) in which the element formula_13 satisfies the reproducing property \nOne may alternatively consider formula_13 an implicit feature mapping formula_16 from formula_5 to formula_18 (which is therefore also called the feature space), so that formula_19 can be viewed as a measure of similarity between points formula_20 While the similarity measure is linear in the feature space, it may be highly nonlinear in the original space depending on the choice of kernel.\nKernel embedding.\nThe kernel embedding of the distribution formula_21 in formula_18 (also called the kernel mean or mean map) is given by:\nIf formula_21 allows a square integrable density formula_25, then formula_26, where formula_27 is the Hilbert–Schmidt integral operator. A kernel is \"characteristic\" if the mean embedding formula_28 is injective. 
Each distribution can thus be uniquely represented in the RKHS and all statistical features of distributions are preserved by the kernel embedding if a characteristic kernel is used.\nEmpirical kernel embedding.\nGiven formula_29 training examples formula_30 drawn independently and identically distributed (i.i.d.) from formula_31 the kernel embedding of formula_21 can be empirically estimated as\nJoint distribution embedding.\nIf formula_34 denotes another random variable (for simplicity, assume the co-domain of formula_34 is also formula_5 with the same kernel formula_7 which satisfies formula_38), then the joint distribution formula_39 can be mapped into a tensor product feature space formula_40 via \nBy the equivalence between a tensor and a linear map, this joint embedding may be interpreted as an uncentered cross-covariance operator formula_42 from which the cross-covariance of functions formula_43 can be computed as \nGiven formula_29 pairs of training examples formula_46 drawn i.i.d. from formula_21, we can also empirically estimate the joint distribution kernel embedding via\nConditional distribution embedding.\nGiven a conditional distribution formula_49 one can define the corresponding RKHS embedding as \nNote that the embedding of formula_51 thus defines a family of points in the RKHS indexed by the values formula_52 taken by conditioning variable formula_4. By fixing formula_4 to a particular value, we obtain a single element in formula_9, and thus it is natural to define the operator\nwhich given the feature mapping of formula_52 outputs the conditional embedding of formula_34 given formula_59 Assuming that for all formula_60 it can be shown that \nThis assumption is always true for finite domains with characteristic kernels, but may not necessarily hold for continuous domains. 
Nevertheless, even in cases where the assumption fails, formula_62 may still be used to approximate the conditional kernel embedding formula_63 and in practice, the inversion operator is replaced with a regularized version of itself formula_64 (where formula_65 denotes the identity matrix).\nGiven training examples formula_66 the empirical kernel conditional embedding operator may be estimated as \nwhere formula_68 are implicitly formed feature matrices, formula_69 is the Gram matrix for samples of formula_4, and formula_71 is a regularization parameter needed to avoid overfitting.\nThus, the empirical estimate of the kernel conditional embedding is given by a weighted sum of samples of formula_34 in the feature space:\nwhere formula_74 and formula_75\nRules of probability as operations in the RKHS.\nThis section illustrates how basic probabilistic rules may be reformulated as (multi)linear algebraic operations in the kernel embedding framework and is primarily based on the work of Song et al. 
The following notation is adopted: \nIn practice, all embeddings are empirically estimated from data formula_132 and it is assumed that a set of samples formula_133 may be used to estimate the kernel embedding of the prior distribution formula_134.\nKernel sum rule.\nIn probability theory, the marginal distribution of formula_4 can be computed by integrating out formula_125 from the joint density (including the prior distribution on formula_34)\nThe analog of this rule in the kernel embedding framework states that formula_139, the RKHS embedding of formula_140, can be computed via\nwhere formula_142 is the kernel embedding of formula_143 In practical implementations, the kernel sum rule takes the following form\nwhere \nis the empirical kernel embedding of the prior distribution, formula_146 formula_147, and formula_148 are Gram matrices with entries formula_149 respectively.\nKernel chain rule.\nIn probability theory, a joint distribution can be factorized into a product between conditional and marginal distributions \nThe analog of this rule in the kernel embedding framework states that formula_151, the joint embedding of formula_152, can be factorized as a composition of the conditional embedding operator with the auto-covariance operator associated with formula_153\nwhere \nIn practical implementations, the kernel chain rule takes the following form\nKernel Bayes' rule.\nIn probability theory, a posterior distribution can be expressed in terms of a prior distribution and a likelihood function as \nThe analog of this rule in the kernel embedding framework expresses the kernel embedding of the conditional distribution in terms of conditional embedding operators which are modified by the prior distribution\nwhere from the chain rule: \nIn practical implementations, the kernel Bayes' rule takes the following form\nwhere \nTwo regularization parameters are used in this framework: formula_164 for the estimation of formula_165 and formula_166 for the estimation of the final
conditional embedding operator. \nThe latter regularization is done on the square of formula_168 because formula_169 may not be positive definite.\nApplications.\nMeasuring distance between distributions.\nThe maximum mean discrepancy (MMD) is a distance measure between distributions formula_170 and formula_171, which is defined as the squared distance between their embeddings in the RKHS \nWhile most distance measures between distributions, such as the widely used Kullback–Leibler divergence, either require density estimation (either parametrically or nonparametrically) or space partitioning/bias correction strategies, the MMD is easily estimated as an empirical mean which is concentrated around the true value of the MMD. The characterization of this distance as the \"maximum mean discrepancy\" refers to the fact that computing the MMD is equivalent to finding the RKHS function that maximizes the difference in expectations between the two probability distributions \nKernel two-sample test.\nGiven \"n\" training examples from formula_170 and \"m\" samples from formula_171, one can formulate a test statistic based on the empirical estimate of the MMD\nto obtain a two-sample test of the null hypothesis that both samples stem from the same distribution (i.e. formula_177) against the broad alternative formula_178.\nDensity estimation via kernel embeddings.\nAlthough learning algorithms in the kernel embedding framework circumvent the need for intermediate density estimation, one may nonetheless use the empirical embedding to perform density estimation based on \"n\" samples drawn from an underlying distribution formula_179. This can be done by solving the following optimization problem \nwhere the maximization is done over the entire space of distributions on formula_182 Here, formula_183 is the kernel embedding of the proposed density formula_184 and formula_185 is an entropy-like quantity (e.g. entropy, KL divergence, Bregman divergence).
The distribution which solves this optimization may be interpreted as a compromise between fitting the empirical kernel means of the samples well, while still allocating a substantial portion of the probability mass to all regions of the probability space (much of which may not be represented in the training examples). In practice, a good approximate solution of the difficult optimization may be found by restricting the space of candidate densities to a mixture of \"M\" candidate distributions with regularized mixing proportions. Connections between the ideas underlying Gaussian processes and conditional random fields may be drawn with the estimation of conditional probability distributions in this fashion, if one views the feature mappings associated with the kernel as sufficient statistics in generalized (possibly infinite-dimensional) exponential families.\nMeasuring dependence of random variables.\nA measure of the statistical dependence between random variables formula_4 and formula_34 (from any domains on which sensible kernels can be defined) can be formulated based on the Hilbert–Schmidt Independence Criterion \nand can be used as a principled replacement for mutual information, Pearson correlation or any other dependence measure used in learning algorithms. Most notably, HSIC can detect arbitrary dependencies (when a characteristic kernel is used in the embeddings, HSIC is zero if and only if the variables are independent), and can be used to measure dependence between different types of data (e.g. images and text captions). Given \"n\" i.i.d. samples of each random variable, a simple parameter-free unbiased estimator of HSIC which exhibits concentration about the true value can be computed in formula_189 time, where the Gram matrices of the two datasets are approximated using formula_190 with formula_191. 
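The HSIC computation described above can be sketched with the standard biased empirical estimator tr(KHLH)/n² (a simpler variant of the unbiased estimator mentioned in the text); the Gaussian kernel, bandwidth, and data are illustrative assumptions:

```python
# Sketch of the biased empirical HSIC estimator tr(K H L H) / n^2 with
# Gaussian kernels on scalar data; bandwidths and sample sizes are illustrative.
import numpy as np

def gaussian_gram(x, sigma=1.0):
    """Gram matrix K[i, j] = exp(-(x_i - x_j)^2 / (2 sigma^2)) for 1-d data."""
    d2 = (x[:, None] - x[None, :]) ** 2
    return np.exp(-d2 / (2.0 * sigma ** 2))

def hsic(x, y, sigma=1.0):
    """Biased empirical HSIC between two scalar samples of equal length."""
    n = len(x)
    K, L = gaussian_gram(x, sigma), gaussian_gram(y, sigma)
    H = np.eye(n) - np.ones((n, n)) / n  # centering matrix
    return np.trace(K @ H @ L @ H) / n ** 2

rng = np.random.default_rng(0)
x = rng.normal(size=500)
print(hsic(x, 2 * x + 0.1 * rng.normal(size=500)))  # dependent: clearly above zero
print(hsic(x, rng.normal(size=500)))                # independent: near zero
```

The dependent pair yields a markedly larger value than the independent pair, mirroring the property that (with a characteristic kernel) HSIC vanishes only under independence.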
The desirable properties of HSIC have led to the formulation of numerous algorithms which utilize this dependence measure for a variety of common machine learning tasks, such as feature selection (BAHSIC), clustering (CLUHSIC), and dimensionality reduction (MUHSIC).\nHSIC can be extended to measure the dependence of multiple random variables. The question of when HSIC captures independence in the case of more than two variables has recently been studied.\nKernel belief propagation.\nBelief propagation is a fundamental algorithm for inference in graphical models in which nodes repeatedly pass and receive messages corresponding to the evaluation of conditional expectations. In the kernel embedding framework, the messages may be represented as RKHS functions, and the conditional distribution embeddings can be applied to efficiently compute message updates. Given \"n\" samples of random variables represented by nodes in a Markov random field, the incoming message to node \"t\" from node \"u\" can be expressed as \nif it is assumed to lie in the RKHS. The kernel belief propagation update message from node \"t\" to node \"s\" is then given by \nwhere formula_195 denotes the element-wise vector product, formula_196 is the set of nodes connected to \"t\" excluding node \"s\", formula_197, formula_198 are the Gram matrices of the samples from variables formula_199, respectively, and formula_200 is the feature matrix for the samples from formula_201.\nThus, if the incoming messages to node \"t\" are linear combinations of feature mapped samples from formula_202, then the outgoing message from this node is also a linear combination of feature mapped samples from formula_203.
This RKHS function representation of message-passing updates therefore produces an efficient belief propagation algorithm in which the potentials are nonparametric functions inferred from the data so that arbitrary statistical relationships may be modeled.\nNonparametric filtering in hidden Markov models.\nIn the hidden Markov model (HMM), two key quantities of interest are the transition probabilities between hidden states formula_204 and the emission probabilities formula_205 for observations. Using the kernel conditional distribution embedding framework, these quantities may be expressed in terms of samples from the HMM. A serious limitation of the embedding methods in this domain is the need for training samples containing hidden states, as otherwise inference with arbitrary distributions in the HMM is not possible.\nOne common use of HMMs is filtering in which the goal is to estimate posterior distribution over the hidden state formula_206 at time step \"t\" given a history of previous observations formula_207 from the system. In filtering, a belief state formula_208 is recursively maintained via a prediction step (where updates formula_209 are computed by marginalizing out the previous hidden state) followed by a conditioning step (where updates formula_210 are computed by applying Bayes' rule to condition on a new observation). The RKHS embedding of the belief state at time \"t+1\" can be recursively expressed as \nby computing the embeddings of the prediction step via the kernel sum rule and the embedding of the conditioning step via kernel Bayes' rule. 
Assuming a training sample formula_212 is given, one can in practice estimate \nand filtering with kernel embeddings is thus implemented recursively using the following updates for the weights formula_214 \nwhere formula_217 denote the Gram matrices of formula_218 and formula_219 respectively, formula_220 is a transfer Gram matrix defined as formula_221 and formula_222\nSupport measure machines.\nThe support measure machine (SMM) is a generalization of the support vector machine (SVM) in which the training examples are probability distributions paired with labels formula_223. SMMs solve the standard SVM dual optimization problem using the following expected kernel\nwhich is computable in closed form for many common specific distributions formula_225 (such as the Gaussian distribution) combined with popular embedding kernels formula_7 (e.g. the Gaussian kernel or polynomial kernel), or can be accurately empirically estimated from i.i.d. samples formula_227 via\nUnder certain choices of the embedding kernel formula_7, the SMM applied to training examples formula_230 is equivalent to a SVM trained on samples formula_231, and thus the SMM can be viewed as a \"flexible\" SVM in which a different data-dependent kernel (specified by the assumed form of the distribution formula_225) may be placed on each training point.\nDomain adaptation under covariate, target, and conditional shift.\nThe goal of domain adaptation is the formulation of learning algorithms which generalize well when the training and test data have different distributions. 
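The expected kernel that the support measure machine optimizes over can be estimated from i.i.d. samples of each distribution by averaging the base kernel over all pairs, as in the empirical estimate above. A minimal sketch (the Gaussian base kernel and the toy bags are illustrative assumptions):

```python
import numpy as np

def gaussian_k(a, b, sigma=1.0):
    """Base embedding kernel k(a, b) = exp(-||a - b||^2 / (2 sigma^2))."""
    return float(np.exp(-np.sum((a - b) ** 2) / (2 * sigma ** 2)))

def expected_kernel(samples_p, samples_q, sigma=1.0):
    """Empirical estimate of E_{x~P, y~Q}[k(x, y)]: average over all sample pairs."""
    return float(np.mean([gaussian_k(x, y, sigma)
                          for x in samples_p for y in samples_q]))

# Gram matrix over "training distributions", each represented only by samples;
# this is the kernel an SMM would hand to a standard SVM solver
rng = np.random.default_rng(4)
bags = [rng.normal(m, 0.3, (25, 1)) for m in (0.0, 0.1, 3.0)]
K = np.array([[expected_kernel(p, q) for q in bags] for p in bags])
# nearby distributions (bags 0 and 1) yield a larger expected-kernel value
# than distant ones (bags 0 and 2)
```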
Given training examples formula_233 and a test set formula_234 where the formula_235 are unknown, three types of differences are commonly assumed between the distribution of the training examples formula_236 and the test distribution formula_237:\nBy utilizing the kernel embedding of marginal and conditional distributions, practical approaches to deal with the presence of these types of differences between training and test domains can be formulated. Covariate shift may be accounted for by reweighting examples via estimates of the ratio formula_244 obtained directly from the kernel embeddings of the marginal distributions of formula_4 in each domain without any need for explicit estimation of the distributions. Target shift, which cannot be similarly dealt with since no samples from formula_34 are available in the test domain, is accounted for by weighting training examples using the vector formula_247 which solves the following optimization problem (where in practice, empirical approximations must be used) \nTo deal with location scale conditional shift, one can perform a LS transformation of the training points to obtain new transformed training data formula_250 (where formula_195 denotes the element-wise vector product). To ensure similar distributions between the new transformed training samples and the test data, formula_252 are estimated by minimizing the following empirical kernel embedding distance \nIn general, the kernel embedding methods for dealing with LS conditional shift and target shift may be combined to find a reweighted transformation of the training data which mimics the test distribution, and these methods may perform well even in the presence of conditional shifts other than location-scale changes.\nDomain generalization via invariant feature representation.\nGiven \"N\" sets of training examples sampled i.i.d. 
from distributions formula_254, the goal of domain generalization is to formulate learning algorithms which perform well on test examples sampled from a previously unseen domain formula_255 where no data from the test domain is available at training time. If conditional distributions formula_256 are assumed to be relatively similar across all domains, then a learner capable of domain generalization must estimate a functional relationship between the variables which is robust to changes in the marginals formula_170. Based on kernel embeddings of these distributions, Domain Invariant Component Analysis (DICA) is a method which determines the transformation of the training data that minimizes the difference between marginal distributions while preserving a common conditional distribution shared between all training domains. DICA thus extracts \"invariants\", features that transfer across domains, and may be viewed as a generalization of many popular dimension-reduction methods such as kernel principal component analysis, transfer component analysis, and covariance operator inverse regression. \nDefining a probability distribution formula_258 on the RKHS formula_9 with \nDICA measures dissimilarity between domains via distributional variance which is computed as \nwhere \nso formula_263 is a formula_264 Gram matrix over the distributions from which the training data are sampled. Finding an orthogonal transform onto a low-dimensional subspace \"B\" (in the feature space) which minimizes the distributional variance, DICA simultaneously ensures that \"B\" aligns with the bases of a central subspace \"C\" for which formula_34 becomes independent of formula_4 given formula_267 across all domains. 
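The distributional variance that DICA minimizes can be approximated from samples by building the Gram matrix of inner products between the domains' empirical mean embeddings. A minimal sketch (the Gaussian kernel and the trace-of-centered-Gram form are illustrative assumptions, not the full DICA algorithm):

```python
import numpy as np

def gaussian_kernel(A, B, sigma=1.0):
    sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-sq / (2 * sigma**2))

def distributional_variance(domains, sigma=1.0):
    """Variance of the domains' empirical mean embeddings in the RKHS.

    G[i, j] = <mu_i, mu_j> is estimated by averaging k(x, y) over samples
    from domains i and j; the variance is tr(G H) / N with centering H,
    i.e. the average squared distance of each embedding from their mean.
    """
    N = len(domains)
    G = np.array([[gaussian_kernel(Xi, Xj, sigma).mean() for Xj in domains]
                  for Xi in domains])
    H = np.eye(N) - np.ones((N, N)) / N
    return np.trace(G @ H) / N

rng = np.random.default_rng(2)
similar = [rng.normal(0, 1, (100, 2)) for _ in range(3)]
shifted = [rng.normal(m, 1, (100, 2)) for m in (0.0, 2.0, 4.0)]
# domains with shifted marginals exhibit a larger distributional variance
```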
In the absence of target values formula_34, an unsupervised version of DICA may be formulated which finds a low-dimensional subspace that minimizes distributional variance while simultaneously maximizing the variance of formula_4 (in the feature space) across all domains (rather than preserving a central subspace).\nDistribution regression.\nIn distribution regression, the goal is to regress from probability distributions to reals (or vectors). Many important machine learning and statistical tasks fit into this framework, including multi-instance learning, and point estimation problems without analytical solution (such as hyperparameter or entropy estimation). In practice only samples from sampled distributions are observable, and the estimates have to rely on similarities computed between \"sets of points\". Distribution regression has been successfully applied for example in supervised entropy learning, and aerosol prediction using multispectral satellite images.\nGiven formula_270 training data, where the formula_271 bag contains samples from a probability distribution formula_272 and the formula_273 output label is formula_274, one can tackle the distribution regression task by taking the embeddings of the distributions, and learning the regressor from the embeddings to the outputs. In other words, one can consider the following kernel ridge regression problem formula_275\nwhere \nwith a formula_7 kernel on the domain of formula_272-s formula_280, formula_281 is a kernel on the embedded distributions, and formula_282 is the RKHS determined by formula_281. Examples for formula_281 include the linear kernel formula_285, the Gaussian kernel formula_286, the exponential kernel formula_287, the Cauchy kernel formula_288, the generalized t-student kernel formula_289, or the inverse multiquadrics kernel formula_290.\nThe prediction on a new distribution formula_291 takes the simple, analytical form\nwhere formula_293, formula_294, formula_295, formula_296. 
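The two-stage procedure above, embedding each bag by its empirical mean embedding and then running kernel ridge regression on the embeddings, can be sketched as follows. The linear kernel on embeddings and the Gaussian base kernel are illustrative choices, and the toy task (predicting the mean of the distribution each bag was drawn from) is an assumption for demonstration:

```python
import numpy as np

def gkernel(A, B, sigma=1.0):
    sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-sq / (2 * sigma**2))

def embed_gram(bags_a, bags_b, sigma=1.0):
    """Linear kernel between empirical mean embeddings: G[i, j] = <mu_i, mu_j>."""
    return np.array([[gkernel(X, Z, sigma).mean() for Z in bags_b]
                     for X in bags_a])

def fit_distribution_regressor(bags, y, lam=1e-3, sigma=1.0):
    """Kernel ridge regression on the embedded bags: alpha = (G + lam I)^-1 y."""
    G = embed_gram(bags, bags, sigma)
    return np.linalg.solve(G + lam * np.eye(len(bags)), y)

def predict(alpha, train_bags, test_bags, sigma=1.0):
    return embed_gram(test_bags, train_bags, sigma) @ alpha

# toy task: regress from a bag of samples to the mean of its source Gaussian
rng = np.random.default_rng(3)
means = rng.uniform(-2, 2, 40)
bags = [rng.normal(m, 0.5, (60, 1)) for m in means]
alpha = fit_distribution_regressor(bags, means)
test_means = np.array([-1.0, 0.5, 1.5])
test_bags = [rng.normal(m, 0.5, (60, 1)) for m in test_means]
pred = predict(alpha, bags, test_bags)
```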
Under mild regularity conditions this estimator can be shown to be consistent, and it can achieve the one-stage sampled (as if one had access to the true formula_272-s) minimax optimal rate. In the formula_298 objective function, the formula_299-s are real numbers; the results can also be extended to the case when the formula_299-s are formula_301-dimensional vectors, or more generally elements of a separable Hilbert space, using operator-valued formula_281 kernels.\nExample.\nIn this simple example, which is taken from Song et al., formula_303 are assumed to be discrete random variables which take values in the set formula_304, and the kernel is chosen to be the Kronecker delta function, so formula_305. The feature map corresponding to this kernel is the standard basis vector formula_306. The kernel embeddings of such distributions are thus vectors of marginal probabilities, while the embeddings of joint distributions in this setting are formula_307 matrices specifying joint probability tables, and the explicit form of these embeddings is\nThe conditional distribution embedding operator, \nis in this setting a conditional probability table\nand \nThus, the embeddings of the conditional distribution under a fixed value of formula_4 may be computed as\nIn this discrete-valued setting with the Kronecker delta kernel, the kernel sum rule becomes\nThe kernel chain rule in this case is given by", "Automation-Control": 0.7838815451, "Qwen2": "Yes"} {"id": "52316686", "revid": "44813618", "url": "https://en.wikipedia.org/wiki?curid=52316686", "title": "Best worst method", "text": "Best Worst Method (BWM) is a multi-criteria decision-making (MCDM) method proposed by Jafar Rezaei in 2015. The method is used to evaluate a set of alternatives with respect to a set of decision criteria. The BWM is based on pairwise comparisons of the decision criteria.
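The discrete example described above can be worked numerically: with the Kronecker delta kernel, embeddings are probability vectors and tables, the conditional embedding operator is the conditional probability table, and the kernel sum rule reduces to a matrix-vector product. A minimal sketch (the particular 2-state joint table is an illustrative assumption):

```python
import numpy as np

# joint probability table for discrete X, Y taking values in {0, 1}
P_xy = np.array([[0.30, 0.10],   # rows index x, columns index y
                 [0.20, 0.40]])
P_x = P_xy.sum(axis=1)           # marginal embedding of X: vector of P(x)
C_xx = np.diag(P_x)              # auto-covariance operator is diagonal here
C_yx = P_xy.T                    # joint embedding as a table, Y indexed first

# conditional embedding operator = conditional probability table P(Y | X)
C_y_given_x = C_yx @ np.linalg.inv(C_xx)

# kernel sum rule: the Y-marginal embedding induced by a new X distribution
pi_x = np.array([0.5, 0.5])
mu_y = C_y_given_x @ pi_x
```

Each column of `C_y_given_x` sums to one (it is P(Y | X = x)), and `mu_y` is again a valid probability vector, mirroring the operator identities in the text.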
That is, after the decision criteria have been identified by the decision-maker (DM), the DM selects two criteria: the best criterion and the worst criterion. The best criterion is the one that has the most important role in making the decision, while the worst criterion has the opposite role. The DM then gives his/her preferences of the best criterion over all the other criteria, and also his/her preferences of all the criteria over the worst criterion, using a number from a predefined scale (e.g. 1 to 9). These two sets of pairwise comparisons are used as input for an optimization problem whose optimal results are the weights of the criteria. The salient feature of the BWM is that it uses a structured way to generate pairwise comparisons, which leads to reliable results.", "Automation-Control": 0.9959770441, "Qwen2": "Yes"} {"id": "37184734", "revid": "754619", "url": "https://en.wikipedia.org/wiki?curid=37184734", "title": "Dick Volz Award", "text": "The Dick Volz Best US PhD Thesis in Robotics and Automation is a yearly award recognizing an outstanding Ph.D. thesis in the field of robotics and automation at any research institution in the United States of America. It is awarded with a four-year delay, as it is based both on thesis quality and on post-graduation impact; hence, the 2007 Dick Volz Best US PhD Thesis Award was awarded in 2011. Its European counterpart is the Georges Giralt PhD Award.\nThe award is named after Professor Emeritus Richard A. Volz, a former head of the Texas A&M Department of Computer Science and president of the IEEE Robotics and Automation Society in 2006-2007. The award was established to honor his outstanding research on robotics and control as well as his mentoring.\nThe list of award recipients includes the following:\nThe award committee includes Seth A. Hutchinson (University of Illinois Urbana-Champaign), John M.
Hollerbach (University of Utah), Vijay Kumar (University of Pennsylvania), Gaurav Sukhatme (University of Southern California), and Henrik I. Christensen (Georgia Tech).", "Automation-Control": 0.9463720322, "Qwen2": "Yes"} {"id": "36732828", "revid": "42522270", "url": "https://en.wikipedia.org/wiki?curid=36732828", "title": "Flat pseudospectral method", "text": "The flat pseudospectral method is part of the family of the Ross–Fahroo pseudospectral methods introduced by Ross and Fahroo. The method combines the concept of differential flatness with pseudospectral optimal control to generate outputs in the so-called flat space. \nConcept.\nBecause the differentiation matrix, formula_1, in a pseudospectral method is square, higher-order derivatives of any polynomial, formula_2, can be obtained by powers of formula_1,\nwhere formula_5 is the pseudospectral variable and formula_6 is a finite positive integer. \nBy differential flatness, there exist functions formula_7 and formula_8 such that the state and control variables can be written as,\nThe combination of these concepts generates the flat pseudospectral method; that is, x and u are written as,\nThus, an optimal control problem can be quickly and easily transformed to a problem with just the Y pseudospectral variable.", "Automation-Control": 0.7099113464, "Qwen2": "Yes"} {"id": "36764607", "revid": "7903804", "url": "https://en.wikipedia.org/wiki?curid=36764607", "title": "Positive systems", "text": "Positive systems constitute a class of systems whose state variables are never negative, given a positive initial state. These systems appear frequently in practical applications, as their state variables represent physical quantities with positive sign (levels, heights, concentrations, etc.).\nThe fact that a system is positive has important implications for control system design.
For instance, an asymptotically stable positive linear time-invariant system always admits a diagonal quadratic Lyapunov function, which makes these systems more numerically tractable in the context of Lyapunov analysis.\nIt is also important to take this positivity into account for state observer design, as standard observers (for example Luenberger observers) might give meaningless negative values.\nConditions for positivity.\nA continuous-time linear system formula_1 is positive if and only if A is a Metzler matrix.\nA discrete-time linear system formula_2 is positive if and only if A is a nonnegative matrix.", "Automation-Control": 0.9996020198, "Qwen2": "Yes"} {"id": "6223779", "revid": "42816086", "url": "https://en.wikipedia.org/wiki?curid=6223779", "title": "Ignition switch", "text": "An ignition switch, starter switch or start switch is a switch in the control system of a motor vehicle that activates the main electrical systems for the vehicle, including \"accessories\" (radio, power windows, etc.). In vehicles powered by internal combustion engines, the switch provides power to the starter solenoid and the ignition system components (including the engine control unit and ignition coil), and is frequently combined with the starter switch which activates the starter motor.\nHistorically, ignition switches were key switches that require the proper key to be inserted to unlock the switch functions. These mechanical switches remain common in modern vehicles, further combined with an immobiliser that only activates the switch functions when a transponder signal in the key is detected.
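The positivity conditions above are simple entrywise checks on the system matrix A; a minimal sketch:

```python
import numpy as np

def is_metzler(A, tol=0.0):
    """Continuous-time positivity: all off-diagonal entries of A nonnegative."""
    off_diagonal = A - np.diag(np.diag(A))
    return bool(np.all(off_diagonal >= -tol))

def is_nonnegative(A, tol=0.0):
    """Discrete-time positivity: every entry of A nonnegative."""
    return bool(np.all(A >= -tol))

A_ct = np.array([[-2.0, 1.0],
                 [0.5, -3.0]])   # Metzler: diagonal may be negative
A_dt = np.array([[0.5, 0.2],
                 [0.1, 0.7]])    # nonnegative matrix
```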
However, many new vehicles have been equipped with so-called \"keyless\" systems, which replace the key switch with a push button that also requires a transponder signal.\nThe ignition locking system may sometimes be bypassed by disconnecting the wiring to the switch and manipulating it directly; this is known as hotwiring.\nReplacing an ignition switch is generally a simple repair that can be completed without much specialist knowledge, as the switches are mainly vehicle-specific and plug-and-play.", "Automation-Control": 0.7963494062, "Qwen2": "Yes"} {"id": "11151120", "revid": "13079754", "url": "https://en.wikipedia.org/wiki?curid=11151120", "title": "Impulse invariance", "text": "Impulse invariance is a technique for designing discrete-time infinite-impulse-response (IIR) filters from continuous-time filters in which the impulse response of the continuous-time system is sampled to produce the impulse response of the discrete-time system. The frequency response of the discrete-time system will be a sum of shifted copies of the frequency response of the continuous-time system; if the continuous-time system is approximately band-limited to a frequency less than the Nyquist frequency of the sampling, then the frequency response of the discrete-time system will be approximately equal to it for frequencies below the Nyquist frequency.\nDiscussion.\nThe continuous-time system's impulse response, formula_1, is sampled with sampling period formula_2 to produce the discrete-time system's impulse response, formula_3.\nThus, the frequency responses of the two systems are related by\nIf the continuous-time filter is approximately band-limited (i.e.
formula_6 when formula_7), then the frequency response of the discrete-time system will be approximately the continuous-time system's frequency response for frequencies below π radians per sample (below the Nyquist frequency 1/(2\"T\") Hz):\nComparison to the bilinear transform.\nNote that aliasing will occur, including aliasing below the Nyquist frequency to the extent that the continuous-time filter's response is nonzero above that frequency. The bilinear transform is an alternative to impulse invariance that uses a different mapping: it maps the continuous-time system's frequency response, out to infinite frequency, into the range of frequencies up to the Nyquist frequency in the discrete-time case, as opposed to mapping frequencies linearly with circular overlap as impulse invariance does.\nEffect on poles in system function.\nIf the continuous-time system function has poles at formula_10, the system function can be written in partial fraction expansion as\nThus, using the inverse Laplace transform, the impulse response is\nThe corresponding discrete-time system's impulse response is then defined as the following\nPerforming a z-transform on the discrete-time impulse response produces the following discrete-time system function\nThus the poles of the continuous-time system function are translated to poles at z = exp(\"skT\"). The zeros, if any, are not so simply mapped.\nPoles and zeros.\nIf the system function has zeros as well as poles, they can be mapped the same way, but the result is no longer an impulse invariance result: the discrete-time impulse response is not equal simply to samples of the continuous-time impulse response.
This method is known as the matched Z-transform method, or pole–zero mapping.\nStability and causality.\nSince poles in the continuous-time system at \"s\" = \"sk\" transform to poles in the discrete-time system at z = exp(\"skT\"), poles in the left half of the \"s\"-plane map to inside the unit circle in the \"z\"-plane; so if the continuous-time filter is causal and stable, then the discrete-time filter will be causal and stable as well.\nCorrected formula.\nWhen a causal continuous-time impulse response has a discontinuity at formula_16, the expressions above are not consistent.\nThis is because formula_17 has different right and left limits, and should really only contribute their average, half its right value formula_18, to formula_19.\nMaking this correction gives\nPerforming a z-transform on the discrete-time impulse response produces the following discrete-time system function\nThe second sum is zero for filters without a discontinuity, which is why ignoring it is often safe.", "Automation-Control": 0.985850811, "Qwen2": "Yes"} {"id": "13002240", "revid": "6603820", "url": "https://en.wikipedia.org/wiki?curid=13002240", "title": "DataAdapter", "text": "In ADO.NET, a DataAdapter functions as a bridge between a data source, and a disconnected data class, such as a DataSet. At the simplest level it will specify SQL commands that provide elementary CRUD functionality. At a more advanced level it offers all the functions required in order to create Strongly Typed DataSets, including DataRelations. Data adapters are an integral part of ADO.NET managed providers, which are the set of objects used to communicate between a data source and a dataset. (In addition to adapters, managed providers include connection objects, data reader objects, and command objects.) Adapters are used to exchange data between a data source and a dataset. 
In many applications, this means reading data from a database into a dataset, and then writing changed data from the dataset back to the database. However, a data adapter can move data between any source and a dataset. For example, there could be an adapter that moves data between a Microsoft Exchange server and a dataset.\nSometimes the data you work with is primarily read-only and you rarely need to make changes to the underlying data source. Some situations also call for caching data in memory to minimize the number of database calls for data that does not change. The data adapter makes it easy for you to accomplish these things by helping to manage data in a disconnected mode. The data adapter fills a DataSet object when reading the data and writes in a single batch when persisting changes back to the database. A data adapter contains a reference to the connection object and opens and closes the connection automatically when reading from or writing to the database. Additionally, the data adapter contains command object references for SELECT, INSERT, UPDATE, and DELETE operations on the data. You will have a data adapter defined for each table in a DataSet and it will take care of all communication with the database for you. All you need to do is tell the data adapter when to load from or write to the database.", "Automation-Control": 0.731292963, "Qwen2": "Yes"} {"id": "13004865", "revid": "7903804", "url": "https://en.wikipedia.org/wiki?curid=13004865", "title": "Profile angle", "text": "The profile angle of a gear is the angle at a specified pitch point between a line tangent to a tooth surface and the line normal to the pitch surface (which is a radial line of a pitch circle). This definition is applicable to every type of gear for which a pitch surface can be defined.
The profile angle gives the direction of the tangent to a tooth profile.\nIn spur gears and straight bevel gears, tooth profiles are considered only in a transverse plane, and the general terms profile angle and pressure angle are customarily used rather than transverse profile angle and transverse pressure angle. In helical teeth, the profiles may be considered in different planes, and in specifications it is essential to use terms that indicate the direction of the plane in which the profile angle or the pressure angle lies, such as transverse profile angle, normal pressure angle, axial profile angle.\nTypes.\nStandard.\nIn tools for cutting, grinding, and gaging gear teeth, the profile angle is the angle between a cutting edge or a cutting surface, and some principal direction such as that of a shank, an axis, or a plane of rotation.\nStandard profile angles are established in connection with standard proportions of gear teeth and standard gear cutting tools. Involute gears operate together correctly after a change of center distance, and gears designed for a different center distance can be generated correctly by standard tools. A change of center distance is accomplished by changes in operating values for pitch diameter, circular pitch, diametral pitch, pressure angle, and tooth thicknesses or backlash. The same involute gear may be used under conditions that change its operating pitch diameter and pressure angle. Unless there is a good reason for doing otherwise, it is practical to consider that the pitch and the profile angle of a single gear correspond to the pitch and the profile angle of the hob or cutter used to generate its teeth.\nTransverse.\nThe transverse pressure angle and transverse profile angle are the pressure angle and the profile angle in a transverse plane.\nNormal.\nNormal pressure angle and normal profile angle are the pressure and profile angles in a normal plane of a helical or a spiral tooth. 
In a spiral bevel gear, unless otherwise specified, profile angle means normal profile angle at the mean cone distance.\nAxial.\nAxial pressure angle and axial profile angle are the pressure angle and the profile angle in an axial plane of a helical gear or a worm, or of a spiral bevel gear.", "Automation-Control": 0.8297654986, "Qwen2": "Yes"} {"id": "5559076", "revid": "7903804", "url": "https://en.wikipedia.org/wiki?curid=5559076", "title": "Monitoring and surveillance agents", "text": "Monitoring and surveillance agents (also known as predictive agents) are a type of intelligent agent software that observes and reports on computer equipment. Monitoring and surveillance agents are often used to monitor complex computer networks to predict when a crash or some other defect may occur. Another type of monitoring and surveillance agent works on computer networks keeping track of the configuration of each computer connected to the network. It tracks and updates the central configuration database when anything on any computer changes, such as the number or type of disk drives. An important task in managing networks lies in prioritizing traffic and shaping bandwidth.", "Automation-Control": 0.8691785932, "Qwen2": "Yes"} {"id": "68976412", "revid": "19921271", "url": "https://en.wikipedia.org/wiki?curid=68976412", "title": "FOG Project", "text": "The FOG Project is a software project that implements FOG (Free and Open-source Ghost), a software tool that can deploy disk images of Microsoft Windows and Linux using the Preboot Execution Environment. It makes use of TFTP, the Apache webserver and iPXE. It is written in PHP.\nThe configuration tool developed by the FOG Project makes it possible to do remote system administration of the computers in a network. 
FOG depends on Partclone to copy the disk image.", "Automation-Control": 0.8081618547, "Qwen2": "Yes"} {"id": "12538966", "revid": "1073936314", "url": "https://en.wikipedia.org/wiki?curid=12538966", "title": "Panasonic Electric Works", "text": " is a Japanese company specializing in the production of industrial devices. It can trace its beginnings to a firm that was founded in 1918 by Konosuke Matsushita. Matsushita began making flashlight components for bicycles, then progressed to making lighting fixtures.\nDuring World War II, the company manufactured everything from airplane propellers to light sockets. At the conclusion of World War II, the U.S.A. forced the company to split into two separate companies, Matsushita Electric Works, Ltd. (MEW), and Matsushita Electric Industrial Co., Ltd. (MEI) (which became Panasonic).\nMEW conducts business in automation controls, electronic materials, lighting products, information equipment, wiring products, building products, and home appliances. In 2004, MEW began pursuing collaborative business ties with its sister company, MEI.\nIn 2005, the company was renamed from Aromat to Panasonic Electric Works.\nOn July 29, 2010, Panasonic reached an agreement to acquire the remaining shares of Panasonic Electric Works and Sanyo for $9.4 billion.\nIn 2007, Panasonic acquired the Indian company Anchor Electricals Pvt. Ltd.", "Automation-Control": 0.9983224869, "Qwen2": "Yes"} {"id": "47757307", "revid": "23646674", "url": "https://en.wikipedia.org/wiki?curid=47757307", "title": "Rule based DFM analysis for deep drawing", "text": "Rule based DFM analysis for deep drawing. Deep drawing is a widely used cold sheet metal forming process that draws sheet metal into a forming die of the desired cross-section using the mechanical force of a punch. DFM refers to design for manufacturability. DFA refers to design for assembly. DFMA stands for design for manufacture and assembly.
It is a practice for designing engineering components with manufacturing and assembly aspects in mind. DFMA tries to tackle, at the design stage itself, the problems that may arise during manufacturing and assembly. The part design is changed to remove these problems while keeping the functionality of the parts intact. This reduces the cost of iterations, making the manufacturing of components more efficient and economical.\nIn the deep drawing process, a blank of sheet metal (usually circular) is placed on the die. The die is fixed to the base. The metal blank is held in position on the die using a blank holder. Mechanical force is applied through a punch on the part of the metal blank above the die cavity. As the punch force increases, the metal flows from the flange region into the die cavity.\nThe following rule-based DFM guidelines for the deep drawing process can be incorporated at the design stage to improve its efficiency:\nMaterial of Sheet Metal.\nAs deep drawing is a cold forming operation, the germane properties of the sheet metal are formability, ductility and yield strength. The material should have good formability and ductility so that it can be drawn into the desired shape without any cracks. The yield strength of the material should be low, facilitating initiation of the flow of metal without tearing near the punch radius.\nClearance between Punch and Die.\nClearance between the punch and die guides the flow of the metal into the die. Clearance should be more than the metal thickness to avoid concentration of metal at the top of the die cavity. Clearance should not be so large that the flow of metal into the die region becomes unrestricted, leading to wrinkling of the wall.\nDie corner radius.\nThe radius of curvature at the die, where the metal enters from the flange region into the die region, is an important geometrical parameter.
If the die corner radius is small, wrinkling near the flange region becomes more prominent. Too small a die corner radius results in cracks due to the sharp change in the direction of metal flow. Generally it should be 5-10 times the sheet thickness.\nPunch corner radius.\nAs the metal draws into the die, the thickness of the sheet decreases in the lower region of the punch. The maximum reduction happens near the punch corner because the metal flow decreases significantly there. Too sharp a corner results in cracks near the punch base. The punch corner radius should be 4-10 times the sheet thickness.\nBlank holding force.\nThe friction in the flange region is mainly affected by the blank holding force. Blank holding force is required to check the amount of metal flow into the die. A low blank holding force results in wrinkling in the flange region, while too high a holding force increases the drawing force due to increased friction in the flange region. The blank holding force should be just enough to restrict the flow of the metal.\nDrawing Ratio.\nThe amount of drawing performed on a sheet metal blank is quantified using the drawing ratio. The higher the drawing ratio, the more extreme the amount of deep drawing. Due to the geometry, forces, metal flow and material properties of the work, there is a limit to the amount of deep drawing that can be performed on a sheet metal blank in a single operation. The drawing ratio is roughly calculated as\nDR = Db/Dp,\nwhere Db is the diameter of the blank and Dp is the diameter of the punch. For noncircular shapes the maximum diameter is sometimes used, or occasionally the drawing ratio is calculated using surface areas.
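The drawing-ratio rule just described can be sketched in a few lines of Python; the function names here are illustrative, not from any standard library:

```python
def drawing_ratio(blank_diameter: float, punch_diameter: float) -> float:
    # Rough drawing ratio DR = Db / Dp for a circular blank.
    return blank_diameter / punch_diameter

def feasible_in_one_draw(blank_diameter: float, punch_diameter: float,
                         limit: float = 2.0) -> bool:
    # A single deep-drawing operation is usually limited to DR of about 2 or under.
    return drawing_ratio(blank_diameter, punch_diameter) <= limit
```

For example, a 100 mm blank drawn with a 50 mm punch gives DR = 2.0, right at the usual single-operation limit.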
The limit to the drawing ratio for a single operation is usually 2 or under.", "Automation-Control": 0.9938437343, "Qwen2": "Yes"} {"id": "47775245", "revid": "11308236", "url": "https://en.wikipedia.org/wiki?curid=47775245", "title": "DFM Guidelines for Hot Metal Extrusion Process", "text": "Extrusion is a metal forming process used to form parts with a constant cross-section along their length. The process uses a metal billet or ingot which is inserted in a chamber. One side of the chamber contains a die to produce the desired cross-section, and on the other side a hydraulic ram pushes the metal billet or ingot. The metal flows through the profile of the die and takes the desired shape.\nExtrusion can be done with the material hot or cold; most metals are heated before the process, but if a high surface finish and tight tolerances are required, the material is not heated.\nDFM stands for design for manufacturing; as the name suggests, the design is manufacturing-friendly: in simple terms, a design that can be manufactured easily and cheaply. DFM guidelines define a set of rules for a person designing a product to ease the manufacturing process and reduce cost and time. For example, if a hole is to be drilled, specifying a standard hole size reduces the cost, because drill bits of unusual sizes are not readily available and have to be custom made.\nMaterial based guidelines.\nA wide variety of metals are currently extruded commercially; the most common are (in order of decreasing extrudability): aluminium, magnesium and their alloys; copper and copper alloys; low-carbon and medium-carbon steels; low-alloy steels; and stainless steels.
As the extrudability decreases, the cost of production increases.", "Automation-Control": 0.9944050312, "Qwen2": "Yes"} {"id": "47775741", "revid": "23646674", "url": "https://en.wikipedia.org/wiki?curid=47775741", "title": "Rule-based DFM analysis for electric discharge machining", "text": "Electrical discharge machining (or EDM) is one of the most accurate manufacturing processes available for creating complex or simple shapes and geometries within parts and assemblies. A machining method typically used for hard metals, EDM makes it possible to work with metals for which traditional machining techniques are ineffective.\nDesign for manufacturability (also sometimes known as design for manufacturing or DFM) is the general engineering art of designing products in such a way that they are easy to manufacture. The concept exists in almost all engineering disciplines, but the implementation differs widely depending on the manufacturing technology. DFM describes the process of designing or engineering a product in order to facilitate the manufacturing process and reduce manufacturing costs. DFM allows potential problems to be fixed in the design phase, which is the least expensive place to address them. Other factors may affect manufacturability, such as the type of raw material, the form of the raw material, dimensional tolerances, and secondary processing such as finishing.\nDepending on the type of manufacturing process, there are set guidelines for DFM practices. These DFM guidelines help to precisely define various tolerances, rules and common manufacturing checks related to DFM. Rule-based guidelines that can be referred to while designing parts are given below.
The parts are designed with manufacturability by electrical discharge machining in mind.\nMechanical design considerations.\nMinimum internal corner radius.\nThe minimum internal corner radius of the feature dictates the maximum wire diameter that can be used. The wire diameter needs to be less than double the minimum internal corner radius for successful machining. However, the amount of final overcut and a small amount of maneuvering need to be taken into account for the corner to be generated. For small-diameter wires, the following are recommended:\nSurface finishing.\nSurface finish comprises the small local deviations of a surface from the perfectly flat ideal. It is one of the important factors that control friction and transfer-layer formation during sliding.\nMany wire EDM machines have adopted a pulse-generating circuit using low power for ignition and high power for machining. However, this is not suitable for the finishing process, since the energy generated by the high-voltage sub-circuit is too high to obtain the desired fine surface. Relaxing the surface finish allows the manufacturer to produce the part with fewer passes, at a higher current level and a higher metal-removal rate, enabling lower production time and cost.\nMaterial removal.\nThe removal of material in EDM is associated with the erosive effects produced when discrete, spatially separated discharges occur between the tool and workpiece electrodes. Short-duration sparks are generated between these two electrodes. The generator releases electrical energy, which is responsible for melting a small quantity of material from both electrodes. The part should be designed and prepared such that the amount of stock removed by EDM is relatively small.
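The corner-radius rule above (the wire diameter must be less than double the minimum internal corner radius) can be expressed as a simple design check; the helper names here are hypothetical, chosen only for illustration:

```python
def max_wire_diameter(min_internal_corner_radius: float) -> float:
    # Per the rule above, the wire diameter must stay below twice
    # the smallest internal corner radius of the feature.
    return 2.0 * min_internal_corner_radius

def wire_diameter_ok(wire_diameter: float,
                     min_internal_corner_radius: float) -> bool:
    # True when the chosen wire can machine the tightest internal corner.
    return wire_diameter < max_wire_diameter(min_internal_corner_radius)
```

A 0.25 mm wire, for instance, can machine a 0.2 mm internal corner radius, while a 0.5 mm wire cannot.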
Traditional machining techniques such as milling can be used to remove the bulk of the stock, with the finishing operations performed by EDM.\nSimultaneous machining.\nEDM enhanced with CNC systems is a highly competitive model for making forging dies, casting tooling, plastic injection molds and tooling for powder metals. It enables the user to machine multiple highly precise parts simultaneously from a single clamping. Designs should be considered such that several parts can be stacked and machined simultaneously, or a single part can have several EDM operations performed simultaneously.\nEnlarging holes.\nWhen existing holes are to be enlarged or reshaped by EDM, through holes are preferred to blind holes, as they permit easier flow of dielectric fluid past the area being machined.\nSharp corners.\nWhen cutting sharp corners, the wire dwells longer at the inside radius, causing a slight overcut. On the outside radius, it speeds up, leaving a slight undercut. Hence, sharp corners should be avoided while designing the part.\nGalvanic corrosion.\nGalvanic corrosion is an electrochemical process in which one metal corrodes preferentially to another when both metals are in electrical contact in the presence of an electrolyte. In EDM, there will be some degree of material exchange between the wire or the probe and the base material. Electrodes and base material should be chosen to prevent galvanic corrosion as far as possible.", "Automation-Control": 0.9920848608, "Qwen2": "Yes"} {"id": "47780890", "revid": "910180", "url": "https://en.wikipedia.org/wiki?curid=47780890", "title": "PLC technician", "text": "PLC technicians design, program, repair, and maintain programmable logic controller (PLC) systems used within manufacturing and service industries ranging from industrial packaging to commercial car washes and traffic lights.\nScope of work.\nPLC technicians are knowledgeable in overall plant systems and the interactions of processes.
They install and service a variety of systems including safety and security, energy delivery (hydraulic, pneumatic and electrical), communication, and process control systems. They also install and service measuring and indicating instruments to monitor process control variables associated with PLCs, and monitor the operation of PLC equipment. PLC technicians work with final control devices such as valves, actuators and positioners to manipulate the process medium. They install and terminate electrical, pneumatic, and fluid connections. They also work on network and signal transmission systems such as fibre optic and wireless.\nAlong with the calibration, repair, adjustment, and replacement of components, PLC technicians inspect and test the operation of instruments and systems to diagnose faults and verify repairs. They establish and optimize process control strategies, and configure related systems such as Distributed Control Systems (DCSs), Supervisory Control & Data Acquisition (SCADA), and Human Machine Interfaces (HMIs). PLC technicians maintain backups, documentation, and software revisions as part of maintaining these computer-based control systems. Scheduled maintenance and the commissioning of systems are also important aspects of the work. PLC technicians consult technical documentation, drawings, schematics, and manuals. They may assist engineering in plant design, modification and hazard analysis, and work with plant operators to optimize plant controls.\nPLC technicians use hand, power, and electronic tools, test equipment, and material handling equipment. They work on a variety of systems including primary control elements, transmitters, analyzers, sensors, detectors, signal conditioners, recorders, controllers, and final control elements (actuators, valve positioners, etc.). These instruments measure and control variables such as pressure, flow, temperature, level, motion, force, and chemical composition. 
PLC systems designed and maintained by PLC technicians range from high-speed robotic assembly to conveyors, to batch mixers, to DCS and SCADA systems. PLC systems are often found within industrial and manufacturing plants, such as food processing facilities. Alternate job titles include PLC engineer, Automation Technician, Field Technician, or Controls Technician.\nEducation, training and skills.\nPLC technician educational courses and programs integrate PLC programming with mechanics, electronics and process controls. They also commonly include coursework in hydraulics, pneumatics, robotics, DCS, SCADA, electrical circuits, electrical machinery and human-machine interfaces. Typical courses include math, communications, circuits, digital devices, and electrical controls. Other courses include robotics, automation, electrical motor controls, programmable logic controllers, and computer-aided design. When performing their duties, PLC technicians must comply with federal, jurisdictional, industrial, and site-specific standards, codes, and regulations. They must ensure that all processes operate and are maintained within these set standards, codes, and regulations. Keeping up to date with advances in technology in the industry is important. Key attributes for PLC technicians are critical thinking skills, manual dexterity, mechanical aptitude, attention to detail, strong problem-solving skills, communication skills, and mathematical and scientific aptitude.\nEmployers generally prefer applicants who have completed a PLC technician certificate or related associate degree. These programs can be completed at colleges and universities in either an in-class or online format. Some colleges, such as George Brown College, offer an online PLC Technician program that uses simulation software, PLCLogix, to complete PLC lab projects and assignments.
Certification by accredited schools and third-party organizations can enhance employment opportunities and keep PLC technicians up to date. In addition to colleges and universities, other organizations and companies also offer credential programs in PLCs, including equipment manufacturers such as Rockwell and professional associations such as the Electronics Technicians Association, the Robotics Industries Association and the Manufacturing Skill Standards Council.\nCareer opportunities.\nPLC technicians install and repair industrial electronic equipment (including input/output networks, data highways, variable speed drives, and process control equipment) and write PLC programs for a wide variety of automated control systems, ranging from simple on–off controls to robotics. PLC technicians also find employment in the industrial engineering field, where they are actively involved in the design and implementation of PLC control systems.\nCareer opportunities for PLC technicians include a wide range of manufacturing and service industries such as automotive, pharmaceutical, power distribution, food processing, mining, and transportation. Other career prospects include areas such as machine assembly, troubleshooting and testing, systems integration, application support, maintenance, component testing and assembly, automation programming, robot maintenance and programming, and technical sales and services.\nPLC technicians work mainly indoors, on the plant floor, and sometimes in cramped conditions. They may be required to stand for prolonged periods of time and may be exposed to high noise, fume and heat levels. Because safety is critical in this work, they must pay close attention to it and may be called out in emergencies. Constant learning may be required to keep up with new technology.\nWork in this area is primarily full-time and can involve shifts.
Employers who hire PLC technicians include:", "Automation-Control": 0.892503202, "Qwen2": "Yes"} {"id": "51309216", "revid": "17956451", "url": "https://en.wikipedia.org/wiki?curid=51309216", "title": "MIDACO", "text": "MIDACO (Mixed Integer Distributed Ant Colony Optimization) is a software package for numerical optimization based on evolutionary computing.\nMIDACO was created in a collaboration between the European Space Agency and EADS Astrium to solve constrained mixed-integer non-linear programming (MINLP) space applications.\nMIDACO holds several record solutions on interplanetary spaceflight trajectory design problems made publicly available by the European Space Agency. MIDACO is included in software packages such as TOMLAB, Astos, and SigmaXL.", "Automation-Control": 0.7442187071, "Qwen2": "Yes"} {"id": "3945884", "revid": "7903804", "url": "https://en.wikipedia.org/wiki?curid=3945884", "title": "Nonnegative matrix", "text": "In mathematics, a nonnegative matrix, written\nis a matrix in which all the elements are equal to or greater than zero, that is,\nA positive matrix is a matrix in which all the elements are strictly greater than zero. The set of positive matrices is a subset of all non-negative matrices. While such matrices are commonly found, the term is only occasionally used due to the possible confusion with positive-definite matrices, which are different. A matrix which is both non-negative and positive semidefinite is called a doubly non-negative matrix.\nA rectangular non-negative matrix can be approximated by a decomposition with two other non-negative matrices via non-negative matrix factorization.\nEigenvalues and eigenvectors of square positive matrices are described by the Perron–Frobenius theorem.\nInversion.\nThe inverse of any non-singular M-matrix is a non-negative matrix. If the non-singular M-matrix is also symmetric then it is called a Stieltjes matrix. \nThe inverse of a non-negative matrix is usually not non-negative.
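A small 2x2 calculation illustrates the point; the monomial-matrix characterization is from the text, while the code itself is just the standard adjugate formula:

```python
def inv2(m):
    # Inverse of a 2x2 matrix [[a, b], [c, d]] via the adjugate formula.
    (a, b), (c, d) = m
    det = a * d - b * c
    return [[d / det, -b / det],
            [-c / det, a / det]]

# A non-negative (triangular) matrix whose inverse has a negative entry:
m = [[1.0, 1.0],
     [0.0, 1.0]]
# inv2(m) is [[1.0, -1.0], [0.0, 1.0]]

# A non-negative monomial matrix (one positive entry per row and column):
p = [[0.0, 2.0],
     [5.0, 0.0]]
# inv2(p) has no negative entries
```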
The exception is the non-negative monomial matrices: a non-negative matrix has a non-negative inverse if and only if it is a (non-negative) monomial matrix. Thus the inverse of a positive matrix is not positive or even non-negative, since positive matrices are not monomial for dimension n > 1.\nSpecializations.\nThere are a number of groups of matrices that form specializations of non-negative matrices, e.g. stochastic matrix; doubly stochastic matrix; symmetric non-negative matrix.", "Automation-Control": 0.7705176473, "Qwen2": "Yes"} {"id": "65309", "revid": "28481209", "url": "https://en.wikipedia.org/wiki?curid=65309", "title": "Support vector machine", "text": "In machine learning, support vector machines (SVMs, also support vector networks) are supervised learning models with associated learning algorithms that analyze data for classification and regression analysis. Developed at AT&T Bell Laboratories by Vladimir Vapnik with colleagues (Boser et al., 1992, Guyon et al., 1993, Cortes and Vapnik, 1995, Vapnik et al., 1997), SVMs are one of the most robust prediction methods, being based on the statistical learning framework or VC theory proposed by Vapnik (1982, 1995) and Chervonenkis (1974). Given a set of training examples, each marked as belonging to one of two categories, an SVM training algorithm builds a model that assigns new examples to one category or the other, making it a non-probabilistic binary linear classifier (although methods such as Platt scaling exist to use SVM in a probabilistic classification setting). SVM maps training examples to points in space so as to maximise the width of the gap between the two categories.
New examples are then mapped into that same space and predicted to belong to a category based on which side of the gap they fall on.\nIn addition to performing linear classification, SVMs can efficiently perform a non-linear classification using what is called the kernel trick, implicitly mapping their inputs into high-dimensional feature spaces.\nThe support vector clustering algorithm, created by Hava Siegelmann and Vladimir Vapnik, applies the statistics of support vectors, developed in the support vector machines algorithm, to categorize unlabeled data. Such data sets require unsupervised learning approaches, which attempt to find natural clusterings of the data into groups and then to map new data according to these clusters. \nMotivation.\nClassifying data is a common task in machine learning.\nSuppose some given data points each belong to one of two classes, and the goal is to decide which class a \"new\" data point will be in. In the case of support vector machines, a data point is viewed as a formula_1-dimensional vector (a list of formula_1 numbers), and we want to know whether we can separate such points with a formula_3-dimensional hyperplane. This is called a linear classifier. There are many hyperplanes that might classify the data. One reasonable choice for the best hyperplane is the one that represents the largest separation, or margin, between the two classes. So we choose the hyperplane so that the distance from it to the nearest data point on each side is maximized. If such a hyperplane exists, it is known as the \"maximum-margin hyperplane\" and the linear classifier it defines is known as a \"maximum-margin classifier\", or equivalently, the \"perceptron of optimal stability\".\nMore formally, a support vector machine constructs a hyperplane or set of hyperplanes in a high- or infinite-dimensional space, which can be used for classification, regression, or other tasks like outlier detection.
Intuitively, a good separation is achieved by the hyperplane that has the largest distance to the nearest training-data point of any class (so-called functional margin), since in general the larger the margin, the lower the generalization error of the classifier. A lower generalization error means that the implementer is less likely to experience overfitting.\nWhereas the original problem may be stated in a finite-dimensional space, it often happens that the sets to discriminate are not linearly separable in that space. For this reason, it was proposed that the original finite-dimensional space be mapped into a much higher-dimensional space, presumably making the separation easier in that space. To keep the computational load reasonable, the mappings used by SVM schemes are designed to ensure that dot products of pairs of input data vectors may be computed easily in terms of the variables in the original space, by defining them in terms of a kernel function formula_4 selected to suit the problem. The hyperplanes in the higher-dimensional space are defined as the set of points whose dot product with a vector in that space is constant, where such a set of vectors is an orthogonal (and thus minimal) set of vectors that defines a hyperplane. The vectors defining the hyperplanes can be chosen to be linear combinations with parameters formula_5 of images of feature vectors formula_6 that occur in the data base. With this choice of a hyperplane, the points formula_7 in the feature space that are mapped into the hyperplane are defined by the relation formula_8 Note that if formula_4 becomes small as formula_10 grows further away from formula_7, each term in the sum measures the degree of closeness of the test point formula_7 to the corresponding data base point formula_6. In this way, the sum of kernels above can be used to measure the relative nearness of each test point to the data points originating in one or the other of the sets to be discriminated. 
Note the fact that the set of points formula_7 mapped into any hyperplane can be quite convoluted as a result, allowing much more complex discrimination between sets that are not convex at all in the original space.\nApplications.\nSVMs can be used to solve various real-world problems:\nHistory.\nThe original SVM algorithm was invented by Vladimir N. Vapnik and Alexey Ya. Chervonenkis in 1964. In 1992, Bernhard Boser, Isabelle Guyon and Vladimir Vapnik suggested a way to create nonlinear classifiers by applying the kernel trick to maximum-margin hyperplanes. The \"soft margin\" incarnation, as is commonly used in software packages, was proposed by Corinna Cortes and Vapnik in 1993 and published in 1995.\nLinear SVM.\nWe are given a training dataset of formula_15 points of the form\nformula_16\nwhere the formula_17 are either 1 or −1, each indicating the class to which the point formula_18 belongs. Each formula_19 is a formula_1-dimensional real vector. We want to find the \"maximum-margin hyperplane\" that divides the group of points formula_19 for which formula_22 from the group of points for which formula_23, which is defined so that the distance between the hyperplane and the nearest point formula_19 from either group is maximized.\nAny hyperplane can be written as the set of points formula_25 satisfying\nformula_26\nwhere formula_27 is the (not necessarily normalized) normal vector to the hyperplane. This is much like Hesse normal form, except that formula_27 is not necessarily a unit vector. The parameter formula_29 determines the offset of the hyperplane from the origin along the normal vector formula_27.\nHard-margin.\nIf the training data is linearly separable, we can select two parallel hyperplanes that separate the two classes of data, so that the distance between them is as large as possible. The region bounded by these two hyperplanes is called the \"margin\", and the maximum-margin hyperplane is the hyperplane that lies halfway between them. 
With a normalized or standardized dataset, these hyperplanes can be described by the equations\nand\nGeometrically, the distance between these two hyperplanes is formula_33, so to maximize the distance between the planes we want to minimize formula_34. The distance is computed using the distance-from-a-point-to-a-plane equation. To prevent data points from falling into the margin, we add the following constraint: for each formula_35, either\nformula_36\nor\nformula_37\nThese constraints state that each data point must lie on the correct side of the margin.\nThis can be rewritten as\nWe can put this together to get the optimization problem:\nformula_38\nThe formula_27 and formula_40 that solve this problem determine our classifier, formula_41, where formula_42 is the sign function.\nAn important consequence of this geometric description is that the max-margin hyperplane is completely determined by those formula_19 that lie nearest to it. These formula_19 are called \"support vectors\".\nSoft-margin.\nTo extend SVM to cases in which the data are not linearly separable, the \"hinge loss\" function is helpful:\nformula_45\nNote that formula_17 is the \"i\"-th target (i.e., in this case, 1 or −1), and formula_47 is the \"i\"-th output.\nThis function is zero if the constraint in is satisfied, in other words, if formula_19 lies on the correct side of the margin. For data on the wrong side of the margin, the function's value is proportional to the distance from the margin.\nThe goal of the optimization then is to minimize\nformula_49\nwhere the parameter formula_50 determines the trade-off between increasing the margin size and ensuring that the formula_19 lie on the correct side of the margin.
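The hinge loss just described is easy to state directly in code; this is a generic sketch, not tied to any particular SVM library:

```python
def hinge_loss(y: float, f_x: float) -> float:
    # max(0, 1 - y * f(x)): zero when the point is on the correct side
    # of the margin, growing linearly with distance on the wrong side.
    return max(0.0, 1.0 - y * f_x)
```

For instance, hinge_loss(1, 2.0) is 0 (correctly classified, outside the margin), while hinge_loss(-1, 0.5) is 1.5: the misclassified point is penalized in proportion to its distance from the margin.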
By deconstructing the hinge loss, this optimization problem can be massaged into the following:\nformula_52\nThus, for large values of formula_53, it will behave similarly to the hard-margin SVM, if the input data are linearly classifiable, but will still learn whether a classification rule is viable or not. (formula_54 is inversely related to formula_53, e.g. in \"LIBSVM\".)\nNonlinear Kernels.\nThe original maximum-margin hyperplane algorithm proposed by Vapnik in 1963 constructed a linear classifier. However, in 1992, Bernhard Boser, Isabelle Guyon and Vladimir Vapnik suggested a way to create nonlinear classifiers by applying the kernel trick (originally proposed by Aizerman et al.) to maximum-margin hyperplanes. The resulting algorithm is formally similar, except that every dot product is replaced by a nonlinear kernel function. This allows the algorithm to fit the maximum-margin hyperplane in a transformed feature space. The transformation may be nonlinear and the transformed space high-dimensional; although the classifier is a hyperplane in the transformed feature space, it may be nonlinear in the original input space.\nIt is noteworthy that working in a higher-dimensional feature space increases the generalization error of support vector machines, although given enough samples the algorithm still performs well.\nSome common kernels include:\nThe kernel is related to the transform formula_65 by the equation formula_66. The value is also in the transformed space, with formula_67. Dot products for classification can again be computed by the kernel trick, i.e. formula_68.\nComputing the SVM classifier.\nComputing the (soft-margin) SVM classifier amounts to minimizing an expression of the form\nWe focus on the soft-margin classifier since, as noted above, choosing a sufficiently small value for formula_54 yields the hard-margin classifier for linearly classifiable input data.
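The list of common kernels mentioned above did not survive extraction; typical choices include the linear, polynomial, and Gaussian (RBF) kernels. A minimal sketch, with default parameters chosen only for illustration:

```python
import math

def linear_kernel(x, z):
    # k(x, z) = <x, z>
    return sum(a * b for a, b in zip(x, z))

def polynomial_kernel(x, z, degree=3, c=1.0):
    # k(x, z) = (<x, z> + c)^degree
    return (linear_kernel(x, z) + c) ** degree

def rbf_kernel(x, z, gamma=0.5):
    # k(x, z) = exp(-gamma * ||x - z||^2)
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, z))
    return math.exp(-gamma * sq_dist)
```

Each of these plays the role of the dot product in the transformed feature space, so the transformation itself never has to be computed explicitly.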
The classical approach, which involves reducing to a quadratic programming problem, is detailed below. Then, more recent approaches such as sub-gradient descent and coordinate descent will be discussed.\nPrimal.\nMinimizing can be rewritten as a constrained optimization problem with a differentiable objective function in the following way.\nFor each formula_70 we introduce a variable formula_71. Note that formula_72 is the smallest nonnegative number satisfying formula_73\nThus we can rewrite the optimization problem as follows\nformula_74\nThis is called the \"primal\" problem.\nDual.\nBy solving for the Lagrangian dual of the above problem, one obtains the simplified problem\nformula_75\nThis is called the \"dual\" problem. Since the dual maximization problem is a quadratic function of the formula_76 subject to linear constraints, it is efficiently solvable by quadratic programming algorithms.\nHere, the variables formula_76 are defined such that\nformula_78\nMoreover, formula_79 exactly when formula_80 lies on the correct side of the margin, and formula_81 when formula_80 lies on the margin's boundary. It follows that formula_27 can be written as a linear combination of the support vectors.\nThe offset, formula_84, can be recovered by finding an formula_80 on the margin's boundary and solving\nformula_86\nKernel trick.\nSuppose now that we would like to learn a nonlinear classification rule which corresponds to a linear classification rule for the transformed data points formula_89 Moreover, we are given a kernel function formula_90 which satisfies formula_91.\nWe know the classification vector formula_27 in the transformed space satisfies\nformula_93\nwhere, the formula_94 are obtained by solving the optimization problem\nformula_95\nThe coefficients formula_76 can be solved for using quadratic programming, as before. 
Again, we can find some index formula_97 such that formula_81, so that formula_99 lies on the boundary of the margin in the transformed space, and then solve\nformula_100\nFinally,\nformula_101\nModern methods.\nRecent algorithms for finding the SVM classifier include sub-gradient descent and coordinate descent. Both techniques have proven to offer significant advantages over the traditional approach when dealing with large, sparse datasets—sub-gradient methods are especially efficient when there are many training examples, and coordinate descent when the dimension of the feature space is high.\nSub-gradient descent.\nSub-gradient descent algorithms for the SVM work directly with the expression\nformula_102\nNote that formula_103 is a convex function of formula_27 and formula_40. As such, traditional gradient descent (or SGD) methods can be adapted, where instead of taking a step in the direction of the function's gradient, a step is taken in the direction of a vector selected from the function's sub-gradient. This approach has the advantage that, for certain implementations, the number of iterations does not scale with formula_15, the number of data points.\nCoordinate descent.\nCoordinate descent algorithms for the SVM work from the dual problem\nformula_107\nFor each formula_108, iteratively, the coefficient formula_76 is adjusted in the direction of formula_110. Then, the resulting vector of coefficients formula_111 is projected onto the nearest vector of coefficients that satisfies the given constraints. (Typically Euclidean distances are used.) The process is then repeated until a near-optimal vector of coefficients is obtained. The resulting algorithm is extremely fast in practice, although few performance guarantees have been proven.\nEmpirical risk minimization.\nThe soft-margin support vector machine described above is an example of an empirical risk minimization (ERM) algorithm for the \"hinge loss\". 
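The sub-gradient approach described earlier can be sketched as a plain batch sub-gradient loop on the soft-margin objective; the hyperparameters and function name here are illustrative, not a reference implementation:

```python
def svm_subgradient(points, labels, lam=0.01, lr=0.1, epochs=200):
    # Minimize lam/2 * ||w||^2 + mean(max(0, 1 - y * (w.x - b)))
    # by stepping along a sub-gradient of the non-differentiable objective.
    dim, n = len(points[0]), len(points)
    w, b = [0.0] * dim, 0.0
    for _ in range(epochs):
        gw = [lam * wi for wi in w]  # gradient of the regularizer
        gb = 0.0
        for x, y in zip(points, labels):
            margin = y * (sum(wi * xi for wi, xi in zip(w, x)) - b)
            if margin < 1:           # hinge term active: add its sub-gradient
                for j in range(dim):
                    gw[j] -= y * x[j] / n
                gb += y / n
        w = [wi - lr * gi for wi, gi in zip(w, gw)]
        b -= lr * gb
    return w, b
```

On a toy one-dimensional dataset with positive points at 2 and 3 and negative points at -2 and -3, the learned classifier sign(w·x - b) separates the two classes.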
Seen this way, support vector machines belong to a natural class of algorithms for statistical inference, and many of their unique features are due to the behavior of the hinge loss. This perspective can provide further insight into how and why SVMs work, and allow us to better analyze their statistical properties.\nRisk minimization.\nIn supervised learning, one is given a set of training examples formula_112 with labels formula_113, and wishes to predict formula_114 given formula_115. To do so one forms a hypothesis, formula_103, such that formula_117 is a \"good\" approximation of formula_114. A \"good\" approximation is usually defined with the help of a \"loss function,\" formula_119, which characterizes how bad formula_120 is as a prediction of formula_10. We would then like to choose a hypothesis that minimizes the \"expected risk:\"\nformula_122\nIn most cases, we don't know the joint distribution of formula_123 outright. In these cases, a common strategy is to choose the hypothesis that minimizes the \"empirical risk:\"\nformula_124\nUnder certain assumptions about the sequence of random variables formula_125 (for example, that they are generated by a finite Markov process), if the set of hypotheses being considered is small enough, the minimizer of the empirical risk will closely approximate the minimizer of the expected risk as formula_15 grows large. This approach is called \"empirical risk minimization,\" or ERM.\nRegularization and stability.\nIn order for the minimization problem to have a well-defined solution, we have to place constraints on the set formula_127 of hypotheses being considered. If formula_127 is a normed space (as is the case for SVM), a particularly effective technique is to consider only those hypotheses formula_129 for which formula_130.
This is equivalent to imposing a \"regularization penalty\" formula_131, and solving the new optimization problem\nformula_132\nThis approach is called \"Tikhonov regularization.\"\nMore generally, formula_133 can be some measure of the complexity of the hypothesis formula_103, so that simpler hypotheses are preferred.\nSVM and the hinge loss.\nRecall that the (soft-margin) SVM classifier formula_135 is chosen to minimize the following expression:\nformula_136\nIn light of the above discussion, we see that the SVM technique is equivalent to empirical risk minimization with Tikhonov regularization, where in this case the loss function is the hinge loss\nformula_137\nFrom this perspective, SVM is closely related to other fundamental classification algorithms such as regularized least-squares and logistic regression. The difference between the three lies in the choice of loss function: regularized least-squares amounts to empirical risk minimization with the square loss, formula_138; logistic regression employs the log loss,\nformula_139\nTarget functions.\nThe difference between the hinge loss and these other loss functions is best stated in terms of \"target functions\" – the function that minimizes expected risk for a given pair of random variables formula_140.\nIn particular, let formula_141 denote formula_10 conditional on the event that formula_143. In the classification setting, we have:\nformula_144\nThe optimal classifier is therefore:\nformula_145\nFor the square loss, the target function is the conditional expectation function, formula_146; for the logistic loss, it is the logit function, formula_147. While both of these target functions yield the correct classifier, as formula_148, they give us more information than we need. In fact, they give us enough information to completely describe the distribution of formula_149.\nOn the other hand, one can check that the target function for the hinge loss is \"exactly\" formula_150.
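For concreteness, the three losses can be written out in code. This sketch uses one common convention for each loss; the article's exact expressions (formula_137 through formula_139) may differ in scaling:

```python
import math

def hinge_loss(y, fx):
    """Hinge loss: zero once the margin y*f(x) reaches 1."""
    return max(0.0, 1.0 - y * fx)

def square_loss(y, fx):
    """Square loss, as used in regularized least-squares."""
    return (y - fx) ** 2

def log_loss(y, fx):
    """Logistic (log) loss, natural logarithm."""
    return math.log(1.0 + math.exp(-y * fx))
```

A confidently correct prediction (say y = 1, f(x) = 2) has zero hinge loss, while the square and log losses keep changing beyond the margin, which mirrors the point that their target functions carry more information than classification requires.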
Thus, in a sufficiently rich hypothesis space—or equivalently, for an appropriately chosen kernel—the SVM classifier will converge to the simplest function (in terms of formula_151) that correctly classifies the data. This extends the geometric interpretation of SVM—for linear classification, the empirical risk is minimized by any function whose margins lie between the support vectors, and the simplest of these is the max-margin classifier.\nProperties.\nSVMs belong to a family of generalized linear classifiers and can be interpreted as an extension of the perceptron. They can also be considered a special case of Tikhonov regularization. A special property is that they simultaneously minimize the empirical \"classification error\" and maximize the \"geometric margin\"; hence they are also known as maximum margin classifiers.\nA comparison of the SVM to other classifiers has been made by Meyer, Leisch and Hornik.\nParameter selection.\nThe effectiveness of SVM depends on the selection of the kernel, the kernel's parameters, and the soft-margin parameter formula_54.\nA common choice is a Gaussian kernel, which has a single parameter formula_153. The best combination of formula_54 and formula_153 is often selected by a grid search with exponentially growing sequences of formula_54 and formula_153, for example, formula_158; formula_159. Typically, each combination of parameter choices is checked using cross-validation, and the parameters with the best cross-validation accuracy are picked. Alternatively, recent work in Bayesian optimization can be used to select formula_54 and formula_153, often requiring the evaluation of far fewer parameter combinations than grid search.
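The grid-search procedure just described can be sketched in a few lines. Here `train_and_score` is a hypothetical callback that trains an SVM with the given formula_54 (C) and formula_153 (gamma) on a training fold and returns validation accuracy; the exponent ranges are commonly suggested values and are an assumption, not necessarily the article's formula_158 and formula_159:

```python
import itertools

def grid_search(train_and_score, Cs, gammas, folds):
    """Exhaustive search over an exponential (C, gamma) grid, scoring each
    combination by mean cross-validation accuracy over the given folds."""
    best = (None, None, -1.0)
    for C, gamma in itertools.product(Cs, gammas):
        acc = sum(train_and_score(C, gamma, tr, va) for tr, va in folds) / len(folds)
        if acc > best[2]:
            best = (C, gamma, acc)
    return best

# exponentially growing sequences, as suggested in the text (assumed ranges)
Cs     = [2.0 ** k for k in range(-5, 16, 2)]   # 2^-5, 2^-3, ..., 2^15
gammas = [2.0 ** k for k in range(-15, 4, 2)]   # 2^-15, 2^-13, ..., 2^3
```

Because the grid is exhaustive, the number of trainings is |Cs| × |gammas| × folds, which is exactly the cost that Bayesian optimization tries to avoid.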
The final model, which is used for testing and for classifying new data, is then trained on the whole training set using the selected parameters.\nIssues.\nPotential drawbacks of the SVM include the following aspects:\nExtensions.\nSupport vector clustering (SVC).\nSVC is a similar method that also builds on kernel functions but is appropriate for unsupervised learning.\nMulticlass SVM.\nMulticlass SVM aims to assign labels to instances by using support vector machines, where the labels are drawn from a finite set of several elements.\nThe dominant approach for doing so is to reduce the single multiclass problem into multiple binary classification problems. Common methods for such reduction include:\nCrammer and Singer proposed a multiclass SVM method which casts the multiclass classification problem into a single optimization problem, rather than decomposing it into multiple binary classification problems. See also Lee, Lin and Wahba and Van den Burg and Groenen.\nTransductive support vector machines.\nTransductive support vector machines extend SVMs in that they can also treat partially labeled data in semi-supervised learning by following the principles of transduction. Here, in addition to the training set formula_162, the learner is also given a set\nformula_163\nof test examples to be classified. Formally, a transductive support vector machine is defined by the following primal optimization problem:\nMinimize (in formula_164)\nformula_165\nsubject to (for any formula_166 and any formula_167)\nformula_168\nand\nformula_169\nTransductive support vector machines were introduced by Vladimir N. Vapnik in 1998.\nStructured SVM.\nSVMs have been generalized to structured SVMs, where the label space is structured and of possibly infinite size.\nRegression.\nA version of SVM for regression was proposed in 1996 by Vladimir N. Vapnik, Harris Drucker, Christopher J. C. Burges, Linda Kaufman and Alexander J. Smola. This method is called support vector regression (SVR).
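SVR replaces classification's hinge loss with an ε-insensitive loss that ignores errors smaller than a threshold ε; a minimal sketch of that loss (one standard convention):

```python
def epsilon_insensitive_loss(y_true, y_pred, eps=0.1):
    """SVR's epsilon-insensitive loss: predictions within eps of the
    target cost nothing; larger deviations are penalized linearly."""
    return max(0.0, abs(y_true - y_pred) - eps)
```

Because deviations inside the ε-tube cost nothing, only points on or outside the tube influence the fitted model, which is why SVR, like the classifier, depends on a subset of the training data.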
The model produced by support vector classification (as described above) depends only on a subset of the training data, because the cost function for building the model does not care about training points that lie beyond the margin. Analogously, the model produced by SVR depends only on a subset of the training data, because the cost function for building the model ignores any training data close to the model prediction. Another SVM version known as least-squares support vector machine (LS-SVM) has been proposed by Suykens and Vandewalle.\nTraining the original SVR means solving\nwhere formula_6 is a training sample with target value formula_17. The inner product plus intercept formula_174 is the prediction for that sample, and formula_175 is a free parameter that serves as a threshold: all predictions have to be within an formula_175 range of the true predictions. Slack variables are usually added into the above to allow for errors and to allow approximation in case the above problem is infeasible.\nBayesian SVM.\nIn 2011 it was shown by Polson and Scott that the SVM admits a Bayesian interpretation through the technique of data augmentation. In this approach the SVM is viewed as a graphical model (where the parameters are connected via probability distributions). This extended view allows the application of Bayesian techniques to SVMs, such as flexible feature modeling, automatic hyperparameter tuning, and predictive uncertainty quantification. Recently, a scalable version of the Bayesian SVM was developed by Florian Wenzel, enabling the application of Bayesian SVMs to big data. Wenzel developed two versions: a variational inference (VI) scheme for the Bayesian kernel support vector machine (SVM) and a stochastic version (SVI) for the linear Bayesian SVM.\nImplementation.\nThe parameters of the maximum-margin hyperplane are derived by solving the optimization.
There exist several specialized algorithms for quickly solving the quadratic programming (QP) problem that arises from SVMs, mostly relying on heuristics for breaking the problem down into smaller, more manageable chunks.\nAnother approach is to use an interior-point method that uses Newton-like iterations to find a solution of the Karush–Kuhn–Tucker conditions of the primal and dual problems.\nInstead of solving a sequence of broken-down problems, this approach directly solves the problem altogether. To avoid solving a linear system involving the large kernel matrix, a low-rank approximation to the matrix is often used in the kernel trick.\nAnother common method is Platt's sequential minimal optimization (SMO) algorithm, which breaks the problem down into 2-dimensional sub-problems that are solved analytically, eliminating the need for a numerical optimization algorithm and matrix storage. This algorithm is conceptually simple, easy to implement, generally faster, and has better scaling properties for difficult SVM problems.\nThe special case of linear support vector machines can be solved more efficiently by the same kind of algorithms used to optimize its close cousin, logistic regression; this class of algorithms includes sub-gradient descent (e.g., PEGASOS) and coordinate descent (e.g., LIBLINEAR). LIBLINEAR has some attractive training-time properties. Each convergence iteration takes time linear in the time taken to read the training data, and the iterations also have a Q-linear convergence property, making the algorithm extremely fast.\nThe general kernel SVMs can also be solved more efficiently using sub-gradient descent (e.g.
P-packSVM), especially when parallelization is allowed.\nKernel SVMs are available in many machine-learning toolkits, including LIBSVM, MATLAB, SAS, SVMlight, kernlab, scikit-learn, Shogun, Weka, Shark, JKernelMachines, OpenCV and others.\nPreprocessing of data (standardization) is highly recommended to enhance the accuracy of classification. There are a few methods of standardization, such as min-max scaling, normalization by decimal scaling, and Z-scoring. Subtracting the mean and dividing by the standard deviation of each feature is usually used for SVM.\nMean-field game theory.\nMean-field game theory is the study of strategic decision making by small interacting agents in very large populations. It lies at the intersection of game theory with stochastic analysis and control theory. The use of the term \"mean field\" is inspired by mean-field theory in physics, which considers the behavior of systems of large numbers of particles where individual particles have negligible impacts upon the system. In other words, each agent acts according to its own minimization or maximization problem, taking into account other agents' decisions; because the population is large, one can take the number of agents to infinity and work with a representative agent.\nIn traditional game theory, the subject of study is usually a game with two players and discrete time, and results are extended to more complex situations by induction. However, for games in continuous time with continuous states (differential games or stochastic differential games) this strategy cannot be used because of the complexity that the dynamic interactions generate.
On the other hand, with MFGs we can handle large numbers of players through the mean representative agent and at the same time describe complex state dynamics.\nThis class of problems was considered in the economics literature by Boyan Jovanovic and Robert W. Rosenthal, in the engineering literature by Minyi Huang, Roland Malhamé, and Peter E. Caines, and independently and around the same time by mathematicians Jean-Michel Lasry and Pierre-Louis Lions.\nIn continuous time a mean-field game is typically composed of a Hamilton–Jacobi–Bellman equation that describes the optimal control problem of an individual and a Fokker–Planck equation that describes the dynamics of the aggregate distribution of agents. Under fairly general assumptions it can be proved that a class of mean-field games is the limit as formula_1 of an \"N\"-player Nash equilibrium.\nA related concept to that of mean-field games is \"mean-field-type control\". In this case, a social planner controls the distribution of states and chooses a control strategy. The solution to a mean-field-type control problem can typically be expressed as a dual adjoint Hamilton–Jacobi–Bellman equation coupled with a Kolmogorov equation. Mean-field-type game theory is the multi-agent generalization of single-agent mean-field-type control.\nGeneral Form of a Mean-field Game.\nThe following system of equations can be used to model a typical mean-field game:\nformula_2\nThe basic dynamics of this set of equations can be explained by an average agent's optimal control problem. In a mean-field game, an average agent can control their movement formula_3 to influence the population's overall location by:\nformula_4\nwhere formula_5 is a parameter and formula_6 is a standard Brownian motion. By controlling their movement, the agent aims to minimize their overall expected cost formula_7 throughout the time period formula_8:\nformula_9\nwhere formula_10 is the running cost at time formula_11 and formula_12 is the terminal cost at time formula_13.
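For reference, in much of the literature the coupled system denoted formula_2 above takes the following standard form (a sketch in common notation, not necessarily the article's exact symbols): a backward Hamilton–Jacobi–Bellman equation for the value function coupled with a forward Kolmogorov–Fokker–Planck equation for the population density:

```latex
\begin{aligned}
 -\,\partial_t u(t,x) - \nu\,\Delta u(t,x) + H\!\big(x,\nabla u(t,x)\big) &= f\big(x, m(t)\big), \\
 \partial_t m(t,x) - \nu\,\Delta m(t,x) - \operatorname{div}\!\big( m(t,x)\,\nabla_p H(x,\nabla u(t,x)) \big) &= 0, \\
 u(T,x) = G\big(x, m(T)\big), \qquad m(0,\cdot) &= m_0 ,
\end{aligned}
```

where u is the value function, m the density of agents, H the Hamiltonian, ν a diffusion coefficient, f the running cost coupling, and G the terminal cost. The first equation is solved backward in time and the second forward; this backward–forward structure is what the references to equations (1) and (2) describe.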
By this definition, at time formula_14 and position formula_15, the value function formula_16 can be determined as:\nformula_17\nGiven the definition of the value function formula_16, it can be tracked by the Hamilton–Jacobi equation (1). The optimal action of the average players formula_19 can be determined as formula_20. As all agents are relatively small and cannot single-handedly change the dynamics of the population, each individually adopts the optimal control and the population moves accordingly. This is similar to a Nash Equilibrium, in which all agents act in response to a specific set of others' strategies. The optimal control solution then leads to the Kolmogorov–Fokker–Planck equation (2).\nFinite State Games.\nA prominent category of mean-field games comprises games with a finite number of states and a finite number of actions per player. For those games, the analog of the Hamilton–Jacobi–Bellman equation is the Bellman equation, and the discrete version of the Fokker–Planck equation is the Kolmogorov equation. Specifically, for discrete-time models, the players' strategy is the Kolmogorov equation's probability matrix. In continuous-time models, players have the ability to control the transition rate matrix.\nA discrete mean-field game can be defined by a tuple formula_21, where formula_22 is the state space, formula_23 the action set, formula_24 the transition rate matrices, formula_25 the initial state, formula_26 the cost functions and formula_27 formula_28 a discount factor. Furthermore, a mixed strategy is a measurable function formula_29 that associates to each state formula_30 and each time formula_31 a probability measure formula_32 on the set of possible actions. Thus formula_33 is the probability that, at time formula_14, a player in state formula_35 takes action formula_36, under strategy formula_37.
Additionally, rate matrices formula_38 define the evolution over time of the population distribution, where formula_39 is the population distribution at time formula_14.\nLinear-quadratic Gaussian game problem.\nFrom Caines (2009), a relatively simple model of large-scale games is the linear-quadratic Gaussian model. The individual agent's dynamics are modeled as a stochastic differential equation\nformula_41\nwhere formula_42 is the state of the formula_35-th agent, formula_44 is the control of the formula_35-th agent, and formula_46 are independent Wiener processes for all formula_47. The individual agent's cost is\nformula_48\nThe coupling between agents occurs in the cost function.\nGeneral and Applied Use.\nThe paradigm of Mean Field Games has become a major connection between distributed decision-making and stochastic modeling. Starting out in the stochastic control literature, it is gaining rapid adoption across a range of applications, including:\na. Financial markets\nCarmona reviews applications in financial engineering and economics that can be cast and tackled within the framework of the MFG paradigm. Carmona argues that models in macroeconomics, contract theory, finance, …, greatly benefit from the switch to continuous time from the more traditional discrete-time models. He considers only continuous-time models in his review chapter, including systemic risk, price impact, optimal execution, models for bank runs, high-frequency trading, and cryptocurrencies.\nb. Crowd motion\nMFG assumes that individuals are smart players who try to optimize their strategy and path with respect to certain costs (an equilibrium with rational expectations approach). MFG models are useful for describing the anticipation phenomenon: the forward part describes the crowd evolution, while the backward part describes how the anticipations are built.
Additionally, compared to multi-agent microscopic model computations, MFG requires lower computational cost for macroscopic simulations. Some researchers have turned to MFG in order to model the interaction between populations and study the decision-making processes of intelligent agents, including aversion and congestion behavior between two groups of pedestrians, departure-time choice of morning commuters, and decision-making processes for autonomous vehicles.\nc. Control and mitigation of epidemics\nSince epidemics affect society and individuals significantly, MFG and mean-field controls (MFCs) provide a perspective from which to study and understand the underlying population dynamics, especially in the context of the Covid-19 pandemic response. MFG has been used to extend SIR-type dynamics with spatial effects or to allow individuals to choose their behaviors and control their contributions to the spread of the disease. MFC is applied to design the optimal strategy to control the virus spreading within a spatial domain, control individuals’ decisions to limit their social interactions, and support the government’s nonpharmaceutical interventions.\nKite rig.\nKite rigs are wind-assisted propulsion systems for propelling a vehicle. They differ from conventional sails in that they are flown from kite control lines, not supported by masts.\nVehicles driven by kites include boats, buggies, and vehicles with snow and ice runners. They may be as simple as a person flying a kite while standing on a specialized skateboard, or be large, complex systems fixed to the vehicle, with powered and automated controls. They have recreational and commercial uses.\nStructure.\nCurrent kite rigs can be sailed within 50 degrees of the wind.
This allows them to sail upwind by tacking.\nA power kite is held at an angle to the wind using control lines. Like any other sail, the kite develops lift and drag, pulling the vessel. The vector of the kite's pull is added to the forces produced by the vessel (water resistance against the hull, force of wheels against the ground, etc.) to move the vessel in the desired direction.\nWind speed increases with height, allowing kites to develop substantially more thrust per unit area than a conventional sail. Winds are also steadier and less turbulent higher up.\nKites may be adjusted with respect to the wind, manually or by an automated system. A kite cannot stay aloft when there is no wind, and must be re-launched.\nApplications.\nSolo sports.\nKite rigs power a variety of recreational conveyances on water and land. On water, kites are used to power surfboard-like boards in the sport of kitesurfing. Kiteboating is done in boats with kite rigs. On land, kite landboarding uses the same mode of power for skateboard-like boards. Over snow, kites power snowboards or skis in the sport of snowkiting. Traction kites for solo sports generally have an area of 1–16 square meters, with anything over ~5 square meters being a big kite that requires expertise.\nShips.\nShip-pulling kites run to hundreds of square meters of area and require special attachment points, a launch and recovery system, and fly-by-wire controls.\nThe SkySails propulsion system consists of a large foil kite, an electronic control system for the kite, and an automatic system to retract the kite.\nThe kite, while 1–2 orders of magnitude larger, bears similarities to the arc kites used in kitesurfing. However, the kite is an inflatable rather than a ram-air kite. Additionally, a control pod is used rather than direct tension on multiple kite control lines; only one line runs the full distance from kite to ship, with the bridle lines running from kite to control pod.
Power to the pod is provided by cables embedded in the line; the same line also carries commands to the control pod from the ship.\nThe kite is launched and recovered by an animated mast or arm, which grips the kite by its leading edge. The mast also inflates and deflates the kite. When not in use, the mast and deflated kite fold away.\nUse.\nA commercial cargo ship, the MS \"Beluga Skysails\", was built and launched in 2007 with a kite rig supplementing conventional propulsion. A European Union-funded four-year study of wind propulsion, using the MS \"Beluga Skysails\", reported that the ship attained 5% fuel savings overall, which translated into a corresponding reduction in CO2 emissions for a typical year and itinerary. The study concluded that 25,000 similarly equipped ships could reduce fuel consumption and achieve corresponding savings in CO2 and NOx emissions. The return on investment for installing a kite sail was estimated to be about two to three years. On her maiden voyage, the MS \"Beluga Skysails\" saved an estimated 10–15% fuel, $1,000 to $1,500 per day, while the kite was in use.\n\"Maartje Theadora\", a large fishing trawler, was retrofitted with a kite rig in 2010.\nCompanies.\nSkySails and KiteShip both made kite rigs.\nQualcomm code-excited linear prediction.\nQualcomm code-excited linear prediction (QCELP), also known as Qualcomm PureVoice, is a speech codec developed in 1994 by Qualcomm to increase the speech quality of the IS-96A codec earlier used in CDMA networks. It was later replaced by EVRC, which provides better speech quality with fewer bits. The two versions, \"QCELP8\" and \"QCELP13\", operate at 8 and 13 kilobits per second (kbit/s), respectively.\nIn CDMA systems, a QCELP vocoder converts a sound signal into a signal transmissible within a circuit.
In wired systems, voice signals are generally sampled at 8 kHz (that is, 8,000 sample values per second) and then encoded by 8-bit quantization for each sample value. Such a system transmits at 64 kbit/s, an expensive rate in a wireless system. A QCELP vocoder with variable rates can reduce the rate enough to fit a wireless system by coding the information more efficiently. In particular, it can change its own coding rates based on the speaker's volume or pitch; a louder or higher-pitched voice requires a higher rate.\nWeb part.\nA Web Part, also called a Web Widget, is an ASP.NET server control which is added to a Web Part Zone on Web Part Pages by users at run time. The controls enable end users to modify the content, appearance, and behavior of Web pages directly from a browser. It can be put into certain places in a web page by end users, after development by a programmer.\nWeb Parts can be used as an add-on ASP.NET technology to Windows SharePoint Services.\nWeb Parts are equivalent to Portlets, but don't necessarily require a web portal such as SharePoint to host them.\nGauss pseudospectral method.\nThe Gauss pseudospectral method (GPM), one of many topics named after Carl Friedrich Gauss, is a direct transcription method for discretizing a continuous optimal control problem into a nonlinear program (NLP). The Gauss pseudospectral method differs from several other pseudospectral methods in that the dynamics are not collocated at either endpoint of the time interval.
This collocation, in conjunction with the proper approximation to the costate, leads to a set of KKT conditions that are identical to the discretized form of the first-order optimality conditions. This equivalence between the KKT conditions and the discretized first-order optimality conditions leads to an accurate costate estimate using the KKT multipliers of the NLP.\nDescription.\nThe method is based on the theory of orthogonal collocation where the collocation points (i.e., the points at which the optimal control problem is discretized) are the Legendre–Gauss (LG) points. The approach used in the GPM is to use a Lagrange polynomial approximation for the state that includes coefficients for the initial state plus the values of the state at the N LG points. In a somewhat opposite manner, the approximation for the costate (adjoint) is performed using a basis of Lagrange polynomials that includes the final value of the costate plus the costate at the N LG points. These two approximations together lead to the ability to map the KKT multipliers of the nonlinear program (NLP) to the costates of the optimal control problem at the N LG points plus the boundary points. The costate mapping theorem that arises from the GPM has been described in several references, including two PhD theses and journal articles that include the theory along with applications.\nBackground.\nPseudospectral methods, also known as \"orthogonal collocation methods\", in optimal control arose from spectral methods which were traditionally used to solve fluid dynamics problems. Seminal work in orthogonal collocation methods for optimal control problems dates back to 1979 with the work of Reddien, and some of the first work using orthogonal collocation methods in engineering can be found in the chemical engineering literature. More recent work in chemical and aerospace engineering has used collocation at the Legendre–Gauss–Radau (LGR) points.
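The Legendre–Gauss points used as collocation points are the roots of the degree-N Legendre polynomial. A self-contained sketch of computing them (Newton's method from standard Chebyshev-based initial guesses; the function name is illustrative):

```python
import math

def legendre_gauss_points(n, tol=1e-14):
    """Roots of the degree-n Legendre polynomial (the LG collocation
    points), found by Newton's method from Chebyshev initial guesses."""
    def p_and_dp(x):
        # three-term recurrence: k*P_k = (2k-1)*x*P_{k-1} - (k-1)*P_{k-2}
        p0, p1 = 1.0, x
        for k in range(2, n + 1):
            p0, p1 = p1, ((2 * k - 1) * x * p1 - (k - 1) * p0) / k
        # derivative identity: P_n'(x) = n*(x*P_n - P_{n-1}) / (x^2 - 1)
        dp = n * (x * p1 - p0) / (x * x - 1.0)
        return p1, dp
    roots = []
    for i in range(1, n + 1):
        x = math.cos(math.pi * (i - 0.25) / (n + 0.5))  # initial guess
        for _ in range(100):
            p, dp = p_and_dp(x)
            dx = p / dp
            x -= dx
            if abs(dx) < tol:
                break
        roots.append(x)
    return sorted(roots)
```

All LG points lie strictly inside the interval, which is exactly the property the GPM exploits: the endpoints are free for the boundary conditions rather than being collocation points.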
Within the aerospace engineering community, several well-known pseudospectral methods have been developed for solving optimal control problems, such as the Chebyshev pseudospectral method (CPM), the Legendre pseudospectral method (LPM), and the Gauss pseudospectral method (GPM). The CPM uses Chebyshev polynomials to approximate the state and control, and performs orthogonal collocation at the Chebyshev–Gauss–Lobatto (CGL) points. An enhancement to the Chebyshev pseudospectral method that uses a Clenshaw–Curtis quadrature was developed. The LPM uses Lagrange polynomials for the approximations, and Legendre–Gauss–Lobatto (LGL) points for the orthogonal collocation. A costate estimation procedure for the Legendre pseudospectral method was also developed. Recent work shows several variants of the standard LPM. The Jacobi pseudospectral method is a more general pseudospectral approach that uses Jacobi polynomials to find the collocation points, of which Legendre polynomials are a subset. Another variant, called the Hermite-LGL method, uses piecewise cubic polynomials rather than Lagrange polynomials, and collocates at a subset of the LGL points.\nCuboid (computer vision).\nIn computer vision, the term cuboid is used to describe a small spatiotemporal volume extracted for purposes of behavior recognition. The cuboid is regarded as a basic geometric primitive type and is used to depict three-dimensional objects within a three-dimensional representation of a flat, two-dimensional image.\nProduction.\nCuboids can be produced from both two-dimensional and three-dimensional images.\nOne method used to produce cuboids utilizes scene understanding (SUN) primitive databases, which are collections of pictures that already contain cuboids.
By sorting through SUN primitive databases with machine learning tools, computers observe the conditions in which cuboids are produced in images from SUN primitive databases and can learn to produce cuboids from other images.\nRGB-D images, which are RGB images that also record the depth of each pixel, are occasionally used to produce cuboids because computers no longer need to determine the depth of an object, as they typically must; the depth is already recorded.\nCuboid production is sensitive to changes in color and illumination, occlusion, and background clutter. This means that it is difficult for computers to produce cuboids of objects that are multicolored, irregularly illuminated, or partially covered, or if there are many objects in the background. This is partially due to the fact that algorithms for producing cuboids are still relatively simple.\nUsage.\nCuboids are created for point cloud-based three-dimensional maps and can be utilized in various situations such as augmented reality, the automated control of cars, drones, and robots, and object detection.\nCuboids allow software to identify a scene through geometric descriptions in an “object-agnostic” fashion.\nInterest points, locations within images that are identified by a computer as essential to identifying the image, created from two-dimensional images can be used with cuboids for image matching, identifying a room or scene, and instance recognition. Interest points created from three-dimensional images can be used with cuboids to recognize activities.
This is possible because interest points help software focus on only the most important aspects of the images.\nRGB-D images and SLAM systems are used together in RGB-D SLAM systems, which are employed by computer-aided design systems to generate point cloud-based three-dimensional maps.\nMost industrial multi-axis machining tools use computer-aided manufacturing and subsequently work in cuboid work spaces.\nCloud manufacturing.\nCloud manufacturing (CMfg) is a new manufacturing paradigm developed from existing advanced manufacturing models (e.g., ASP, AM, NM, MGrid) and enterprise information technologies under the support of cloud computing, Internet of Things (IoT), virtualization and service-oriented technologies, and advanced computing technologies. It transforms manufacturing resources and manufacturing capabilities into manufacturing services, which can be managed and operated in an intelligent and unified way to enable the full sharing and circulation of manufacturing resources and manufacturing capabilities. CMfg can provide safe, reliable, high-quality, cheap, and on-demand manufacturing services for the whole lifecycle of manufacturing. The concept of manufacturing here refers to big manufacturing that includes the whole lifecycle of a product (e.g. design, simulation, production, test, maintenance).\nThe concept of cloud manufacturing was initially proposed by the research group led by Prof. Bo Hu Li and Prof. Lin Zhang in China in 2009.\nRelated discussions and research were conducted thereafter, and some similar definitions (e.g.
Cloud-Based Design and Manufacturing (CBDM)) to cloud manufacturing were introduced.\nCloud manufacturing is a type of parallel, networked, and distributed system consisting of an integrated and inter-connected virtualized service pool (manufacturing cloud) of manufacturing resources and capabilities, as well as capabilities of intelligent management and on-demand use of services, to provide solutions for all kinds of users involved in the whole lifecycle of manufacturing.\nTypes.\nCloud manufacturing can be divided into two categories.\nIn a CMfg system, various manufacturing resources and abilities can be intelligently sensed and connected into the wider Internet, and automatically managed and controlled using IoT technologies (e.g., RFID, wired and wireless sensor networks, embedded systems). The manufacturing resources and abilities are then virtualized and encapsulated into different manufacturing cloud services (MCSs), which can be accessed, invoked, and deployed based on knowledge by using virtualization technologies, service-oriented technologies, and cloud computing technologies. The MCSs are classified and aggregated according to specific rules and algorithms, and different kinds of manufacturing clouds are constructed. Different users can search for and invoke the qualified MCSs from the related manufacturing cloud according to their needs, and assemble them into a virtual manufacturing environment or solution to complete their manufacturing tasks involved in the whole life cycle of manufacturing processes under the support of cloud computing, service-oriented technologies, and advanced computing technologies.\nFour cloud deployment modes (public, private, community, and hybrid clouds) are in common use, each presented as a single point of access.\nResources.\nFrom the resource’s perspective, each kind of manufacturing capability requires support from the related manufacturing resource.
For each type of manufacturing capability, its related manufacturing resource comes in two forms: soft resources and hard resources.", "Automation-Control": 0.9862654209, "Qwen2": "Yes"} {"id": "2427912", "revid": "7903804", "url": "https://en.wikipedia.org/wiki?curid=2427912", "title": "False nearest neighbor algorithm", "text": "In nonlinear time series analysis, the false nearest neighbor algorithm is an algorithm for estimating the embedding dimension. The concept was proposed by Kennel et al. (1992). The main idea is to examine how the number of neighbors of a point along a signal trajectory changes with increasing embedding dimension. In too low an embedding dimension, many of the neighbors will be false, but in an appropriate embedding dimension or higher, the neighbors are real. With increasing dimension, the false neighbors will no longer be neighbors. Therefore, by examining how the number of neighbors changes as a function of dimension, an appropriate embedding dimension can be determined.", "Automation-Control": 0.9734650254, "Qwen2": "Yes"} {"id": "48777199", "revid": "38427", "url": "https://en.wikipedia.org/wiki?curid=48777199", "title": "Manifold regularization", "text": "In machine learning, manifold regularization is a technique for using the shape of a dataset to constrain the functions that should be learned on that dataset. In many machine learning problems, the data to be learned do not cover the entire input space. For example, a facial recognition system may not need to classify any possible image, but only the subset of images that contain faces. The technique of manifold learning assumes that the relevant subset of data comes from a manifold, a mathematical structure with useful properties. The technique also assumes that the function to be learned is \"smooth\": data with different labels are not likely to be close together, and so the labeling function should not change quickly in areas where there are likely to be many data points. 
Because of this assumption, a manifold regularization algorithm can use unlabeled data to inform where the learned function is allowed to change quickly and where it is not, using an extension of the technique of Tikhonov regularization. Manifold regularization algorithms can extend supervised learning algorithms in semi-supervised learning and transductive learning settings, where unlabeled data are available. The technique has been used for applications including medical imaging, geographical imaging, and object recognition.\nManifold regularizer.\nMotivation.\nManifold regularization is a type of regularization, a family of techniques that reduces overfitting and ensures that a problem is well-posed by penalizing complex solutions. In particular, manifold regularization extends the technique of Tikhonov regularization as applied to Reproducing kernel Hilbert spaces (RKHSs). Under standard Tikhonov regularization on RKHSs, a learning algorithm attempts to learn a function formula_1 from among a hypothesis space of functions formula_2. The hypothesis space is an RKHS, meaning that it is associated with a kernel formula_3, and so every candidate function formula_1 has a norm formula_5, which represents the complexity of the candidate function in the hypothesis space. When the algorithm considers a candidate function, it takes its norm into account in order to penalize complex functions.\nFormally, given a set of labeled training data formula_6 with formula_7 and a loss function formula_8, a learning algorithm using Tikhonov regularization will attempt to solve the expression\nwhere formula_10 is a hyperparameter that controls how much the algorithm will prefer simpler functions over functions that fit the data better.\nManifold regularization adds a second regularization term, the \"intrinsic regularizer\", to the \"ambient regularizer\" used in standard Tikhonov regularization. 
Under the manifold assumption in machine learning, the data in question do not come from the entire input space formula_11, but instead from a nonlinear manifold formula_12. The geometry of this manifold, the intrinsic space, is used to determine the regularization norm.\nLaplacian norm.\nThere are many possible choices for the intrinsic regularizer formula_13. Many natural choices involve the gradient on the manifold formula_14, which can provide a measure of how smooth a target function is. A smooth function should change slowly where the input data are dense; that is, the gradient formula_15 should be small where the \"marginal probability density\" formula_16, the probability density of a randomly drawn data point appearing at formula_17, is large. This gives one appropriate choice for the intrinsic regularizer:\nIn practice, this norm cannot be computed directly because the marginal distribution formula_19 is unknown, but it can be estimated from the provided data. \nGraph-based approach of the Laplacian norm.\nWhen the input points are interpreted as the vertices of a graph whose edge weights reflect the distances between points, the Laplacian matrix of the graph can help to estimate the marginal distribution. Suppose that the input data include formula_20 labeled examples (pairs of an input formula_17 and a label formula_22) and formula_23 unlabeled examples (inputs without associated labels). Define formula_24 to be a matrix of edge weights for a graph, where formula_25 is an edge weight based on a distance measure between the data points formula_26 and formula_27. Define formula_28 to be a diagonal matrix with formula_29 and formula_30 to be the Laplacian matrix formula_31. Then, as the number of data points formula_32 increases, formula_30 converges to the Laplace–Beltrami operator formula_34, which is the divergence of the gradient formula_35. 
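The graph construction just described can be sketched in a few lines. This is a minimal illustration: the Gaussian similarity used for the edge weights and the bandwidth `sigma` are assumptions for the example, since the text leaves the exact weighting unspecified.

```python
import numpy as np

def graph_laplacian(X, sigma=1.0):
    """Unnormalized graph Laplacian L = D - W over the rows of X, with
    Gaussian edge weights W_ij = exp(-||x_i - x_j||^2 / (2 sigma^2))."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)  # pairwise squared distances
    W = np.exp(-sq / (2.0 * sigma ** 2))
    np.fill_diagonal(W, 0.0)            # no self-loops
    D = np.diag(W.sum(axis=1))          # diagonal degree matrix
    return D - W
```

The resulting matrix is symmetric with zero row sums, and the quadratic form f·L·f equals ½ Σᵢⱼ Wᵢⱼ(fᵢ − fⱼ)², so it is non-negative and penalizes functions that vary across heavily weighted (i.e. nearby) pairs of points.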
Then, if formula_36 is a vector of the values of formula_1 at the data, formula_38, the intrinsic norm can be estimated:\nAs the number of data points formula_32 increases, this empirical definition of formula_41 converges to the definition when formula_19 is known.\nSolving the regularization problem with the graph-based approach.\nUsing the weights formula_43 and formula_44 for the ambient and intrinsic regularizers, the final expression to be solved becomes:\nAs with other kernel methods, formula_2 may be an infinite-dimensional space, so if the regularization expression cannot be solved explicitly, it is impossible to search the entire space for a solution. Instead, a representer theorem shows that under certain conditions on the choice of the norm formula_13, the optimal solution formula_48 must be a linear combination of the kernel centered at each of the input points: for some weights formula_49,\nUsing this result, it is possible to search for the optimal solution formula_48 by searching the finite-dimensional space defined by the possible choices of formula_49.\nFunctional approach of the Laplacian norm.\nThe idea behind the graph Laplacian is to use neighbors to estimate the Laplacian. 
\nThis method is akin to local averaging methods, which are known to scale poorly in high-dimensional problems.\nIndeed, the graph Laplacian is known to suffer from the curse of dimensionality.\nFortunately, it is possible to leverage the expected smoothness of the function to be estimated, thanks to more advanced functional analysis.\nThis method consists of estimating the Laplacian operator through derivatives of the kernel, reading formula_53, where formula_54 denotes the partial derivative with respect to the \"j\"-th coordinate of the first variable.\nThis second approach to the Laplacian norm is related to meshfree methods, which contrast with finite difference methods for PDEs.\nApplications.\nManifold regularization can extend a variety of algorithms that can be expressed using Tikhonov regularization, by choosing an appropriate loss function formula_8 and hypothesis space formula_2. Two commonly used examples are the families of support vector machines and regularized least squares algorithms. (Regularized least squares includes the ridge regression algorithm; the related algorithms of LASSO and elastic net regularization can be expressed as support vector machines.) The extended versions of these algorithms are called Laplacian Regularized Least Squares (abbreviated LapRLS) and Laplacian Support Vector Machines (LapSVM), respectively.\nLaplacian Regularized Least Squares (LapRLS).\nRegularized least squares (RLS) is a family of regression algorithms: algorithms that predict a value formula_57 for an input formula_17, with the goal that the predicted values should be close to the true labels for the data. In particular, RLS is designed to minimize the mean squared error between the predicted values and the true labels, subject to regularization. Ridge regression is one form of RLS; in general, RLS is the same as ridge regression combined with the kernel method. 
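A minimal sketch of kernel RLS, using the closed form that follows from the representer theorem; note that scaling the regularization weight by the number of training points is one common convention, an assumption here rather than something the text fixes:

```python
import numpy as np

def rls_fit(K, y, lam):
    """Kernel regularized least squares: the minimizer has the form
    f(x) = sum_i c_i k(x_i, x), with coefficients
    c = (K + lam * n * I)^{-1} y for kernel matrix K and labels y."""
    n = K.shape[0]
    return np.linalg.solve(K + lam * n * np.eye(n), y)
```

With a small regularization weight the fitted values `K @ c` track the training labels closely; increasing `lam` trades training error for a smaller RKHS norm.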
The problem statement for RLS results from choosing the loss function formula_8 in Tikhonov regularization to be the mean squared error:\nThanks to the representer theorem, the solution can be written as a weighted sum of the kernel evaluated at the data points:\nand solving for formula_62 gives:\nwhere formula_3 is defined to be the kernel matrix, with formula_65, and formula_66 is the vector of data labels.\nAdding a Laplacian term for manifold regularization gives the Laplacian RLS statement:\nThe representer theorem for manifold regularization again gives\nand this yields an expression for the vector formula_62. Letting formula_3 be the kernel matrix as above, formula_66 be the vector of data labels, and formula_72 be the formula_73 block matrix formula_74:\nwith a solution of\nLapRLS has been applied to problems including sensor networks,\nmedical imaging,\nobject detection,\nspectroscopy,\ndocument classification,\ndrug-protein interactions,\nand compressing images and videos.\nLaplacian Support Vector Machines (LapSVM).\nSupport vector machines (SVMs) are a family of algorithms often used for classifying data into two or more groups, or \"classes\". Intuitively, an SVM draws a boundary between classes so that the closest labeled examples to the boundary are as far away as possible. This can be directly expressed as a linear program, but it is also equivalent to Tikhonov regularization with the hinge loss function, formula_77:\nAdding the intrinsic regularization term to this expression gives the LapSVM problem statement:\nAgain, the representer theorem allows the solution to be expressed in terms of the kernel evaluated at the data points:\nformula_81 can be found by writing the problem as a linear program and solving the dual problem. 
Again letting formula_3 be the kernel matrix and formula_72 be the block matrix formula_74, the solution can be shown to be\nwhere formula_86 is the solution to the dual problem\nand formula_88 is defined by\nLapSVM has been applied to problems including geographical imaging,\nmedical imaging,\nface recognition,\nmachine maintenance,\nand brain–computer interfaces.", "Automation-Control": 0.7635034323, "Qwen2": "Yes"} {"id": "314366", "revid": "44058392", "url": "https://en.wikipedia.org/wiki?curid=314366", "title": "H-infinity methods in control theory", "text": "H\"∞ (i.e. \"H\"-infinity\") methods are used in control theory to synthesize controllers to achieve stabilization with guaranteed performance. To use \"H\"∞ methods, a control designer expresses the control problem as a mathematical optimization problem and then finds the controller that solves this optimization. \"H\"∞ techniques have the advantage over classical control techniques in that \"H\"∞ techniques are readily applicable to problems involving multivariate systems with cross-coupling between channels; disadvantages of \"H\"∞ techniques include the level of mathematical understanding needed to apply them successfully and the need for a reasonably good model of the system to be controlled. It is important to keep in mind that the resulting controller is only optimal with respect to the prescribed cost function and does not necessarily represent the best controller in terms of the usual performance measures used to evaluate controllers such as settling time, energy expended, etc. Also, non-linear constraints such as saturation are generally not well-handled. These methods were introduced into control theory in the late 1970s-early 1980s\nby George Zames (sensitivity minimization), J. 
William Helton (broadband matching),\nand Allen Tannenbaum (gain margin optimization).\nThe phrase \"H\"∞ \"control\" comes from the name of the mathematical space over which the optimization takes place: \"H\"∞ is the \"Hardy space\" of matrix-valued functions that are analytic and bounded in the open right-half of the complex plane defined by Re(\"s\") > 0; the \"H\"∞ norm is the maximum singular value of the function over that space. (This can be interpreted as a maximum gain in any direction and at any frequency; for SISO systems, this is effectively the maximum magnitude of the frequency response.) \"H\"∞ techniques can be used to minimize the closed loop impact of a perturbation: depending on the problem formulation, the impact will either be measured in terms of stabilization or performance.\nSimultaneously optimizing robust performance and robust stabilization is difficult. One method that comes close to achieving this is \"H\"∞ loop-shaping, which allows the control designer to apply classical loop-shaping concepts to the multivariable frequency response to get good robust performance, and then optimizes the response near the system bandwidth to achieve good robust stabilization.\nCommercial software is available to support \"H\"∞ controller synthesis.\nProblem formulation.\nFirst, the process has to be represented according to the following standard configuration:\nThe plant \"P\" has two inputs, the exogenous input \"w\", that includes reference signal and disturbances, and the manipulated variables \"u\". There are two outputs, the error signals \"z\" that we want to minimize, and the measured variables \"v\", that we use to control the system. \"v\" is used in \"K\" to calculate the manipulated variables \"u\". 
Notice that all these are generally vectors, whereas P and K are matrices.\nIn formulae, the system is:\nIt is therefore possible to express the dependency of \"z\" on \"w\" as:\nThis map, called the \"lower linear fractional transformation\" formula_4 (the subscript comes from \"lower\"), is defined as:\nTherefore, the objective of formula_6 control design is to find a controller formula_7 such that formula_8 is minimised according to the formula_6 norm. The same definition applies to formula_10 control design. The infinity norm of the transfer function matrix formula_8 is defined as:\nwhere formula_13 is the maximum singular value of the matrix formula_14.\nThe achievable \"H\"∞ norm of the closed loop system is mainly given through the matrix \"D\"11 (when the system \"P\" is given in the form (\"A\", \"B\"1, \"B\"2, \"C\"1, \"C\"2, \"D\"11, \"D\"12, \"D\"22, \"D\"21)). There are several ways to arrive at an \"H\"∞ controller:", "Automation-Control": 0.9391551614, "Qwen2": "Yes"} {"id": "912904", "revid": "35498457", "url": "https://en.wikipedia.org/wiki?curid=912904", "title": "Flow control valve", "text": "A flow control valve regulates the flow or pressure of a fluid. Control valves normally respond to signals generated by independent devices such as flow meters or temperature gauges.\nOperation.\nControl valves are normally fitted with actuators and positioners. Pneumatically-actuated globe valves and diaphragm valves are widely used for control purposes in many industries, although quarter-turn types such as (modified) ball and butterfly valves are also used.\nControl valves can also work with hydraulic actuators (also known as hydraulic pilots). These types of valves are also known as automatic control valves. The hydraulic actuators respond to changes of pressure or flow and will open/close the valve. Automatic control valves do not require an external power source, meaning that the fluid pressure is enough to open and close them. 
\nAutomatic control valves include pressure reducing valves, flow control valves, back-pressure sustaining valves, altitude valves, and relief valves.\nApplication.\nProcess plants consist of hundreds, or even thousands, of control loops all networked together to produce a product to be offered for sale. Each of these control loops is designed to keep some important process variable, such as pressure, flow, level, or temperature, within a required operating range to ensure the quality of the end product. Each loop receives and internally creates disturbances that detrimentally affect the process variable, and interaction from other loops in the network provides disturbances that influence the process variable.\nTo reduce the effect of these load disturbances, sensors and transmitters collect information about the process variable and its relationship to some desired set point. A controller then processes this information and decides what must be done to get the process variable back to where it should be after a load disturbance occurs. When all the measuring, comparing, and calculating are done, some type of final control element must implement the strategy selected by the controller. The most common final control element in the process control industries is the control valve. The control valve manipulates a flowing fluid, such as gas, steam, water, or chemical compounds, to compensate for the load disturbance and keep the regulated process variable as close as possible to the desired set point.", "Automation-Control": 1.0000085831, "Qwen2": "Yes"} {"id": "32776972", "revid": "40561892", "url": "https://en.wikipedia.org/wiki?curid=32776972", "title": "Morita Kagaku Kogyo Co., Ltd", "text": "Morita Kagaku Kogyo Co., Ltd was founded in 1949 as a sweetener manufacturer. Following social trends, the company began research on natural products as alternatives. 
In 1971, the company developed the world's first system for producing a natural stevia sweetener.\nThe company has more recently been involved in the manufacture of a sweetener based on rebaudioside A, extracted from the stevia leaf.\nProduct.\nKarori Zero is a Japanese powdered sweetener made by Morita Kagaku Kogyo Co., Ltd using its latest technology. It is made with rebaudioside A and erythritol, which have been shown to be safe for human consumption.", "Automation-Control": 0.7608895898, "Qwen2": "Yes"} {"id": "1722960", "revid": "11555324", "url": "https://en.wikipedia.org/wiki?curid=1722960", "title": "Hydroforming", "text": "Hydroforming is a cost-effective way of shaping ductile metals such as aluminium, brass, low alloy steel, and stainless steel into lightweight, structurally stiff and strong pieces. One of the largest applications of hydroforming is the automotive industry, which makes use of the complex shapes made possible by hydroforming to produce stronger, lighter, and more rigid unibody structures for vehicles. This technique is particularly popular with the high-end sports car industry and is also frequently employed in the shaping of aluminium tubes for bicycle frames.\nHydroforming is a specialized type of die forming that uses a high-pressure hydraulic fluid to press room-temperature working material into a die. To hydroform aluminium into a vehicle's frame rail, a hollow tube of aluminium is placed inside a negative mold that has the shape of the desired result. High-pressure hydraulic pumps then inject fluid at very high pressure inside the aluminium tube, which causes it to expand until it matches the mold. The hydroformed aluminium is then removed from the mold.\nHydroforming allows complex shapes with concavities to be formed, which would be difficult or impossible with standard solid die stamping. 
Hydroformed parts can often be made with a higher stiffness-to-weight ratio and at a lower per-unit cost than traditional stamped or stamped and welded parts. Virtually all metals capable of cold forming can be hydroformed, including aluminium, brass, carbon and stainless steel, copper, and high-strength alloys.\nIf electrodes are used to vaporize the fluid explosively in an arc, the result is a similar process known as electrohydraulic forming.\nMain process variants.\nSheet hydroforming.\nThis process is based on the 1950s patent for hydramolding by Fred Leuthesser, Jr. and John Fox of the Schaible Company of Cincinnati, Ohio in the United States. It was originally used in producing kitchen spouts, because in addition to strengthening the metal, hydromolding also produced less \"grainy\" parts, allowing for easier metal finishing.\nSheet hydroforming takes two forms: bladder forming (in which a bladder contains the liquid, so no liquid contacts the sheet) and hydroforming in which the fluid contacts the sheet directly (no bladder). Bladder forming is sometimes called flexforming. Flexforming is mostly used for low-volume production, as in the aerospace field.\nForming with the fluid in direct contact with the part can be done either with a male solid punch (this version is sometimes called hydro-mechanical deep drawing) or with a female solid die.\nIn hydro-mechanical deep drawing, a work piece is placed on a draw ring (blank holder) over a male punch; a hydraulic chamber then surrounds the work piece, and a relatively low initial pressure seats the work piece against the punch. The punch is then raised into the hydraulic chamber and pressure is increased to as high as 100 MPa (15000 psi), which forms the part around the punch. 
The pressure is then released, the punch retracted, and the hydraulic chamber lifted, completing the process.\nAmong these techniques, hydraulic bulge testing allows for increased work hardening of sheet material through distinct stretching operations and provides better shape accuracy for complex parts. Hence, by selecting the proper material and forming parameters for a hydraulic sheet bulging study, one can determine Forming Limit Curves (FLCs).\nTube hydroforming.\nIn tube hydroforming there are two major practices: high pressure and low pressure.\nWith the high-pressure process the tube is fully enclosed in a die prior to pressurization of the tube. In the low-pressure process the tube is slightly pressurized to a fixed volume during the closing of the die (this used to be called the Variform process). Historically, the process was patented in the 1950s, but it spread industrially in the 1970s for the production of large T-shaped joints for the oil and gas industry. Today it is mostly used in the automotive sector, where many industrial applications can be found. With the rise of the electric bicycle, it is now a method of choice for e-bike manufacturers: down tubes and top tubes in particular are often hydroformed to accommodate the battery. The newest applications in the bicycle industry are hydroformed handlebars that improve aerodynamics and ergonomics.\nIn tube hydroforming, pressure is applied to the inside of a tube that is held by dies with the desired cross sections and forms. When the dies are closed, the tube ends are sealed by axial punches and the tube is filled with hydraulic fluid. The internal pressure can go up to a few thousand bars and causes the tube to calibrate against the dies. The fluid is injected into the tube through one of the two axial punches. Axial punches are movable and their action is required to provide axial compression and to feed material towards the center of the bulging tube. 
Transverse counterpunches may also be incorporated in the forming die in order to form protrusions with a small diameter-to-length ratio. Transverse counterpunches may also be used to punch holes in the work piece at the end of the forming process.\nDesigning the process has in the past been a challenging task, since initial analytical modeling is possible only for limited cases. Advances in FEA and FEM in recent years have enabled hydroform processes to be more widely engineered for varieties of parts and materials. Often FEM simulations must be performed in order to find a feasible process solution and to define the correct loading curves: pressure vs. time and axial feed vs. time. In the case of more complex tube hydroformed parts, the tube must be pre-bent prior to loading into the hydroforming die. Bending is done sequentially along the length of the tube, with the tube being bent around bending discs (or dies) as the tube length is fed in. Bending can be done with or without mandrels. This additional process complexity further increases the reliance on FEM for designing and evaluating manufacturing processes. The feasibility of a hydroforming process must take into consideration the initial tube material properties and their potential for variation, along with the bending process, the hydraulic pressure throughout the forming process, and the inclusion (or not) of axial feed, in order to predict metal formability.\nTypical tools.\nTools and punches can be interchanged for different part requirements.\nOne advantage of hydroforming is the savings on tools. For sheet metal, only a draw ring and a punch or male die are required. Depending on the part being formed, the punch can be made from epoxy rather than metal. The bladder of the hydroform itself acts as the female die, eliminating the need to fabricate it. This allows changes in material thickness to be made with usually no necessary changes to the tool. 
However, dies must be highly polished, and in tube hydroforming a two-piece die is required to allow opening and closing.\nGeometry produced.\nAnother advantage of hydroforming is that complex shapes can be made in one step. In sheet hydroforming, with the bladder acting as the male die, almost limitless geometries can be produced. However, the process is limited by the very high closing force required in order to seal the dies, especially for large panels and thick, hard materials. Small concave corner radii are difficult to calibrate completely, i.e. to fill, because too large a pressure would be required. In fact, the die closing force can be very high, both in tube and sheet hydroforming, and may easily overcome the maximum tonnage of the forming press. In order to keep the die closing force under prescribed limits, the maximum internal fluid pressure must be limited. This reduces the calibration abilities of the process, i.e. it reduces the possibility of forming parts with small concave radii.\nLimits of the sheet hydroforming process are due to the risks of excessive thinning, fracture, and wrinkling, and are strictly related to the material formability and to a proper selection of process parameters (e.g. the hydraulic pressure vs. time curve). Tube hydroforming can produce many geometric options as well, reducing the need for tube welding operations. Similar limitations and risks can be listed as in sheet hydroforming; however, the maximum closing force is seldom a limiting factor in tube hydroforming.\nTolerances and surface finish.\nHydroforming is capable of producing parts within tight tolerances, including aircraft tolerances, where a common tolerance for sheet metal parts is within 0.76 mm (1/30th of an inch). Metal hydroforming also allows for a smoother finish, as draw marks produced by the traditional method of pressing a male and female die together are eliminated. 
\nWhile springback has long been a topic of discussion for sheet metal forming operations, it has been far less of a topic of research for tube hydroforming. This may in part be a result of the relatively low levels of springback naturally occurring when deforming the tubes into their closed-section geometries. Tube-hydroformed sections, by the nature of their closed sections, are very rigid and do not display high degrees of elastic deformation under load. For this reason, the residual stress induced during tube hydroforming may be insufficient to deform the part elastically after the completion of forming. However, as more and more tubular parts are manufactured using high-strength and advanced high-strength steels, springback must be accounted for in the design and manufacture of closed-section tube hydroformed parts.\nExamples.\nNotable examples include:\nReferences.\n", "Automation-Control": 0.9996880293, "Qwen2": "Yes"} {"id": "1724001", "revid": "6908984", "url": "https://en.wikipedia.org/wiki?curid=1724001", "title": "Job shop", "text": "Job shops are typically small manufacturing systems that handle job production, that is, custom/bespoke or semi-custom/bespoke manufacturing processes such as small to medium-size customer orders or batch jobs. Job shops typically move on to different jobs (possibly with different customers) when each job is completed. Job shop machines are aggregated in shops by the nature of the skills and technological processes involved; each shop therefore may contain different machines, which gives this production system processing flexibility, since jobs are not necessarily constrained to a single machine. 
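As a toy illustration of this routing flexibility (all names and the tiny greedy scheduler below are hypothetical, not from the article): each job visits machines in its own order, a schedule can be built by dispatching jobs in a priority order, and finding the best priority order by brute force requires a search that grows factorially with the number of jobs.

```python
from itertools import permutations

def makespan(jobs, order):
    """Makespan of a schedule built greedily in the given job priority
    order; jobs[j] is a list of (machine, duration) operations."""
    machine_free, job_free = {}, {j: 0 for j in range(len(jobs))}
    for j in order:
        for m, d in jobs[j]:
            # An operation starts when both its job and its machine are free.
            start = max(job_free[j], machine_free.get(m, 0))
            job_free[j] = machine_free[m] = start + d
    return max(job_free.values())

def best_order(jobs):
    """Brute force over all job priority orders (factorial search space)."""
    return min(permutations(range(len(jobs))), key=lambda o: makespan(jobs, o))
```

For example, with two jobs that visit two machines in opposite orders, `jobs = [[(0, 3), (1, 2)], [(1, 2), (0, 3)]]`, either priority order yields a makespan of 10 under this simple dispatcher.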
In computer science, the problem of job shop scheduling is considered strongly NP-hard.\nA typical example would be a machine shop, which may make parts for local industrial machinery, farm machinery and implements, boats and ships, or even batches of specialized components for the aircraft industry. Other types of common job shops are grinding, honing, jig-boring, gear manufacturing, and fabrication shops.\nThe opposite would be continuous-flow manufacturing, such as textile, steel, and food manufacturing, and manual labor.\nAdvantages.\nCompared to a transfer line.\nDisadvantages.\nCompared to a transfer line.", "Automation-Control": 0.9600141048, "Qwen2": "Yes"} {"id": "58878004", "revid": "8340447", "url": "https://en.wikipedia.org/wiki?curid=58878004", "title": "Stochastic gradient Langevin dynamics", "text": "Stochastic gradient Langevin dynamics (SGLD) is an optimization and sampling technique that combines characteristics of stochastic gradient descent, a Robbins–Monro optimization algorithm, and Langevin dynamics, a mathematical extension of molecular dynamics models. Like stochastic gradient descent, SGLD is an iterative optimization algorithm which uses minibatching to create a stochastic gradient estimator, as used in SGD to optimize a differentiable objective function. Unlike traditional SGD, SGLD can be used for Bayesian learning as a sampling method. SGLD may be viewed as Langevin dynamics applied to posterior distributions, but the key difference is that the likelihood gradient terms are minibatched, as in SGD. SGLD, like Langevin dynamics, produces samples from a posterior distribution of parameters based on available data. 
First described by Welling and Teh in 2011, the method has applications in many contexts which require optimization, and is most notably applied in machine learning problems.\nFormal definition.\nGiven some parameter vector formula_1, its prior distribution formula_2, and a set of data points formula_3, Langevin dynamics samples from the posterior distribution formula_4 by updating the chain:\nStochastic gradient Langevin dynamics uses a modified update procedure with minibatched likelihood terms:\nwhere formula_7 is a positive integer, formula_8 is Gaussian noise, formula_9 is the likelihood of the data given the parameter vector formula_1, and the step sizes formula_11 satisfy the following conditions:\nFor early iterations of the algorithm, each parameter update mimics stochastic gradient descent; however, as the algorithm approaches a local minimum or maximum, the gradient shrinks to zero and the chain produces samples surrounding the maximum a posteriori mode, allowing for posterior inference. This process generates approximate samples from the posterior by balancing the variance of the injected Gaussian noise against that of the stochastic gradient estimate.\nApplication.\nSGLD is applicable in any optimization context for which it is desirable to quickly obtain posterior samples instead of a maximum a posteriori mode. In doing so, the method maintains the computational efficiency of stochastic gradient descent when compared to traditional gradient descent while providing additional information regarding the landscape around the critical point of the objective function. In practice, SGLD can be applied to the training of Bayesian neural networks in deep learning, a task in which the method provides a distribution over model parameters. By introducing information about the variance of these parameters, SGLD characterizes the generalizability of these models at certain points in training. 
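The minibatched update described in the formal definition can be sketched as follows. This is a minimal illustration, not Welling and Teh's reference implementation: the fixed scalar step size `eps`, the rescaling of the minibatch gradient by `n_total / |batch|`, and the function names are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def sgld_step(theta, batch, grad_log_prior, grad_log_lik, n_total, eps):
    """One SGLD update: the minibatch likelihood gradient is rescaled to
    estimate the full-data gradient, and Gaussian noise of variance eps
    is injected on top of the (half-step-sized) gradient move."""
    g = grad_log_prior(theta) + (n_total / len(batch)) * sum(
        grad_log_lik(theta, x) for x in batch
    )
    return theta + 0.5 * eps * g + rng.normal(0.0, np.sqrt(eps), size=np.shape(theta))
```

On a toy conjugate-Gaussian model (standard normal prior, unit-variance Gaussian likelihood), iterating this step produces samples that concentrate around the posterior mean, as expected of a Langevin-type sampler.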
Additionally, obtaining samples from a posterior distribution permits uncertainty quantification by means of confidence intervals, a feature which is not possible using traditional stochastic gradient descent.\nVariants and associated algorithms.\nIf gradient computations are exact, SGLD reduces to the \"Langevin Monte Carlo\" algorithm, first coined in the literature of lattice field theory. This algorithm is also a reduction of Hamiltonian Monte Carlo, consisting of a single leapfrog step proposal rather than a series of steps. Since SGLD can be formulated as a modification of both stochastic gradient descent and MCMC methods, the method lies at the intersection between optimization and sampling algorithms; the method maintains SGD's ability to quickly converge to regions of low cost while providing samples to facilitate posterior inference.\nConsidering relaxed constraints on the step sizes formula_11 such that they do not approach zero asymptotically, SGLD fails to produce samples for which the Metropolis–Hastings rejection rate is zero, and thus an MH rejection step becomes necessary. The resulting algorithm, dubbed the Metropolis Adjusted Langevin algorithm, requires the step:\nwhere formula_15 is a normal distribution centered one gradient descent step from formula_16 and formula_17 is our target distribution.\nMixing rates and algorithmic convergence.\nRecent contributions have proven upper bounds on mixing times for both the traditional Langevin algorithm and the Metropolis adjusted Langevin algorithm. Released in Ma et al., 2018, these bounds define the rate at which the algorithms converge to the true posterior distribution, defined formally as:\nwhere formula_19 is an arbitrary error tolerance, formula_20 is some initial distribution, formula_21 is the posterior distribution, and formula_22 is the total variation norm.
Under some regularity conditions of an L-Lipschitz smooth objective function formula_23 which is m-strongly convex outside of a region of radius formula_24 with condition number formula_25, we have mixing rate bounds:\nwhere formula_28 and formula_29 refer to the mixing rates of the Unadjusted Langevin Algorithm and the Metropolis Adjusted Langevin Algorithm, respectively. These bounds are important because they show that the computational complexity is polynomial in dimension formula_30 conditional on formula_31 being formula_32.", "Automation-Control": 0.9979986548, "Qwen2": "Yes"} {"id": "73575144", "revid": "7903804", "url": "https://en.wikipedia.org/wiki?curid=73575144", "title": "Power Surfacing", "text": "Power Surfacing is computer-aided design software that allows users to create and edit complex freeform surfaces in SOLIDWORKS. It is developed by nPower Software, a division of IntegrityWare Inc., and is available as an add-in for SOLIDWORKS.\nOverview.\nPower Surfacing uses subdivision surface (Sub-D) modeling and Non-uniform rational B-spline (NURBS) modeling methods together, to provide a flexible and intuitive way of designing organic shapes with high quality class A surfaces. Users can create and manipulate Sub-D models inside SOLIDWORKS, and convert them to NURBS models that are compatible with SOLIDWORKS features and commands. Power Surfacing also supports reverse engineering of scanned meshes with Power Surfacing RE, a separate add-in that can reconstruct Sub-D models from polygonal meshes.\nPower Surfacing is designed for industrial design, product design, automotive design, jewelry design, and other applications that require complex freeform surfaces. It aims to simplify the design process and reduce the editing time for organic shapes, compared to traditional surface creation methods.
It also provides video tutorials and examples to help users learn how to use the software effectively.\nFeatures.\nSome of the features of Power Surfacing include:\nUsage.\nPower Surfacing functions as a generative design tool, generating iterative, evolutionary results based on initial constraints.\nThis tool is commonly used to optimize manufacturing processes for parts in various industries, such as automotive, packaging design, and medical implants.\nPower Surfacing can also reverse-engineer the shapes of 3D-scanned objects and recreate their geometry algorithmically, facilitating reproduction through industrial production processes. This capability can be employed to digitally replicate physical aspects of human anatomy, such as bones, and modify the model to produce precise-fitting physical prostheses for patients.", "Automation-Control": 0.7496148348, "Qwen2": "Yes"} {"id": "46355761", "revid": "38132428", "url": "https://en.wikipedia.org/wiki?curid=46355761", "title": "SARL (programming language)", "text": "The SARL programming language is a modular agent-oriented programming language. It aims at providing the fundamental abstractions for dealing with concurrency, distribution, interaction, decentralization, reactivity, autonomy and dynamic reconfiguration.\nSARL is platform-independent and agnostic to the agent's architecture. It provides a set of agent-oriented first-class abstractions directly at the language level (see the section on the concepts). Nevertheless, it supports the integration and the mapping of concepts provided by other agent-oriented metamodels.
SARL itself exploits this extension mechanism for defining its own extensions (organizational, event-driven, etc.).\nAn important feature of the SARL programming language is its native support for \"holonic multiagent systems,\" and \"recursive agents\"\n(also called \"holons\").\nOverview.\nThe metamodel of SARL is based on four main concepts: Agent, Capacity, Space and Skill.\nThe core metamodel of SARL is presented in Figure 1, and the main concepts are colored in light blue.\nEach of them is detailed in the following sections, along with the corresponding piece of SARL code to illustrate its practical use.\nIn SARL, a Multiagent System (MAS) is a collection of Agents interacting together in shared distributed Spaces.\nEach agent has a collection of Capacities describing what it is able to perform, its personal competences.\nEach Capacity may then be realized/implemented by various Skills.\nTo understand the relationship between the concepts of Capacity and Skill, a parallel can be drawn with concepts of Interface and their implementation classes in object-oriented languages.\nTo implement specific architectures (like BDI, reasoning, reactive, hybrid, etc.) developers should develop their own capacities and skills, providing the agents with new exploitable features.\nDespite its open nature, SARL imposes some fundamental principles to be respected by the various Virtual Machines (VMs) that want to support it. First of all, the implementation of Space must be fully distributed and the execution layer must be abstracted from agents. SARL encourages a massively parallel execution of Agents and Behaviors. SARL is fully interoperable with Java to easily reuse all the contributions provided by the Java community, but also to facilitate the integration and evolution of legacy systems. One of the key principles governing SARL consists in not imposing a predefined way for Agents to interact within a Space.
Similarly, the way to identify agents is dependent on the type of Space considered. This makes it possible to define different types of interaction mechanisms and models on Spaces.\nThe metamodel and the syntax of the SARL programming language have been inspired by languages such as Scala, Clojure, and Ruby.\nThe SARL tools have been developed on top of Xtext, which makes it easy to build domain-specific languages that are directly integrated into the Eclipse framework. The complete definition of the SARL syntax is available on GitHub.\nConcepts.\nThe SARL programming language relies on an agent-oriented metamodel based on the following concepts.\nAgent.\nAn agent is an autonomous entity having a set of skills to realize the capacities it exhibits. An agent has a set of built-in capacities considered essential to respect the commonly accepted competences of agents, such as autonomy, reactivity, proactivity and social capacities. Among these built-in capacities (BIC) is the \"behaviors\" capacity that determines its global conduct. An agent also has a default behavior directly described within its definition.\nA Behavior maps a collection of perceptions represented by Events to a sequence of Actions. An Event is the specification of some occurrence in a Space that may potentially trigger effects by a listener (e.g.
agent, behavior, etc.).\nThe language does not impose a specific agent control loop.\nThe programmer is free to implement any control or authority protocol for their own application scenario, except for the initialization and destruction events.\nIndeed, when agents are created, the virtual machine that is executing the agent program is in charge of creating the agent instances, and installing the skills associated with the built-in capacities into the agent.\nThen, when the agent is ready to begin its execution, it fires the Initialize event.\nWhen the agent has decided to stop its own execution, the virtual machine fires the Destroy event to enable the agent to release any resource it may still hold.\nCapacity and Skill.\nAn Action is a specification of a transformation of a part of the designed system or its environment.\nThis transformation guarantees resulting properties if the system before the transformation satisfies a set of constraints.\nAn action is defined in terms of pre- and post-conditions.\nA Capacity is the specification of a collection of actions.\nThis specification makes no assumptions about its implementation.\nIt could be used to specify what an agent can do, or what a behavior requires for its execution.\nA Skill is a possible implementation of a capacity fulfilling all the constraints of this specification.\nAn agent can dynamically evolve by learning/acquiring new Capacities, but it can also dynamically change the Skill associated with a given capacity.\nAcquiring new capacities also enables an agent to get access to new behaviors requiring these capacities.\nThis provides agents with a self-adaptation mechanism that allows them to dynamically change their architecture according to their current needs and goals.\nContext and Space.\nA Context defines the perimeter/boundary of a sub-system, and gathers a collection of Spaces.\nIn each context, there is at least one particular Space called Default Space to which all agents in this
context belong.\nThis ensures the existence of a common shared Space for all agents in the same context.\nEach agent can then create specific public or private spaces to achieve its personal goals.\nUpon their creation, agents are incorporated into a context called the Default Context.\nThe notion of Context makes complete sense when agents are considered composed or holonic (see below).\nA Space is the support of the interaction between agents respecting the rules defined in a Space Specification. A Space Specification defines the rules (including action and perception) for interacting within a given set of Spaces respecting this specification.\nRecursive Agent or Holon.\nAgents can be composed of other agents to define hierarchical multiagent systems. Each agent defines its own Context, called the Inner Context, and it is part of one or more External Contexts.\nExamples.\nHello, World!.\npackage helloworld\nimport io.sarl.core.Initialize\nagent HelloWorldAgent {\n on Initialize {\n println(\"Hello, World!\")\n }\n}\nExchanging messages between two agents.\nTo illustrate the syntax of the SARL language, the Ping-Pong scheme is coded below.\nThe agent A sends a Ping message to the agent B to determine if it is still alive.\nThe agent B replies with a Pong message.\nFirst, the two messages must be defined as events (without attributes):\n event Ping\n event Pong\nThe agent A is defined with:\n agent A {\n uses DefaultContextInteraction, Logging\n on Initialize {\n emit(new Ping)\n }\n on Pong {\n println(\"Agent \" + occurrence.source + \" is alive.\")\n }\n }\nIn the previous code, the keyword uses enables the agent to use previously-defined capacities: the capacity to interact with other agents inside the default context (DefaultContextInteraction), and the capacity to log messages (Logging).\nThe keyword on defines the actions to execute when an occurrence of the specified event is received by the agent A.\nWhen the agent A receives the Initialize
event, it emits a Ping event to all the existing agents.\nWhen the agent A receives the Pong event, it logs a message with the identity of the event's emitter inside.\nThe agent B is defined with:\n agent B {\n uses DefaultContextInteraction, Logging\n on Ping {\n println(\"Agent \" + occurrence.source + \" wants to know if I'm alive.\")\n emit(new Pong, Scopes.addresses(occurrence.source))\n }\n }\nWhen the agent B receives the Ping message, it logs a message and replies with a Pong message.\nTo avoid broadcasting the Pong message, the receiver of this message is restricted with the scope corresponding to the address of the Ping's emitter.\nJanus Platform: a SARL Run-time Environment.\nThe SARL language specifies a set of concepts and their relations.\nHowever, the SARL language does not impose a particular execution infrastructure, in order to remain platform-independent.\nNevertheless, the Janus Project provides the infrastructure for running SARL agents.\nJanus is an open-source multi-agent platform fully implemented in Java 1.7.\nIt implements all required infrastructure to execute a MAS programmed in the SARL language.\nThe major assumptions made at the SARL language level are supported by this run-time environment: fully distributed, parallel execution of agents' behaviors.\nAdditionally, the Janus platform provides the tools for helping the programmer to deploy their MAS, with the automatic discovery of Janus kernels, for instance.\nTechnically, the Janus platform follows the best practices in current software development, such as Inversion of Control, and profits from new technologies like Distributed Data Structures (In-Memory Data Grid like Hazelcast).", "Automation-Control": 0.8985550404, "Qwen2": "Yes"} {"id": "26759757", "revid": "27823944", "url": "https://en.wikipedia.org/wiki?curid=26759757", "title": "Simbad robot simulator", "text": "Simbad robot simulator is an open-source cross-platform software simulator used to develop robotics and artificial intelligence
applications. The Simbad project started in 2005; initially developed by Dr. Louis Hugues, it is widely used for educational purposes. Simbad is distributed under the GNU General Public License. It is written in the Java language and enables users to develop robot controllers in a simulated 3D environment.", "Automation-Control": 0.9932074547, "Qwen2": "Yes"} {"id": "1456863", "revid": "15951685", "url": "https://en.wikipedia.org/wiki?curid=1456863", "title": "Setpoint (control system)", "text": "In cybernetics and control theory, a setpoint (SP; also set point) is the desired or target value for an essential variable, or process value (PV) of a control system, which may differ from the actual measured value of the variable. Departure of such a variable from its setpoint is one basis for error-controlled regulation using negative feedback for automatic control.\nExamples.\nCruise control\nThe SP-PV error can be used to return a system to its norm. An everyday example is the cruise control on a road vehicle, where external influences such as gradients cause speed changes (PV), and the driver also alters the desired set speed (SP). The automatic control algorithm restores the actual speed to the desired speed in the optimum way, without delay or overshoot, by altering the power output of the vehicle's engine. In this way the SP-PV error is used to control the PV so that it equals the SP. A widespread use of SP-PV error control is the PID controller.\nIndustrial applications\nSpecial consideration must be given in engineering applications. In industrial systems, physical or process constraints may limit the determined set point. For example, a reactor which operates more efficiently at higher temperatures may be rated to withstand 500°C.
However, for safety reasons, the set point for the reactor temperature control loop would be well below this limit, even if this means the reactor is running less efficiently.", "Automation-Control": 0.9967764616, "Qwen2": "Yes"} {"id": "1461077", "revid": "28481209", "url": "https://en.wikipedia.org/wiki?curid=1461077", "title": "Sensor fusion", "text": "Sensor fusion is the process of combining sensor data or data derived from disparate sources such that the resulting information has less uncertainty than would be possible when these sources were used individually. For instance, one could potentially obtain a more accurate location estimate of an indoor object by combining multiple data sources such as video cameras and WiFi localization signals. The term \"uncertainty reduction\" in this case can mean more accurate, more complete, or more dependable, or refer to the result of an emerging view, such as stereoscopic vision (calculation of depth information by combining two-dimensional images from two cameras at slightly different viewpoints).\nThe data sources for a fusion process are not specified to originate from identical sensors. One can distinguish \"direct fusion\", \"indirect fusion\" and fusion of the outputs of the former two. Direct fusion is the fusion of sensor data from a set of heterogeneous or homogeneous sensors, soft sensors, and history values of sensor data, while indirect fusion uses information sources like \"a priori\" knowledge about the environment and human input.\nSensor fusion is also known as \"(multi-sensor) data fusion\" and is a subset of \"information fusion\".\nAlgorithms.\nSensor fusion is a term that covers a number of methods and algorithms, including:\nExample calculations.\nTwo example sensor fusion calculations are illustrated below.\nLet formula_1 and formula_2 denote two sensor measurements with noise variances formula_3 and formula_4\n, respectively. 
One way of obtaining a combined measurement formula_5 is to apply inverse-variance weighting, which is also employed within the Fraser-Potter fixed-interval smoother, namely\nwhere formula_7 is the variance of the combined estimate. It can be seen that the fused result is simply a linear combination of the two measurements weighted by their respective noise variances.\nAnother method to fuse two measurements is to use the optimal Kalman filter. Suppose that the data is generated by a first-order system and let formula_8 denote the solution of the filter's Riccati equation. By applying Cramer's rule within the gain calculation it can be found that the filter gain is given by:\nBy inspection, when the first measurement is noise free, the filter ignores the second measurement and vice versa. That is, the combined estimate is weighted by the quality of the measurements.\nCentralized versus decentralized.\nIn sensor fusion, centralized versus decentralized refers to where the fusion of the data occurs. In centralized fusion, the clients simply forward all of the data to a central location, and some entity at the central location is responsible for correlating and fusing the data. In decentralized fusion, the clients take full responsibility for fusing the data. \"In this case, every sensor or platform can be viewed as an intelligent asset having some degree of autonomy in decision-making.\"\nMultiple combinations of centralized and decentralized systems exist.\nAnother classification of sensor configuration refers to the coordination of information flow between sensors. These mechanisms provide a way to resolve conflicts or disagreements and to allow the development of dynamic sensing strategies. \nSensors are in redundant (or competitive) configuration if each node delivers independent measures of the same properties. This configuration can be used in error correction when comparing information from multiple nodes.
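The inverse-variance weighting used in the first example calculation above can be sketched as follows; the function name and the numerical values are purely illustrative.

```python
def fuse(z1, var1, z2, var2):
    # Inverse-variance weights: the less noisy measurement gets the larger weight.
    w1, w2 = 1.0 / var1, 1.0 / var2
    fused = (w1 * z1 + w2 * z2) / (w1 + w2)
    fused_var = 1.0 / (w1 + w2)          # variance of the combined estimate
    return fused, fused_var

# A precise sensor (variance 1) dominates an imprecise one (variance 9):
x, v = fuse(10.0, 1.0, 20.0, 9.0)        # x = 11.0, v = 0.9
```

Note that the fused variance is never larger than the smaller of the two input variances, which is exactly the "uncertainty reduction" promised in the introduction.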
Redundant strategies are often used with high-level fusion in voting procedures.\nComplementary configuration occurs when multiple information sources supply different information about the same features. This strategy is used for fusing information at the raw-data level within decision-making algorithms. Complementary features are typically applied in motion recognition tasks with neural networks, hidden Markov models, support-vector machines, clustering methods and other techniques. \nCooperative sensor fusion uses the information extracted by multiple independent sensors to provide information that would not be available from single sensors. For example, sensors connected to body segments are used for the detection of the angle between them. A cooperative sensor strategy gives information impossible to obtain from single nodes. Cooperative information fusion can be used in motion recognition, gait analysis, and motion analysis.\nLevels.\nThere are several categories or levels of sensor fusion that are commonly used.\nSensor fusion level can also be defined based on the kind of information used to feed the fusion algorithm. More precisely, sensor fusion can be performed by fusing raw data coming from different sources, extrapolated features or even decisions made by single nodes.\nApplications.\nOne application of sensor fusion is GPS/INS, where Global Positioning System and inertial navigation system data is fused using various methods, e.g. the extended Kalman filter. This is useful, for example, in determining the attitude of an aircraft using low-cost sensors. Another example is using the data fusion approach to determine the traffic state (low traffic, traffic jam, medium flow) using roadside-collected acoustic, image and sensor data.
In the field of autonomous driving, sensor fusion is used to combine the redundant information from complementary sensors in order to obtain a more accurate and reliable representation of the environment.\nAlthough technically not a dedicated sensor fusion method, modern convolutional neural network-based methods can simultaneously process many channels of sensor data (such as hyperspectral imaging with hundreds of bands) and fuse relevant information to produce classification results.", "Automation-Control": 0.8853827119, "Qwen2": "Yes"} {"id": "8271663", "revid": "11555324", "url": "https://en.wikipedia.org/wiki?curid=8271663", "title": "Scilab Image Processing", "text": "SIP is a toolbox for processing images in Scilab. SIP is meant to be a free, complete, and useful image toolbox for Scilab. Its goals include tasks such as filtering, blurring, edge detection, thresholding, histogram manipulation, segmentation, mathematical morphology, and color image processing.\nThough SIP is still in early development, it can currently import and output image files in many formats including BMP, JPEG, GIF, PNG, TIFF, XPM, and PCX. SIP uses ImageMagick to accomplish this.\nSIP is licensed under the GPL.", "Automation-Control": 0.9095782638, "Qwen2": "Yes"} {"id": "8518299", "revid": "28481209", "url": "https://en.wikipedia.org/wiki?curid=8518299", "title": "Modified Richardson iteration", "text": "Modified Richardson iteration is an iterative method for solving a system of linear equations. Richardson iteration was proposed by Lewis Fry Richardson in his work dated 1910.
It is similar to the Jacobi and Gauss–Seidel methods.\nWe seek the solution to a set of linear equations, expressed in matrix terms as\nThe Richardson iteration is\nwhere formula_3 is a scalar parameter that has to be chosen such that the sequence formula_4 converges.\nIt is easy to see that the method has the correct fixed points, because if it converges, then formula_5 and formula_4 has to approximate a solution of formula_7.\nConvergence.\nSubtracting the exact solution formula_8, and introducing the notation for the error formula_9, we get the equality for the errors\nThus,\nfor any vector norm and the corresponding induced matrix norm. Thus, if formula_12, the method converges.\nSuppose that formula_13 is symmetric positive definite and that formula_14 are the eigenvalues of formula_13. The error converges to formula_16 if formula_17 for all eigenvalues formula_18. If, e.g., all eigenvalues are positive, this can be guaranteed if formula_3 is chosen such that formula_20. The optimal choice, minimizing all formula_21, is formula_22, which gives the simplest Chebyshev iteration. This optimal choice yields a spectral radius of \nwhere formula_24 is the condition number.\nIf there are both positive and negative eigenvalues, the method will diverge for any formula_3 if the initial error formula_26 has nonzero components in the corresponding eigenvectors.\nEquivalence to gradient descent.\nConsider minimizing the function formula_27.
Since this is a convex function, a sufficient condition for optimality is that the gradient is zero (formula_28) which gives rise to the equation\nDefine formula_30 and formula_31.\nBecause of the form of \"A\", it is a positive semi-definite matrix, so it has no negative eigenvalues.\nA step of gradient descent is\nwhich is equivalent to the Richardson iteration by making formula_33.", "Automation-Control": 0.7814474702, "Qwen2": "Yes"} {"id": "59886546", "revid": "40272459", "url": "https://en.wikipedia.org/wiki?curid=59886546", "title": "Popov criterion", "text": "In nonlinear control and stability theory, the Popov criterion is a stability criterion discovered by Vasile M. Popov for the absolute stability of a class of nonlinear systems whose nonlinearity must satisfy an open-sector condition. While the circle criterion can be applied to nonlinear time-varying systems, the Popov criterion is applicable only to autonomous (that is, time invariant) systems.\nSystem description.\nThe sub-class of Lur'e systems studied by Popov is described by:\nformula_2\nwhere \"x\" ∈ R\"n\", \"ξ\",\"u\",\"y\" are scalars, and \"A\",\"b\",\"c\" and \"d\" have commensurate dimensions. The nonlinear element Φ: R → R is a time-invariant nonlinearity belonging to \"open sector\" (0, ∞), that is, Φ(0) = 0 and \"y\"Φ(\"y\") > 0 for all \"y\" not equal to 0.\nNote that the system studied by Popov has a pole at the origin and there is no direct pass-through from input to output, and the transfer function from \"u\" to \"y\" is given by\nCriterion.\nConsider the system described above and suppose\nthen the system is globally asymptotically stable if there exists a number \"r\" > 0 such that formula_4", "Automation-Control": 1.000007987, "Qwen2": "Yes"} {"id": "5346611", "revid": "3069179", "url": "https://en.wikipedia.org/wiki?curid=5346611", "title": "Linear–quadratic regulator", "text": "The theory of optimal control is concerned with operating a dynamic system at minimum cost. 
The case where the system dynamics are described by a set of linear differential equations and the cost is described by a quadratic function is called the LQ problem. One of the main results in the theory is that the solution is provided by the linear–quadratic regulator (LQR), a feedback controller whose equations are given below. \nLQR controllers possess inherent robustness with guaranteed gain and phase margin, and they also are part of the solution to the LQG (linear–quadratic–Gaussian) problem. Like the LQR problem itself, the LQG problem is one of the most fundamental problems in control theory. \nGeneral description.\nThe settings of a (regulating) controller governing either a machine or process (like an airplane or chemical reactor) are found by using a mathematical algorithm that minimizes a cost function with weighting factors supplied by a human (engineer). The cost function is often defined as a sum of the deviations of key measurements, like altitude or process temperature, from their desired values. The algorithm thus finds those controller settings that minimize undesired deviations. The magnitude of the control action itself may also be included in the cost function.\nThe LQR algorithm reduces the amount of work done by the control systems engineer to optimize the controller. However, the engineer still needs to specify the cost function parameters, and compare the results with the specified design goals. Often this means that controller construction will be an iterative process in which the engineer judges the \"optimal\" controllers produced through simulation and then adjusts the parameters to produce a controller more consistent with design goals.\nThe LQR algorithm is essentially an automated way of finding an appropriate state-feedback controller. 
As such, it is not uncommon for control engineers to prefer alternative methods, like full state feedback, also known as pole placement, in which there is a clearer relationship between controller parameters and controller behavior. Difficulty in finding the right weighting factors limits the application of the LQR based controller synthesis.\nVersions.\nFinite-horizon, continuous-time.\nFor a continuous-time linear system, defined on formula_1, described by:\nwhere formula_3 (that is, formula_4 is an formula_5-dimensional real-valued vector) is the state of the system and formula_6 is the control input. Given a quadratic cost function for the system, defined as:\nthe feedback control law that minimizes the value of the cost is:\nwhere formula_9 is given by:\nand formula_11 is found by solving the continuous time Riccati differential equation:\nwith the boundary condition:\nThe first order conditions for Jmin are:\n1) State equation \n2) Co-state equation \n3) Stationary equation\n4) Boundary conditions\nand\nformula_18\nInfinite-horizon, continuous-time.\nFor a continuous-time linear system described by:\nwith a cost function defined as:\nthe feedback control law that minimizes the value of the cost is:\nwhere formula_9 is given by:\nand formula_11 is found by solving the continuous time algebraic Riccati equation:\nThis can be also written as:\nwith\nFinite-horizon, discrete-time.\nFor a discrete-time linear system described by:\nwith a performance index defined as:\nthe optimal control sequence minimizing the performance index is given by:\nwhere:\nand formula_33 is found iteratively backwards in time by the dynamic Riccati equation:\nfrom terminal condition formula_35. 
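The backward dynamic Riccati recursion for the finite-horizon, discrete-time case can be sketched in NumPy as below. The system matrices are an illustrative discrete double integrator with unit weights, not an example taken from the text.

```python
import numpy as np

def lqr_gains(A, B, Q, R, Qf, N):
    # Dynamic Riccati equation, iterated backwards in time from the terminal
    # condition P_N = Qf; the optimal input at step k is u_k = -F_k x_k.
    P, gains = Qf, []
    for _ in range(N):
        S = R + B.T @ P @ B
        F = np.linalg.solve(S, B.T @ P @ A)       # F_k = (R + B'PB)^{-1} B'PA
        P = Q + A.T @ P @ A - A.T @ P @ B @ F     # one Riccati step
        gains.append(F)
    gains.reverse()                                # recursion runs backwards
    return gains

# Illustrative discrete double integrator with unit cost weights:
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.0], [1.0]])
Q, R, Qf = np.eye(2), np.array([[1.0]]), np.eye(2)
gains = lqr_gains(A, B, Q, R, Qf, N=50)
```

For a long horizon the early gains settle to a steady-state value, and iterating this same recursion to convergence is one way to obtain the infinite-horizon solution.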
Note that formula_36 is not defined, since formula_4 is driven to its final state formula_38 by formula_39.\nInfinite-horizon, discrete-time.\nFor a discrete-time linear system described by:\nwith a performance index defined as:\nthe optimal control sequence minimizing the performance index is given by:\nwhere:\nand formula_11 is the unique positive definite solution to the discrete time algebraic Riccati equation (DARE):\nThis can be also written as:\nwith:\nNote that one way to solve the algebraic Riccati equation is by iterating the dynamic Riccati equation of the finite-horizon case until it converges.\nConstraints.\nIn practice, not all values of formula_48 may be allowed. One common constraint is the linear one:\nThe finite horizon version of this is a convex optimization problem, and so the problem is often solved repeatedly with a receding horizon. This is a form of model predictive control.\nRelated controllers.\nQuadratic-quadratic regulator.\nIf the state equation is quadratic then the problem is known as the quadratic-quadratic regulator (QQR). The Al'Brekht algorithm can be applied to reduce this problem to one that can be solved efficiently using tensor based linear solvers.\nPolynomial-quadratic regulator.\nIf the state equation is polynomial then the problem is known as the polynomial-quadratic regulator (PQR). Again, the Al'Brekht algorithm can be applied to reduce this problem to a large linear one which can be solved with a generalization of the Bartels-Stewart algorithm; this is feasible provided that the degree of the polynomial is not too high.\nModel-predictive control.\nModel predictive control and linear-quadratic regulators are two types of optimal control methods that have distinct approaches for setting the optimization costs. In particular, when the LQR is run repeatedly with a receding horizon, it becomes a form of model predictive control (MPC). 
In general, however, MPC does not rely on any assumptions regarding linearity of the system.", "Automation-Control": 0.9999880195, "Qwen2": "Yes"} {"id": "5347179", "revid": "1161646560", "url": "https://en.wikipedia.org/wiki?curid=5347179", "title": "Linear–quadratic–Gaussian control", "text": "In control theory, the linear–quadratic–Gaussian (LQG) control problem is one of the most fundamental optimal control problems, and it can also be operated repeatedly for model predictive control. It concerns linear systems driven by additive white Gaussian noise. The problem is to determine an output feedback law that is optimal in the sense of minimizing the expected value of a quadratic cost criterion. Output measurements are assumed to be corrupted by Gaussian noise and the initial state, likewise, is assumed to be a Gaussian random vector.\nUnder these assumptions an optimal control scheme within the class of linear control laws can be derived by a completion-of-squares argument. This control law, which is known as the LQG controller, is unique and is simply the combination of a Kalman filter (a linear–quadratic state estimator (LQE)) with a linear–quadratic regulator (LQR). The separation principle states that the state estimator and the state feedback can be designed independently. LQG control applies to both linear time-invariant and linear time-varying systems, and constitutes a linear dynamic feedback control law that is easily computed and implemented: the LQG controller itself is a dynamic system like the system it controls. Both systems have the same state dimension.\nA deeper statement of the separation principle is that the LQG controller is still optimal in a wider class of possibly nonlinear controllers. That is, utilizing a nonlinear control scheme will not improve the expected value of the cost function.
This version of the separation principle is a special case of the separation principle of stochastic control, which states that even when the process and output noise sources are possibly non-Gaussian martingales, as long as the system dynamics are linear, the optimal control separates into an optimal state estimator (which may no longer be a Kalman filter) and an LQR regulator.\nIn the classical LQG setting, implementation of the LQG controller may be problematic when the dimension of the system state is large. The reduced-order LQG problem (fixed-order LQG problem) overcomes this by fixing \"a priori\" the number of states of the LQG controller. This problem is more difficult to solve because it is no longer separable. Also, the solution is no longer unique. Despite these facts, numerical algorithms are available to solve the associated optimal projection equations, which constitute necessary and sufficient conditions for a locally optimal reduced-order LQG controller.\nLQG optimality does not automatically ensure good robustness properties. The robust stability of the closed-loop system must be checked separately after the LQG controller has been designed. To promote robustness, some of the system parameters may be assumed stochastic instead of deterministic. The associated, more difficult control problem leads to a similar optimal controller in which only the controller parameters differ.\nIt is possible to compute the expected value of the cost function for the optimal gains, as well as any other set of stable gains.\nThe LQG controller is also used to control perturbed non-linear systems.\nMathematical description of the problem and solution.\nContinuous time.\nConsider the continuous-time linear dynamic system\nwhere formula_3 represents the vector of state variables of the system, formula_4 the vector of control inputs and formula_5 the vector of measured outputs available for feedback.
Both additive white Gaussian system noise formula_6 and additive white Gaussian measurement noise formula_7 affect the system. Given this system, the objective is to find the control input history formula_8 which at every time formula_9 may depend linearly only on the past measurements formula_10 such that the following cost function is minimized:\nwhere formula_13 denotes the expected value. The final time (horizon) formula_14 may be either finite or infinite. If the horizon tends to infinity, the first term formula_15 of the cost function becomes negligible and irrelevant to the problem. Also, to keep the costs finite, the cost function has to be taken to be formula_16.\nThe LQG controller that solves the LQG control problem is specified by the following equations:\nThe matrix formula_19 is called the Kalman gain of the associated Kalman filter represented by the first equation. At each time formula_9 this filter generates estimates formula_21 of the state formula_22 using the past measurements and inputs. The Kalman gain formula_19 is computed from the matrices formula_24, the two intensity matrices formula_25 associated with the white Gaussian noises formula_6 and formula_7, and finally formula_28. These five matrices determine the Kalman gain through the following associated matrix Riccati differential equation:\nGiven the solution formula_31 the Kalman gain equals\nThe matrix formula_33 is called the feedback gain matrix. This matrix is determined by the matrices formula_34 and formula_35 through the following associated matrix Riccati differential equation:\nGiven the solution formula_38 the feedback gain equals\nObserve the similarity of the two matrix Riccati differential equations, the first one running forward in time, the second one running backward in time. This similarity is called duality. The first matrix Riccati differential equation solves the linear–quadratic estimation problem (LQE).
The second matrix Riccati differential equation solves the linear–quadratic regulator problem (LQR). These problems are dual and together they solve the linear–quadratic–Gaussian control problem (LQG). So the LQG problem separates into the LQE and LQR problems, which can be solved independently. Therefore, the LQG problem is called separable.\nWhen formula_40 and the noise intensity matrices formula_41, formula_42 do not depend on formula_9 and when formula_14 tends to infinity, the LQG controller becomes a time-invariant dynamic system. In that case the second matrix Riccati differential equation may be replaced by the associated algebraic Riccati equation.\nDiscrete time.\nSince the discrete-time LQG control problem is similar to the one in continuous-time, the description below focuses on the mathematical equations.\nThe discrete-time linear system equations are\nHere formula_47 represents the discrete-time index and formula_48 represent discrete-time Gaussian white noise processes with covariance matrices formula_49, respectively, and are independent of each other.\nThe quadratic cost function to be minimized is\nThe discrete-time LQG controller is\nand formula_54 corresponds to the predictive estimate formula_55.\nThe Kalman gain equals\nwhere formula_57 is determined by the following matrix Riccati difference equation that runs forward in time:\nThe feedback gain matrix equals\nwhere formula_60 is determined by the following matrix Riccati difference equation that runs backward in time:\nIf all the matrices in the problem formulation are time-invariant and if the horizon formula_62 tends to infinity, the discrete-time LQG controller becomes time-invariant. In that case the matrix Riccati difference equations may be replaced by their associated discrete-time algebraic Riccati equations. These determine the time-invariant linear–quadratic estimator and the time-invariant linear–quadratic regulator in discrete-time.
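The time-invariant discrete-time estimator half of the problem can be sketched by iterating the forward filter Riccati difference equation to steady state and then forming the Kalman gain. This is a hedged illustration: the function name, the predictor form chosen, and the example matrices A, C, V (process-noise covariance) and W (measurement-noise covariance) are assumptions for the sketch, not notation from the article:

```python
import numpy as np

def steady_state_kalman_gain(A, C, V, W, tol=1e-10, max_iter=10000):
    """Iterate the predictor-form filter Riccati difference equation
    forward in time until it converges, then return the steady-state
    prediction-error covariance S and the Kalman gain L."""
    S = V.copy()  # covariance of the one-step state-prediction error
    for _ in range(max_iter):
        # L = A S C' (C S C' + W)^{-1}
        L = A @ S @ C.T @ np.linalg.inv(C @ S @ C.T + W)
        # Riccati step: S <- A S A' - L C S A' + V
        S_next = A @ S @ A.T - L @ C @ S @ A.T + V
        if np.max(np.abs(S_next - S)) < tol:
            return S_next, L
        S = S_next
    raise RuntimeError("filter Riccati iteration did not converge")

# Hypothetical system: a noisy observation of the first state.
A = np.array([[0.9, 0.2], [0.0, 0.8]])
C = np.array([[1.0, 0.0]])
V = 0.1 * np.eye(2)    # process-noise covariance
W = np.array([[0.5]])  # measurement-noise covariance

S, L = steady_state_kalman_gain(A, C, V, W)
# Predictor update: x_hat[k+1] = A x_hat[k] + B u[k] + L (y[k] - C x_hat[k]);
# the estimator error dynamics A - L C are then stable.
```

Note the duality mentioned in the article: this forward iteration mirrors the backward regulator iteration under the substitution (A, B, Q, R) ↔ (Aᵀ, Cᵀ, V, W).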
In this case, to keep the costs finite, one has to consider formula_64 instead of formula_63.", "Automation-Control": 0.9999966621, "Qwen2": "Yes"} {"id": "5347359", "revid": "1034054007", "url": "https://en.wikipedia.org/wiki?curid=5347359", "title": "Observability Gramian", "text": "In control theory, we may need to find out whether or not a system such as\nformula_1\nis observable, where formula_2, formula_3, formula_4 and formula_5 are, respectively, formula_6, formula_7, formula_8 and formula_9 matrices.\nOne of the many ways to achieve this goal is by use of the Observability Gramian.\nObservability in LTI Systems.\nLinear Time Invariant (LTI) Systems are those systems in which the parameters formula_2, formula_3, formula_4 and formula_5 are invariant with respect to time.\nOne can determine whether the LTI system is observable simply by looking at the pair formula_14. Then, we can say that the following statements are equivalent:\n1. The pair formula_14 is observable.\n2. The formula_6 matrix\nformula_17\nis nonsingular for any formula_18.\n3. The formula_19 observability matrix\nformula_20\nhas rank n.\n4. The formula_21 matrix\nformula_22\nhas full column rank at every eigenvalue formula_23 of formula_2.\nIf, in addition, all eigenvalues of formula_2 have negative real parts (formula_2 is stable) and the unique solution of\nformula_27\nis positive definite, then the system is observable. The solution is called the Observability Gramian and can be expressed as\nformula_28\nIn the following section we are going to take a closer look at the Observability Gramian.\nObservability Gramian.\nThe Observability Gramian can be found as the solution of the Lyapunov equation given by\nformula_27\nIn fact, we can see that if we take\nformula_30\nas a solution, we are going to find that:\nformula_31\nwhere we used the fact that formula_32 at formula_33 for stable formula_2 (all its eigenvalues have negative real part).
This shows us that formula_35 is indeed the solution for the Lyapunov equation under analysis.\nProperties.\nWe can see that formula_36 is a symmetric matrix; therefore, so is formula_35.\nWe can again use the fact that formula_2 is stable (all its eigenvalues have negative real part) to show that formula_35 is unique. To prove this, suppose we have two different solutions for\nformula_27\nand they are given by formula_41 and formula_42. Then we have:\nformula_43\nMultiplying by formula_44 on the left and by formula_45 on the right leads to\nformula_46\nIntegrating from formula_47 to formula_48:\nformula_49\nusing the fact that formula_50 as formula_51:\nformula_52\nIn other words, formula_35 has to be unique.\nAlso, we can see that\nformula_54\nis positive for any formula_55 (assuming the non-degenerate case where formula_56 is not identically zero), and that makes formula_35 a positive definite matrix.\nMore properties of observable systems, as well as the proofs of the other equivalent statements of \"The pair formula_14 is observable\" presented in the section Observability in LTI Systems, can be found in the references.\nDiscrete Time Systems.\nFor discrete-time systems such as\nformula_59\none can check that there are equivalences for the statement \"The pair formula_14 is observable\" (the equivalences are much like those for the continuous-time case).\nWe are interested in the equivalence that claims that, if \"The pair formula_14 is observable\" and all the eigenvalues of formula_2 have magnitude less than formula_63 (formula_2 is stable), then the unique solution of\nformula_65\nis positive definite and given by\nformula_66\nThat is called the discrete Observability Gramian. We can easily see the correspondence between the discrete-time and the continuous-time case: if we can check that formula_67 is positive definite, and all eigenvalues of formula_2 have magnitude less than formula_63, then the system formula_70 is observable.
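The discrete Observability Gramian just described can be sketched numerically: sum the defining series (which solves the discrete Lyapunov equation for a stable formula_2) and test it for positive definiteness. This is a minimal sketch; the names A, C and the example pair are assumptions, not from the article:

```python
import numpy as np

def discrete_observability_gramian(A, C, tol=1e-12, max_iter=10000):
    """Sum the series Wo = sum_k (A')^k C' C A^k, which solves the
    discrete Lyapunov equation  A' Wo A - Wo = -C' C  when every
    eigenvalue of A has magnitude less than one."""
    Wo = C.T @ C           # k = 0 term
    term = C.T @ C
    for _ in range(max_iter):
        term = A.T @ term @ A   # next term (A')^k C' C A^k
        Wo_next = Wo + term
        if np.max(np.abs(term)) < tol:
            return Wo_next
        Wo = Wo_next
    raise RuntimeError("Gramian series did not converge (is A stable?)")

# Hypothetical stable pair (A, C):
A = np.array([[0.5, 0.1], [0.0, 0.3]])
C = np.array([[1.0, 0.0]])

Wo = discrete_observability_gramian(A, C)
# Positive definiteness of Wo certifies observability of (A, C).
observable = bool(np.all(np.linalg.eigvalsh(Wo) > 0))
```

The same positive-definiteness check mirrors the continuous-time criterion stated earlier, with the stability condition on the eigenvalue magnitudes replacing the negative-real-part condition.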
More properties and proofs can be found in the references.\nLinear Time Variant Systems.\nLinear time-variant (LTV) systems are those of the form:\nformula_71\nThat is, the matrices formula_2, formula_3 and formula_4 have entries that vary with time. As in the continuous-time and the discrete-time cases, one may be interested in discovering whether the system given by the pair formula_75 is observable. This can be done in a way very similar to the preceding cases.\nThe system formula_75 is observable at time formula_77 if and only if there exists a finite formula_78 such that the formula_6 matrix, also called the Observability Gramian, given by\nformula_80\nwhere formula_81 is the state transition matrix of formula_82, is nonsingular.\nAgain, this gives a similar method for determining whether a system is observable.\nProperties of formula_83.\nThe Observability Gramian formula_83 has the following property:\nformula_85\nwhich can easily be seen from the definition of formula_83 and from the property of the state transition matrix that:\nformula_87\nMore about the Observability Gramian can be found in the references.
Die roll is created when the material being stamped is compressed before the material begins to shear. Die roll takes the form of a radius around the outside edge of the blank and the pierced holes. After compression, the part shears for about 10% of the part thickness, and then fractures free of the strip or sheet. This fracturing produces a raised, jagged edge which is called a \"burr\". Burrs are typically removed by tumbling in a secondary process. Burr height can be used as an important indicator of tool wear.\nTooling design guidelines.\nThe selection criteria of all process parameters are governed by the sheet thickness and by the strength of the work-piece material being pierced.\nThe punch/die clearance is a crucial parameter, which determines the load or pressure experienced at the cutting edge of the tool, commonly known as point pressure. Excessive point pressure can lead to accelerated wear and, ultimately, failure. The surface quality of the trimmed edge is also severely affected by the clearance.\nMaterial-specific design guidelines are developed by companies to define the minimum acceptable values of hole diameters, bridge sizes, and slot dimensions. Similarly, the strip lay-out must be determined (strip width and pitch). The bridge width between the parts and the edge allowance between the part and the edge of the strip also have to be selected.\nA simple operation may only need a pancake die. While many dies perform complex procedures simultaneously, a pancake die performs only one simple procedure, with the finished product being removed by hand. \nProcess variants.\nThere are various types of blanking and piercing: lancing, perforating, notching, nibbling, shaving, cutoff, and dinking.\nLancing.\nLancing is a piercing operation in which the workpiece is sheared and bent with one strike of the die. A key part of this process is that there is no reduction of material, only a modification of its geometry.
This operation is used to make tabs, vents, and louvers.\nUnlike in perforating, the cut made in lancing is not a closed cut, even though a similar machine is used; one side is left connected, to be bent either sharply or in a more rounded manner.\nLancing can be used to make partial contours and free up material for other operations further down the production line. Along with these reasons, lancing is also used to make tabs (where the material is bent at a 90 degree angle to the material), vents (where the bend is around 45 degrees), and louvers (where the piece is rounded or cupped). Lancing can also be used to make a slight shearing cut in sheet formed to a cylindrical shape.\nLancing is normally done on a mechanical press and requires the use of punches and dies. The different punches and dies determine the shape and angle (or curvature) of the newly made section of the material. The dies and punches need to be made of tool steel to withstand the repetitive nature of the procedure.\nPerforating.\nPerforating is a piercing operation that involves punching a large number of closely spaced holes.\nNotching.\nNotching is a piercing operation that removes material from the edge of the workpiece.\nNibbling.\nThe nibbling process cuts a contour by producing a series of overlapping slits or notches. A nibbler may be employed to do this; it is essentially a small punch and die that reciprocates quickly, around 300–900 times per minute. This allows complex shapes to be formed in sheet metal up to 6 mm (0.25 in) thick using simple tools. Punches are available in various shapes and sizes; oblong and rectangular punches are common because they minimize waste and allow for greater distances between strokes, as compared to a round punch. Nibbling can occur on the exterior or interior of the material; however, interior cuts require a hole through which to insert the tool.\nThe process is often used on parts that do not have quantities that can justify a dedicated blanking die.
The edge smoothness is determined by the shape of the cutting die and the amount the cuts overlap; naturally, the more the cuts overlap, the cleaner the edge. For added accuracy and smoothness, most shapes created by nibbling undergo filing or grinding processes after completion.\nShaving.\nThe shaving process is a finishing operation where a small amount of metal is sheared away from an already blanked part. Its main purpose is to obtain better dimensional accuracy, but secondary purposes include squaring and smoothing the edge. Blanked parts can be shaved to an accuracy of up to 0.025 mm (0.001 in).\nShaving of metals is done to remove excess or scrap metal. A straight, smooth edge is provided and therefore shaving is frequently performed on instrument parts, watch and clock parts, and the like. Shaving is accomplished in shaving dies especially designed for the purpose.\nTrimming.\nThe trimming operation is the last operation performed, because it cuts away excess or unwanted irregular features from the walls of drawn sheets.\nFine blanking.\nFine blanking is a specialized form of blanking where there is no fracture zone when shearing. This is achieved by compressing the whole part and then an upper and lower punch extract the blank. This allows the process to hold very tight tolerances, and perhaps eliminate secondary operations.\nMaterials that can be fine blanked include aluminium, brass, copper, and carbon, alloy, and stainless steels.\nFine blanking presses are similar to other metal stamping presses, but they have a few critical additional parts. A typical compound fine blanking press includes a hardened die punch (male), the hardened blanking die (female), and a guide plate of similar shape/size to the blanking die. The guide plate is applied to the material first, impinging the material with a sharp protrusion or \"stinger\" around the perimeter of the die opening.
Next, a counter pressure is applied opposite the punch, and finally, the die punch forces the material through the die opening. Since the guide plate holds the material so tightly, and since the counter pressure is applied, the material is cut in a manner more like extrusion than typical punching. Mechanical properties of the cut benefit similarly with a hardened layer at the cut edge of the part. Because the material is so tightly held and controlled in this setup, part flatness remains very true, distortion is nearly eliminated, and edge burr is minimal. Clearances between the die and punch are generally around 1% of the cut material thickness, which typically varies between . Currently parts as thick as can be cut using fine blanking. Tolerances between ± are possible, depending on the base material thickness and tensile strength, and part layout.\nWith standard compound fine blanking processes, multiple parts can often be completed in a single operation. Parts can be pierced, partially pierced, offset (up to 75°), embossed, or coined, often in a single operation. Some combinations may require progressive fine blanking operations, in which multiple operations are performed at the same pressing station. To achieve a higher tool lifetime, blanking punches are usually covered by PVD protective coatings. \nThe advantages of fine blanking are:\nOne of the main advantages of fine blanking is that slots or holes can be placed very near to the edges of the part, or near to each other. Also, fine blanking can produce holes that are much smaller (as compared to material thickness) than can be produced by conventional stamping.\nThe disadvantages are:
The resulting fault if alignment is not achieved within the demanded specifications is shaft misalignment, which may be offset or angular. Faults can lead to premature wear and damage to systems.\nBackground.\nWhen a driver like an electric motor or a turbine is coupled to a pump, generator, or any other piece of equipment, the shafts of the two pieces must be aligned. Any misalignment increases the stress on the shafts and will almost certainly result in excessive wear and premature breakdown of the equipment. This can be very costly. When the equipment is down, production requiring the equipment may be delayed. Bearings or mechanical seals may be damaged and need to be replaced.\nShaft alignment is the process of aligning two or more shafts with each other to within a tolerated margin. The process is used for machinery before the machinery is put in service.\nTechnology.\nBefore shaft alignment can be done, the foundations for the driver and the driven piece must be designed and installed correctly.\nFlexible couplings are designed to allow a driver (e.g., electric motor, engine, turbine, hydraulic motor) to be connected to the driven equipment. Flexible couplings use an elastomeric insert to allow a slight degree of misalignment. Flexible couplings can also use shim packs. These couplings are called disc couplings.\nTools used to achieve alignment may be mechanical, optical (e.g., laser shaft alignment), or gyroscope–based. The gyroscopic systems can be operated very time-efficiently and can also be used if the shafts are a large distance apart (e.g., on marine vessels).\nMisalignment.\nThe resulting fault if alignment is not achieved within the demanded specifications is shaft misalignment, which may be offset, angular, or both. Misalignment can cause increased vibration and loads on the machine parts for which they were not designed (i.e.
improper operation).\nTypes of misalignment.\nThere are two types of misalignment: offset or parallel misalignment and angular, gap, or face misalignment. With offset misalignment, the center lines of both shafts are parallel but they are offset. With angular misalignment, the shafts are at an angle to each other. Errors of alignment can be caused by parallel misalignment, angular misalignment or a combination of the two.\nOffset misalignment can be further divided into horizontal and vertical misalignment. Horizontal misalignment is misalignment of the shafts in the horizontal plane and vertical misalignment is misalignment of the shafts in the vertical plane:\nSimilarly, angular misalignment can be divided into horizontal and vertical misalignment:", "Automation-Control": 0.6214976907, "Qwen2": "Yes"} {"id": "17667699", "revid": "9755426", "url": "https://en.wikipedia.org/wiki?curid=17667699", "title": "Automotive hemming", "text": "Hemming is a technology used in the automotive industry to join inner and outer closure panels together (hoods, doors, tailgates, etc.). It is the process of bending/folding the flange of the outer panel over the inner one. The accuracy of the operation significantly affects the appearance of the car’s outer surfaces and is therefore a critical factor in the final quality of a finished vehicle.\nHemming processes.\nPress hemming.\nHemming presses are widely used in automotive manufacturing for hemming of sheet-metal body components.
The process uses traditional hydraulically operated ‘stamping presses’ to hem closure parts, and, being the last forming process in stamping, it largely determines the external quality of such automotive parts as doors, hoods and trunk lids.\nTable top hemming.\nTabletop hemming machines are utilised for the manufacture of medium to high production volumes, with the ability to achieve cycle times as low as 15 seconds.\nRobot (roller hemming).\nRobot hemming is utilized for the manufacture of low to medium production volumes. It uses a standard industrial robot integrated with a roller hemming head to provide a flexible method for forming closures. The flange of the outer panel is bent over the inner panel in progressive steps, by means of a roller-hemming head.\nOne advantage of this process is that it can use the robot-controlled hemming head to hem several different components within a single cell. Another is that minor changes or fluctuations in panel-hemming conditions can be quickly and cost-effectively accommodated. If equipped with a tool-changing system, the robot could serve a variety of additional functions within the same assembly cell, such as operating dispensing equipment for adhesives and sealants, or carrying out panel manipulations, using a gripper unit.", "Automation-Control": 0.9818999171, "Qwen2": "Yes"} {"id": "10640172", "revid": "35498457", "url": "https://en.wikipedia.org/wiki?curid=10640172", "title": "Robotics conventions", "text": "There are many conventions used in the robotics research field. This article summarises these conventions.\nLine representations.\nLines are very important in robotics because:\nNon-minimal vector coordinates.\nA line formula_1 is completely defined by the ordered set of two vectors:\nEach point formula_5 on the line is given a parameter value formula_6 that satisfies:\nformula_7. The parameter t is unique once formula_2 and formula_4 are chosen.
The representation formula_1 is not minimal, because it uses six parameters for only four degrees of freedom. The following two constraints apply:\nPlücker coordinates.\nArthur Cayley and Julius Plücker introduced an alternative representation using two free vectors. This representation was finally named after Plücker.\n The Plücker representation is denoted by formula_15. Both formula_4 and formula_17 are free vectors: formula_4 represents the direction of the line and formula_17 is the moment of formula_4 about the chosen reference origin. formula_21 (formula_17 is independent of which point formula_2 on the line is chosen!)\n The advantage of the Plücker coordinates is that they are homogeneous.\n A line in Plücker coordinates still has four out of six independent parameters, so it is not a minimal representation. The two constraints on the six Plücker coordinates are\nMinimal line representation.\nA line representation is minimal if it uses four parameters, which is the minimum needed to represent all possible lines in the Euclidean Space (E³).\nDenavit–Hartenberg line coordinates.\nJacques Denavit and Richard S. Hartenberg presented the first minimal representation for a line, which is now widely used. The common normal between two lines was the main geometric concept that allowed Denavit and Hartenberg to find a minimal representation. Engineers use the Denavit–Hartenberg convention (D–H) to help them describe the positions of links and joints unambiguously. Every link gets its own coordinate system. There are a few rules to consider in choosing the coordinate system:\nOnce the coordinate frames are determined, inter-link transformations are uniquely described by the following four parameters:\nHayati–Roberts line coordinates.\nThe Hayati–Roberts line representation, denoted formula_45, is another minimal line representation, with parameters:\nThis representation is unique for a directed line.
The coordinate singularities are different from the DH singularities: it has singularities if the line becomes parallel to either the formula_48 or formula_49 axis of the world frame.\nProduct of exponentials formula.\nThe product of exponentials formula represents the kinematics of an open-chain mechanism as the product of exponentials of twists, and may be used to describe a series of revolute, prismatic, and helical joints. ", "Automation-Control": 0.7372875214, "Qwen2": "Yes"} {"id": "4736445", "revid": "46016783", "url": "https://en.wikipedia.org/wiki?curid=4736445", "title": "Matsuura Machinery", "text": " is a machine tool manufacturing company based in Fukui, Fukui Prefecture, Japan, in operation since August 1935.\nMatsuura Machinery began as a manufacturer and distributor of lathes in 1935. Production of milling machines began in 1957, and the company went public in 1960. Production of automatic-controlled milling machines began in 1961 and numerically controlled milling machines from 1964. Production of automatically controlled drilling machines began in 1972, and vertical machining centers from 1974.\nThe company began exporting to the United States from 1975. In 1981, Matsuura Machinery began production of high-speed machining centers and twin-spindle vertical machining centers, and horizontal machining centers from 1983. The total number of machining centers shipped surpassed 10,000 units in 1993 and 20,000 in 2015.\nThe company's machining centers are used in a variety of industries, among them aerospace equipment manufacturers. 
Machine tools manufactured by Matsuura were used by NASA on the Space Shuttle Discovery's fuel tanks in 1998, making them four tons lighter than before.", "Automation-Control": 0.9887799025, "Qwen2": "Yes"} {"id": "45857", "revid": "1097504199", "url": "https://en.wikipedia.org/wiki?curid=45857", "title": "Hurwitz polynomial", "text": "In mathematics, a Hurwitz polynomial, named after Adolf Hurwitz, is a polynomial whose roots (zeros) are located in the left half-plane of the complex plane or on the imaginary axis, that is, the real part of every root is zero or negative. Such a polynomial must have coefficients that are positive real numbers. The term is sometimes restricted to polynomials whose roots have real parts that are strictly negative, excluding the imaginary axis (i.e., a Hurwitz stable polynomial).\nA polynomial function \"P\"(\"s\") of a complex variable \"s\" is said to be Hurwitz if the following conditions are satisfied:\nHurwitz polynomials are important in control systems theory, because they represent the characteristic equations of stable linear systems. Whether a polynomial is Hurwitz can be determined by solving the equation to find the roots, or from the coefficients without solving the equation by the Routh–Hurwitz stability criterion.\nExamples.\nA simple example of a Hurwitz polynomial is:\nThe only real solution is −1, because it factors as\nIn general, all quadratic polynomials with positive coefficients are Hurwitz.\nThis follows directly from the quadratic formula:\nwhere, if the discriminant \"b\"2−4\"ac\" is less than zero, then the polynomial will have two complex-conjugate solutions with real part −\"b\"/2\"a\", which is negative for positive \"a\" and \"b\".\nIf the discriminant is equal to zero, there will be two coinciding real solutions at −\"b\"/2\"a\". 
Finally, if the discriminant is greater than zero, there will be two real negative solutions,\nbecause formula_4 for positive \"a\", \"b\" and \"c\".\nProperties.\nFor a polynomial to be Hurwitz, it is necessary but not sufficient that all of its coefficients be positive (for quadratic polynomials, positivity of the coefficients is also sufficient). A necessary and sufficient condition for a polynomial to be Hurwitz is that it passes the Routh–Hurwitz stability criterion. A given polynomial can be efficiently tested for the Hurwitz property by using the Routh continued fraction expansion technique.", "Automation-Control": 0.8998792768, "Qwen2": "Yes"} {"id": "46545", "revid": "19595349", "url": "https://en.wikipedia.org/wiki?curid=46545", "title": "Telecommunications network", "text": "A telecommunications network is a group of nodes interconnected by telecommunications links that are used to exchange messages between the nodes. The links may use a variety of technologies based on the methodologies of circuit switching, message switching, or packet switching, to pass messages and signals. \nMultiple nodes may cooperate to pass the message from an originating node to the destination node, via multiple network hops. For this routing function, each node in the network is assigned a network address for identification and locating it on the network.
The collection of addresses in the network is called the address space of the network.\nExamples of telecommunications networks include computer networks, the Internet, the public switched telephone network (PSTN), the global Telex network, the aeronautical ACARS network, and the wireless radio networks of cell phone telecommunication providers.\nNetwork structure.\nIn general, every telecommunications network conceptually consists of three parts, or planes (so-called because they can be thought of as being, and often are, separate overlay networks):\nData networks.\nData networks are used extensively throughout the world for communication between individuals and organizations. Data networks can be connected to allow users seamless access to resources that are hosted outside of the particular provider they are connected to. The Internet is the best example of the internetworking of many data networks from different organizations.\nTerminals attached to IP networks like the Internet are addressed using IP addresses. Protocols of the Internet protocol suite (TCP/IP) provide the control and routing of messages across the IP data network. There are many different network structures that IP can be used across to efficiently route messages, for example:\nThere are three features that differentiate MANs from LANs or WANs:\nData center networks also rely highly on TCP/IP for communication across machines. They connect thousands of servers, are designed to be highly robust, and provide low latency and high bandwidth.
Data center network topology plays a significant role in determining the level of failure resiliency, ease of incremental expansion, communication bandwidth and latency.\nCapacity and speed.\nIn analogy to the improvements in the speed and capacity of digital computers, provided by advances in semiconductor technology and expressed in the doubling of transistor density every two years, which is described empirically by Moore's law, the capacity and speed of telecommunications networks have followed similar advances, for similar reasons. In telecommunication, this is expressed in Edholm's law, proposed by and named after Phil Edholm in 2004. This empirical law holds that the bandwidth of telecommunication networks doubles every 18 months, which has proven to be true since the 1970s. The trend is evident in the Internet, cellular (mobile), wireless and wired local area networks (LANs), and personal area networks. This development is the consequence of rapid advances in the development of metal-oxide-semiconductor technology.", "Automation-Control": 0.8355209827, "Qwen2": "Yes"} {"id": "63569430", "revid": "7903804", "url": "https://en.wikipedia.org/wiki?curid=63569430", "title": "Teacher forcing", "text": "Teacher forcing is an algorithm for training the weights of recurrent neural networks (RNNs). It involves feeding observed sequence values (i.e. ground-truth samples) back into the RNN after each step, thus forcing the RNN to stay close to the ground-truth sequence.\nThe term \"teacher forcing\" can be motivated by comparing the RNN to a human student taking a multi-part exam where the answer to each part (for example a mathematical calculation) depends on the answer to the preceding part.
In this analogy, rather than grading every answer in the end, with the risk that the student fails every single part even though they only made a mistake in the first one, a teacher records the score for each individual part and then tells the student the correct answer, to be used in the next part.\nThe use of an external teacher signal is in contrast to real-time recurrent learning (RTRL). Teacher signals are known from oscillator networks. The promise is that teacher forcing helps to reduce training time.\nThe term \"teacher forcing\" was introduced in 1989 by Ronald J. Williams and David Zipser, who reported that the technique was already being \"frequently used in dynamical supervised learning tasks\" around that time.\nA NeurIPS 2016 paper introduced the related method of \"professor forcing\".", "Automation-Control": 0.8509042263, "Qwen2": "Yes"} {"id": "420335", "revid": "36200549", "url": "https://en.wikipedia.org/wiki?curid=420335", "title": "Ideal machine", "text": "The term ideal machine refers to a hypothetical mechanical system in which energy and power are not lost or dissipated through friction, deformation, wear, or other inefficiencies. Ideal machines have the theoretical maximum performance, and therefore are used as a baseline for evaluating the performance of real machine systems.\nA simple machine, such as a lever, pulley, or gear train, is \"ideal\" if the power input is equal to the power output of the device, which means there are no losses. In this case, the mechanical efficiency is 100%.\nMechanical efficiency is the performance of the machine compared to its theoretical maximum as performed by an ideal machine. The mechanical efficiency of a simple machine is calculated by dividing the actual power output by the ideal power output.
This is usually expressed as a percentage.\nPower loss in a real system can occur in many ways, such as through friction, deformation, wear, heat losses, incomplete chemical conversion, magnetic and electrical losses.\nCriteria.\nA machine consists of a power source and a mechanism for the controlled use of this power. The power source often relies on chemical conversion to generate heat which is then used to generate power. Each stage of the process of power generation has a maximum performance limit which is identified as ideal.\nOnce the power is generated the mechanism components of the machine direct it toward useful forces and movement. The ideal mechanism does not absorb any power, which means the power input is equal to the power output.\nAn example is the automobile engine (internal combustion engine) which burns fuel (an exothermic chemical reaction) inside a cylinder and uses the expanding gases to drive a piston. The movement of the piston rotates the crank shaft. The remaining mechanical components such as the transmission, drive shaft, differential, axles and wheels form the power transmission mechanism that directs the power from the engine into friction forces on the road to move the automobile.\n\"The ideal machine has the maximum energy conversion performance combined with a lossless power transmission mechanism that yields maximum performance.\"", "Automation-Control": 0.6219873428, "Qwen2": "Yes"} {"id": "17562674", "revid": "7226930", "url": "https://en.wikipedia.org/wiki?curid=17562674", "title": "Margin classifier", "text": "In machine learning, a margin classifier is a classifier which is able to give an associated distance from the decision boundary for each example. For instance, if a linear classifier (e.g. 
perceptron or linear discriminant analysis) is used, the distance (typically Euclidean distance, though others may be used) of an example from the separating hyperplane is the margin of that example.\nThe notion of margin is important in several machine learning classification algorithms, as it can be used to bound the generalization error of the classifier. These bounds are frequently shown using the VC dimension. Of particular prominence is the generalization error bound on boosting algorithms and support vector machines.\nSupport vector machine definition of margin.\nSee support vector machines and maximum-margin hyperplane for details.\nMargin for boosting algorithms.\nThe margin for an iterative boosting algorithm given a set of examples with two classes can be defined as follows. The classifier is given an example pair formula_1 where formula_2 is a domain space and formula_3 is the label of the example. The iterative boosting algorithm then selects a classifier formula_4 at each iteration formula_5 where formula_6 is a space of possible classifiers that predict real values. This hypothesis is then weighted by formula_7 as selected by the boosting algorithm. At iteration formula_8, the margin of an example formula_9 can thus be defined as\nBy this definition, the margin is positive if the example is labeled correctly and negative if the example is labeled incorrectly.\nThis definition may be modified and is not the only way to define margin for boosting algorithms. However, there are reasons why this definition may be appealing.\nExamples of margin-based algorithms.\nMany classifiers can give an associated margin for each example. However, only some classifiers utilize information of the margin while learning from a data set.\nMany boosting algorithms rely on the notion of a margin to give weights to examples.
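The boosting margin defined above (its formula is elided in this extract) can be illustrated with a small sketch. The normalized form y · Σt αt ht(x) / Σt αt is one common choice; the function and hypothesis names below are illustrative, not from the article:

```python
def boosting_margin(x, y, hypotheses, alphas):
    # Normalized boosting margin of a labeled example (x, y):
    #   y * sum_t alpha_t * h_t(x) / sum_t alpha_t
    # Positive exactly when the weighted vote labels x correctly.
    score = sum(a * h(x) for h, a in zip(hypotheses, alphas))
    return y * score / sum(alphas)

# two weak learners voting on the sign of a scalar feature
def h1(x):
    return 1.0 if x > 0 else -1.0

def h2(x):
    return 1.0 if x > -1 else -1.0

print(boosting_margin(0.5, 1, [h1, h2], [0.7, 0.3]))   # 1.0 (unanimous correct vote)
print(boosting_margin(0.5, -1, [h1, h2], [0.7, 0.3]))  # -1.0 (mislabeled example)
```

A margin near +1 means a confident correct vote, a margin near -1 a confident wrong one, which is exactly the quantity the weighting schemes discussed next act on.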
If a convex loss is utilized (as in AdaBoost, LogitBoost, and all members of the AnyBoost family of algorithms) then an example with higher margin will receive less (or equal) weight than an example with lower margin. This leads the boosting algorithm to focus weight on low margin examples. In nonconvex algorithms (e.g. BrownBoost), the margin still dictates the weighting of an example, though the weighting is non-monotone with respect to margin. There exist boosting algorithms that provably maximize the minimum margin (e.g. see ).\nSupport vector machines provably maximize the margin of the separating hyperplane. Support vector machines that are trained using noisy data (there exists no perfect separation of the data in the given space) maximize the soft margin. More discussion of this can be found in the support vector machine article.\nThe voted-perceptron algorithm is a margin-maximizing algorithm based on an iterative application of the classic perceptron algorithm.\nGeneralization error bounds.\nOne theoretical motivation behind margin classifiers is that their generalization error may be bounded by parameters of the algorithm and a margin term. An example of such a bound is the one for the AdaBoost algorithm. Let formula_11 be a set of formula_12 examples sampled independently at random from a distribution formula_13. Assume the VC-dimension of the underlying base classifier is formula_14 and formula_15.
Then with probability formula_16 we have the bound\nfor all formula_18.", "Automation-Control": 0.9155969024, "Qwen2": "Yes"} {"id": "27461950", "revid": "16809467", "url": "https://en.wikipedia.org/wiki?curid=27461950", "title": "Invariant extended Kalman filter", "text": "The invariant extended Kalman filter (IEKF) (not to be confused with the iterated extended Kalman filter) was first introduced as a version of the extended Kalman filter (EKF) for nonlinear systems possessing symmetries (or \"invariances\"), then generalized and recast as an adaptation to Lie groups of the linear Kalman filtering theory. Instead of using a linear correction term based on a linear output error, the IEKF uses a geometrically adapted correction term based on an invariant output error; in the same way the gain matrix is not updated from a linear state error, but from an invariant state error. The main benefit is that the gain and covariance equations have reduced dependence on the estimated value of the state. In some cases they converge to constant values on a much bigger set of trajectories than is the case for the EKF, which results in a better convergence of the estimation.\nFilter derivation.\nDiscrete-time framework.\nConsider a system whose state is encoded at time step formula_1 by an element formula_2 of a Lie group formula_3 and whose dynamics has the following form:\nwhere formula_5 is a group automorphism of formula_3, formula_7 is the group operation and formula_8 is an element of formula_3. The system is supposed to be observed through a measurement formula_10 having the following form:\nwhere formula_12 belongs to a vector space formula_13 endowed with a left action of the elements of formula_3 denoted again by formula_7 (which cannot create confusion with the group operation as the second member of the operation is an element of formula_13, not formula_3).
Alternatively, the same theory applies to a measurement defined by a right action:\nFilter equations.\nThe invariant extended Kalman filter is an observer formula_19 defined by the following equations if the measurement function is a left action:\nwhere formula_22 is the exponential map of formula_3 and formula_24 is a gain matrix to be tuned through a Riccati equation.\nIf the measurement function is a right action, then the updated state is defined as:\nContinuous-time framework.\nThe discrete-time framework above was first introduced for continuous-time dynamics of the form:\nwhere the vector field formula_27 verifies at any time formula_28 the relation:\nwhere the identity element of the group is denoted by formula_30, and the short-hand notation formula_31 (resp. formula_32) is used for the left translation formula_33 (resp. the right translation formula_34), where formula_35 denotes the tangent space to formula_3 at formula_37. It leads to more involved computations than the discrete-time framework, but properties are similar.\nMain properties.\nThe main benefit of invariant extended Kalman filtering is the behavior of the invariant error variable, whose definition depends on the type of measurement. For left actions we define a left-invariant error variable as:\nwhile for right actions we define a right-invariant error variable as:\nIndeed, replacing formula_2, formula_43, formula_44 by their values we obtain for left actions, after some algebra:\nand for right actions:\nWe see the estimated value of the state is not involved in the equation followed by the error variable, a property of linear Kalman filtering the classical extended Kalman filter does not share, but the similarity with the linear case actually goes much further.
Let formula_49 be a linear version of the error variable defined by the identity:\nThen, with formula_52 defined by the Taylor expansion formula_53 we actually have: \nIn other words, there are no higher-order terms: the dynamics is linear for the error variable formula_55. This result and the independence of the error dynamics are at the core of the theoretical properties and practical performance of the IEKF.\nRelation to symmetry-preserving observers.\nMost physical systems possess natural symmetries (or invariance), i.e. there exist transformations (e.g. rotations, translations, scalings) that leave the system unchanged. From a mathematical and engineering viewpoint, it makes sense that a filter well-designed for the considered system should preserve the same invariance properties. The idea for the IEKF is a modification of the EKF equations to take advantage of the symmetries of the system.\nDefinition.\nConsider the system\nwhere formula_58 are independent white Gaussian noises.\nConsider formula_59 a Lie group with identity formula_60, and\n(local) transformation groups formula_61 (formula_62) such that formula_63.
The previous system with noise is said to be \"invariant\" if it is left unchanged by the action of the transformation groups formula_61; that is, if\nFilter equations and main result.\nSince it is a symmetry-preserving filter, the general form of an IEKF reads\nwhere\nTo analyze the error convergence, an invariant state error formula_73 is defined, which is different from the standard output error formula_74, since the standard output error usually does not preserve the symmetries of the system.\nGiven the considered system and associated transformation group, there exists a constructive method to determine formula_75, based on the moving frame method.\nSimilarly to the EKF, the gain matrix formula_72 is determined from the equations\nwhere the matrices formula_79 depend here only on the known invariant vector formula_71, rather than on formula_81 as in the standard EKF. This much simpler dependence and its consequences are the main interests of the IEKF. Indeed, the matrices formula_79 are then constant on a much bigger set of trajectories (so-called \"permanent trajectories\") than just equilibrium points, as is the case for the EKF. Near such trajectories, we are back to the \"true\", i.e. linear, Kalman filter where convergence is guaranteed. Informally, this means the IEKF converges in general at least around any slowly varying permanent trajectory, rather than just around any slowly varying equilibrium point for the EKF.\nApplication examples.\nAttitude and heading reference systems.\nInvariant extended Kalman filters are for instance used in attitude and heading reference systems. In such systems the orientation, velocity and/or position of a moving rigid body,\ne.g. an aircraft, are estimated from different embedded sensors, such as inertial sensors, magnetometers, GPS or sonars. The use of an IEKF naturally leads to considering the quaternion error formula_83, which is often used as an \"ad hoc\" trick to preserve the constraints of the quaternion group.
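The quaternion error formula_83 is elided in this extract; a common invariant form is the group error q̂⁻¹ ⊗ q rather than the raw difference q − q̂. The sketch below is illustrative only (Hamilton convention assumed), not the article's exact definition:

```python
import numpy as np

def quat_mul(p, q):
    # Hamilton product of quaternions stored as [w, x, y, z].
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return np.array([
        pw * qw - px * qx - py * qy - pz * qz,
        pw * qx + px * qw + py * qz - pz * qy,
        pw * qy - px * qz + py * qw + pz * qx,
        pw * qz + px * qy - py * qx + pz * qw,
    ])

def quat_conj(q):
    # The conjugate equals the inverse for a unit quaternion.
    return np.array([q[0], -q[1], -q[2], -q[3]])

# Group-valued attitude error: conj(q_est) * q_true. It is itself a
# unit quaternion, so the norm constraint of the quaternion group is
# preserved by construction, unlike the raw difference q_true - q_est.
q_true = np.array([np.cos(0.1), np.sin(0.1), 0.0, 0.0])  # small rotation
q_est = np.array([1.0, 0.0, 0.0, 0.0])                   # identity estimate
err = quat_mul(quat_conj(q_est), q_true)
print(np.linalg.norm(err))  # 1.0 up to rounding
```

Keeping the error on the group is what makes the gain and covariance equations independent of the estimated state, as described above.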
The benefits of the IEKF compared to the EKF are experimentally shown for a large set of trajectories.\nInertial navigation.\nA major application of the Invariant extended Kalman filter is inertial navigation, which fits the framework after embedding of the state (consisting of attitude matrix formula_84, velocity vector formula_85 and position vector formula_37) into the Lie group formula_87 defined by the group operation:\nSimultaneous localization and mapping.\nThe problem of simultaneous localization and mapping also fits the framework of invariant extended Kalman filtering after embedding of the state (consisting of attitude matrix formula_84, position vector formula_37 and a sequence of static feature points formula_91) into the Lie group formula_92 (or formula_93 for planar systems) defined by the group operation:\nThe main benefit of the Invariant extended Kalman filter in this case is solving the problem of false observability.", "Automation-Control": 0.7861206532, "Qwen2": "Yes"} {"id": "44126781", "revid": "7903804", "url": "https://en.wikipedia.org/wiki?curid=44126781", "title": "Automated fueling", "text": "Automated fueling or robotic fueling involves the use of automation to remove human labor from the fueling process. The fueling is performed by a robotic arm, which opens the car's flap, unscrews the cap, picks up the fuel nozzle and inserts it into the tank opening. It requires the contours and dimensions of the fuel cap to be present in the database.", "Automation-Control": 0.9751534462, "Qwen2": "Yes"} {"id": "70302439", "revid": "123853", "url": "https://en.wikipedia.org/wiki?curid=70302439", "title": "Brain.js", "text": "Brain.js is a JavaScript library used for neural networking, which is released as free and open-source software under the MIT License. 
It can be used in both the browser and Node.js backends.\nBrain.js is most commonly used as a simple introduction to neural networking, as it hides complex mathematics and has a familiar modern JavaScript syntax. It is maintained by members of the Brain.js organization and open-source contributors.\nExamples.\nCreating a feedforward neural network with backpropagation:\nconst net = new brain.NeuralNetwork();\nnet.train([\n { input: [0, 0], output: [0] },\n { input: [0, 1], output: [1] },\n { input: [1, 0], output: [1] },\n { input: [1, 1], output: [0] },\n]);\nconsole.log(net.run([1, 0]));\nCreating a recurrent neural network:\nconst net = new brain.recurrent.RNN();\nnet.train([\n { input: [0, 0], output: [0] },\n { input: [0, 1], output: [1] },\n { input: [1, 0], output: [1] },\n { input: [1, 1], output: [0] },\n]);\nlet output = net.run([0, 0]); // [0]\noutput = net.run([0, 1]); // [1]\noutput = net.run([1, 0]); // [1]\noutput = net.run([1, 1]); // [0]\nTrain the neural network on RGB color contrast:\nconst net = new brain.NeuralNetwork();\nnet.train([{\n input: {\n r: 0.03,\n g: 0.7,\n b: 0.5\n },\n output: {\n black: 1\n }\n}, {\n input: {\n r: 0.16,\n g: 0.09,\n b: 0.2\n },\n output: {\n white: 1\n }\n}, {\n input: {\n r: 0.5,\n g: 0.5,\n b: 1.0\n },\n output: {\n white: 1\n }\n}]);\nconst output = net.run({\n r: 1,\n g: 0.4,\n b: 0\n});\nconsole.log(output);", "Automation-Control": 0.6687009335, "Qwen2": "Yes"} {"id": "6172616", "revid": "7903804", "url": "https://en.wikipedia.org/wiki?curid=6172616", "title": "Control-Lyapunov function", "text": "In control theory, a control-Lyapunov function (CLF) is an extension of the idea of Lyapunov function formula_1 to systems with control inputs. The ordinary Lyapunov function is used to test whether a dynamical system is \"(Lyapunov) stable\" or (more restrictively) \"asymptotically stable\". Lyapunov stability means that if the system starts in a state formula_2 in some domain \"D\", then the state will remain in \"D\" for all time.
For \"asymptotic stability\", the state is also required to converge to formula_3. A control-Lyapunov function is used to test whether a system is \"asymptotically stabilizable\", that is whether for any state \"x\" there exists a control formula_4 such that the system can be brought to the zero state asymptotically by applying the control \"u\".\nThe theory and application of control-Lyapunov functions were developed by Zvi Artstein and Eduardo D. Sontag in the 1980s and 1990s.\nDefinition.\nConsider an autonomous dynamical system with inputs\nwhere formula_5 is the state vector and formula_6 is the control vector. Suppose our goal is to drive the system to an equilibrium formula_7 from every initial state in some domain formula_8. Without loss of generality, suppose the equilibrium is at formula_9 (for an equilibrium formula_10, it can be translated to the origin by a change of variables). \nDefinition. A control-Lyapunov function (CLF) is a function formula_11 that is continuously differentiable, positive-definite (that is, formula_1 is positive for all formula_13 except at formula_14 where it is zero), and such that for all formula_15 there exists formula_16 such that \nwhere formula_18 denotes the inner product of formula_19.\nThe last condition is the key condition; in words it says that for each state \"x\" we can find a control \"u\" that will reduce the \"energy\" \"V\". Intuitively, if in each state we can always find a way to reduce the energy, we should eventually be able to bring the energy asymptotically to zero, that is to bring the system to a stop. This is made rigorous by Artstein's theorem.\nSome results apply only to control-affine systems—i.e., control systems in the following form:\nwhere formula_20 and formula_21 for formula_22.\nTheorems.\nE. D. Sontag showed that for a given control system, there exists a continuous CLF if and only if the origin is asymptotic stabilizable. It was later shown by Francis H. 
Clarke that every asymptotically controllable system can be stabilized by a (generally discontinuous) feedback.\nArtstein proved that the dynamical system has a differentiable control-Lyapunov function if and only if there exists a regular stabilizing feedback \"u\"(\"x\").\nConstructing the Stabilizing Input.\nIt is often difficult to find a control-Lyapunov function for a given system, but if one is found, then the feedback stabilization problem simplifies considerably. For the control affine system , \"Sontag's formula\" (or \"Sontag's universal formula\") gives the feedback law formula_23 directly in terms of the derivatives of the CLF. In the special case of a single-input system formula_24, Sontag's formula is written as\nwhere formula_26 and formula_27 are the Lie derivatives of formula_28 along formula_29 and formula_30, respectively.\nFor the general nonlinear system , the input formula_31 can be found by solving a static non-linear programming problem\nfor each state \"x\".\nExample.\nHere is a characteristic example of applying a Lyapunov candidate function to a control problem.\nConsider the non-linear system, which is a mass-spring-damper system with spring hardening and position-dependent mass described by\nNow given the desired state, formula_34, and actual state, formula_35, with error, formula_36, define a function formula_37 as\nA Control-Lyapunov candidate is then\nwhich is positive definite for all formula_40, formula_41.\nNow taking the time derivative of formula_28\nThe goal is to get the time derivative to be\nwhich is globally exponentially stable if formula_28 is globally positive definite (which it is).\nHence we want the rightmost bracket of formula_47, \nto fulfill the requirement\nwhich upon substitution of the dynamics, formula_50, gives\nSolving for formula_31 yields the control law\nwith formula_54 and formula_55, both greater than zero, as tunable parameters.\nThis control law will guarantee global exponential stability since upon
substitution into the time derivative yields, as expected\nwhich is a linear first-order differential equation with solution\nAnd hence the error and error rate, remembering that formula_58, exponentially decay to zero.\nIf you wish to tune a particular response from this, it is necessary to substitute back into the solution we derived for formula_28 and solve for formula_60. This is left as an exercise for the reader but the first few steps of the solution are:\nwhich can then be solved using any linear differential equation methods.", "Automation-Control": 0.9864059687, "Qwen2": "Yes"} {"id": "46732608", "revid": "7903804", "url": "https://en.wikipedia.org/wiki?curid=46732608", "title": "Finsler's lemma", "text": "Finsler's lemma is a mathematical result named after Paul Finsler. It states equivalent ways to express the positive definiteness of a quadratic form \"Q\" constrained by a linear form \"L\". \nSince it is equivalent to other lemmas used in optimization and control theory, such as Yakubovich's S-lemma, Finsler's lemma has been given many proofs and has been widely used, particularly in results related to robust optimization and linear matrix inequalities.\nStatement of Finsler's lemma.\nLet , and . The following statements are equivalent:\nVariants.\nIn the particular case that \"L\" is positive semi-definite, it is possible to decompose it as .
The following statements, which are also referred to as Finsler's lemma in the literature, are equivalent:\nThere is also a variant of Finsler's lemma for quadratic matrix inequalities, known as the matrix Finsler's lemma, which states that the following statements are equivalent for symmetric matrices \"Q\" and \"L\" belonging to R(\"l\"+\"k\")×(\"l\"+\"k\"):\nunder the assumption that \nformula_9 and formula_10\nsatisfy the following conditions:\nGeneralizations.\nProjection lemma.\nThe following statement, known as the Projection Lemma (or also as the Elimination Lemma), is common in the literature of linear matrix inequalities:\nThis can be seen as a generalization of one of Finsler's lemma variants with the inclusion of an extra matrix and an extra constraint. Furthermore, there exists a version of the projection lemma that utilizes non-strict inequalities.\nRobust version.\nFinsler's lemma also generalizes for matrices \"Q\" and \"B\" depending on a parameter \"s\" within a set \"S\". In this case, it is natural to ask if the same variable μ (respectively \"X\") can satisfy formula_13 for all formula_14 (respectively, formula_15). If \"Q\" and \"B\" depend continuously on the parameter \"s\", and \"S\" is compact, then this is true. If \"S\" is not compact, but \"Q\" and \"B\" are still continuous matrix-valued functions, then μ and \"X\" can be guaranteed to be at least continuous functions.\nApplications.\nData-driven control.\nThe matrix variant of Finsler's lemma has been applied to the data-driven control of Lur'e systems and in a data-driven robust linear matrix inequality-based model predictive control scheme.\nS-Variable approach to robust control of linear dynamical systems.\nFinsler's lemma can be used to give novel linear matrix inequality (LMI) characterizations to stability and control problems.
The set of LMIs stemming from this procedure yields less conservative results when applied to control problems where the system matrices depend on a parameter, such as robust control problems and control of linear-parameter varying systems. This approach has recently been called the S-variable approach and the LMIs stemming from this approach are known as SV-LMIs (also known as dilated LMIs).\nSufficient condition for universal stabilizability of non-linear systems.\nA nonlinear system has the universal stabilizability property if every forward-complete solution of the system can be globally stabilized. By the use of Finsler's lemma, it is possible to derive a sufficient condition for universal stabilizability in terms of a differential linear matrix inequality.", "Automation-Control": 0.9996874928, "Qwen2": "Yes"} {"id": "20047065", "revid": "45611790", "url": "https://en.wikipedia.org/wiki?curid=20047065", "title": "Kalman decomposition", "text": "In control theory, a Kalman decomposition provides a mathematical means to convert a representation of any linear time-invariant (LTI) control system into a standard form that makes clear the observable and controllable components of the system. This decomposition results in the system being presented with a more illuminating structure, making it easier to draw conclusions on the system's reachable and observable subspaces.\nDefinition.\nConsider the continuous-time LTI control system\nor the discrete-time LTI control system \nThe Kalman decomposition is defined as the realization of this system obtained by transforming the original matrices as follows:\nwhere formula_9 is the coordinate transformation matrix defined as\nand whose submatrices are \nIt can be observed that some of these matrices may have dimension zero.
For example, if the system is both observable and controllable, then formula_18, making the other matrices have dimension zero.\nConsequences.\nBy using results from controllability and observability, it can be shown that the transformed system formula_19 has matrices in the following form:\nThis leads to the conclusion that\nVariants.\nA Kalman decomposition also exists for linear dynamical quantum systems. Unlike classical dynamical systems, the coordinate transformation used in this variant is required to belong to a specific class of transformations due to the physical laws of quantum mechanics.", "Automation-Control": 0.9996336102, "Qwen2": "Yes"} {"id": "61907040", "revid": "7611264", "url": "https://en.wikipedia.org/wiki?curid=61907040", "title": "U-JIN Tech Corp.", "text": "U-JIN Tech Corp. is a South Korean manufacturer of friction welding machines and automated manufacturing cells.\nHistory.\nU-JIN Tech Corp. was founded in February 2009. It established its own R&D center within the Korea Industrial Technology Association in 2010. The R&D center's objective is to develop new products.\nU-JIN initially developed and manufactured hydraulic friction welding machines, and it built Korea's first CNC friction welding machine in 2012.\nIn 2015 the company was recognized as \"Contributor for Development of Excellent Capital Goods\" by the Minister of Trade, Industry, and Energy. In November 2016 it received the European CE Certificate and started exporting machines to Europe. On Trade Day in December 2016, it received the \"10 Million Dollar Export Tower Award\".\nFriction welding machines.\nCNC technology is used by U-JIN Tech Corp both for automatic material transport and in cases where high accuracy is required. Due to the position measuring devices known from CNC milling machines, the length tolerance of the components can be maintained more accurately than with conventional hydraulic machines.
It is even possible to bring the spindle to a standstill in a given position so that the two eyes of a drive shaft can be positioned at an angle to each other.\nThe two spindles of U-JIN's computer numerical controlled double-head friction welding machines are driven by servo motors that allow the angular position of their motor shaft to be controlled, as well as the speed of rotation and acceleration, since they are equipped with position sensors. If the spindles are controlled in the same way as CNC-controlled servo motors, angular accuracies of ±0.5° can be achieved, e.g. at both ends of a cardan shaft.\nFriction welded products.\nAs friction welding operates below the melting point of the materials, even dissimilar material joints can be produced with high tensile strength. In many cases, the tensile strength of the bimetallic joint is higher than that of the softer base material.\nU-JIN's friction welding machines are used industrially for a wide variety of products:", "Automation-Control": 0.9466215372, "Qwen2": "Yes"} {"id": "53802271", "revid": "7226930", "url": "https://en.wikipedia.org/wiki?curid=53802271", "title": "Machine learning control", "text": "Machine learning control (MLC) is a subfield of machine learning, intelligent control and control theory\nwhich solves optimal control problems with methods of machine learning.\nKey applications are complex nonlinear systems\nfor which linear control theory methods are not applicable.\nTypes of problems and tasks.\nFour types of problems are commonly encountered.\nMLC comprises, for instance, neural network control, \ngenetic algorithm based control, \ngenetic programming control,\nreinforcement learning control, \nand has methodological overlaps with other data-driven control,\nlike artificial intelligence and robot control.\nApplications.\nMLC has been successfully applied\nto many nonlinear control problems,\nexploring unknown and often unexpected actuation mechanisms.\nExample applications include\nAs
for all general nonlinear methods,\nMLC comes with no guaranteed convergence,\noptimality or robustness for a range of operating conditions.", "Automation-Control": 1.000007987, "Qwen2": "Yes"} {"id": "53813271", "revid": "6972236", "url": "https://en.wikipedia.org/wiki?curid=53813271", "title": "Maria Elena Valcher", "text": "Maria Elena Valcher is an Italian control theorist, and a professor at the Department of Information Engineering at the University of Padova.\nValcher was the president of the IEEE Control Systems Society in 2015. She was named Fellow of the Institute of Electrical and Electronics Engineers (IEEE) in 2012, \"for contributions to positive systems theory and the behavioral approach to system analysis and control\". She is also a Fellow of the International Federation of Automatic Control.", "Automation-Control": 0.9999783635, "Qwen2": "Yes"} {"id": "27733786", "revid": "7030996", "url": "https://en.wikipedia.org/wiki?curid=27733786", "title": "Algorithmic complexity attack", "text": "An algorithmic complexity attack (ACA) is a form of attack that exhausts a system's resources by deliberately triggering an algorithm's worst-case performance.\nAlgorithmic complexity.\nAlgorithmic complexity describes the rate at which an algorithm's resource use grows with the size of its input. Although there are multiple ways to solve a computational problem, choosing the most efficient one matters. For real programs, factors such as the hardware, networking, programming language, and performance constraints play into the time a program takes to output the desired result.", "Automation-Control": 0.780642271, "Qwen2": "Yes"} {"id": "1705432", "revid": "1166093617", "url": "https://en.wikipedia.org/wiki?curid=1705432", "title": "Routh–Hurwitz stability criterion", "text": "In control system theory, the Routh–Hurwitz stability criterion is a mathematical test that is a necessary and sufficient condition for the stability of a linear time-invariant (LTI) dynamical system or control system.
A stable system is one whose output signal is bounded; the position, velocity or energy do not increase to infinity as time goes on. The Routh test is an efficient recursive algorithm that English mathematician Edward John Routh proposed in 1876 to determine whether all the roots of the characteristic polynomial of a linear system have negative real parts. German mathematician Adolf Hurwitz independently proposed in 1895 to arrange the coefficients of the polynomial into a square matrix, called the Hurwitz matrix, and showed that the polynomial is stable if and only if the determinants of its principal submatrices are all positive. The two procedures are equivalent, with the Routh test providing a more efficient way to compute the Hurwitz determinants (formula_1) than computing them directly. A polynomial satisfying the Routh–Hurwitz criterion is called a Hurwitz polynomial.\nThe importance of the criterion is that the roots p of the characteristic equation of a linear system with negative real parts represent solutions e^(pt) of the system that are stable (bounded). Thus the criterion provides a way to determine if the equations of motion of a linear system have only stable solutions, without solving the system directly. For discrete systems, the corresponding stability test can be handled by the Schur–Cohn criterion, the Jury test and the Bistritz test. With the advent of computers, the criterion has become less widely used, as an alternative is to solve the polynomial numerically, obtaining approximations to the roots directly.\nThe Routh test can be derived through the use of the Euclidean algorithm and Sturm's theorem in evaluating Cauchy indices. Hurwitz derived his conditions differently.\nUsing Euclid's algorithm.\nThe criterion is related to the Routh–Hurwitz theorem.
From the statement of that theorem, we have formula_2 where:\nBy the fundamental theorem of algebra, each polynomial of degree \"n\" must have \"n\" roots in the complex plane (i.e., for an \"ƒ\" with no roots on the imaginary line, \"p\" + \"q\" = \"n\"). Thus, we have the condition that \"ƒ\" is a (Hurwitz) stable polynomial if and only if \"p\" − \"q\" = \"n\" (the proof is given below). Using the Routh–Hurwitz theorem, we can replace the condition on \"p\" and \"q\" by a condition on the generalized Sturm chain, which will give in turn a condition on the coefficients of \"ƒ\".\nUsing matrices.\nLet \"f\"(\"z\") be a complex polynomial. The process is as follows:\nExample.\nNotice that we had to suppose \"b\" different from zero in the first division. The generalized Sturm chain is in this case formula_25. Putting formula_26, the sign of formula_27 is the opposite sign of \"a\" and the sign of \"by\" is the sign of \"b\". When we put formula_28, the sign of the first element of the chain is again the opposite sign of \"a\" and the sign of \"by\" is the opposite sign of \"b\". Finally, -\"c\" always has the opposite sign of \"c\".\nSuppose now that \"f\" is Hurwitz-stable. This means that formula_29 (the degree of \"f\"). By the properties of the function \"w\", this is the same as formula_30 and formula_31. Thus, \"a\", \"b\" and \"c\" must have the same sign. We have thus found the necessary condition of stability for polynomials of degree 2.\nHigher-order example.\nA tabular method can be used to determine the stability when the roots of a higher order characteristic polynomial are difficult to obtain.
For an \"n\"th-degree polynomial\nthe table has \"n\" + 1 rows and the following structure:\nwhere the elements formula_50 and formula_51 can be computed as follows:\nWhen completed, the number of sign changes in the first column equals the number of roots with non-negative real parts.\nIn the first column, there are two sign changes (0.75 → −3, and −3 → 3), thus there are two roots with non-negative real parts and the system is unstable.\nThe characteristic equation of a servo system is given by:\nfor stability, all the elements in the first column of the Routh array must be positive. So the conditions that must be satisfied for stability of the given system are as follows:\nformula_55\nWe see that if\nformula_56\nthen\nformula_57\nis satisfied.\nWe have the following table:\nthere are two sign changes. The system is unstable, since it has two right-half-plane poles and two left-half-plane poles. The system cannot have jω poles since a row of zeros did not appear in the Routh table.\nSometimes the presence of poles on the imaginary axis creates a situation of marginal stability. In that case the coefficients of the \"Routh array\" in a whole row become zero and thus further solution of the polynomial for finding changes in sign is not possible. Then another approach comes into play. The row of the polynomial just above the row containing the zeroes is called the \"auxiliary polynomial\".\nWe have the following table:\nIn such a case the auxiliary polynomial is formula_60 which is again equal to zero. The next step is to differentiate the above equation which yields the polynomial formula_61. The coefficients of the row containing zero now become \"8\" and \"24\". The Routh array procedure then continues with these values, which yields two points on the imaginary axis.
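The tabular procedure described above (build the Routh array row by row, then count sign changes in the first column) can be sketched in Python. This is a generic illustration of the regular case only, not code from the article; the zero-pivot and whole-row-of-zeros situations discussed in the text would need the epsilon or auxiliary-polynomial treatment:

```python
def routh_rhp_count(coeffs):
    """Count sign changes in the first column of the Routh array,
    i.e. the number of roots with non-negative real parts.
    coeffs: polynomial coefficients, highest degree first.
    Regular case only: a zero pivot raises an error."""
    coeffs = [float(c) for c in coeffs]
    n = len(coeffs) - 1          # polynomial degree
    w = n // 2 + 1               # width of the array
    # First two rows: alternating coefficients, zero-padded to width w.
    table = [coeffs[0::2] + [0.0] * (w - len(coeffs[0::2])),
             coeffs[1::2] + [0.0] * (w - len(coeffs[1::2]))]
    for _ in range(n - 1):       # build the remaining n - 1 rows
        above, pivot = table[-2], table[-1]
        if pivot[0] == 0.0:
            raise ValueError("zero pivot: use the epsilon or auxiliary-polynomial method")
        # Standard Routh element: c_j = (b0 * a_{j+1} - a0 * b_{j+1}) / b0
        table.append([(pivot[0] * above[j + 1] - above[0] * pivot[j + 1]) / pivot[0]
                      for j in range(w - 1)] + [0.0])
    first_col = [row[0] for row in table]
    return sum(1 for a, b in zip(first_col, first_col[1:]) if a * b < 0)
```

For example, for s^4 + 2s^3 + 3s^2 + 4s + 5 the first column is 1, 2, 1, −6, 5, so `routh_rhp_count([1, 2, 3, 4, 5])` returns 2 (two right-half-plane roots), while the stable s^2 + s + 1 gives 0.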
These two points on the imaginary axis are the prime cause of marginal stability.", "Automation-Control": 0.7006269693, "Qwen2": "Yes"} {"id": "53886302", "revid": "7903804", "url": "https://en.wikipedia.org/wiki?curid=53886302", "title": "Optomechatronics", "text": "In engineering, optomechatronics is a field that investigates the integration of optical components and technology into mechatronic systems. The optical components in these systems are used as sensors to measure mechanical quantities such as surface structure and orientation. Optical sensors are used in a feedback loop as part of control systems for mechatronic devices. Optomechatronics has applications in areas such as adaptive optics, vehicular automation, optofluidics, optical tweezers and thin-film technology.", "Automation-Control": 0.9732125401, "Qwen2": "Yes"} {"id": "37920220", "revid": "7903804", "url": "https://en.wikipedia.org/wiki?curid=37920220", "title": "Stochastic Petri net", "text": "Stochastic Petri nets are a form of Petri net where the transitions fire after a probabilistic delay determined by a random variable. \nDefinition.\nA \"stochastic Petri net\" is a five-tuple \"SPN\" = (\"P\", \"T\", \"F\", \"M\"0, \"Λ\") where:\nCorrespondence to Markov process.\nThe reachability graph of stochastic Petri nets can be mapped directly to a Markov process. It satisfies the Markov property, since its states depend only on the current marking. \nEach state in the reachability graph is mapped to a state in the Markov process, and the firing of a transition with firing rate λ corresponds to a Markov state transition with rate λ.", "Automation-Control": 0.9999838471, "Qwen2": "Yes"} {"id": "14313430", "revid": "28481209", "url": "https://en.wikipedia.org/wiki?curid=14313430", "title": "Context tree weighting", "text": "The context tree weighting method (CTW) is a lossless compression and prediction algorithm by .
The CTW algorithm is among the very few such algorithms that offer both theoretical guarantees and good practical performance (see, e.g. ).\nThe CTW algorithm is an “ensemble method”, mixing the predictions of many underlying variable order Markov models, where each such model is constructed using zero-order conditional probability \"estimators\".", "Automation-Control": 0.8373453021, "Qwen2": "Yes"} {"id": "14337590", "revid": "76", "url": "https://en.wikipedia.org/wiki?curid=14337590", "title": "Oracle Clusterware", "text": "Oracle Clusterware is the cross-platform cluster software required to run the Real Application Clusters (RAC) option for Oracle Database. It provides the basic clustering services at the operating-system level that enable Oracle Database software to run in clustering mode. In earlier versions of Oracle (release 9i and earlier), RAC required a vendor-supplied clusterware like Sun Cluster or Veritas Cluster Server (except when running on Linux or on Microsoft Windows).\nOracle Clusterware Components.\nOracle Clusterware is the software which enables the nodes to communicate with each other, allowing them to form the cluster of nodes which behaves as a single logical server. Oracle Clusterware is run by Cluster Ready Services (CRS) consisting of two key components: Oracle Cluster Registry (OCR), which records and maintains the cluster and node membership information; voting disk, which polls for consistent heartbeat information from all the nodes when the cluster is running, and acts as a tiebreaker during communication failures.\nThe CRS service has four components, each handling a variety of functions: Cluster Ready Services daemon (CRSd), Oracle Cluster Synchronization Service Daemon (OCSSd), Event Volume Manager Daemon (EVMd), and Oracle Process Clusterware Daemon (OPROCd). 
Failure or death of the CRS daemon can cause node failure, which triggers automatic reboots of the nodes to avoid the corruption of data (due to the possible failure of communication between the nodes), also known as fencing. The CRS daemon runs as \"root\" (super user) on UNIX platforms and runs as a service on Windows platforms.\nCRSd.\nThe following functions are provided by the Oracle Cluster Ready Services daemon (CRSd):\nOCSSd.\nOracle Cluster Synchronization Services daemon (OCSSd) provides basic ‘group services’ support. Group Services is a distributed group membership system that allows the applications to coordinate activities to achieve a common result. As such, it provides synchronization services between nodes, access to the node membership information, as well as enabling basic cluster services, including cluster group services and cluster locking. It can also run without integration with vendor clusterware. Failure of OCSSd causes the machine to reboot to avoid a split-brain situation. OCSSd is also required in a single-instance configuration if Automatic Storage Management (ASM) is used. ASM was a new feature in Oracle 10g. OCSSd runs as the \"oracle\" user.\nThe following functions are provided by the Oracle Cluster Synchronization Services daemon (OCSSd):\nEVMd.\nThe third component in OCS is the Event Volume Management Logger daemon (EVMd). EVMd spawns a permanent child process called \"evmlogger\" and generates events. The EVMd child process ‘evmlogger’ spawns new children processes on demand and scans the callout directory to invoke callouts. It restarts automatically on failure, and death of the EVMd process does not halt the instance. EVMd runs as the \"oracle\" user.\nOPROCd.\nOPROCd provides the server fencing solution for the Oracle Clusterware. It is the process monitor for Oracle Clusterware and it uses the hang check timer or watchdog timer (depending on the implementation) for the cluster integrity.
OPROCd is locked in memory and runs as a real-time process. It sleeps for a fixed time and runs as the \"root\" user. Failure of the OPROCd process causes the node to restart. OPROCd is important enough that it is itself monitored by a process called OCLSOMON, which reboots the cluster node if OPROCd hangs.", "Automation-Control": 0.739516139, "Qwen2": "Yes"} {"id": "2987205", "revid": "23790359", "url": "https://en.wikipedia.org/wiki?curid=2987205", "title": "Control and indicating equipment", "text": "Control and indicating equipment is equipment for receiving, processing, controlling, indicating and initiating the onward transmission of information as used in fire alarm systems. The fire detection and fire alarm system subcommittee of ISO/TC 21, Equipment for Fire Protection and Fire Fighting, had oversight for development of five standards covering detectors, control and indicating equipment. ISO 7240-2:2003 specifies requirements, test methods and performance criteria for control and indicating equipment (c.i.e.) for use in fire detection and fire alarm systems installed in buildings.\nBy country.\nThe Australian CSIRO under the Active Fire Protection Equipment Scheme sets the technical specifications and standards for lead acid batteries in Control and Indicating Equipment.\nThe United Kingdom Loss Prevention Council sets requirements for the design and testing of control and indicating equipment (CIE) for use with intruder alarm and hold-up alarm systems. It covers two grades of CIE (Grade A and Grade B).\nShuichi Murao was granted a United States patent on May 29, 2001 for a fire alarm system for use in control and indicating equipment.", "Automation-Control": 0.6432094574, "Qwen2": "Yes"} {"id": "16988061", "revid": "7903804", "url": "https://en.wikipedia.org/wiki?curid=16988061", "title": "Shear forming", "text": "Shear forming, also referred to as shear spinning, is similar to metal spinning.
In conventional metal spinning the area of the final piece is approximately equal to that of the flat sheet metal blank, and the wall thickness is maintained by controlling the gap between the roller and the mandrel. In shear forming, a reduction of the wall thickness occurs.\nBefore the 1950s, spinning was performed on a simple turning lathe. When new technologies were introduced to the field of metal spinning and powered dedicated spinning machines were available, shear forming started its development in Sweden.\nSchematics.\nFigure 2 shows the schematics of a shear forming process.\n1. A sheet metal blank is placed between the mandrel and the chuck of the spinning machine. The mandrel has the interior shape of the desired final component.\n2. A roller makes the sheet metal wrap the mandrel so that it takes its shape.\nAs can be seen, s1, the initial wall thickness of the workpiece, is reduced to s0.\nWorkpiece and roller tool profiles.\nIn shear forming, the starting workpiece can have circular or rectangular cross sections. On the other hand, the profile shape of the final component can be concave, convex or a combination of these two.\nA shear-forming machine looks very much like a conventional spinning machine, except that it has to be much more robust to withstand the higher forces necessary to perform the shearing operation.\nThe design of the roller must be considered carefully, because it affects the shape of the component, the wall thickness, and dimensional accuracy. The smaller the tool nose radius, the higher the stresses and the poorer the thickness uniformity achieved.\nSpinnability.\nSpinnability, sometimes referred to as shear spinnability, can be defined as the ability of a metal to undergo shear spinning deformation without exceeding its tensile strength and tearing.
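The wall-thickness reduction described above (s1 reduced to s0) is commonly modeled by the sine rule for shear forming a cone. The following sketch uses that standard textbook relation, which is not stated in this article, and the 4 mm / 30° figures are made-up illustration values:

```python
import math

def shear_formed_thickness(s1_mm, half_angle_deg):
    """Sine rule for shear forming a cone: the blank thickness s1 is
    reduced to s0 = s1 * sin(alpha), where alpha is the half-angle of
    the conical mandrel (alpha = 90 degrees leaves the wall unchanged)."""
    return s1_mm * math.sin(math.radians(half_angle_deg))

# Illustrative values: a 4 mm blank formed over a 30 degree half-angle mandrel
s0 = shear_formed_thickness(4.0, 30.0)   # about 2.0 mm, i.e. a 50% reduction
```

The steeper the cone (smaller half-angle), the greater the thinning, which is why spinnability limits are stated as a maximum reduction.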
Published work on spinnability is available from the authors Kegg and Kalpakcioglu.\nKegg predicted that for materials with a tensile reduction of 80%, the limiting spinning reduction will be equal to or greater than 80%. \nKalpakcioglu concluded that for metals with a true fracture strain of 0.5 or greater, there is a maximum limit for the shear forming reduction. For materials with a true strain below 0.5, the spinnability depends on the ductility of the material.\nHighly spinnable materials include ductile materials like aluminum and certain steel alloys.\nImportance of shear forming operations in manufacturing.\nShear forming and conventional spinning are being used less than other manufacturing processes such as deep drawing and ironing. The ability to produce almost any shape in thin-sectioned parts makes spinning a versatile process used widely in the production of lightweight items. Other advantages of shear spinning include the good mechanical properties of the final item and a very good surface finish.\nTypical components produced by mechanically powered spinning machines include rocket nose cones, gas turbine engine components and dish aerials.\nFlow forming.\nFlow forming is an incremental metal-forming technique in which a disk or tube of metal is formed over a mandrel by one or more rollers using tremendous pressure. The roller deforms the workpiece, forcing it against the mandrel, both axially lengthening and radially thinning it. Since the pressure exerted by the roller is highly localized and the material is incrementally formed, often there is a net savings in energy in forming over drawing or ironing processes. However, these savings are often not realized because of the inherent difficulties in predicting the resulting deformation for a given roller path. Flow forming subjects the workpiece to a great deal of friction and deformation.
These two factors may heat the workpiece to several hundred degrees if proper cooling fluid is not utilized.\nFlow forming is often used to manufacture automobile wheels and can be used to draw a wheel to net width from a machined blank.\nDuring flow forming, the workpiece is cold worked, changing its mechanical properties, so its strength becomes similar to that of forged metal.", "Automation-Control": 0.8386859298, "Qwen2": "Yes"} {"id": "16999355", "revid": "1169771821", "url": "https://en.wikipedia.org/wiki?curid=16999355", "title": "Ultrasonic machining", "text": "Ultrasonic machining is a subtractive manufacturing process that removes material from the surface of a part through high frequency, low amplitude vibrations of a tool against the material surface in the presence of fine abrasive particles. The tool travels vertically or orthogonal to the surface of the part at amplitudes of 0.05 to 0.125 mm (0.002 to 0.005 in.). The fine abrasive grains are mixed with water to form a slurry that is distributed across the part and the tip of the tool. Typical grain sizes of the abrasive material range from 100 to 1000, where smaller grains (higher grain number) produce smoother surface finishes.\nUltrasonic vibration machining is typically used on brittle materials as well as materials with a high hardness due to the microcracking mechanics.\nProcess.\nAn ultrasonically vibrating machine consists of two major components, an electroacoustic transducer and a sonotrode, attached to an electronic control unit with a cable. The abrasive grains in the slurry act as a free cutting tool as they strike the workpiece thousands of times per second. An electronic oscillator in the control unit produces an alternating current oscillating at a high frequency, usually between 18 and 40 kHz in the ultrasonic range. The transducer converts the oscillating current to a mechanical vibration.
Two types of transducers have been used in ultrasonic machining: piezoelectric and magnetostrictive.\nThe transducer vibrates the sonotrode at low amplitudes and high frequencies. The sonotrode is usually made of low carbon steel. A constant stream of abrasive slurry flows between the sonotrode and work piece. This flow of slurry allows debris to flow away from the work cutting area. The slurry usually consists of abrasive boron carbide, aluminum oxide or silicon carbide particles in a suspension of water (20 to 60% by volume). The sonotrode removes material from the work piece by abrasion where it contacts it, so the result of machining is to cut a perfect negative of the sonotrode's profile into the work piece. Ultrasonic vibration machining allows extremely complex and non-uniform shapes to be cut into the workpiece with extremely high precision.\nMachining time depends on the workpiece's strength, hardness, porosity and fracture toughness; the slurry's material and particle size; and the amplitude of the sonotrode's vibration. The surface finish of materials after machining depends heavily on hardness and strength, with softer and weaker materials exhibiting smoother surface finishes. The inclusion of microcrack and microcavity features on the material's surface depends highly on the crystallographic orientation of the work piece's grains and the material's fracture toughness.\nMechanics.\nUltrasonic vibration machining physically operates by the mechanism of microchipping or erosion on the work piece's surface. Since the abrasive slurry is kept in motion by high frequency, low amplitude vibrations, the impact forces of the slurry are significant, causing high contact stresses. These high contact stresses are achieved by the small contact area between the slurry's particles and the work piece's surface. Brittle materials fail by cracking mechanics and these high stresses are sufficient to cause micro-scale chips to be removed from the surface.
The material as a whole does not fail due to the extremely localized stress regions. The average force imparted by a particle of the slurry impacting the work piece's surface and rebounding can be characterized by the following equation:\nwhere \"m\" is the mass of the particle, \"v\" is the velocity of the particle when striking the surface and \"to\" is the contact time, which can be approximated according to the following equation:\nwhere \"r\" is the radius of the particle, \"co\" is the elastic wave velocity of the work piece, \"E\" is the work piece's Young's modulus and \"ρ\" is the material's density.\nTypes.\nRotary ultrasonic vibration machining.\nIn rotary ultrasonic vibration machining (RUM), the vertically oscillating tool is able to revolve about the vertical center line of the tool. Instead of using an abrasive slurry to remove material, the surface of the tool is impregnated with diamonds that grind down the surface of the part. Rotary ultrasonic machines are specialized for machining advanced ceramics and alloys such as glass, quartz, structural ceramics, Ti-alloys, alumina, and silicon carbide. Rotary ultrasonic machines are used to produce deep holes with a high level of precision.\nRotary ultrasonic vibration machining is a relatively new manufacturing process that is still being extensively researched. Currently, researchers are trying to adapt this process to the micro level and to allow the machine to operate similarly to a milling machine.\nChemical-assisted ultrasonic vibration machining.\nIn chemical-assisted ultrasonic machining (CUSM), a chemically reactive abrasive fluid is used to ensure greater machining of glass and ceramic materials. Using an acidic solution, such as hydrofluoric acid, machining characteristics such as material removal rate and surface quality can be improved greatly compared to traditional ultrasonic machining.
While time spent machining and surface roughness decrease with CUSM, the entrance profile diameter is slightly larger than normal due to the additional chemical reactivity of the new slurry choice. In order to limit the extent of this enlargement, the acid content of the slurry must be carefully selected to ensure user safety and a quality product.\nApplications.\nSince ultrasonic vibration machining does not rely on thermal, chemical, or electrical processes that may alter the physical properties of a workpiece, it has many useful applications for materials that are more brittle and sensitive than traditional machining metals. Materials that are commonly machined using ultrasonic methods include ceramics, carbides, glass, precious stones and hardened steels. These materials are used in optical and electrical applications where more precise machining methods are required to ensure dimensional accuracy and quality performance of hard and brittle materials. Ultrasonic machining is precise enough to be used in the creation of microelectromechanical system components such as micro-structured glass wafers.\nIn addition to small-scale components, ultrasonic vibration machining is used for structural components because of the required precision and surface quality provided by the method. The process can safely and effectively create shapes out of high-quality single crystal materials that are often necessary but difficult to generate during normal crystal growth. As advanced ceramics become a greater part of the structural engineering realm, ultrasonic machining will continue to provide precise and effective methods of ensuring proper physical dimensions while maintaining crystallographic properties.\nAdvantages.\nUltrasonic vibration machining is a unique non-traditional manufacturing process because it can produce parts with high precision that are made of hard and brittle materials which are often difficult to machine.
Additionally, ultrasonic machining is capable of manufacturing fragile materials such as glass and non-conductive metals that cannot be machined by alternative methods such as electrical discharge machining and electrochemical machining. Ultrasonic machining is able to produce high-tolerance parts because there is no distortion of the worked material. The absence of distortion is due to no heat generation from the sonotrode against the work piece and is beneficial because the physical properties of the part will remain uniform throughout. Furthermore, no burrs are created in the process, thus fewer operations are required to produce a finished part.\nDisadvantages.\nBecause ultrasonic vibration machining is driven by microchipping or erosion mechanisms, the material removal rate of metals can be slow and the sonotrode tip can wear down quickly from the constant impact of abrasive particles on the tool. Moreover, drilling deep holes in parts can prove difficult as the abrasive slurry will not effectively reach the bottom of the hole. Note that rotary ultrasonic machining is efficient at drilling deep holes in ceramics because no slurry cutting fluid is needed and the cutting tool is coated with harder diamond abrasives. In addition, ultrasonic vibration machining can only be used on materials with a hardness value of at least 45 HRC.", "Automation-Control": 0.8538782597, "Qwen2": "Yes"} {"id": "17000875", "revid": "28481209", "url": "https://en.wikipedia.org/wiki?curid=17000875", "title": "Grinding (abrasive cutting)", "text": "Grinding is a type of abrasive machining process which uses a grinding wheel as cutting tool.\nA wide variety of machines are used for grinding, best classified as portable or stationary:\nGrinding practice is a large and diverse area of manufacturing and toolmaking. It can produce very fine finishes and very accurate dimensions; yet in mass production contexts, it can also rough out large volumes of metal quite rapidly.
It is usually better suited to the machining of very hard materials than is \"regular\" machining (that is, cutting larger chips with cutting tools such as tool bits or milling cutters), and until recent decades it was the only practical way to machine such materials as hardened steels. Compared to \"regular\" machining, it is usually better suited to taking very shallow cuts, such as reducing a shaft's diameter by half a thousandth of an inch or 12.7 μm.\nGrinding is a subset of cutting, as grinding is a true metal-cutting process. Each grain of abrasive functions as a microscopic single-point cutting edge (although of high negative rake angle), and shears a tiny chip that is analogous to what would conventionally be called a \"cut\" chip (turning, milling, drilling, tapping, etc.). However, among people who work in the machining fields, the term \"cutting\" is often understood to refer to the macroscopic cutting operations, and \"grinding\" is often mentally categorized as a \"separate\" process. This is why the terms are usually used separately in shop-floor practice.\nLapping and sanding are subsets of grinding.\nProcesses.\nWhich of the following grinding operations is used is determined by the size, shape and features of the workpiece and the desired production rate.\nCreep-feed grinding.\nCreep-feed grinding (CFG) is a grinding process that was invented in Germany in the late 1950s by Edmund and Gerhard Lang. Normal grinding is used primarily to finish surfaces. But CFG is used for high rates of material removal, competing with milling and turning as a manufacturing process choice. CFG has grinding depth up to 6 mm (0.236 inches) and workpiece speed is low. Grinding wheels with a softer-grade resin bond are used to keep workpiece temperatures low and give an improved surface finish of up to 1.6 μm Rmax.\nCFG can take 117 s to remove of material. Precision grinding would take more than 200 s to do the same.
CFG has the disadvantage of a wheel that is constantly degrading, requires high spindle power, and is limited in the length of part it can machine.\nTo address the problem of wheel sharpness, continuous-dress creep-feed grinding (CDCF) was developed in the 1970s. The wheel is dressed constantly during machining in the CDCF process and keeps the wheel in a state of specified sharpness. It takes only 17 s to remove of material, a huge gain in productivity. 38 hp (28 kW) of spindle power is required, at low to conventional spindle speeds. The limit on part length was erased.\nHigh-efficiency deep grinding (HEDG) is another type of grinding. This process uses plated superabrasive wheels. These wheels never need dressing and last longer than other wheels. This reduces capital equipment investment costs. HEDG can be used on long part lengths and removes material at a rate of in 83 s. HEDG requires high spindle power and high spindle speeds.\nPeel grinding, patented under the name of Quickpoint in 1985 by Erwin Junker Maschinenfabrik, GmbH in Nordrach, Germany, uses a thin superabrasive grinding disk oriented almost parallel to a cylindrical workpiece and operates somewhat like a lathe turning tool.\nUltra-high speed grinding (UHSG) can run at speeds higher than 40,000 fpm (200 m/s), taking 41 s to remove of material, but is still in the research and development (R&D) stage. It also requires high spindle power and high spindle speeds.\nCylindrical grinding.\nCylindrical grinding (also called center-type grinding) is used to grind the cylindrical surfaces and shoulders of the workpiece. The workpiece is mounted on centers and rotated by a device known as a lathe dog or center driver. The abrasive wheel and the workpiece are rotated by separate motors and at different speeds. The table can be adjusted to produce tapers. The wheel head can be swiveled.
The five types of cylindrical grinding are: outside diameter (OD) grinding, inside diameter (ID) grinding, plunge grinding, creep feed grinding, and centerless grinding.\nA cylindrical grinder has a grinding (abrasive) wheel, two centers that hold the workpiece, and a chuck, grinding dog, or other mechanism to drive the work. Most cylindrical grinding machines include a swivel to allow the forming of tapered pieces. The wheel and workpiece move parallel to one another in both the radial and longitudinal directions. The abrasive wheel can have many shapes. Standard disk-shaped wheels can be used to create a tapered or straight workpiece geometry, while formed wheels are used to create a shaped workpiece. The process using a formed wheel creates less vibration than using a regular disk-shaped wheel.\nTolerances for cylindrical grinding are held within ± for diameter and ± for roundness. Precision work can reach tolerances as high as ± for diameter and ± for roundness. Surface finishes can range from to , with typical finishes ranging from .\nSurface grinding.\n\"Surface grinding\" uses a rotating abrasive wheel to remove material, creating a flat surface. The tolerances that are normally achieved with grinding are ± for grinding a flat material and ± for a parallel surface.\nThe surface grinder is composed of an abrasive wheel, a workholding device known as a chuck, either electromagnetic or vacuum, and a reciprocating table.\nGrinding is commonly used on cast iron and various types of steel. These materials lend themselves to grinding because they can be held by the magnetic chuck commonly used on grinding machines and do not melt into the cutting wheel, clogging it and preventing it from cutting. Materials that are less commonly ground are aluminum, stainless steel, brass, and plastics. 
These all tend to clog the cutting wheel more than steel and cast iron, but with special techniques it is possible to grind them.\nOthers.\nIn centerless grinding, the workpiece is supported by a blade instead of by centers or chucks. Two wheels are used: the larger one grinds the surface of the workpiece, and the smaller wheel regulates the axial movement of the workpiece. Types of centerless grinding include through-feed grinding, in-feed/plunge grinding, and internal centerless grinding.\nElectrochemical grinding is a type of grinding in which a positively charged workpiece in a conductive fluid is eroded by a negatively charged grinding wheel. The particles removed from the workpiece are dissolved into the conductive fluid.\nElectrolytic in-process dressing (ELID) grinding is one of the most accurate grinding methods. In this ultra-precision grinding technology the grinding wheel is dressed electrochemically and in-process to maintain its accuracy. An ELID cell consists of a metal-bonded grinding wheel, a cathode electrode, a pulsed DC power supply, and an electrolyte. The wheel is connected to the positive terminal of the DC power supply through a carbon brush, whereas the electrode is connected to the negative pole of the power supply. Usually alkaline liquids are used both as the electrolyte and as coolant for grinding. A nozzle injects the electrolyte into the gap between wheel and electrode, which is usually maintained at approximately 0.1 mm to 0.3 mm. During the grinding operation one side of the wheel takes part in grinding while the other side is dressed by electrochemical reaction. The dressing dissolves the metallic bond material, which in turn results in the continuous protrusion of new sharp grits.\nForm grinding is a specialized type of cylindrical grinding where the grinding wheel has the exact shape of the final product.
The grinding wheel does not traverse the workpiece.\nInternal grinding is used to grind the internal diameter of the workpiece. Tapered holes can be ground with the use of internal grinders that can swivel on the horizontal.\nPre-grinding - When a new tool has been built and heat-treated, it is pre-ground before welding or hardfacing commences. This usually involves grinding the outside diameter (OD) slightly larger than the finish-grind OD to ensure the correct finish size.\nGrinding wheel.\nA grinding wheel is an expendable wheel used for various grinding and abrasive machining operations. It is generally made from a matrix of coarse abrasive particles pressed and bonded together into a solid, circular shape; various profiles and cross sections are available depending on the intended use of the wheel. Grinding wheels may also be made from a solid steel or aluminium disc with particles bonded to the surface.\nLubrication.\nThe use of fluids in a grinding process is often necessary to cool and lubricate the wheel and workpiece, as well as to remove the chips produced in the grinding process. The most common grinding fluids are water-soluble chemical fluids, water-soluble oils, synthetic oils, and petroleum-based oils. It is imperative that the fluid be applied directly to the cutting area to prevent it from being blown away from the piece by the rapid rotation of the wheel.\nThe workpiece.\nWorkholding methods.\nThe workpiece is manually clamped to a lathe dog, powered by the faceplate, which holds the piece between two centers and rotates it. The piece and the grinding wheel rotate in opposite directions, and small bits of the piece are removed as it passes along the grinding wheel. In some instances special drive centers may be used to allow the edges to be ground.
The workholding method affects the production time as it changes setup times.\nWorkpiece materials.\nTypical workpiece materials include aluminum, brass, plastics, cast iron, mild steel, and stainless steel. Aluminum, brass, and plastics can have poor to fair machinability characteristics for cylindrical grinding. Cast iron and mild steel have very good characteristics for cylindrical grinding. Stainless steel is very difficult to grind due to its toughness and tendency to work harden, but it can be worked with the right grade of grinding wheel.\nWorkpiece geometry.\nThe final shape of a workpiece is the mirror image of the grinding wheel, with cylindrical wheels creating cylindrical pieces and formed wheels creating formed pieces. Typical workpiece sizes range from 0.75 in to 20 in (19 mm to 0.5 m) in diameter and 0.80 in to 75 in (2 cm to 1.9 m) in length, although pieces from 0.25 in to 60 in (6 mm to 1.5 m) in diameter and 0.30 in to 100 in (8 mm to 2.5 m) in length can be ground. Resulting shapes can be straight cylinders, straight-edged conical shapes, or even crankshafts for engines that experience relatively low torque.\nEffects on workpiece materials.\nChemical property changes include an increased susceptibility to corrosion because of high surface stress.\nMechanical properties will change due to stresses put on the part during finishing. High grinding temperatures may cause a thin martensitic layer to form on the part, which will lead to reduced material strength from microcracks.\nPhysical property changes include the possible loss of magnetic properties in ferromagnetic materials.", "Automation-Control": 0.9661628604, "Qwen2": "Yes"} {"id": "30295206", "revid": "22651524", "url": "https://en.wikipedia.org/wiki?curid=30295206", "title": "Laser guided and stabilized arc welding", "text": "Laser guided and stabilized welding (LGS-welding) is a process in which a laser beam irradiates an electrically heated plasma arc to set a path of increased conductivity.
Therefore, the arc's energy can be spatially directed and the plasma burns more stably. The process must be distinguished from laser-hybrid welding, since only low-power laser energy of a few hundred watts is used and the laser does not contribute significantly to the welding process in terms of energy input.\nOperation.\nThe principle of laser-enhanced welding is based on the interaction between the electrical arc and laser radiation. Due to the optogalvanic effect (OGE), a channel of higher conductivity in the plasma is established along the path of the laser. Therefore, a movement of the laser beam results in a movement of the electrical arc. This effect is limited to a range of a few millimeters, but it shows the influence of the radiation on the plasma. An increase in welding speed of over 100% has been reported using a diode laser with a wavelength of 811 nm, without a significant loss in penetration depth. Furthermore, this technique is used in cladding. Depending on the welded material, argon or an argon–CO2 mixture is used as the shielding gas. The laser source must be tuned to emit at a wavelength of 811 nm and is focused into the plasma.\nLaser guided and stabilized GMA-welding.\nThe process is used for welding thin metal sheets up to about 2 mm in overlap or butt joints. LGS-GMA-welding is most advantageous when welding fillet welds. The guidance effect of the laser radiation forces the arc into the fillet, so a steady seam can be achieved. Furthermore, the stabilization of the plasma enables the GMA process to weld thin sheets without burning holes in the material.\nEquipment and setup.\nThe setup requires the GMA welding head to be tilted at 60° to the workpiece surface. In order to realize maximum overlap between the electric arc and the laser beam in the process area, the laser is installed perpendicular to the workpiece and focused into the electrical arc. Standard welding equipment can be used for the process.
The laser source is described above.\nLaser guided and stabilized double head TIG-welding.\nIn laser guided and stabilized double head TIG-welding the laser forces two arcs together. The goal of this technique is to increase the welding speed of TIG-welding without compromising quality.\nEquipment and setup.\nFor this process two TIG sources are needed, together with the laser described above. The TIG torches are set up with the laser beam perpendicular between them. All welding modes of the two torches are possible (DC/DC, AC/AC, AC/DC).\nLaser guided and stabilized GMA-cladding.\nIn LGS-GMA-cladding the stabilization effect is used to enable the GMA process to work with low energy. This is needed to reduce the penetration depth and therefore the dilution of base and deposition material. The combination of GMA-welding and a diode laser leads to an inexpensive and energy-efficient process.\nEquipment and setup.\nThe setup for LGS-GMA-cladding is almost identical to that for LGS-GMA-welding, except that the GMA source needs to have a \"Cold-MIG\" process. This means that the welding current is controlled by microcontrollers and produced by power electronics. That way not only the current peaks but also the slopes can be controlled.", "Automation-Control": 0.9761868715, "Qwen2": "Yes"} {"id": "30309352", "revid": "7903804", "url": "https://en.wikipedia.org/wiki?curid=30309352", "title": "Manfred Morari", "text": "Manfred Morari (born 1951) is a world-leading control theorist who has made pioneering contributions to the theory and applications of Model Predictive Control, Internal Model Control (IMC) and Hybrid Systems. His book on Robust Process Control is considered to be the definitive text on the subject. He is currently Peter and Susanne Armstrong Faculty Fellow at the University of Pennsylvania. He received his Ph.D. in Chemical Engineering from the University of Minnesota in 1977. Dr.
Morari held positions at the University of Wisconsin–Madison from 1977 to 1983 and the California Institute of Technology from 1983 to 1991, and subsequently joined the Swiss Federal Institute of Technology in Zurich (ETH Zurich). He is considered a pioneer in the fields of Model Predictive Control, Control of Hybrid Systems, Internal Model Control (IMC), and robust control.\nIn recognition of his research contributions he has received numerous awards, among them the Donald P. Eckman Award and the John R. Ragazzini Award of the American Automatic Control Council, the Allan P. Colburn Award and the Professional Progress Award of the AIChE, the Curtis W. McGraw Research Award of the ASEE, a Doctor Honoris Causa from Babes-Bolyai University, Fellowship of IEEE and IFAC, and the IEEE Control Systems Award. He was also elected a member of the US National Academy of Engineering in 1993 for analysis of the effects of design on process operability and the development of techniques for robust process control. Manfred Morari has held appointments with Exxon and ICI plc and serves on the technical advisory boards of several major corporations. He received the IEEE Control Systems Award in 2005 and the Richard E. Bellman Control Heritage Award in 2011.
The reason is that with a larger tournament size, weak individuals have a smaller chance of being selected: if a weak individual is selected into a tournament, there is a higher probability that a stronger individual is also in that tournament.\nThe tournament selection method may be described in pseudo code:\n choose k (the tournament size) individuals from the population at random\n choose the best individual from the tournament with probability p\n choose the second best individual with probability p*(1-p)\n choose the third best individual with probability p*((1-p)^2)\n and so on\nDeterministic tournament selection selects the best individual (when \"p\" = 1) in any tournament. A 1-way tournament (\"k\" = 1) selection is equivalent to random selection. There are two variants of the selection: \"with\" and \"without\" replacement. The variant without replacement guarantees that when selecting \"N\" individuals from a population of \"N\" elements, each individual participates in exactly \"k\" tournaments. An algorithm for this has been proposed in the literature. Note that depending on the number of elements selected, selection \"without\" replacement does \"not\" guarantee that no individual is selected more than once. It only guarantees that each individual participates in the same number of tournaments.\nIn comparison with the (stochastic) fitness proportionate selection method, tournament selection is often implemented in practice due to its lack of stochastic noise.\nTournament selection has several benefits over alternative selection methods for genetic algorithms (for example, fitness proportionate selection and reward-based selection): it is efficient to code, works on parallel architectures, and allows the selection pressure to be easily adjusted.
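The pseudocode above can be sketched in Python. This is a minimal illustration, not a reference implementation: the function name, the fitness-callback interface, and the default values of k and p are assumptions for the example.

```python
import random

def tournament_select(population, fitness, k=3, p=0.9):
    """Pick one individual by tournament selection.

    k is the tournament size and p the probability of taking the best
    contender; the i-th best wins with probability p * (1 - p)**i,
    matching the pseudocode above.
    """
    contenders = random.sample(population, k)       # draw the tournament
    contenders.sort(key=fitness, reverse=True)      # best fitness first
    for individual in contenders:
        if random.random() < p:                     # geometric trial per rank
            return individual
    return contenders[-1]                           # fall back to the weakest
```

With p = 1 this reduces to deterministic tournament selection (the best contender always wins), and with k = 1 it reduces to uniform random selection, as described above.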
Tournament selection has also been shown to be independent of the scaling of the genetic algorithm fitness function (or 'objective function') in some classifier systems.", "Automation-Control": 0.8157203794, "Qwen2": "Yes"} {"id": "1527151", "revid": "42677165", "url": "https://en.wikipedia.org/wiki?curid=1527151", "title": "Speeds and feeds", "text": "The phrase speeds and feeds or feeds and speeds refers to two separate velocities in machine tool practice, cutting speed and feed rate. They are often considered as a pair because of their combined effect on the cutting process. Each, however, can also be considered and analyzed in its own right.\n\"Cutting speed\" (also called \"surface speed\" or simply \"speed\") is the speed difference (relative velocity) between the cutting tool and the surface of the workpiece it is operating on. It is expressed in units of distance across the workpiece surface per unit of time, typically surface feet per minute (sfm) or meters per minute (m/min). \"Feed rate\" (also often styled as a solid compound, \"feedrate\", or called simply \"feed\") is the relative velocity at which the cutter is advanced along the workpiece; its vector is perpendicular to the vector of cutting speed. Feed rate units depend on the motion of the tool and workpiece; when the workpiece rotates (\"e.g.\", in turning and boring), the units are almost always distance per spindle revolution (inches per revolution [in/rev or ipr] or millimeters per revolution [mm/rev]). 
When the workpiece does not rotate (\"e.g.\", in milling), the units are typically distance per time (inches per minute [in/min or ipm] or millimeters per minute [mm/min]), although distance per revolution or per cutter tooth are also sometimes used.\nIf variables such as cutter geometry and the rigidity of the machine tool and its tooling setup could be ideally maximized (and reduced to negligible constants), then only a lack of power (that is, kilowatts or horsepower) available to the spindle would prevent the use of the maximum possible speeds and feeds for any given workpiece material and cutter material. Of course, in reality those other variables are dynamic and not negligible, but there is still a correlation between power available and feeds and speeds employed. In practice, lack of rigidity is usually the limiting constraint.\nThe phrases \"speeds and feeds\" or \"feeds and speeds\" have sometimes been used metaphorically to refer to the execution details of a plan, which only skilled technicians (as opposed to designers or managers) would know.\nCutting speed.\nCutting speed may be defined as the speed at the workpiece surface, irrespective of the machining operation used. A cutting speed for mild steel of 100 ft/min is the same whether it is the speed of the workpiece passing the cutter, such as in a turning operation, or the speed of the cutter moving past the workpiece, such as in a milling operation. The cutting conditions will affect the value of this surface speed for mild steel.\nSchematically, speed at the workpiece surface can be thought of as the tangential speed at the tool–workpiece interface, that is, how fast the material moves past the cutting edge of the tool, although \"which surface to focus on\" is a topic with several valid answers. In drilling and milling, the outside diameter of the tool is the widely agreed surface.
In turning and boring, the surface can be defined on either side of the depth of cut, that is, either the starting surface or the ending surface, with neither definition being \"wrong\" as long as the people involved understand the difference. An experienced machinist summed this up succinctly as \"the diameter I am turning from\" versus \"the diameter I am turning to.\" He uses the \"from\", not the \"to\", and explains why, while acknowledging that some others do not. The logic of focusing on the largest diameter involved (OD of drill or end mill, starting diameter of turned workpiece) is that this is where the highest tangential speed is, with the most heat generation, which is the main driver of tool wear.\nThere will be an optimum cutting speed for each material and set of machining conditions, and the spindle speed (RPM) can be calculated from this speed. Factors affecting the calculation of cutting speed are:\nCutting speeds are calculated on the assumption that optimum cutting conditions exist. These include:\nThe cutting \"speed\" is given as a set of constants that are available from the material manufacturer or supplier. The most common materials are available in reference books or charts, but will always be subject to adjustment depending on the cutting conditions. The following table gives the cutting speeds for a selection of common materials under one set of conditions. The conditions are a tool life of 1 hour, dry cutting (no coolant), and medium feeds, so they may appear to be incorrect depending on circumstances. These cutting speeds may change if, for instance, adequate coolant is available or an improved grade of HSS is used (such as one that includes cobalt).\nMachinability rating.\nThe machinability rating of a material attempts to quantify the machinability of various materials. It is expressed as a percentage or a normalized value.
The American Iron and Steel Institute (AISI) determined machinability ratings for a wide variety of materials by running turning tests at 180 surface feet per minute (sfpm). It then arbitrarily assigned 160 Brinell B1112 steel a machinability rating of 100%. The machinability rating is determined by measuring the weighted averages of the normal cutting speed, surface finish, and tool life for each material. Note that a material with a machinability rating less than 100% would be more difficult to machine than B1112, and a material with a rating greater than 100% would be easier.\nMachinability ratings can be used in conjunction with the Taylor tool life equation in order to determine cutting speeds or tool life. It is known that B1112 has a tool life of 60 minutes at a cutting speed of 100 sfpm. If a material has a machinability rating of 70%, it can be determined, with the above knowns, that in order to maintain the same tool life (60 minutes), the cutting speed must be 70 sfpm (assuming the same tooling is used).\nWhen calculating for copper alloys, the machinability rating is arrived at by assigning a rating of 100 to a cutting speed of 600 sfm. For example, phosphor bronze (grades A–D) has a machinability rating of 20. This means that phosphor bronze runs at 20% of 600 sfm, or 120 sfm. However, 165 sfm is generally accepted as the basic 100% rating for \"grading steels\".\nFormula\nCutting speed (V) = πDN/1000 m/min\nwhere\nD = diameter of the workpiece (or cutter) in millimetres\nN = spindle speed in rpm\nSpindle speed.\nThe spindle speed is the rotational frequency of the spindle of the machine, measured in revolutions per minute (RPM). The preferred speed is determined by working backward from the desired surface speed (sfm or m/min) and incorporating the diameter (of workpiece or cutter).\nThe spindle may hold the:\nExcessive spindle speed will cause premature tool wear, breakages, and can cause tool chatter, all of which can lead to potentially dangerous conditions.
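The two relations just described, the metric cutting-speed formula V = πDN/1000 and the machinability-rating adjustment of cutting speed, can be sketched as follows. The function names and units are illustrative choices, not part of the source material.

```python
from math import pi

def spindle_rpm(cutting_speed_m_min, diameter_mm):
    """Spindle speed N (rpm) from V = pi*D*N/1000, i.e. N = 1000*V / (pi*D)."""
    return 1000.0 * cutting_speed_m_min / (pi * diameter_mm)

def rated_speed(base_speed_sfm, machinability_pct):
    """Cutting speed that preserves B1112's 60 min tool life on another
    material, by scaling the base speed with the machinability rating."""
    return base_speed_sfm * machinability_pct / 100.0
```

For the 70%-machinability example above, `rated_speed(100, 70)` gives 70 sfpm, matching the text.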
Using the correct spindle speed for the material and tools will greatly enhance tool life and the quality of the surface finish.\nFor a given machining operation, the cutting speed will remain constant for most situations; therefore the spindle speed will also remain constant. However, facing, forming, parting off, and recess operations on a lathe or screw machine involve the machining of a constantly changing diameter. Ideally, this means changing the spindle speed as the cut advances across the face of the workpiece, producing constant surface speed (CSS). Mechanical arrangements to effect CSS have existed for centuries, but they were never applied commonly to machine tool control. In the pre-CNC era, the ideal of CSS was ignored for most work. For unusual work that demanded it, special pains were taken to achieve it. The introduction of CNC-controlled lathes has provided a practical, everyday solution via automated CSS machining process monitoring and control. By means of the machine's software and variable-speed electric motors, the lathe can increase the RPM of the spindle as the cutter gets closer to the center of the part.\nGrinding wheels are designed to be run at a maximum safe speed; the spindle speed of the grinding machine may be variable, but this should be changed only with due attention to the safe working speed of the wheel. As a wheel wears it will decrease in diameter, and its effective cutting speed will be reduced. Some grinders have the provision to increase the spindle speed, which corrects for this loss of cutting ability; however, increasing the speed beyond the wheel's rating will destroy the wheel and create a serious hazard to life and limb.\nGenerally speaking, spindle speeds and feed rates are less critical in woodworking than in metalworking. Most woodworking machines, including power saws such as circular saws and band saws, jointers, and thickness planers, rotate at a fixed RPM.
In those machines, cutting speed is regulated through the feed rate. The required feed rate can be extremely variable depending on the power of the motor, the hardness of the wood or other material being machined, and the sharpness of the cutting tool.\nIn woodworking, the ideal feed rate is one that is slow enough not to bog down the motor, yet fast enough to avoid burning the material. Certain woods, such as black cherry and maple, are more prone to burning than others. The right feed rate is usually obtained by \"feel\" if the material is hand-fed, or by trial and error if a power feeder is used. In thicknessers (planers), the wood is usually fed automatically through rubber or corrugated steel rollers. Some of these machines allow varying the feed rate, usually by changing pulleys. A slower feed rate usually results in a finer surface, as more cuts are made for any length of wood.\nSpindle speed becomes important in the operation of routers, spindle moulders or shapers, and drills. Older and smaller routers often rotate at a fixed spindle speed, usually between 20,000 and 25,000 rpm. While these speeds are fine for small router bits, using larger bits, say more than 25 millimeters (1 in) in diameter, can be dangerous and can lead to chatter. Larger routers now have variable speeds, and larger bits require slower speeds. Drilling wood generally uses higher spindle speeds than metal, and the speed is not as critical. However, larger-diameter drill bits do require slower speeds to avoid burning.\nCutting feeds and speeds, and the spindle speeds that are derived from them, are the \"ideal\" cutting conditions for a tool.
If the conditions are less than ideal, adjustments are made to the spindle speed; this adjustment is usually a reduction in RPM to the closest available speed, or to one that is deemed (through knowledge and experience) to be correct.\nSome materials, such as machinable wax, can be cut at a wide variety of spindle speeds, while others, such as stainless steel, require much more careful control, as the cutting speed is critical to avoid overheating both the cutter and the workpiece. Stainless steel is one material that hardens very easily under cold working; therefore an insufficient feed rate or incorrect spindle speed can lead to less-than-ideal cutting conditions, as the workpiece will quickly harden and resist the tool's cutting action. The liberal application of cutting fluid can improve these cutting conditions; however, the correct selection of speeds is the critical factor.\nSpindle speed calculations.\nMost metalworking books have nomograms or tables of spindle speeds and feed rates for different cutters and workpiece materials; similar tables are also likely available from the manufacturer of the cutter used.\nThe spindle speeds may be calculated for all machining operations once the SFM or MPM is known. In most cases, we are dealing with a cylindrical object such as a milling cutter or a workpiece turning in a lathe, so we need to determine the speed at the periphery of this round object. This speed at the periphery (of a point on the circumference, moving past a stationary point) will depend on the rotational speed (RPM) and diameter of the object.\nOne analogy would be a skateboard rider and a bicycle rider travelling side by side along the road. For a given surface speed (the speed of this pair along the road), the rotational speed (RPM) of their wheels (large for the skater and small for the bicycle rider) will be different.
This rotational speed (RPM) is what we are calculating, given a fixed surface speed (speed along the road) and known values for their wheel sizes (cutter or workpiece).\nThe following formulae may be used to estimate this value.\nApproximation.\nThe exact RPM is not always needed; a close approximation will work (using 3 for the value of formula_1).\ne.g. for a cutting speed of 100 ft/min (a plain HSS steel cutter on mild steel) and a diameter of 10 inches (the cutter or the workpiece)\nand, for an example using metric values, where the cutting speed is 30 m/min and the diameter is 10 mm (0.01 m),\nAccuracy.\nHowever, for more accurate calculations, and at the expense of simplicity, this formula can be used:\nand using the same example\nand using the same example as above\nwhere:\nFeed rate.\nFeed rate is the velocity at which the cutter is fed, that is, advanced against the workpiece. It is expressed in units of distance per revolution for turning and boring (typically \"inches per revolution\" [\"ipr\"] or \"millimeters per revolution\"). It can be expressed thus for milling also, but it is often expressed in units of distance per time for milling (typically \"inches per minute\" [\"ipm\"] or \"millimeters per minute\"), with consideration of how many teeth (or flutes) the cutter has then determining what that means for each tooth.\nFeed rate is dependent on the:\nWhen deciding what feed rate to use for a certain cutting operation, the calculation is fairly straightforward for single-point cutting tools, because all of the cutting work is done at one point (done by \"one tooth\", as it were). With a milling machine or jointer, where multi-tipped/multi-fluted cutting tools are involved, the desired feed rate becomes dependent on the number of teeth on the cutter, as well as the desired amount of material per tooth to cut (expressed as chip load).
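The spindle-speed approximation and the exact formula discussed above can be sketched side by side. This is an illustrative sketch: the function names and the metric/inch switch are assumptions, and the unit conventions (sfm with inches, m/min with millimetres) follow the worked examples in the text.

```python
from math import pi

def rpm_exact(cutting_speed, diameter, metric=True):
    """Exact spindle RPM.

    Metric: V in m/min, D in mm   -> N = 1000*V / (pi*D)
    Inch:   V in sfm,  D in inch  -> N = 12*V / (pi*D)
    """
    scale = 1000.0 if metric else 12.0
    return scale * cutting_speed / (pi * diameter)

def rpm_approx(cutting_speed, diameter, metric=True):
    """Shop-floor approximation using 3 in place of pi (e.g. 12/3 = 4 in inch units)."""
    scale = 1000.0 if metric else 12.0
    return scale * cutting_speed / (3.0 * diameter)
```

For the examples in the text: 100 ft/min on a 10 in diameter gives roughly 40 rpm by the approximation (about 38 rpm exactly), and 30 m/min on a 10 mm diameter gives 1000 rpm by the approximation (about 955 rpm exactly).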
The greater the number of cutting edges, the higher the feed rate permissible: for a cutting edge to work efficiently it must remove sufficient material to cut rather than rub; it also must do its fair share of work.\nThe ratio of the spindle speed and the feed rate controls how aggressive the cut is, and the nature of the swarf formed.\nFormula to determine feed rate.\nThis formula can be used to determine the feed rate at which the cutter travels into or around the work. This applies to cutters on a milling machine, drill press, and a number of other machine tools. It is not to be used on the lathe for turning operations, as the feed rate on a lathe is given as \"feed per revolution.\"\nformula_8\nWhere:\nDepth of cut.\nCutting speed and feed rate come together with \"depth of cut\" to determine the \"material removal rate\", which is the volume of workpiece material (metal, wood, plastic, etc.) that can be removed per unit of time.\nInterrelationship of theory and practice.\nSpeed-and-feed selection is analogous to other examples of applied science, such as meteorology or pharmacology, in that the theoretical modeling is necessary and useful but can never fully predict the reality of specific cases because of the massively multivariate environment. Just as weather forecasts or drug dosages can be modeled with fair accuracy, but never with complete certainty, machinists can predict with charts and formulas the approximate speed and feed values that will work best on a particular job, but cannot know the exact optimal values until running the job. In CNC machining, the programmer usually programs speeds and feed rates that are as finely tuned as calculations and general guidelines can supply. The operator then fine-tunes the values while running the machine, based on sights, sounds, smells, temperatures, tolerance holding, and tool tip lifespan.
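The feed-rate relation described earlier for multi-tooth cutters, that the table feed depends on the spindle speed, the number of teeth, and the chip load per tooth, is commonly written as FR = RPM × T × CL; the exact formula referenced in the article is not reproduced in this text, so the sketch below uses that common form, with illustrative names.

```python
def feed_rate(rpm, teeth, chip_load):
    """Table feed (distance per minute) for a multi-tooth cutter.

    Commonly written FR = RPM * T * CL, where T is the number of teeth
    and CL the chip load (material removed per tooth, per revolution).
    """
    return rpm * teeth * chip_load
```

For example, a 4-flute end mill at 1000 rpm with a 0.05 mm chip load would be fed at about 200 mm/min.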
Under proper management, the revised values are captured for future use, so that when a program is run again later, this work need not be duplicated.\nAs with meteorology and pharmacology, however, the interrelationship of theory and practice has been developing over decades as the theory part of the balance becomes more advanced thanks to information technology. For example, an effort called the Machine Tool Genome Project is working toward providing the computer modeling (simulation) needed to predict optimal speed-and-feed combinations for particular setups in any internet-connected shop with less local experimentation and testing. Instead of a shop's only option being to measure and test the behavior of its own equipment, it will benefit from others' experience and simulation; in a sense, rather than 'reinventing a wheel', it will be able to 'make better use of existing wheels already developed by others in remote locations'.\nAcademic research examples.\nSpeeds and feeds have been studied scientifically since at least the 1890s. The work is typically done in engineering laboratories, with the funding coming from three basic roots: corporations, governments (including their militaries), and universities. All three types of institution have invested large amounts of money in the cause, often in collaborative partnerships. Examples of such work are highlighted below.\nIn the 1890s through 1910s, Frederick Winslow Taylor performed turning experiments that became famous (and seminal). He developed Taylor's Equation for Tool Life Expectancy.\nScientific study by Holz and De Leeuw of the Cincinnati Milling Machine Company did for milling cutters what F. W. Taylor had done for single-point cutters.\n\"Following World War II, many new alloys were developed. New standards were needed to increase [U.S.] American productivity.
Metcut Research Associates, with technical support from the Air Force Materials Laboratory and the Army Science and Technology Laboratory, published the first Machining Data Handbook in 1966. The recommended speeds and feeds provided in this book were the result of extensive testing to determine optimum tool life under controlled conditions for every material of the day, operation and hardness.\"\nA study on the effect of the variation of cutting parameters in the surface integrity in turning of an AISI 304 stainless steel revealed that the feed rate has the greatest impairing effect on the quality of the surface, and that besides the achievement of the desired roughness profile, it is necessary to analyze the effect of speed and feed on the creation of micropits and microdefects on the machined surface. Moreover, they found that the conventional empirical relation that relates feed rate to roughness value does not fit adequately for low cutting speeds.", "Automation-Control": 0.6620903015, "Qwen2": "Yes"} {"id": "53543792", "revid": "18779361", "url": "https://en.wikipedia.org/wiki?curid=53543792", "title": "Switching Kalman filter", "text": "The switching Kalman filtering (SKF) method is a variant of the Kalman filter. In its generalised form, it is often attributed to Kevin P. Murphy, but related switching state-space models have been in use.\nApplications.\nApplications of the switching Kalman filter include: Brain–computer interfaces and neural decoding, real-time decoding for continuous neural-prosthetic control, and sensorimotor learning in humans.\nIt also has application in econometrics, signal processing, tracking, computer vision, etc. It is an alternative to the Kalman filter when the system's state has a discrete component. The additional error when using a Kalman filter instead of a Switching Kalman filter may be quantified in terms of the switching system's parameters. 
For example, when an industrial plant has \"multiple discrete modes of behaviour, each of which having a linear (Gaussian) dynamics\".\nModel.\nThere are several variants of SKF discussed in the literature.\nSpecial case.\nIn the simpler case, switching state-space models are defined based on a switching variable which evolves independently of the hidden variable. The probabilistic model of this variant of SKF is as follows:\nThe hidden variables include not only the continuous formula_2, but also a discrete *switch* (or switching) variable formula_3. The dynamics of the switch variable are defined by the term formula_4. The probability model of formula_2 and formula_6 can depend on formula_3.\nThe switch variable can take its values from a set formula_8. This changes the joint distribution formula_9, which is a separate multivariate Gaussian distribution for each value of formula_3.\nGeneral case.\nIn more generalised variants, the switch variable affects the dynamics of formula_11, e.g. through formula_12.\nThe filtering and smoothing procedure for general cases is discussed in the literature.", "Automation-Control": 0.6301012039, "Qwen2": "Yes"} {"id": "17295260", "revid": "31737083", "url": "https://en.wikipedia.org/wiki?curid=17295260", "title": "Optimal projection equations", "text": "In control theory, optimal projection equations constitute necessary and sufficient conditions for a locally optimal reduced-order LQG controller.\nThe linear-quadratic-Gaussian (LQG) control problem is one of the most fundamental optimal control problems. It concerns uncertain linear systems disturbed by additive white Gaussian noise, incomplete state information (i.e. not all the state variables are measured and available for feedback, and the measurements themselves are disturbed by additive white Gaussian noise), and quadratic costs. 
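The special case above, in which a switch variable follows its own Markov chain and selects among mode-dependent Gaussian dynamics, can be sketched with a minimal scalar-state, two-mode filter that collapses the Gaussian mixture after each step (a GPB/IMM-style approximation; all model numbers below are hypothetical):

```python
import numpy as np

# Hypothetical special-case model: the discrete switch evolves independently
# as a Markov chain, and the continuous dynamics/noise depend on the mode.
A = np.array([0.99, 0.70])      # per-mode state transition (scalar state)
Q = np.array([0.01, 0.50])      # per-mode process-noise variance
R_obs = 0.10                    # observation-noise variance (shared)
PI = np.array([[0.95, 0.05],    # switch transition matrix p(s_t | s_{t-1})
               [0.05, 0.95]])

def skf_step(mean, var, w, y):
    """One collapsing step: run a Kalman update under each mode,
    reweight the modes by likelihood, then moment-match the mixture
    back to a single Gaussian to keep the filter tractable."""
    w_pred = PI.T @ w                       # predicted switch distribution
    means, varis, liks = np.empty(2), np.empty(2), np.empty(2)
    for s in range(2):
        m_p = A[s] * mean                   # per-mode predict
        v_p = A[s] ** 2 * var + Q[s]
        S = v_p + R_obs                     # innovation variance
        K = v_p / S                         # Kalman gain
        means[s] = m_p + K * (y - m_p)
        varis[s] = (1.0 - K) * v_p
        liks[s] = np.exp(-0.5 * (y - m_p) ** 2 / S) / np.sqrt(2 * np.pi * S)
    w_post = w_pred * liks
    w_post /= w_post.sum()
    m = w_post @ means                      # collapse: moment matching
    v = w_post @ (varis + (means - m) ** 2)
    return m, v, w_post

# Filter a short synthetic record.
mean, var, w = 0.0, 1.0, np.array([0.5, 0.5])
for y in [0.1, 0.2, 0.15, 1.4, -1.2]:
    mean, var, w = skf_step(mean, var, w, y)
```

Without the collapsing step the exact posterior is a mixture whose number of components grows exponentially in time, which is why such approximations are standard.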
Moreover, the solution is unique and constitutes a linear dynamic feedback control law that is easily computed and implemented. Finally the LQG controller is also fundamental to the optimal perturbation control of non-linear systems.\nThe LQG controller itself is a dynamic system like the system it controls. Both systems have the same state dimension. Therefore, implementing the LQG controller may be problematic if the dimension of the system state is large. The reduced-order LQG problem (fixed-order LQG problem) overcomes this by fixing a-priori the number of states of the LQG controller. This problem is more difficult to solve because it is no longer separable. Also the solution is no longer unique. Despite these facts numerical algorithms are available to solve the associated optimal projection equations.\nMathematical problem formulation and solution.\nContinuous-time.\nThe reduced-order LQG control problem is almost identical to the conventional full-order LQG control problem. Let formula_1 represent the state of the reduced-order LQG controller. Then the only difference is that the state dimension formula_2 of the LQG controller is a-priori fixed to be smaller than formula_3, the state dimension of the controlled system.\nThe reduced-order LQG controller is represented by the following equations:\nThese equations are deliberately stated in a format that equals that of the conventional full-order LQG controller. For the reduced-order LQG control problem it is convenient to rewrite them as\nwhere\nThe matrices formula_9 and formula_10 of the reduced-order LQG controller are determined by the so-called optimal projection equations (OPE).\nThe square optimal projection matrix formula_11 with dimension formula_12 is central to the OPE. The rank of this matrix is almost everywhere equal to formula_13 The associated projection is an oblique projection: formula_14 The OPE constitute four matrix differential equations. 
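In the reduced-order LQG literature the optimal projection matrix is usually given in the factored form tau = G^T Gamma with Gamma G^T = I. Assuming that standard factorization (the dimensions below are hypothetical), a small numerical check of the stated properties, idempotency, rank equal to the controller order, and obliqueness, might look like:

```python
import numpy as np

rng = np.random.default_rng(1)
n, nc = 5, 2     # hypothetical plant order and reduced controller order

# Build a factored oblique projection tau = G^T Gamma with Gamma G^T = I_nc.
G = rng.standard_normal((nc, n))
M = rng.standard_normal((nc, n))
Gamma = np.linalg.solve(M @ G.T, M)        # enforces Gamma @ G.T == I_nc
tau = G.T @ Gamma                          # n x n oblique projection

assert np.allclose(Gamma @ G.T, np.eye(nc))
assert np.allclose(tau @ tau, tau)         # idempotent: tau^2 = tau
assert np.linalg.matrix_rank(tau) == nc    # rank equals the controller order
# tau is generally not symmetric: an oblique, not an orthogonal, projection.
```

Idempotency without symmetry is exactly what distinguishes an oblique projection from an orthogonal one.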
The first two equations listed below are generalizations of the matrix Riccati differential equations associated to the conventional full-order LQG controller. In these equations formula_15 denotes formula_16 where formula_17 is the identity matrix of dimension formula_18.\nIf the dimension of the LQG controller is not reduced, that is if formula_21, then formula_22 and the two equations above become the uncoupled matrix Riccati differential equations associated to the conventional full-order LQG controller. If formula_23 the two equations are coupled by the oblique projection formula_24 This reveals why the reduced-order LQG problem is not separable. The oblique projection formula_11 is determined from two additional matrix differential equations which involve rank conditions. Together with the previous two matrix differential equations these are the OPE. To state the additional two matrix differential equations it is convenient to introduce the following two matrices:\nThen the two additional matrix differential equations that complete the OPE are as follows:\nwith\nHere * denotes the group generalized inverse or Drazin inverse that is unique and given by\nwhere + denotes the Moore–Penrose pseudoinverse.\nThe matrices formula_34 must all be nonnegative symmetric. Then they constitute a solution of the OPE that determines the reduced-order LQG controller matrices formula_35 and formula_10:\nIn the equations above the matrices formula_41 are two matrices with the following properties:\nThey can be obtained from a projective factorization of formula_43.\nThe OPE can be stated in many different ways that are all equivalent. 
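One standard way to obtain the group generalized inverse from the Moore–Penrose pseudoinverse is Cline's identity M# = M (M^3)^+ M, valid for index-1 matrices, i.e. when rank(M) = rank(M^2). A sketch verifying the defining axioms on a hypothetical index-1 test matrix:

```python
import numpy as np

def group_inverse(M: np.ndarray) -> np.ndarray:
    """Group (index-1 Drazin) inverse via Cline's identity
    M# = M (M^3)^+ M, where ^+ is the Moore-Penrose pseudoinverse.
    Requires rank(M) == rank(M @ M)."""
    return M @ np.linalg.pinv(M @ M @ M) @ M

# Hypothetical index-1 matrix: a similarity transform of diag(2, -1, 3, 0).
rng = np.random.default_rng(2)
P = rng.standard_normal((4, 4)) + 4.0 * np.eye(4)   # well-conditioned
D = np.diag([2.0, -1.0, 3.0, 0.0])
M = P @ D @ np.linalg.inv(P)

X = group_inverse(M)
assert np.allclose(M @ X @ M, M)      # M X M = M
assert np.allclose(X @ M @ X, X)      # X M X = X
assert np.allclose(M @ X, X @ M)      # M commutes with its group inverse
```

The commutation axiom in the last line is what separates the group inverse from the (generally non-commuting) Moore–Penrose pseudoinverse.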
To identify the equivalent representations the following identities are especially useful:\nUsing these identities one may for instance rewrite the first two of the optimal projection equations as follows:\nThis representation is both relatively simple and suitable for numerical computations.\nIf all the matrices in the reduced-order LQG problem formulation are time-invariant and if the horizon formula_49 tends to infinity, the optimal reduced-order LQG controller becomes time-invariant and so do the OPE. In that case the derivatives on the left hand side of the OPE are zero.\nDiscrete-time.\nSimilar to the continuous-time case, in the discrete-time case the difference with the conventional discrete-time full-order LQG problem is the a-priori fixed reduced-order formula_50 of the LQG controller state dimension. As in continuous-time, to state the discrete-time OPE it is convenient to introduce the following two matrices:\nThen the discrete-time OPE is\nThe oblique projection matrix is given by\nThe nonnegative symmetric matrices formula_60 that solve the discrete-time OPE determine the reduced-order LQG controller matrices formula_61 and formula_62:\nIn the equations above the matrices formula_67 are two matrices with the following properties:\nThey can be obtained from a projective factorization of formula_69. To identify equivalent representations of the discrete-time OPE the following identities are especially useful:\nAs in the continuous-time case if all the matrices in the problem formulation are time-invariant and if the horizon formula_71 tends to infinity the reduced-order LQG controller becomes time-invariant. Then the discrete-time OPE converge to a steady state solution that determines the time-invariant reduced-order LQG controller.\nThe discrete-time OPE apply also to discrete-time systems with variable state, input and output dimensions (discrete-time systems with time-varying dimensions). 
Such systems arise in the case of digital controller design if the sampling occurs asynchronously.", "Automation-Control": 0.9960323572, "Qwen2": "Yes"} {"id": "43973770", "revid": "7903804", "url": "https://en.wikipedia.org/wiki?curid=43973770", "title": "Future Immersive Training Environment", "text": "The Future Immersive Training Environment (FITE) Joint Capability Technology Demonstration (JCTD) was a three-year $36-million Department of Defense initiative to demonstrate the value of advanced small unit immersive infantry training systems. It demonstrated infantry applications of virtual reality, mixed reality, and augmented reality.", "Automation-Control": 0.8066146374, "Qwen2": "Yes"} {"id": "33100241", "revid": "37823666", "url": "https://en.wikipedia.org/wiki?curid=33100241", "title": "Hinge loss", "text": "In machine learning, the hinge loss is a loss function used for training classifiers. The hinge loss is used for \"maximum-margin\" classification, most notably for support vector machines (SVMs).\nFor an intended output and a classifier score , the hinge loss of the prediction is defined as\nNote that formula_2 should be the \"raw\" output of the classifier's decision function, not the predicted class label. For instance, in linear SVMs, formula_3, where formula_4 are the parameters of the hyperplane and formula_5 is the input variable(s).\nWhen and have the same sign (meaning predicts the right class) and formula_6, the hinge loss formula_7. When they have opposite signs, formula_8 increases linearly with , and similarly if formula_9, even if it has the same sign (correct prediction, but not by enough margin).\nExtensions.\nWhile binary SVMs are commonly extended to multiclass classification in a one-vs.-all or one-vs.-one fashion,\nit is also possible to extend the hinge loss itself for such an end. Several different variations of multiclass hinge loss have been proposed. 
For example, Crammer and Singer\ndefined it for a linear classifier as\nwhere formula_11 is the target label, formula_12 and formula_13 are the model parameters.\nWeston and Watkins provided a similar definition, but with a sum rather than a max:\nIn structured prediction, the hinge loss can be further extended to structured output spaces. Structured SVMs with margin rescaling use the following variant, where denotes the SVM's parameters, the SVM's predictions, the joint feature function, and the Hamming loss:\nOptimization.\nThe hinge loss is a convex function, so many of the usual convex optimizers used in machine learning can work with it. It is not differentiable, but has a subgradient with respect to model parameters of a linear SVM with score function formula_16 that is given by\nHowever, since the derivative of the hinge loss at formula_18 is undefined, smoothed versions may be preferred for optimization, such as Rennie and Srebro's\nor the quadratically smoothed\nsuggested by Zhang. The modified Huber loss formula_21 is a special case of this loss function with formula_22, specifically formula_23.", "Automation-Control": 0.9773969054, "Qwen2": "Yes"} {"id": "2945851", "revid": "2414730", "url": "https://en.wikipedia.org/wiki?curid=2945851", "title": "Progressive stamping", "text": "Progressive die stamping is a metalworking method that can encompass punching, coining, bending and several other ways of modifying metal raw material, combined with an automatic feeding system.\nThe feeding system pushes a strip of metal (as it unrolls from a coil) through all of the stations of a progressive stamping die. Each station performs one or more operations until a finished part is made. The final station is a cutoff operation, which separates the finished part from the carrying web. The carrying web, along with metal that is punched away in previous operations, is treated as scrap metal. 
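The binary hinge loss max(0, 1 − y·f(x)) and the subgradient described above can be sketched directly (the helper names are illustrative):

```python
import numpy as np

def hinge_loss(y: float, score: float) -> float:
    """Binary hinge loss l(y, f(x)) = max(0, 1 - y * f(x)); y in {-1, +1},
    score is the raw decision value w.x + b, not the predicted label."""
    return max(0.0, 1.0 - y * score)

def hinge_subgradient(w: np.ndarray, x: np.ndarray, y: float) -> np.ndarray:
    """A subgradient of max(0, 1 - y * w.x) with respect to w:
    -y * x when the margin is violated (y * w.x < 1), else the zero vector."""
    return -y * x if y * (w @ x) < 1.0 else np.zeros_like(x)

# Correct and confident (margin >= 1): no loss.
assert hinge_loss(+1.0, 2.3) == 0.0
# Correct but inside the margin: small positive loss.
assert abs(hinge_loss(+1.0, 0.4) - 0.6) < 1e-12
# Wrong side of the boundary: loss grows linearly with the score.
assert abs(hinge_loss(-1.0, 0.4) - 1.4) < 1e-12
```

At a margin of exactly 1 the loss is not differentiable; the subgradient above picks the zero branch there, which is one valid choice.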
Both are cut away, knocked down (or out of the dies) and then ejected from the die set, and in mass production are often transferred to scrap bins via underground scrap material conveyor belts.\nThe progressive stamping die is placed into a reciprocating stamping press. As the press moves up, the top die moves with it, which allows the material to feed. When the press moves down, the die closes and performs the stamping operation. With each stroke of the press, a completed part is removed from the die.\nSince additional work is done in each \"station\" of the die, it is important that the strip be advanced very precisely so that it aligns within a few thousandths of an inch as it moves from station to station. Bullet shaped or conical \"pilots\" enter previously pierced round holes in the strip to assure this alignment since the feeding mechanism usually cannot provide the necessary precision in feed length.\nProgressive stamping can also be produced on transfer presses. These are presses that transfer the components from one station to the next with the use of mechanical \"fingers\". For mass production of stamped parts which do require complicated in-press operations, it is always advisable to use a progressive press. One of the advantages of this type of press is the production cycle time. Depending upon the part, productions can easily run well over 800 parts/minute. One of the disadvantages of this type of press is that it is not suitable for high precision deep drawing which is when the depth of the stamping exceeds the diameter of the part. When necessary, this process is performed upon a transfer press, which run at slower speeds, and rely on the mechanical fingers to hold the component in place during the entire forming cycle. In the case of the progressive press, only part of the forming cycle can be guided by spring-loaded sleeves or similar, which result in concentricity and ovality issues and non uniform material thickness. 
\nOther disadvantages of progressive presses compared to transfer presses are: the increased raw-material input required to carry parts through the die; more expensive tooling, because the stations are built into blocks with very little independent adjustment per station; and the impossibility of performing in-press processes that require the part to leave the strip (for example beading, necking, flange curling, thread rolling, and rotary stamping).\nThe dies are usually made of tool steel to withstand the high shock loading involved, retain the necessary sharp cutting edge, and resist the abrasive forces involved. \nThe cost is determined by the number of features, which determine what tooling will need to be used. Engineers keep the features as simple as possible to keep the cost of tooling to a minimum. Features that are close together can be a problem because they may not leave enough clearance for the punch, which could require an additional station. It can also be problematic to have narrow cuts and protrusions.\nApplications.\nA representative example of the product of a progressive die is the lid of a beverage can. The pull tab is made in one progressive stamping process and the lid & assembly is made in another, the pull tab simultaneously feeding at a right angle into the lid & assembly process. Various car brake calipers also contain plates that are bent into shape, and possibly cut, using these methods.", "Automation-Control": 0.9645199776, "Qwen2": "Yes"} {"id": "2954049", "revid": "7903804", "url": "https://en.wikipedia.org/wiki?curid=2954049", "title": "Iterative learning control", "text": "Iterative Learning Control (ILC) is a method of tracking control for systems that work in a repetitive mode. Examples of systems that operate in a repetitive manner include robot arm manipulators, chemical batch processes and reliability testing rigs. In each of these tasks the system is required to perform the same action over and over again with high precision. 
This action is represented by the objective of accurately tracking a chosen reference signal formula_1 on a finite time interval.\nRepetition allows the system to improve tracking accuracy from repetition to repetition, in effect learning the required input needed to track the reference exactly. The learning process uses information from previous repetitions to improve the control signal, ultimately enabling a suitable control action to be found iteratively. The internal model principle yields conditions under which perfect tracking can be achieved but the design of the control algorithm still leaves many decisions to be made to suit the application. A typical, simple control law is of the form:\nwhere formula_3 is the input to the system during the pth repetition, formula_4 is the tracking error during the pth repetition and K is a design parameter representing operations on formula_4. Achieving perfect tracking through iteration is represented by the mathematical requirement of convergence of the input signals as formula_6 becomes large whilst the rate of this convergence represents the desirable practical need for the learning process to be rapid. There is also the need to ensure good algorithm performance even in the presence of uncertainty about the details of process dynamics. 
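A scalar-gain instance of the ILC update law above, applied to a hypothetical first-order plant, can be sketched as follows (plant and gain values are illustrative; convergence here follows because the iteration contracts when |1 − K·b| < 1):

```python
import numpy as np

# Hypothetical first-order plant y(t+1) = a*y(t) + b*u(t),
# repeated over a finite interval of N samples in every trial.
a, b, N = 0.5, 1.0, 20
ref = np.sin(np.linspace(0.0, np.pi, N))   # chosen reference signal

def run_trial(u):
    y = np.zeros(N)
    for t in range(N - 1):
        y[t + 1] = a * y[t] + b * u[t]
    return y

K = 0.8            # learning gain: |1 - K*b| = 0.2 < 1, so ILC converges
u = np.zeros(N)
errs = []
for trial in range(50):
    y = run_trial(u)
    e = ref - y
    errs.append(np.abs(e[1:]).max())
    # u_{p+1}(t) = u_p(t) + K * e_p(t+1): the one-step shift accounts
    # for the plant's unit relative degree.
    u[:-1] += K * e[1:]

# The tracking error shrinks across trials: errs[-1] << errs[0].
```

The learning happens in the iteration domain: each pass reuses the whole error trajectory of the previous pass, which is what distinguishes ILC from ordinary feedback.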
The operation formula_7 is crucial to achieving design objectives and ranges from simple scalar gains to sophisticated optimization computations.", "Automation-Control": 0.9789548516, "Qwen2": "Yes"} {"id": "100638", "revid": "1152974559", "url": "https://en.wikipedia.org/wiki?curid=100638", "title": "Controllability", "text": "Controllability is an important property of a control system and plays a crucial role in many control problems, such as stabilization of unstable systems by feedback, or optimal control.\nControllability and observability are dual aspects of the same problem.\nRoughly, the concept of controllability denotes the ability to move a system around in its entire configuration space using only certain admissible manipulations. The exact definition varies slightly within the framework or the type of models applied.\nThe following are examples of variations of controllability notions which have been introduced in the systems and control literature:\nState controllability.\nThe state of a deterministic system, which is the set of values of all the system's state variables (those variables characterized by dynamic equations), completely describes the system at any given time. 
In particular, no information on the past of a system is needed to help in predicting the future, if the states at the present time are known and all current and future values of the control variables (those whose values can be chosen) are known.\n\"Complete state controllability\" (or simply \"controllability\" if no other context is given) describes the ability of an external input (the vector of control variables) to move the internal state of a system from any initial state to any final state in a finite time interval.\nThat is, we can informally define controllability as follows:\nIf for any initial state formula_1 and any final state formula_2 there exists an input sequence to transfer the system state from formula_1 to formula_2 in a finite time interval, then the system modeled by the state-space representation is controllable. For the simplest example of a continuous, LTI system, the row dimension of the state space expression formula_5 determines the interval; each row contributes a vector in the state space of the system. If there are not enough such vectors to span the state space of formula_6, then the system cannot achieve controllability. 
It may be necessary to modify formula_7 and formula_8 to better approximate the underlying differential relationships it estimates to achieve controllability.\nControllability does not mean that a reached state can be maintained, merely that any state can be reached.\nControllability does not mean that arbitrary paths can be made through state space, only that there exists a path within the prescribed finite time interval.\nContinuous linear systems.\nConsider the continuous linear system \nThere exists a control formula_11 from state formula_12 at time formula_13 to state formula_14 at time formula_15 if and only if formula_16 is in the column space of\nwhere formula_18 is the state-transition matrix, and formula_19 is the Controllability Gramian.\nIn fact, if formula_20 is a solution to formula_21 then a control given by formula_22 would make the desired transfer.\nNote that the matrix formula_23 defined as above has the following properties:\nRank condition for controllability.\nThe Controllability Gramian involves integration of the state-transition matrix of a system. A simpler condition for controllability is a rank condition analogous to the Kalman rank condition for time-invariant systems. \nConsider a continuous-time linear system formula_31 smoothly varying in an interval formula_32 of formula_33:\nThe state-transition matrix formula_18 is also smooth. Introduce the n x m matrix-valued function formula_37 and define \nConsider the matrix of matrix-valued functions obtained by listing all the columns of the formula_40, formula_41:\nformula_42.\nIf there exists a formula_43 and a nonnegative integer k such that formula_44, then formula_31 is controllable.\nIf formula_31 is also analytically varying in an interval formula_32, then formula_31 is controllable on every nontrivial subinterval of formula_32 if and only if there exists a formula_43 and a nonnegative integer k such that formula_51. 
\nThe above methods can still be complex to check, since they involve the computation of the state-transition matrix formula_18. Another equivalent condition is defined as follows. Let formula_53, and for each formula_54, define \nIn this case, each formula_57 is obtained directly from the data formula_58. The system is controllable if there exists a formula_43 and a nonnegative integer formula_60 such that formula_61.\nExample.\nConsider a system varying analytically in formula_62 and matrices\nformula_63, formula_64 \nThen \nformula_65 and since this matrix has rank 3, the system is controllable on every nontrivial interval of formula_33.\nContinuous linear time-invariant (LTI) systems.\nConsider the continuous linear time-invariant system\nwhere \nThe formula_83 controllability matrix is given by\nThe system is controllable if the controllability matrix has full row rank (i.e. formula_85).\nDiscrete linear time-invariant (LTI) systems.\nFor a discrete-time linear state-space system (i.e. time variable formula_86) the state equation is\nwhere formula_75 is an formula_76 matrix and formula_77 is a formula_78 matrix (i.e. formula_73 is formula_93 inputs collected in a formula_74 vector). The test for controllability is that the formula_83 matrix\nhas full row rank (i.e., formula_97). 
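The Kalman rank test just stated, which applies in the same form to both the continuous and the discrete LTI cases, can be sketched as follows (the example matrices are hypothetical):

```python
import numpy as np

def controllability_matrix(A: np.ndarray, B: np.ndarray) -> np.ndarray:
    """Build [B, AB, A^2 B, ..., A^(n-1) B]; the Kalman test says the
    LTI system is controllable iff this matrix has full row rank n."""
    n = A.shape[0]
    blocks, M = [], B
    for _ in range(n):
        blocks.append(M)
        M = A @ M
    return np.hstack(blocks)

def is_controllable(A: np.ndarray, B: np.ndarray) -> bool:
    return np.linalg.matrix_rank(controllability_matrix(A, B)) == A.shape[0]

# A double integrator driven at one end is controllable;
# a decoupled state the input never touches is not.
A1 = np.array([[0.0, 1.0], [0.0, 0.0]]); B1 = np.array([[0.0], [1.0]])
A2 = np.array([[1.0, 0.0], [0.0, 2.0]]); B2 = np.array([[1.0], [0.0]])
assert is_controllable(A1, B1)       # rank([B, AB]) = 2
assert not is_controllable(A2, B2)   # the second state is unreachable
```

The same rank computation answers both the continuous and discrete questions because only the pair (A, B) enters the test.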
That is, if the system is controllable, formula_98 will have formula_99 columns that are linearly independent; if formula_99 columns of formula_98 are linearly independent, each of the formula_99 states is reachable by giving the system proper inputs through the variable formula_103.\nDerivation.\nGiven the state formula_104 at an initial time, arbitrarily denoted as \"k\"=0, the state equation gives formula_105 then formula_106 and so on with repeated back-substitutions of the state variable, eventually yielding\nor equivalently\nImposing any desired value of the state vector formula_109 on the left side, this can always be solved for the stacked vector of control vectors if and only if the matrix of matrices at the beginning of the right side has full row rank.\nExample.\nFor example, consider the case when formula_110 and formula_111 (i.e. only one control input). Thus, formula_77 and formula_113 are formula_114 vectors. If formula_115 has rank 2 (full rank), then formula_77 and formula_117 are linearly independent and span the entire plane. 
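The derivation above can be made concrete for the single-input n = 2 case: after two steps from the origin, x(2) = AB·u(0) + B·u(1), so the inputs reaching any target are found by solving [B AB][u(1); u(0)] = x*. A sketch with hypothetical system matrices:

```python
import numpy as np

# Hypothetical single-input n = 2 system, zero initial state.
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
B = np.array([[0.0],
              [1.0]])

# After two steps: x(2) = A B u(0) + B u(1) = [B  AB] [u(1); u(0)].
C2 = np.hstack([B, A @ B])            # rank 2 => b and Ab span the plane
assert np.linalg.matrix_rank(C2) == 2

target = np.array([3.0, -2.0])        # an arbitrary point of the plane
u1_u0 = np.linalg.solve(C2, target)   # inputs that reach it in two steps

# Simulate forward to confirm the state lands exactly on the target.
x = np.zeros(2)
for u in (u1_u0[1], u1_u0[0]):        # apply u(0) first, then u(1)
    x = A @ x + B[:, 0] * u
assert np.allclose(x, target)
```

If B and AB were collinear, `solve` would fail for targets off their common line, which is the rank-1 case described next.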
If the rank is 1, then formula_77 and formula_117 are collinear and do not span the plane.\nAssume that the initial state is zero.\nAt time formula_120: formula_121\nAt time formula_122: formula_123\nAt time formula_120 all of the reachable states are on the line formed by the vector formula_77.\nAt time formula_122 all of the reachable states are linear combinations of formula_117 and formula_77.\nIf the system is controllable then these two vectors span the entire plane, and this can be done by time formula_129.\nThe assumption made that the initial state is zero is merely for convenience.\nClearly if all states can be reached from the origin then any state can be reached from another state (merely a shift in coordinates).\nThis example holds for all positive formula_99, but the case of formula_110 is easier to visualize.\nAnalogy for example of \"n\" = 2.\nConsider an analogy to the previous example system.\nYou are sitting in your car on an infinite, flat plane and facing north.\nThe goal is to reach any point in the plane by driving a distance in a straight line, coming to a full stop, turning, and then driving another distance in a straight line.\nIf your car has no steering then you can only drive straight, which means you can only drive on a line (in this case the north-south line since you started facing north).\nThe lack of steering case would be analogous to when the rank of formula_79 is 1 (the two distances you drove are on the same line).\nNow, if your car did have steering then you could easily drive to any point in the plane and this would be the analogous case to when the rank of formula_79 is 2.\nIf you change this example to formula_134 then the analogy would be flying in space to reach any position in 3D space (ignoring the orientation of the aircraft).\nYou are allowed to:\nAlthough the 3-dimensional case is harder to visualize, the concept of controllability is still analogous.\nNonlinear systems.\nNonlinear systems in the control-affine 
form\nare locally accessible about formula_12 if the accessibility distribution formula_137 spans formula_99 space, when formula_99 equals the rank of formula_140 and R is given by:\nHere, formula_142 is the repeated Lie bracket operation defined by\nThe controllability matrix for linear systems in the previous section can in fact be derived from this equation.\nNull Controllability.\nIf a discrete control system is null-controllable, it means that there exists a controllable formula_103 so that formula_145 for some initial state formula_146. In other words, it is equivalent to the condition that there exists a matrix formula_147 such that formula_148 is nilpotent.\nThis can be easily shown by controllable-uncontrollable decomposition.\nOutput controllability.\n\"Output controllability\" is the related notion for the output of the system (denoted \"y\" in the previous equations); the output controllability describes the ability of an external input to move the output from any initial condition to any final condition in a finite time interval. It is not necessary that there is any relationship between state controllability and output controllability. In particular:\nFor a linear continuous-time system, like the example above, described by matrices formula_75, formula_77, formula_79, and formula_81, the formula_153 \"output controllability matrix\"\nhas full row rank (i.e. rank formula_155) if and only if the system is output controllable.\nControllability under input constraints.\nIn systems with limited control authority, it is often no longer possible to move any initial state to any final state inside the controllable subspace. This phenomenon is caused by constraints on the input that could be inherent to the system (e.g. due to saturating actuator) or imposed on the system for other reasons (e.g. due to safety-related concerns). 
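The output controllability test described above can be sketched the same way as the state-controllability rank test; the hypothetical example below also illustrates the point that output controllability need not require state controllability:

```python
import numpy as np

def output_controllability_matrix(A, B, C, D):
    """Build [CB, CAB, ..., C A^(n-1) B, D]; the system is output
    controllable iff this matrix has full row rank equal to the
    number of outputs."""
    n = A.shape[0]
    blocks, M = [], B
    for _ in range(n):
        blocks.append(C @ M)
        M = A @ M
    blocks.append(D)
    return np.hstack(blocks)

# Hypothetical system: state x2 is never reached by the input, so the
# state is NOT controllable, but the output y = x1 only sees the
# controllable part, so the output IS controllable.
A = np.array([[1.0, 0.0], [0.0, 2.0]])
B = np.array([[1.0], [0.0]])
C = np.array([[1.0, 0.0]])
D = np.zeros((1, 1))
OCM = output_controllability_matrix(A, B, C, D)
assert np.linalg.matrix_rank(OCM) == C.shape[0]   # output controllable
```

The rank threshold here is the number of outputs (rows of C), not the state dimension, which is why the two notions can disagree.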
The controllability of systems with input and state constraints is studied in the context of reachability and viability theory.\nControllability in the behavioral framework.\nIn the so-called behavioral system theoretic approach due to Willems (see people in systems and control), models considered do not directly define an input–output structure. In this framework systems are described by admissible trajectories of a collection of variables, some of which might be interpreted as inputs or outputs.\nA system is then defined to be controllable in this setting, if any past part of a behavior (trajectory of the external variables) can be concatenated with any future trajectory of the behavior in such a way that the concatenation is contained in the behavior, i.e. is part of the admissible system behavior.\nStabilizability.\nA slightly weaker notion than controllability is that of stabilizability. A system is said to be stabilizable when all uncontrollable state variables can be made to have stable dynamics. Thus, even though some of the state variables cannot be controlled (as determined by the controllability test above) all the state variables will still remain bounded during the system's behavior.\nReachable set.\nLet T ∈ Т and x ∈ \"X\" (where X is the set of all possible states and Т is an interval of time). The reachable set from x in time T is defined as: \nformula_156, where xz denotes that there exists a state transition from x to z in time T.\nFor autonomous systems the reachable set is given by :\nwhere R is the controllability matrix.\nIn terms of the reachable set, the system is controllable if and only if formula_158. \nProof \nWe have the following equalities:\nConsidering that the system is controllable, the columns of R should be linearly independent. 
So:\nA set related to the reachable set is the controllable set, defined by:\nThe relation between reachability and controllability is presented by Sontag:\n(a) An n-dimensional discrete linear system is controllable if and only if:\n(b) A continuous-time linear system is controllable if and only if:\nif and only if formula_168 for all e>0. \nExample\nLet the system be an n-dimensional discrete time-invariant system from the formula:\nIt follows that the future state is in formula_170 ⇔ it is in the image of the linear map:\nwhich maps,\nWhen formula_173 and formula_174 we identify R(A,B) with an n by nm matrix whose columns are the columns of formula_175 in that order. If the system is controllable the rank of formula_171 is n. If this is true, the image of the linear map R is all of X. Based on that, we have:", "Automation-Control": 0.9977129698, "Qwen2": "Yes"} {"id": "100643", "revid": "46165493", "url": "https://en.wikipedia.org/wiki?curid=100643", "title": "List of people in systems and control", "text": "This is an alphabetical list of people who have made significant contributions in the fields of system analysis and control theory.\nEminent researchers.\nThe eminent researchers (born after 1920) include the winners of at least one award of the IEEE Control Systems Award, the Giorgio Quazza Medal, the Hendrik W. Bode Lecture Prize, the Richard E. Bellman Control Heritage Award, the Rufus Oldenburger Medal, or higher awards such as the IEEE Medal of Honor and the National Medal of Science. The earlier pioneers such as Nicolas Minorsky (1885–1970), Harry Nyquist (1889–1976), Harold Locke Hazen (1901–1980), Charles Stark Draper (1901–1987), Hendrik Wade Bode (1905–1982), Gordon S. Brown (1907–1996), John F. Coales (1907–1999), Rufus Oldenburger (1908–1969), John R. Ragazzini (1912–1988), Nathaniel B. 
Nichols (1914–1997), John Zaborszky (1914–2008) and Harold Chestnut (1917–2001) are not included.\nHistorical figures in systems and control.\nThese people have made outstanding historical contributions to systems and control.", "Automation-Control": 0.9552193284, "Qwen2": "Yes"} {"id": "4515690", "revid": "7903804", "url": "https://en.wikipedia.org/wiki?curid=4515690", "title": "Gun drill", "text": "Gun drills (also called through-coolant drills) are straight-fluted drills which allow cutting fluid (either compressed air or a suitable liquid) to be injected through the drill's hollow body to the cutting face. They are used for deep hole drilling—a depth-to-diameter ratio of 300:1 or more is possible. Gun barrels are the obvious example; hence the name. Other uses include moldmaking, diemaking, and the manufacture of combustion engine parts such as crankcases and cylinder heads, as well as woodwind musical instruments such as uilleann pipes, since gun drills can drill long, straight holes in metal, wood, and some plastics. The coolant provides lubrication and cooling to the cutting edges and removes the swarf or chips from the hole. Modern gun drills use carbide tips to prolong life and reduce total cost when compared with steel tips. Speed of drilling depends on the material being drilled, rotational speed, and the drill diameter; a high-speed drill can cut a hole in P20 steel at 30 inches per minute.\nGun drilling can be done on several kinds of machine tools. On lathes, it is generally practical with hole depths of less than 50 diameters. There are also purpose-built gun drilling machines, where longer aspect ratios can be drilled.\nRequirement.\nWith a standard twist drill, it is difficult to drill a straight and accurately sized hole of a depth more than about 5 times the diameter. This is a problem in many manufacturing processes, especially the firearms industry: the barrel of a gun must be very straight and accurately sized.
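The depth-to-diameter ratios quoted here (roughly 5:1 for a twist drill, under 50:1 for gun drilling on a lathe, 300:1 or more for purpose-built machines) and the 30 in/min penetration rate make for a quick back-of-the-envelope comparison; the 0.25 in example diameter below is an arbitrary illustrative value:

```python
def max_depth(diameter_in, depth_to_diameter_ratio):
    """Maximum practical hole depth for a given depth-to-diameter ratio."""
    return diameter_in * depth_to_diameter_ratio

diameter = 0.25  # in (illustrative)
twist = max_depth(diameter, 5)    # standard twist drill, ~5:1
lathe = max_depth(diameter, 50)   # gun drilling on a lathe, <50:1
gun = max_depth(diameter, 300)    # purpose-built gun drilling, 300:1 or more

print(twist, lathe, gun)  # 1.25 12.5 75.0 (inches)

# At the quoted 30 in/min rate in P20 steel, the 75 in gun-drilled
# hole would need about 2.5 minutes of actual cutting time.
print(gun / 30)  # 2.5
```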
Gun barrels are far longer than their inside diameter; as an example, the caliber barrel of the M16 rifle is long, nearly 90 times the diameter of the bore. The gun drill was developed to drill such long, straight holes.\nGun drilling is possible over a range of depths and diameters. For diameters between , gun drilling can be performed successfully with special equipment. It is a common process between in diameter. It is also possible in the range, though less efficient than BTA deep hole drilling.\nTypes.\nThere are three basic types of deep hole drilling. Processes are categorized by how the cutting coolant flushes heat and chips from the cutting face. The three types of deep drilling are:", "Automation-Control": 0.7980799675, "Qwen2": "Yes"} {"id": "7423263", "revid": "1150942572", "url": "https://en.wikipedia.org/wiki?curid=7423263", "title": "Control reconfiguration", "text": "Control reconfiguration is an active approach in control theory to achieve fault-tolerant control for dynamic systems. It is used when severe faults, such as actuator or sensor outages, cause a break-up of the control loop, which must be restructured to prevent failure at the system level. In addition to loop restructuring, the controller parameters must be adjusted to accommodate changed plant dynamics. Control reconfiguration is a building block toward increasing the dependability of systems under feedback control.\nReconfiguration problem.\nFault modelling.\nThe figure to the right shows a plant controlled by a controller in a standard control loop.\nThe nominal linear model of the plant is\nformula_1\nThe plant subject to a fault (indicated by a red arrow in the figure) is modelled in general by\nformula_2\nwhere the subscript formula_3 indicates that the system is faulty. This approach models multiplicative faults by modified system matrices.
Specifically, actuator faults are represented by the new input matrix formula_4, sensor faults are represented by the output map formula_5, and internal plant faults are represented by the system matrix formula_6.\nThe upper part of the figure shows a supervisory loop consisting of \"fault detection and isolation\" (FDI) and \"reconfiguration\" which changes the loop by\nTo this end, the vectors of inputs and outputs contain \"all available signals\", not just those used by the controller in fault-free operation.\nAlternative scenarios can model faults as an additive external signal formula_9 influencing the state derivatives and outputs as follows:\nformula_10\nReconfiguration goals.\nThe goal of reconfiguration is to keep the reconfigured control-loop performance sufficient for preventing plant shutdown. The following goals are distinguished:\nInternal stability of the reconfigured closed loop is usually the minimum requirement. The equilibrium recovery goal (also referred to as weak goal) refers to the steady-state output equilibrium which the reconfigured loop reaches after a given constant input. This equilibrium must equal the nominal equilibrium under the same input (as time tends to infinity). This goal ensures steady-state reference tracking after reconfiguration. The output trajectory recovery goal (also referred to as strong goal) is even stricter. It requires that the dynamic response to an input must equal the nominal response at all times. 
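The multiplicative fault model described here (a modified input matrix standing in for an actuator fault) can be illustrated with a short simulation. A minimal NumPy sketch; all matrices and the 50% effectiveness loss are hypothetical, not from the article:

```python
import numpy as np

# Hypothetical discrete-time plant x[k+1] = A x[k] + B u[k], y[k] = C x[k].
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
B = np.array([[1.0],
              [0.5]])
C = np.array([[1.0, 0.0]])

# Multiplicative actuator fault: the nominal input matrix B is replaced
# by a faulty B_f (here the actuator retains only half its effectiveness).
B_f = 0.5 * B

def step_response(B_used, steps=20):
    """Output trajectory for a constant unit input, starting from x = 0."""
    x = np.zeros((2, 1))
    ys = []
    for _ in range(steps):
        ys.append((C @ x).item())
        x = A @ x + B_used * 1.0
    return ys

y_nominal = step_response(B)
y_faulty = step_response(B_f)

# The faulty trajectory deviates from the nominal one; detecting that
# deviation is the job of the FDI block, and compensating for it is the
# job of the reconfiguration step.
print(y_nominal[-1], y_faulty[-1])
```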
Further restrictions are imposed by the state trajectory recovery goal, which requires that the state trajectory be restored to the nominal case by the reconfiguration under any input.\nUsually a combination of goals is pursued in practice, such as the equilibrium-recovery goal with stability.\nThe question of whether these or similar goals can be reached for specific faults is addressed by reconfigurability analysis.\nReconfiguration approaches.\nFault hiding.\nThis paradigm aims at keeping the nominal controller in the loop. To this end, a reconfiguration block can be placed between the faulty plant and the nominal controller. Together with the faulty plant, it forms the reconfigured plant. The reconfiguration block has to fulfill the requirement that the behaviour of the reconfigured plant matches the behaviour of the nominal, that is, fault-free plant.\nLinear model following.\nLinear model following attempts to recover a formal feature of the nominal closed loop. In the classical pseudo-inverse method, the closed loop system matrix formula_11 of a state-feedback control structure is used. The new controller formula_12 is found to approximate formula_13 in the sense of an induced matrix norm.\nIn perfect model following, a dynamic compensator is introduced to allow for the exact recovery of the complete loop behaviour under certain conditions.\nIn eigenstructure assignment, the nominal closed loop eigenvalues and eigenvectors (the eigenstructure) are recovered to the nominal case after a fault.\nOptimisation-based control schemes.\nOptimisation control schemes include: linear-quadratic regulator design (LQR), model predictive control (MPC) and eigenstructure assignment methods.\nProbabilistic approaches.\nSome probabilistic approaches have been developed.\nLearning control.\nLearning-control approaches include learning automata, neural networks, etc.\nMathematical tools and frameworks.\nThe methods by which reconfiguration is achieved differ considerably.
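The classical pseudo-inverse method mentioned under linear model following can be sketched numerically: given a nominal state feedback u = -Kx and a faulty input matrix, a new gain is chosen so that the faulty closed-loop matrix approximates the nominal one in a least-squares sense. All matrices below are hypothetical illustrations:

```python
import numpy as np

# Hypothetical plant and nominal state-feedback design.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
B = np.eye(2)
K = np.array([[1.0, 0.0],
              [0.0, 2.0]])   # nominal gain, u = -K x

A_cl = A - B @ K              # nominal closed-loop matrix

# Actuator fault: the second actuator drops to 50% effectiveness.
B_f = np.diag([1.0, 0.5])

# Pseudo-inverse method: choose K_f so that A - B_f K_f ≈ A_cl,
# i.e. K_f = pinv(B_f) (A - A_cl) = pinv(B_f) B K.
K_f = np.linalg.pinv(B_f) @ (A - A_cl)

A_cl_f = A - B_f @ K_f
print(np.allclose(A_cl_f, A_cl))  # True
```

Since this B_f has full rank, the recovery is exact; with a rank-deficient B_f the pseudo-inverse only yields the best approximation in the least-squares sense, which is the case the induced-norm formulation addresses.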
The following list gives an overview of mathematical approaches that are commonly used.\nSee also.\nPrior to control reconfiguration, it must at least be determined whether a fault has occurred (fault detection) and, if so, which components are affected (fault isolation). Preferably, a model of the faulty plant should be provided (fault identification). These questions are addressed by fault diagnosis methods.\nFault accommodation is another common approach to achieve fault tolerance. In contrast to control reconfiguration, accommodation is limited to internal controller changes. The sets of signals manipulated and measured by the controller are fixed, which means that the loop cannot be restructured.", "Automation-Control": 0.9919374585, "Qwen2": "Yes"} {"id": "44493625", "revid": "23646674", "url": "https://en.wikipedia.org/wiki?curid=44493625", "title": "Stabilized soil mixing plant", "text": "A stabilized soil mixing plant is a combination of machines used for mixing stabilized soil, which is used for highway construction, municipal road projects, and airport areas. The plant produces stabilized soil with different gradings in a continuous way. Such a plant usually contains a cement silo, a measuring and conveying system, and mixing devices.\nStabilized soil.\nStabilized soil is a mixture of lime, cement, coal ash, soil, sand, and other aggregates.\nClassification.\nStabilized soil mixing plants are of two kinds: the portable stabilized soil mixing plant and the stationary stabilized soil mixing plant. The portable stabilized soil mixing plant has wheels on each part and can be towed like a trailer, but has low productivity. The stationary plant has higher productivity but is less flexible, and needs a firm foundation.\nOperating principle.\nAll aggregates like lime, sand, soil, coal ash, and other materials are loaded into batching hoppers by a loading machine. After measuring, the belt feeder transports the aggregates into a mixing device.
Meanwhile, stabilizing powders like lime or cement are transferred from a powder material warehouse to the batch hopper by a spiral conveyor, and then moved to the belt feeder by a powder material feeder. All ingredients then go into the mixing device for final processing. Finally, the feeding belt conveyor takes the final product and delivers it to the storage warehouse.", "Automation-Control": 0.7140331864, "Qwen2": "Yes"} {"id": "44501478", "revid": "20611691", "url": "https://en.wikipedia.org/wiki?curid=44501478", "title": "Roll bonding", "text": "Roll bonding is a solid state, cold welding process, obtained through flat rolling of sheet metals. In roll bonding, two or more layers of different metals are passed through a pair of flat rollers under sufficient pressure to bond the layers. The pressure is high enough to deform the metals and reduce the combined thickness of the clad material. The mating surfaces must be previously prepared (scratched, cleaned, degreased) in order to increase their friction coefficient and remove any oxide layers.\nThe process can be performed at room temperature or at warm conditions. In warm roll bonding, heat is applied to pre-heat the sheets just before rolling, in order to increase their ductility and improve the strength of the weld. The strength of the rolled bonds depends on the main process parameters, including the rolling conditions (entry temperature of the sheets, amount of thickness reduction, rolling speed, etc.), the pre-rolling treatment conditions (annealing temperature and time, surface preparation techniques, etc.) and the post-rolling heat treatments.\nApplications.\nThe applications of roll bonding can be used for cladding of metal sheets, or as a sub-step of the accumulative roll bonding. Bonding of the sheets can be controlled by painting a pattern on one sheet; only the bare metal surfaces bond, and the un-bonded portion can be inflated if the sheet is heated and the coating vaporizes. 
This is used to make heat exchangers for refrigeration equipment.", "Automation-Control": 0.6160075068, "Qwen2": "Yes"} {"id": "54023783", "revid": "18872885", "url": "https://en.wikipedia.org/wiki?curid=54023783", "title": "Augmented marked graph", "text": "An augmented marked graph is essentially a Petri net with a specific set of places called resource places.\nIf these resource places and their associated arcs are removed, it becomes a marked graph in which every cycle is marked. For each resource place, there are pairs of outgoing and incoming transitions connected by elementary paths.\nApplication.\nAugmented marked graphs are often used for modelling systems with shared resources, such as manufacturing systems. Based on the special properties of augmented marked graphs, the properties of the modelled systems, such as liveness, boundedness and reversibility, can be effectively analyzed.", "Automation-Control": 0.9996834993, "Qwen2": "Yes"} {"id": "8726298", "revid": "7903804", "url": "https://en.wikipedia.org/wiki?curid=8726298", "title": "Clinching", "text": "In metalworking, clinching or press-joining is a bulk sheet metal forming process aimed at joining thin metal sheets without additional components, using special tools to plastically form an interlock between two or more sheets. The process is generally performed at room temperature, but in some special cases the sheets can be pre-heated to improve the material ductility and thereby avoid the formation of cracks during the process. Clinching is characterized by a series of advantages over competitive technologies:\nTools.\nBecause the process involves relatively low forces (ranging from 5 to 50 kN depending on the material to join, type of tools and sheet thicknesses), clinching generally involves reduced-size (often portable) machines. The tools typically consist of a punch and a die. Different tools have been developed so far, which can be classified into round and rectangular tools.
Round clinching tools include: fixed grooved dies, split dies (with 2–4 movable sectors) and flat dies. Such tools produce round joints which show almost identical mechanical behaviors in all plane directions. When round tools are adopted, the integrity of the sheet in the joint must be guaranteed in order to preserve a good mechanical behavior of the joints.\nOn the other hand, rectangular clinched joints exhibit behaviors which depend on the loading direction, and both sheets are intentionally sheared along the \"long direction\" in order to produce the interlock. The choice of the tools is highly influenced by:\nIn addition, the choice of the clinching tools highly affects the joining strength and the absorbed energy of a clinched connection, as well as the required joining force. Rectangular tools, for example, require lower joining forces than round tools because the material is sheared, while among the round clinching tools split dies require the minimum joining force and produce the largest interlock.\nOne benefit of clinching is the capability to join prepainted sheet metal, commonly used in the appliance industry, without damaging the painted surface. Clinching is an important means of fastening aluminum panels, such as hoods and decklids, in the automotive industry, due to the difficulty of spot welding aluminum.\nMain advantages as compared to welding.\nClinching is used primarily in the automotive, appliance and electronic industries, where it often replaces spot welding. Clinching does not require electricity or cooling of the electrodes commonly associated with spot welding. Being a mechanical joining process, clinching can be used to join materials with no electrical conductivity, such as polymers or plastic-metal composites. In addition, it does not require substrate preparation such as the pre-cleaning of surfaces required for welding processes.
This helps reduce the joining costs and the environmental impact (since chemical cleaning is not required).\nClinching does not generate sparks or fumes. The strength of a clinched joint can be tested non-destructively using a simple measuring instrument to measure the remaining thickness at the bottom of the joint, or the diameter of the produced button, depending on the type of tools employed. Life expectancy for clinching tools is in the hundreds of thousands of cycles, making it an economical process. Clinched connections performed on aluminum sheets have a higher fatigue life than spot welds.\nMain advantages as compared to adhesive joining.\nClinching does not require a pre-cleaning of the surfaces, which is needed before applying adhesives. Clinching is almost an instant joining process (the required joining time is less than a second), while adhesive joining often requires a much longer time, mainly owing to the curing of the joint (up to many hours). Clinched joints are less affected by environmental agents and the effects of aging.\nMain limitations.\nBecause it is based on the plastic deformation of the sheets, clinching is limited by the sheet material's formability (ductility). Metal ductility increases with temperature, so heat-assisted clinching processes have been developed, extending the clinching \"joinability\". Increasing the joining temperature reduces the material's yield stress, so that less joining force is required.
Different heating systems are used to heat the sheets before clinching:\nProlonged heating can increase the grain size or cause metallurgical changes in alloys, which can alter the mechanical behavior of the material at the joint site.\nMaterials.\nClinching has been widely employed for joining ductile metals, including the following:\nIt has recently been extended to other metals, such as:\nIt has also been extended to non-metallic materials, such as:", "Automation-Control": 0.9909584522, "Qwen2": "Yes"} {"id": "49030367", "revid": "7903804", "url": "https://en.wikipedia.org/wiki?curid=49030367", "title": "Negative imaginary systems", "text": "Negative imaginary (NI) systems theory was introduced by Lanzon and Petersen; a generalization of the theory was presented subsequently.\nIn the single-input single-output (SISO) case, such systems are defined by considering the properties of the imaginary part of the frequency response G(jω) and require the system to have no poles in the right half plane and formula_1 > 0 for all ω in (0, ∞). This means that a system is negative imaginary if it is stable and its Nyquist plot has a phase lag in [−π, 0] for all ω > 0.\nNegative Imaginary Definition.\nA square transfer function matrix formula_2 is NI if the following conditions are satisfied:\nThese conditions can be summarized as:\nNegative Imaginary Lemma.\nLet formula_18 be a minimal realization of the transfer function matrix formula_13.
Then, formula_13 is NI if and only if formula_21 and there exists a matrix\nformula_22 such that the following LMI is satisfied:\nformula_23\nThis result comes from positive real theory after converting the negative imaginary system to a positive real system for analysis.", "Automation-Control": 0.9996912479, "Qwen2": "Yes"} {"id": "6586031", "revid": "1071746053", "url": "https://en.wikipedia.org/wiki?curid=6586031", "title": "Current reality tree (theory of constraints)", "text": "One of the thinking processes in the theory of constraints, a current reality tree (CRT) is a tool to analyze many systems or organizational problems at once. By identifying root causes common to most or all of the problems, a CRT can greatly aid focused improvement of the system. A current reality tree is a directed graph.\nSimplified explanation.\nA CRT is a focusing procedure formulated by Eliyahu Goldratt, developer of the theory of constraints. This process is intended to help leaders gain understanding of cause and effect in a situation they want to improve. It treats multiple problems in a system as symptoms arising from one or a few ultimate root causes or systemic core problems. It describes, in a visual (cause-and-effect network) diagram, the main perceived symptoms (along with secondary or hidden ones that lead up to the perceived symptoms) of a problem scenario and ultimately the apparent root causes or core conflict. The benefit of building a CRT is that it identifies the connections or dependencies between perceived symptoms (effects) and root causes (core problems or conflicts) explicitly. If core problems are identified, prioritized, and tackled well, multiple undesirable effects in the system will disappear. Leaders may then focus on solving the few core problems which would cause the biggest positive systemic changes.\nContextual explanation.\nA CRT is a statement of an underlying core problem and the symptoms that arise from it. 
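The SISO frequency-domain characterization of negative imaginary systems given above can be checked numerically on a sample transfer function. A minimal sketch, assuming the common SISO NI test Im G(jω) ≤ 0 for all ω > 0 (equivalently, phase in [-π, 0]); the example G(s) = 1/(s + 1) is hypothetical:

```python
import numpy as np

# Hypothetical stable SISO example: G(s) = 1/(s + 1).
def G(s):
    return 1.0 / (s + 1.0)

omegas = np.logspace(-3, 3, 500)  # frequency grid, rad/s
imag_parts = np.array([G(1j * w).imag for w in omegas])

# Im G(jω) = -ω / (1 + ω²) <= 0 for all ω > 0, so this transfer
# function satisfies the NI frequency condition; its phase stays
# in [-π, 0] over the whole grid.
print(bool(np.all(imag_parts <= 0)))  # True
```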
It maps out a sequence of cause and effect from the core problem to the symptoms. Most of the symptoms will arise from one core problem or core conflict. Removing the core problem may well lead to removing each of the symptoms as well. Operationally, one works backwards from the apparent undesirable effects or symptoms to uncover the underlying core cause.\nExample.\nA CRT begins with a list of problems, known as undesirable effects (UDEs). These are assumed to be symptoms of a deeper common cause. To take a somewhat frivolous example, a car owner may have the following UDEs:\nThe CRT depicts a chain of cause-and-effect reasoning (if, and, then) in graphical form, where ellipses or circles represent an \"and\". The graphic is constructed by:\nThis approach tends to converge on a single root cause. In the illustrated case, the root cause of the above UDEs is seen to be a faulty handbrake.", "Automation-Control": 0.6302586794, "Qwen2": "Yes"} {"id": "6592539", "revid": "1461430", "url": "https://en.wikipedia.org/wiki?curid=6592539", "title": "Computer Aided Verification", "text": "In computer science, the International Conference on Computer-Aided Verification (CAV) is an annual academic conference on the theory and practice of computer-aided formal analysis of software and hardware systems, broadly known as formal methods. It is one of the highest-ranked conferences in computer science. Among the important results originally published in CAV are breakthrough techniques in model checking, such as Counterexample-Guided Abstraction Refinement (CEGAR) and partial order reduction.\nThe first CAV was held in 1989 in Grenoble, France.
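Since a current reality tree is a directed graph, finding its root causes amounts to finding nodes with no incoming edges. A small sketch following the handbrake example; the UDE names are illustrative, not from the article:

```python
# Cause -> effects edges for a hypothetical car-owner CRT;
# the faulty handbrake plays the role of the core problem.
edges = {
    "faulty handbrake": ["car rolls on slopes", "handbrake warning light on"],
    "car rolls on slopes": ["scraped bumper"],
}

# Root causes are nodes that appear as causes but never as effects.
causes = set(edges)
effects = {e for effs in edges.values() for e in effs}
root_causes = causes - effects

print(root_causes)  # {'faulty handbrake'}
```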
The CAV proceedings (1989-present) are published by Springer Science+Business Media and are open access.", "Automation-Control": 0.7145098448, "Qwen2": "Yes"} {"id": "38360943", "revid": "1170508415", "url": "https://en.wikipedia.org/wiki?curid=38360943", "title": "Milling (machining)", "text": "Milling is the process of machining using rotary cutters to remove material by advancing a cutter into a workpiece. This may be done by varying directions on one or several axes, cutter head speed, and pressure. Milling covers a wide variety of different operations and machines, on scales from small individual parts to large, heavy-duty gang milling operations. It is one of the most commonly used processes for machining custom parts to precise tolerances.\nMilling can be done with a wide range of machine tools. The original class of machine tools for milling was the milling machine (often called a mill). After the advent of computer numerical control (CNC) in the 1960s, milling machines evolved into \"machining centers\": milling machines augmented by automatic tool changers, tool magazines or carousels, CNC capability, coolant systems, and enclosures. Milling centers are generally classified as vertical machining centers (VMCs) or horizontal machining centers (HMCs).\nThe integration of milling into turning environments, and vice versa, began with live tooling for lathes and the occasional use of mills for turning operations. This led to a new class of machine tools, multitasking machines (MTMs), which are purpose-built to facilitate milling and turning within the same work envelope.\nProcess.\nMilling is a cutting process that uses a milling cutter to remove material from the surface of a work piece. The milling cutter is a rotary cutting tool, often with multiple cutting points. 
As opposed to drilling, where the tool is advanced along its rotation axis, the cutter in milling is usually moved perpendicular to its axis so that cutting occurs on the circumference of the cutter. As the milling cutter enters the work piece, the cutting edges (flutes or teeth) of the tool repeatedly cut into and exit from the material, shaving off chips (swarf) from the work piece with each pass. The cutting action is shear deformation; material is pushed off the work piece in tiny clumps that hang together to a greater or lesser extent (depending on the material) to form chips. This makes metal cutting somewhat different (in its mechanics) from slicing softer materials with a blade.\nThe milling process removes material by performing many separate, small cuts. This is accomplished by using a cutter with many teeth, spinning the cutter at high speed, or advancing the material through the cutter slowly; most often it is some combination of these three approaches. The speeds and feeds used are varied to suit a combination of variables. The speed at which the piece advances through the cutter is called feed rate, or just feed; it is most often measured as distance per time (inches per minute [in/min or ipm] or millimeters per minute [mm/min]), although distance per revolution or per cutter tooth are also sometimes used.\nThere are two major classes of milling process:\nMilling cutters.\nMany different types of cutting tools are used in the milling process. Milling cutters such as end mills may have cutting surfaces across their entire end surface, so that they can be drilled into the work piece (plunging). Milling cutters may also have extended cutting surfaces on their sides to allow for peripheral milling. Tools optimized for face milling tend to have only small cutters at their end corners.\nThe cutting surfaces of a milling cutter are generally made of a hard and temperature-resistant material, so that they wear slowly. 
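The feed-rate units discussed above are related by the standard machining formula: table feed (distance per time) equals feed per tooth times the number of teeth times the spindle speed. A small sketch with illustrative values:

```python
def table_feed(feed_per_tooth_mm, teeth, rpm):
    """Table feed in mm/min from per-tooth feed, tooth count, and spindle speed."""
    return feed_per_tooth_mm * teeth * rpm

# Illustrative values: 0.1 mm/tooth chip load, 4-flute cutter, 1000 rpm.
print(table_feed(0.1, 4, 1000))  # 400.0 mm/min
```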
A low-cost cutter may have surfaces made of high speed steel. More expensive but slower-wearing materials include cemented carbide. Thin-film coatings may be applied to decrease friction or further increase hardness.\nThere are cutting tools typically used in milling machines or machining centers to perform milling operations (and occasionally in other machine tools). They remove material by their movement within the machine (e.g., a ball nose mill) or directly from the cutter's shape (e.g., a form tool such as a hobbing cutter).\nAs material passes through the cutting area of a milling machine, the blades of the cutter remove swarf from the material at regular intervals. Surfaces cut by the side of the cutter (as in peripheral milling) therefore always contain regular ridges. The distance between ridges and the height of the ridges depend on the feed rate, the number of cutting surfaces, and the cutter diameter. With a narrow cutter and rapid feed rate, these revolution ridges can produce significant variations in the surface finish.\nThe face milling process can in principle produce very flat surfaces. However, in practice the result always shows visible trochoidal marks following the motion of points on the cutter's end face. These revolution marks give the characteristic finish of a face milled surface. Revolution marks can have significant roughness depending on factors such as flatness of the cutter's end face and the degree of perpendicularity between the cutter's rotation axis and feed direction. Often a final pass with a slow feed rate is used to improve the surface finish after the bulk of the material has been removed. In a precise face milling operation, the revolution marks will only be microscopic scratches due to imperfections in the cutting edge.\nGang milling refers to the use of two or more milling cutters mounted on the same arbor (that is, ganged) in a horizontal-milling setup.
All of the cutters may perform the same type of operation, or each cutter may perform a different type of operation. For example, if several workpieces need a slot, a flat surface, and an angular groove, a good method to cut these (within a non-CNC context) would be gang milling. All the completed workpieces would be the same, and milling time per piece would be minimized.\nGang milling was especially important before the CNC era, because for duplicate part production it was a substantial efficiency improvement over manual milling of one feature per operation, then changing machines (or changing the setup of the same machine) to cut the next operation. Today, CNC mills with automatic tool change and 4- or 5-axis control obviate gang-milling practice to a large extent.\nEquipment.\nMilling is performed with a milling cutter in various forms, held in a collet or similar device which, in turn, is held in the spindle of a milling machine.\nTypes and nomenclature.\nMill orientation is the primary classification for milling machines. The two basic configurations are vertical and horizontal – referring to the orientation of the rotating spindle upon which the cutter is mounted. However, there are alternative classifications according to method of control, size, purpose and power source.\nMill orientation.\nVertical.\nIn the vertical milling machine the spindle axis is vertically oriented. Milling cutters are held in the spindle and rotate on its axis. The spindle can generally be lowered (or the table can be raised, giving the same relative effect of bringing the cutter closer or deeper into the work), allowing plunge cuts and drilling. The depth to which the blades cut into the work can be controlled with a micrometer adjustment nut. There are two subcategories of vertical mills: the bed mill and the turret mill.\nTurret mills are often considered the more versatile of the two designs.\nA third type also exists, a lighter, more versatile machine, called a mill-drill.
The mill-drill is a close relative of the vertical mill and quite popular in light industry and with hobbyists. A mill-drill is similar in basic configuration to a very heavy drill press, but equipped with an X-Y table and a much larger column. They also typically use more powerful motors than a comparably sized drill press; most are multi-speed belt driven, with some models having a geared head or electronic speed control. They generally have quite heavy-duty spindle bearings to deal with the lateral loading on the spindle that is created by a milling operation. A mill-drill also typically raises and lowers the entire head, including motor, often on a dovetailed (sometimes round with rack and pinion) vertical column. A mill-drill also has a large quill that is generally locked during milling operations and released to facilitate drilling functions. Other differences that separate a mill-drill from a drill press may be a fine-tuning adjustment for the Z-axis, a more precise depth stop, the capability to lock the X, Y or Z axis, and often a system of tilting the head or the entire vertical column and powerhead assembly to allow angled cutting-drilling. Aside from size, the principal difference between these lighter machines and larger vertical mills is that the X-Y table is at a fixed elevation; the Z-axis is controlled by moving the head or quill down toward the X-Y table. A mill-drill typically has an internal taper fitting in the quill to take a collet chuck, face mills, or a Jacobs chuck, similar to the vertical mill.\nHorizontal.\nA horizontal mill has the same sort of table, but the cutters are mounted on a horizontal spindle, or arbor, mounted across the table. Many horizontal mills also feature a built-in rotary table that allows milling at various angles; this feature is called a \"universal table\".
While endmills and the other types of tools available to a vertical mill may be used in a horizontal mill, their real advantage lies in arbor-mounted cutters, called side and face mills, which have a cross section rather like a circular saw, but are generally wider and smaller in diameter. Because the cutters have good support from the arbor and have a larger cross-sectional area than an end mill, quite heavy cuts can be taken, enabling rapid material removal rates. These are used to mill grooves and slots. Plain mills are used to shape flat surfaces. Several cutters may be ganged together on the arbor to mill a complex shape of slots and planes. Special cutters can also cut grooves, bevels, radii, or indeed any section desired. These specialty cutters tend to be expensive. Simplex mills have one spindle, and duplex mills have two. It is also easier to cut gears on a horizontal mill. Some horizontal milling machines are equipped with a power-take-off provision on the table. This allows the table feed to be synchronized to a rotary fixture, enabling the milling of spiral features such as hypoid gears.\nUniversal.\nA universal milling machine has the facility to use either a horizontal spindle or a vertical spindle. The latter is sometimes mounted on a two-axis turret, enabling the spindle to be pointed in any desired direction. The two options may be driven independently or from one motor through gearing. In either case, as the work is generally placed in the same place for either type of operation, the mechanism for the method not being used is moved out of the way. In smaller machines, \"spares\" may be lifted off, while larger machines offer a system to retract those parts not in use.
Work in which the spindle's axial movement is normal to one plane, with an endmill as the cutter, lends itself to a vertical mill, where the operator can stand before the machine and have easy access to the cutting action by looking down upon it. Thus vertical mills are most favored for diesinking work (machining a mould into a block of metal). Heavier and longer workpieces lend themselves to placement on the table of a horizontal mill.\nPrior to numerical control, horizontal milling machines evolved first, because they evolved by putting milling tables under lathe-like headstocks. Vertical mills appeared in subsequent decades, and accessories in the form of add-on heads to change horizontal mills to vertical mills (and later vice versa) have been commonly used. Even in the CNC era, a heavy workpiece needing machining on multiple sides lends itself to a horizontal machining center, while diesinking lends itself to a vertical one.\nAlternative classifications.\nIn addition to horizontal versus vertical, other distinctions are also important:\nAlternative terminology.\nA milling machine is often called a mill by machinists. The archaic term miller was commonly used in the 19th and early 20th centuries.\nSince the 1960s there has developed an overlap of usage between the terms milling machine and machining center. NC/CNC machining centers evolved from milling machines, which is why the terminology evolved gradually with considerable overlap that still persists. The distinction, when one is made, is that a machining center is a mill with features that pre-CNC mills never had, especially an automatic tool changer (ATC) that includes a tool magazine (carousel), and sometimes an automatic pallet changer (APC). 
In typical usage, all machining centers are mills, but not all mills are machining centers; only mills with ATCs are machining centers.\nComputer numerical control.\nMost CNC milling machines (also called \"machining centers\") are computer-controlled vertical mills with the ability to move the spindle vertically along the Z-axis. This extra degree of freedom permits their use in diesinking, engraving applications, and 2.5D surfaces such as relief sculptures. When combined with the use of conical tools or a ball nose cutter, it also significantly improves milling precision without impacting speed, providing a cost-efficient alternative to most flat-surface hand-engraving work.\nCNC machines can exist in virtually any of the forms of manual machinery, like horizontal mills. The most advanced CNC milling machines, multiaxis machines, add two more axes in addition to the three normal axes (XYZ). Horizontal milling machines also have a C or Q axis, allowing the horizontally mounted workpiece to be rotated, essentially allowing asymmetric and eccentric turning. The fifth axis (B axis) controls the tilt of the tool itself. When all of these axes are used in conjunction with each other, extremely complicated geometries, even organic geometries such as a human head, can be made with relative ease with these machines. But the skill to program such geometries is beyond that of most operators. Therefore, 5-axis milling machines are practically always programmed with CAM.\nThe operating system of such machines is a closed-loop system and functions on feedback.\nThese machines have developed from the basic numerical control (NC) machines. The computerized form of NC machines is known as CNC machines. A set of instructions (called a program) is used to guide the machine for desired operations. There are over 100 different G-codes and M-codes.
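A complete part program strings such codes together into a sequence of motions. As a hedged sketch, the snippet below generates a minimal facing pass as G-code text; the coordinates, feed rate, and spindle speed are invented for illustration and are not from any particular machine or controller.

```python
def facing_program(width, step, depth, feed=200, rpm=1200):
    """Emit a minimal G-code facing pass as a list of text lines.

    All numbers are illustrative; a real program also needs tool,
    work-offset, and safety moves appropriate to the machine.
    """
    lines = [
        "G21",                     # dimensions in mm
        "G90",                     # absolute positioning
        f"M03 S{rpm}",             # spindle start (clockwise)
        f"G01 Z-{depth} F{feed}",  # plunge to cutting depth
    ]
    y, direction = 0.0, 1
    while y <= width:
        x_end = 100.0 if direction > 0 else 0.0
        lines.append(f"G01 X{x_end} Y{y} F{feed}")  # linear cut across
        y += step
        direction = -direction
    lines += ["G00 Z5", "M05", "M30"]  # retract, spindle stop, program end
    return lines
```

The sketch only covers a rectangular face; real programs interleave many more preparatory and compensation codes (G40–G43, G54, etc.) from the list below.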
Some very commonly used codes are:\n G00 – rapid traverse\n G01 – linear interpolation of tool\n G02 – circular arc, clockwise (CW)\n G03 – circular arc, counter-clockwise (CCW)\n G20 – dimensions in inches\n G21 – dimensions in mm\n G28 – return to reference point\n G40 – tool compensation cancel\n G41 – tool compensation left\n G42 – tool compensation right\n G43 – tool length compensation\n G54 – select coordinate system #1\n M03 – spindle start (clockwise)\n M04 – spindle start (counter-clockwise)\n M05 – spindle stop\n M06 – tool change\n M08 – coolant on\n M09 – coolant off\n M30 – program end\nVarious other codes are also used. A CNC machine is operated by a single operator, called a programmer, and is capable of performing various operations automatically and economically.\nWith the declining price of computers and open-source CNC software, the entry price of CNC machines has plummeted.\nTooling.\nThe accessories and cutting tools used on machine tools (including milling machines) are referred to in aggregate by the mass noun \"tooling\". There is a high degree of standardization of the tooling used with CNC milling machines, and a lesser degree with manual milling machines. To ease the organization of the tooling in CNC production, many companies use a tool management solution.\nMilling cutters for specific applications are held in various tooling configurations.\nCNC milling machines nearly always use SK (or ISO), CAT, BT or HSK tooling. SK tooling is the most common in Europe, while CAT tooling, sometimes called V-Flange tooling, is the oldest and probably most common type in the USA. CAT tooling was invented by Caterpillar Inc. of Peoria, Illinois, in order to standardize the tooling used on their machinery. CAT tooling comes in a range of sizes designated as CAT-30, CAT-40, CAT-50, etc.
The number refers to the Association for Manufacturing Technology (formerly the National Machine Tool Builders Association (NMTB)) taper size of the tool.\nAn improvement on CAT tooling is BT tooling, which looks similar and can easily be confused with CAT tooling. Like CAT tooling, BT tooling comes in a range of sizes and uses the same NMTB body taper. However, BT tooling is symmetrical about the spindle axis, which CAT tooling is not. This gives BT tooling greater stability and balance at high speeds. One other subtle difference between these two toolholders is the thread used to hold the pull stud. CAT tooling is all imperial thread and BT tooling is all metric thread. Note that this affects the pull stud only; it does not affect the tools that they can hold. Both types of tooling are sold to accept both imperial- and metric-sized tools.\nSK and HSK tooling, the latter sometimes called \"hollow shank tooling\", are much more common in Europe, where they were invented, than in the United States. It is claimed that HSK tooling is even better than BT tooling at high speeds. The holding mechanism for HSK tooling is placed within the (hollow) body of the tool; as spindle speed increases, it expands, gripping the tool more tightly. There is no pull stud with this type of tooling.\nFor manual milling machines, there is less standardization, because a greater plurality of formerly competing standards exists. Newer and larger manual machines usually use NMTB tooling. This tooling is somewhat similar to CAT tooling but requires a drawbar within the milling machine. Furthermore, there are a number of variations with NMTB tooling that make interchangeability troublesome. The older a machine, the greater the plurality of standards that may apply (e.g., Morse, Jarno, Brown & Sharpe, Van Norman, and other less common builder-specific tapers).
However, two standards that have seen especially wide usage are the Morse #2 and the R8, whose prevalence was driven by the popularity of the mills built by Bridgeport Machines of Bridgeport, Connecticut. These mills so dominated the market for such a long time that \"Bridgeport\" is virtually synonymous with \"manual milling machine\". Most of the machines that Bridgeport made between 1938 and 1965 used a Morse taper #2, and from about 1965 onward most used an R8 taper.\nCNC pocket milling.\nPocket milling has been regarded as one of the most widely used operations in machining. It is extensively used in the aerospace and shipyard industries. In pocket milling, the material inside an arbitrarily closed boundary on a flat surface of a workpiece is removed to a fixed depth. Generally, flat-bottom end mills are used for pocket milling. First, a roughing operation is done to remove the bulk of the material, and then the pocket is finished with a finishing end mill.\nMost industrial milling operations can be handled by 2.5-axis CNC milling. This type of path control can machine up to 80% of all mechanical parts. Because pocket milling is so widespread, effective pocketing approaches can significantly reduce machining time and cost.\nNC pocket milling can be carried out mainly by two tool paths, viz. linear and non-linear.\nLinear tool path.\nIn this approach, the tool movement is unidirectional. Zig-zag and zig tool paths are examples of linear tool paths.\nZig-zag.\nIn zig-zag milling, material is removed in both the forward and backward paths. In this case, cutting is done both with and against the rotation of the spindle. This reduces the machining time but increases machine chatter and tool wear.\nZig.\nIn zig milling, the tool moves only in one direction. The tool has to be lifted and retracted after each cut, which increases machining time.
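The zig-zag strategy can be sketched as a simple waypoint generator; the pocket dimensions and stepover below are hypothetical, and tool-radius compensation and entry moves are omitted.

```python
def zigzag_path(width, height, stepover):
    """Generate (x, y) waypoints for a zig-zag pocketing pass.

    Material is removed on both the forward and backward passes,
    so the tool never lifts between passes. Dimensions are
    illustrative only.
    """
    points = []
    y, forward = 0.0, True
    while y <= height:
        xs = (0.0, width) if forward else (width, 0.0)
        points.append((xs[0], y))  # start of this pass
        points.append((xs[1], y))  # cut across to the far side
        y += stepover
        forward = not forward
    return points
```

A zig path would differ only in that the tool returns (lifted) to the same side before each pass, which is why its machining time is longer.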
However, in the case of zig milling, the surface quality is better.\nNon-linear tool path.\nIn this approach, tool movement is multi-directional. One example of a non-linear tool path is the contour-parallel tool path.\nContour-parallel.\nIn this approach, the required pocket boundary is used to derive the tool path. In this case, the cutter is always in contact with the work material, so the idle time spent in positioning and retracting the tool is avoided. For large-scale material removal, the contour-parallel tool path is widely used because it can be used consistently with the up-cut or down-cut method during the entire process. There are three different approaches that fall into the category of contour-parallel tool path generation. They are:\nCurvilinear.\nIn this approach, the tool travels along a gradually evolving spiral path. The spiral starts at the center of the pocket to be machined, and the tool gradually moves towards the pocket boundary. The direction of the tool path changes progressively, and local acceleration and deceleration of the tool are minimized. This reduces tool wear.\nHistory.\n1780–1810.\nMilling machines evolved from the practice of rotary filing—that is, running a circular cutter with file-like teeth in the headstock of a lathe. Rotary filing and, later, true milling were developed to reduce the time and effort spent hand-filing. The full story of milling machine development may never be known, because much early development took place in individual shops where few records were kept for posterity. However, the broad outlines are known, as summarized below. From a history-of-technology viewpoint, it is clear that the naming of this new type of machining with the term \"milling\" was an extension from that word's earlier senses of processing materials by abrading them in some way (cutting, grinding, crushing, etc.).\nRotary filing long predated milling.
A rotary file by Jacques de Vaucanson, circa 1760, is well known.\nIn 1783, Samuel Rehe invented a true milling machine. In 1795, Eli Terry began using a milling machine at Plymouth, Connecticut, in the production of tall case clocks. With the use of his milling machine, Terry was the first to accomplish interchangeable parts in the clock industry. Milling wooden parts was effective for achieving interchangeability, but inefficient in yield: milling wooden blanks produced a low yield of parts, because the machine's single blade would cause loss of gear teeth when the cutter hit parallel grains in the wood. Terry later invented a spindle cutting machine to mass-produce parts in 1807. Other Connecticut clockmakers, like James Harrison of Waterbury, Thomas Barnes of Litchfield, and Gideon Roberts of Bristol, also used milling machines to produce their clocks.\n1810s–1830s.\nIt is clear that milling machines as a distinct class of machine tool (separate from lathes running rotary files) first appeared between 1814 and 1818. The centers of earliest development of true milling machines were two federal armories of the U.S. (Springfield and Harpers Ferry), together with the various private armories and inside contractors that shared turnover of skilled workmen with them. \nBetween 1912 and 1916, Joseph W. Roe, a respected founding father of machine tool historians, credited Eli Whitney (one of the private arms makers mentioned above) with producing the first true milling machine. By 1918, he considered it \"Probably the first milling machine ever built—certainly the oldest now in existence […].\" However, subsequent scholars, including Robert S. Woodbury and others, have improved upon Roe's early version of the history and suggest that just as much credit—in fact, probably more—belongs to various other inventors, including Robert Johnson of Middletown, Connecticut; Captain John H.
Hall of the Harpers Ferry armory; Simeon North of the Staddle Hill factory in Middletown; Roswell Lee of the Springfield armory; and Thomas Blanchard. (Several of the men mentioned above are sometimes described on the internet as \"the inventor of the first milling machine\" or \"the inventor of interchangeable parts\". Such claims are oversimplified, as these technologies evolved over time among many people.)\nPeter Baida, citing Edward A. Battison's article \"Eli Whitney and the Milling Machine,\" which was published in the \"Smithsonian Journal of History\" in 1966, exemplifies the dispelling of the \"Great Man\" image of Whitney by historians of technology working in the 1950s and 1960s. He quotes Battison as concluding that \"There is no evidence that Whitney developed or used a true milling machine.\" Baida says, \"The so-called Whitney machine of 1818 seems actually to have been made after Whitney's death in 1825.\" Baida cites Battison's suggestion that the first true milling machine was made not by Whitney, but by Robert Johnson of Middletown.\nThe late 1810s were a pivotal time in the history of machine tools: the period of 1814 to 1818 is also the period during which several contemporary pioneers (Fox, Murray, and Roberts) were developing the planer. As with the milling machine, the work being done in various shops was undocumented for various reasons (partially because of proprietary secrecy, and also simply because no one was taking down records for posterity).\nJames Nasmyth built a milling machine very advanced for its time between 1829 and 1831. It was tooled to mill the six sides of a hex nut that was mounted in a six-way indexing fixture.\nA milling machine built and used in the shop of Gay & Silver (aka Gay, Silver, & Co.) in the 1830s was influential because it employed a better method of vertical positioning than earlier machines.
For example, Whitney's machine (the one that Roe considered the very first) and others did not make provision for vertical travel of the knee. Evidently, the workflow assumption behind this was that the machine would be set up with shims, vise, etc. for a certain part design, and successive parts did not require vertical adjustment (or at most would need only shimming). This indicates that early thinking about milling machines treated them as production machines, not toolroom machines.\nIn these early years, milling was often viewed as only a roughing operation to be followed by finishing with a hand file. The idea of \"reducing\" hand filing was more important than \"replacing\" it.\n1840s–1860.\nSome of the key men in milling machine development during this era included Frederick W. Howe, Francis A. Pratt, Elisha K. Root, and others. (These same men during the same era were also busy developing the state of the art in turret lathes. Howe's experience at Gay & Silver in the 1840s acquainted him with early versions of both machine tools. His machine tool designs were later built at Robbins & Lawrence, the Providence Tool Company, and Brown & Sharpe.) The most successful milling machine design to emerge during this era was the Lincoln miller, which rather than being a specific make and model of machine tool is truly a family of tools built by various companies on a common configuration over several decades. It took its name from the first company to put one on the market, George S. Lincoln & Company (formerly the Phoenix Iron Works), whose first one was built in 1855 for the Colt armory.\nDuring this era there was a continued blind spot in milling machine design, as various designers failed to develop a truly simple and effective means of providing slide travel in all three of the archetypal milling axes (X, Y, and Z—or as they were known in the past, longitudinal, traverse, and vertical). Vertical positioning ideas were either absent or underdeveloped.
The Lincoln miller's spindle could be raised and lowered, but the original idea behind its positioning was to be set up in position and then run, as opposed to being moved frequently while running. Like a turret lathe, it was a repetitive-production machine, with each skilled setup followed by extensive fairly low skill operation.\n1860s.\nIn 1861, Frederick W. Howe, while working for the Providence Tool Company, asked Joseph R. Brown of Brown & Sharpe for a solution to the problem of milling spirals, such as the flutes of twist drills. These were usually filed by hand at the time. (Helical planing existed but was by no means common.) Brown designed a \"universal milling machine\" that, starting from its first sale in March 1862, was wildly successful. It solved the problem of 3-axis travel (i.e., the axes that we now call XYZ) much more elegantly than had been done in the past, and it allowed for the milling of spirals using an indexing head fed in coordination with the table feed. The term \"universal\" was applied to it because it was ready for any kind of work, including toolroom work, and was not as limited in application as previous designs. (Howe had designed a \"universal miller\" in 1852, but Brown's of 1861 is the one considered a groundbreaking success.)\nBrown also developed and patented (1864) the design of formed milling cutters in which successive sharpenings of the teeth do not disturb the geometry of the form.\nThe advances of the 1860s opened the floodgates and ushered in modern milling practice.\n1870s to World War I.\nIn these decades, Brown & Sharpe and the Cincinnati Milling Machine Company dominated the American milling machine field. However, hundreds of other firms also built milling machines at the time, and many were significant in various ways. 
Besides a wide variety of specialized production machines, the archetypal multipurpose milling machine of the late 19th and early 20th centuries was a heavy knee-and-column horizontal-spindle design with power table feeds, indexing head, and a stout overarm to support the arbor. The evolution of machine design was driven not only by inventive spirit but also by the constant evolution of milling cutters that saw milestone after milestone from 1860 through World War I.\nWorld War I and interwar period.\nAround the end of World War I, machine tool control advanced in various ways that laid the groundwork for later CNC technology. The jig borer popularized the ideas of coordinate dimensioning (dimensioning of all locations on the part from a single reference point); working routinely in \"tenths\" (ten-thousandths of an inch, 0.0001\") as an everyday machine capability; and using the control to go straight from drawing to part, circumventing jig-making. In 1920 the new tracer design of J.C. Shaw was applied to Keller tracer milling machines for die sinking via the three dimensional copying of a template. This made die sinking faster and easier just as dies were in higher demand than ever before, and was very helpful for large steel dies such as those used to stamp sheets in automobile manufacturing. Such machines translated the tracer movements to input for servos that worked the machine leadscrews or hydraulics. They also spurred the development of antibacklash leadscrew nuts. All of the above concepts were new in the 1920s but became routine in the NC/CNC era. By the 1930s, incredibly large and advanced milling machines existed, such as the Cincinnati Hydro-Tel, that presaged today's CNC mills in every respect except for CNC control itself.\nBridgeport milling machine.\nIn 1936, Rudolph Bannow (1897–1962) conceived of a major improvement to the milling machine. His company commenced manufacturing a new knee-and-column vertical mill in 1938. 
This was the Bridgeport milling machine, often called a ram-type or turret-type mill because its head has sliding-ram and rotating-turret mounting. The machine became so popular that many other manufacturers created copies and variants. Furthermore, its name came to connote any such variant. The Bridgeport offered enduring advantages over previous models. It was small enough, light enough, and affordable enough to be a practical acquisition for even the smallest machine shop businesses, yet it was also smartly designed, versatile, well-built, and rigid. Its various directions of sliding and pivoting movement allowed the head to approach the work from any angle. The Bridgeport's design became the dominant form for manual milling machines used by several generations of small- and medium-enterprise machinists. By the 1980s an estimated quarter-million Bridgeport milling machines had been built, and they (and their clones) are still being produced today. \n1940s–1970s.\nBy 1940, automation via cams, such as in screw machines and automatic chuckers, had already been very well developed for decades. Beginning in the 1930s, ideas involving servomechanisms had been in the air, but it was especially during and immediately after World War II that they began to germinate (see also Numerical control > History). These were soon combined with the emerging technology of digital computers. This technological development milieu, spanning from the immediate pre–World War II period into the 1950s, was powered by the military capital expenditures that pursued contemporary advancements in the directing of gun and rocket artillery and in missile guidance—other applications in which humans wished to control the kinematics/dynamics of large machines quickly, precisely, and automatically. Sufficient R&D spending probably would not have happened within the machine tool industry alone; but it was for the latter applications that the will and ability to spend was available. 
Once the development was underway, it was eagerly applied to machine tool control in one of the many post-WWII instances of technology transfer.\nIn 1952, numerical control reached the developmental stage of laboratory reality. The first NC machine tool was a Cincinnati Hydrotel milling machine retrofitted with a scratch-built NC control unit. It was reported in \"Scientific American\", just as another groundbreaking milling machine, the Brown & Sharpe universal, had been in 1862.\nDuring the 1950s, numerical control moved slowly from the laboratory into commercial service. For its first decade, it had rather limited impact outside of aerospace work. But during the 1960s and 1970s, NC evolved into CNC, data storage and input media evolved, computer processing power and memory capacity steadily increased, and NC and CNC machine tools gradually disseminated from an environment of huge corporations and mainly aerospace work to the level of medium-sized corporations and a wide variety of products. NC and CNC's drastic advancement of machine tool control deeply transformed the culture of manufacturing. The details (which are beyond the scope of this article) have evolved immensely with every passing decade.\n1980s–present.\nComputers and CNC machine tools continue to develop rapidly. The personal computer revolution has had a great impact on this development. By the late 1980s, small machine shops had desktop computers and CNC machine tools. Soon after, hobbyists, artists, and designers began obtaining CNC mills and lathes. Manufacturers have started producing economically priced CNC machines small enough to sit on a desktop, which can cut materials softer than stainless steel at high resolution. They can be used to make anything from jewelry to printed circuit boards to gun parts, even fine art.\nStandards.\nNational and international standards are used to standardize the definitions, environmental requirements, and test methods used for milling.
Selection of the standard to be used is an agreement between the supplier and the user and has some significance in the design of the mill. In the United States, ASME has developed the standards B5.45-1972 \"Milling Machines\" and B94.19-1997 \"Milling Cutters and End Mills\".\nGeneral tolerances include: +/-0.005\" (~0.1mm) for local tolerances across most geometries, +/-0.010\" (~0.25mm) for plastics with variation depending on the size of the part, 0.030\" (~0.75mm) minimum wall thickness for metals, and 0.060\" (~1.5mm) minimum wall thickness for plastics.\nLinear parameter-varying control.\nLinear parameter-varying control (LPV control) deals with the control of linear parameter-varying systems, a class of nonlinear systems which can be modelled as parametrized linear systems whose parameters change with their state.\nGain scheduling.\nIn designing feedback controllers for dynamical systems, a variety of modern multivariable controllers are used. In general, these controllers are often designed at various operating points using linearized models of the system dynamics and are scheduled as a function of a parameter or parameters for operation at intermediate conditions. Gain scheduling is an approach for the control of non-linear systems that uses a family of linear controllers, each of which provides satisfactory control for a different operating point of the system. One or more observable variables, called the scheduling variables, are used to determine the current operating region of the system and to enable the appropriate linear controller. For example, in the case of aircraft control, a set of controllers is designed at different gridded locations of corresponding parameters such as angle of attack (AoA), Mach number, dynamic pressure, and center of gravity (CG).
In brief, gain scheduling is a control design approach that constructs a nonlinear controller for a nonlinear plant by patching together a collection of linear controllers. These linear controllers are blended in real time via switching or interpolation.\nScheduling multivariable controllers can be a very tedious and time-consuming task. A newer paradigm is the linear parameter-varying (LPV) techniques, which synthesize an automatically scheduled multivariable controller.\nDrawbacks of classical gain scheduling.\nThough the approach is simple and the computational burden of linearization scheduling approaches is often much less than for other nonlinear design approaches, its inherent drawbacks sometimes outweigh its advantages and necessitate a new paradigm for the control of dynamical systems. Newer methodologies, such as adaptive control based on artificial neural networks (ANNs) or fuzzy logic, try to address such problems, but the lack of proof of stability and performance of such approaches over the entire operating parameter regime requires the design of a parameter-dependent controller with guaranteed properties, for which a linear parameter-varying controller is an ideal candidate.\nLinear parameter-varying systems.\nLPV systems are a very special class of nonlinear systems which appears well suited for the control of dynamical systems with parameter variations. In general, LPV techniques provide a systematic design procedure for gain-scheduled multivariable controllers. This methodology allows performance, robustness, and bandwidth limitations to be incorporated into a unified framework. A brief introduction to LPV systems and an explanation of the terminology are given below.\nParameter dependent systems.\nIn control engineering, a state-space representation is a mathematical model of a physical system as a set of input, u(t), output, y(t), and state, x(t), variables related by first-order differential equations.
The dynamic evolution of a nonlinear, non-autonomous system is represented by dx/dt = f(x(t), u(t)); if the system is time variant, this becomes dx/dt = f(x(t), u(t), t).\nThe state variables describe the mathematical \"state\" of a dynamical system, and in modeling large, complex nonlinear systems, if such state variables are chosen to be compact for the sake of practicality and simplicity, then parts of the dynamic evolution of the system are missing. The state-space description will then involve other variables, called exogenous variables, whose evolution is not understood or is too complicated to be modeled, but which affect the state variables' evolution in a known manner and are measurable in real time using sensors.\nWhen a large number of sensors are used, some of these sensors measure outputs in the system-theoretic sense as known, explicit nonlinear functions of the modeled states and time, while other sensors are accurate estimates of the exogenous variables. Hence, the model will be a time-varying, nonlinear system, with the future time variation unknown, but measured by the sensors in real time.\nIn this case, if ρ(t) denotes the exogenous variable vector, and x(t) denotes the modeled state, then the state equations are written as\ndx/dt = f(x(t), u(t), ρ(t)).\nThe parameter ρ(t) is not known in advance, but its evolution is measured in real time and used for control. If the above parameter-dependent system is linear in the state, then it is called a linear parameter-dependent system; such systems are written in a form similar to the linear time-invariant (LTI) form, albeit with the inclusion of the time-varying parameter, e.g. dx/dt = A(ρ(t))x(t) + B(ρ(t))u(t).\nParameter-dependent systems are linear systems whose state-space descriptions are known functions of time-varying parameters. The time variation of each of the parameters is not known in advance, but is assumed to be measurable in real time. The controller is restricted to be a linear system, whose state-space entries depend causally on the parameter's history.
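Classical gain scheduling, against which the LPV approach is contrasted, can be sketched as linear interpolation of controller gains between design points. The design points and gain values below are made-up illustrative numbers, not from any cited design.

```python
import bisect

def scheduled_gain(rho, design_points, gains):
    """Interpolate a controller gain at scheduling variable rho.

    design_points: sorted operating points where linear controllers
    were designed; gains: the corresponding gain values. Between
    points the gain is blended linearly; outside the grid it
    saturates at the nearest design point. Purely illustrative.
    """
    if rho <= design_points[0]:
        return gains[0]
    if rho >= design_points[-1]:
        return gains[-1]
    i = bisect.bisect_right(design_points, rho)  # first point above rho
    lo, hi = design_points[i - 1], design_points[i]
    w = (rho - lo) / (hi - lo)                   # blending weight in [0, 1]
    return (1 - w) * gains[i - 1] + w * gains[i]
```

Nothing in this blending guarantees stability between the design points, which is precisely the gap the LPV methodologies below aim to close.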
There exist three different methodologies for designing an LPV controller, namely:\nThese problems are solved by reformulating the control design into finite-dimensional convex feasibility problems which can be solved exactly, and infinite-dimensional convex feasibility problems which can be solved approximately.\nThis formulation constitutes a type of gain scheduling problem; in contrast to classical gain scheduling, this approach addresses the effect of parameter variations with assured stability and performance.\nPseudospectral knotting method.\nIn applied mathematics, the pseudospectral knotting method is a generalization and enhancement of a standard pseudospectral method for optimal control. The concept was introduced by I. Michael Ross and F. Fahroo in 2004, and forms part of the collection of the Ross–Fahroo pseudospectral methods.\nDefinition.\nAccording to Ross and Fahroo, a pseudospectral (PS) knot is a double Lobatto point; i.e., two boundary points on top of one another. At this point, information (such as discontinuities, jumps, dimension changes, etc.) is exchanged between two standard PS methods. This information exchange is used to solve some of the most difficult problems in optimal control, known as hybrid optimal control problems.\nIn a hybrid optimal control problem, an optimal control problem is intertwined with a graph problem.
A standard pseudospectral optimal control method is incapable of solving such problems; however, through the use of pseudospectral knots, the information of the graph can be encoded at the double Lobatto points, thereby allowing a hybrid optimal control problem to be discretized and solved using powerful software such as DIDO.\nApplications.\nPS knots have found applications in aerospace problems such as the ascent guidance of launch vehicles, and advancing the Aldrin Cycler through the use of solar sails.\nPS knots have also been used for anti-aliasing of PS optimal control solutions and for capturing critical information in switches when solving bang-bang-type optimal control problems.\nSoftware.\nThe PS knotting method was first implemented in the MATLAB optimal control software package DIDO.\nNamebench.\nNamebench is an open-source Domain Name System (DNS) benchmark utility by Google, Inc., licensed under the Apache License, version 2.0. Namebench runs on Windows, OS X, and Unix. It is available with a graphical user interface as well as a command-line interface. Its purpose is to find the fastest DNS server a user could use. The project began as a 20% project at Google. It can run the benchmark using the user's web browser history, tcpdump output, or standardized datasets, in order to provide an individualized recommendation. Namebench was written using open-source tools and libraries.
It was created by Google engineer Thomas Stromberg.", "Automation-Control": 0.9235411882, "Qwen2": "Yes"} {"id": "6142533", "revid": "7903804", "url": "https://en.wikipedia.org/wiki?curid=6142533", "title": "Limited-memory BFGS", "text": "Limited-memory BFGS (L-BFGS or LM-BFGS) is an optimization algorithm in the family of quasi-Newton methods that approximates the Broyden–Fletcher–Goldfarb–Shanno algorithm (BFGS) using a limited amount of computer memory. It is a popular algorithm for parameter estimation in machine learning. The algorithm's target problem is to minimize formula_1 over unconstrained values of the real vector formula_2, where formula_3 is a differentiable scalar function.\nLike the original BFGS, L-BFGS uses an estimate of the inverse Hessian matrix to steer its search through variable space, but where BFGS stores a dense formula_4 approximation to the inverse Hessian ("n" being the number of variables in the problem), L-BFGS stores only a few vectors that represent the approximation implicitly. Due to its resulting linear memory requirement, the L-BFGS method is particularly well suited for optimization problems with many variables. Instead of the inverse Hessian "H"k, L-BFGS maintains a history of the past "m" updates of the position x and gradient ∇"f"(x), where generally the history size "m" can be small (often formula_5). These updates are used to implicitly perform operations requiring the "H"k-vector product.\nAlgorithm.\nThe algorithm starts with an initial estimate of the optimal value, formula_6, and proceeds iteratively to refine that estimate with a sequence of better estimates formula_7.
The derivatives of the function formula_8 are used as a key driver of the algorithm to identify the direction of steepest descent, and also to form an estimate of the Hessian matrix (second derivative) of formula_1.\nL-BFGS shares many features with other quasi-Newton algorithms, but is very different in how the matrix-vector multiplication formula_10 is carried out, where formula_11 is the approximate Newton's direction, formula_12 is the current gradient, and formula_13 is the inverse of the Hessian matrix. There are multiple published approaches using a history of updates to form this direction vector. Here, we give a common approach, the so-called "two loop recursion."\nWe take as given formula_14, the position at the "k"-th iteration, and formula_15 where formula_3 is the function being minimized, and all vectors are column vectors. We also assume that we have stored the last "m" updates of the form \nWe define formula_19, and formula_20 will be the 'initial' approximation of the inverse Hessian that our estimate at iteration "k" begins with.\nThe algorithm is based on the BFGS recursion for the inverse Hessian as\nFor a fixed "k" we define a sequence of vectors formula_22 as formula_23 and formula_24. Then a recursive algorithm for calculating formula_25 from formula_26 is to define formula_27 and formula_28. We also define another sequence of vectors formula_29 as formula_30. There is another recursive algorithm for calculating these vectors, which is to define formula_31 and then recursively define formula_32 and formula_33. The value of formula_34 is then our ascent direction.\nThus we can compute the descent direction as follows:\nThis formulation gives the search direction for the minimization problem, i.e., formula_36. For maximization problems, one should thus take the negation of this direction instead.
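The two-loop recursion described above can be sketched in Python. The function name, variable names, and history layout below are illustrative (not from any particular library); the pairs are stored oldest first, as in the text's update history:

```python
import numpy as np

def two_loop_recursion(grad, s_list, y_list):
    """L-BFGS two-loop recursion: returns the search direction -H_k @ grad,
    where H_k is the implicit inverse-Hessian approximation built from the
    stored pairs s_i = x_{i+1} - x_i and y_i = g_{i+1} - g_i (oldest first)."""
    q = grad.astype(float).copy()
    rhos = [1.0 / y.dot(s) for s, y in zip(s_list, y_list)]
    alphas = []
    # First loop: newest pair to oldest.
    for s, y, rho in zip(s_list[::-1], y_list[::-1], rhos[::-1]):
        alpha = rho * s.dot(q)
        alphas.append(alpha)
        q = q - alpha * y
    # Initial inverse Hessian H_k^0 = gamma * I, the usual scaling choice.
    if s_list:
        gamma = s_list[-1].dot(y_list[-1]) / y_list[-1].dot(y_list[-1])
    else:
        gamma = 1.0
    z = gamma * q
    # Second loop: oldest pair to newest.
    for s, y, rho, alpha in zip(s_list, y_list, rhos, alphas[::-1]):
        beta = rho * y.dot(z)
        z = z + (alpha - beta) * s
    return -z  # descent direction for minimization; negate for maximization
```

With a single stored pair satisfying y = 2s, the recursion reproduces the exact BFGS secant behavior (H y = s) in that direction while scaling the rest of the space by gamma.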
Note that the initial approximate inverse Hessian formula_20 is chosen as a diagonal matrix or even a multiple of the identity matrix since this is numerically efficient.\nThe scaling of the initial matrix formula_38 ensures that the search direction is well scaled and therefore the unit step length is accepted in most iterations. A Wolfe line search is used to ensure that the curvature condition is satisfied and the BFGS updating is stable. Note that some software implementations use an Armijo backtracking line search, but cannot guarantee that the curvature condition formula_39 will be satisfied by the chosen step since a step length greater than formula_40 may be needed to satisfy this condition. Some implementations address this by skipping the BFGS update when formula_41 is negative or too close to zero, but this approach is not generally recommended since the updates may be skipped too often to allow the Hessian approximation formula_13 to capture important curvature information.\nThis two loop update only works for the inverse Hessian. Approaches to implementing L-BFGS using the direct approximate Hessian formula_43 have also been developed, as have other means of approximating the inverse Hessian.\nApplications.\nL-BFGS has been called \"the algorithm of choice\" for fitting log-linear (MaxEnt) models and conditional random fields with formula_44-regularization.\nVariants.\nSince BFGS (and hence L-BFGS) is designed to minimize smooth functions without constraints, the L-BFGS algorithm must be modified to handle functions that include non-differentiable components or constraints. A popular class of modifications are called active-set methods, based on the concept of the active set. 
The idea is that when restricted to a small neighborhood of the current iterate, the function and constraints can be simplified.\nL-BFGS-B.\nThe L-BFGS-B algorithm extends L-BFGS to handle simple box constraints (also known as bound constraints) on variables; that is, constraints of the form "l"i ≤ "x"i ≤ "u"i, where "l"i and "u"i are per-variable constant lower and upper bounds, respectively (for each "i", either or both bounds may be omitted). The method works by identifying fixed and free variables at every step (using a simple gradient method), then using the L-BFGS method on the free variables only to get higher accuracy, and repeating the process.\nOWL-QN.\nOrthant-wise limited-memory quasi-Newton (OWL-QN) is an L-BFGS variant for fitting formula_45-regularized models, exploiting the inherent sparsity of such models.\nIt minimizes functions of the form\nwhere formula_47 is a differentiable convex loss function. The method is an active-set type method: at each iterate, it estimates the sign of each component of the variable, and restricts the subsequent step to have the same sign. Once the sign is fixed, the non-differentiable formula_48 term becomes a smooth linear term which can be handled by L-BFGS. After an L-BFGS step, the method allows some variables to change sign, and repeats the process.\nO-LBFGS.\nSchraudolph "et al." present an online approximation to both BFGS and L-BFGS. Similar to stochastic gradient descent, this can be used to reduce the computational complexity by evaluating the error function and gradient on a randomly drawn subset of the overall dataset in each iteration.
It has been shown that O-LBFGS has a global almost sure convergence while the online approximation of BFGS (O-BFGS) is not necessarily convergent.\nImplementation of variants.\nNotable open source implementations include:\nNotable non open source implementations include:", "Automation-Control": 0.8758679032, "Qwen2": "Yes"} {"id": "70211503", "revid": "7903804", "url": "https://en.wikipedia.org/wiki?curid=70211503", "title": "Angul steel plant", "text": "Angul steel plant is the largest steel plant in India and is under expansion. After the expansion, it will become the largest steel plant in the world.", "Automation-Control": 0.9015993476, "Qwen2": "Yes"} {"id": "17152300", "revid": "1158552153", "url": "https://en.wikipedia.org/wiki?curid=17152300", "title": "Donald P. Eckman Award", "text": "The Donald P. Eckman Award is an award given by the American Automatic Control Council recognizing outstanding achievements by a young researcher under the age of 35 in the field of control theory. Together with the Richard E. Bellman Control Heritage Award, the Eckman Award is one of the most prestigious awards in control theory.", "Automation-Control": 0.9952458739, "Qwen2": "Yes"} {"id": "24238010", "revid": "892079", "url": "https://en.wikipedia.org/wiki?curid=24238010", "title": "Any-angle path planning", "text": "Any-angle path planning algorithms are pathfinding algorithms that search for a Euclidean shortest path between two points on a grid map while allowing the turns in the path to have any angle. The result is a path that cuts directly through open areas and has relatively few turns. More traditional pathfinding algorithms such as A* either lack in performance or produce jagged, indirect paths.\nBackground.\nReal-world and many game maps have open areas that are most efficiently traversed in a direct way. 
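Cutting directly through such open areas requires a visibility (line-of-sight) test between grid cells, the basic primitive behind any-angle planners. A minimal sketch of one common approach, a Bresenham-style traversal over a boolean occupancy grid (the function name and grid representation are illustrative assumptions, not a specific algorithm from the literature):

```python
def line_of_sight(grid, a, b):
    """Return True if the straight segment from cell a to cell b passes only
    through free cells (grid[y][x] == 0). Uses a Bresenham-style walk, which
    approximates the exact segment/cell intersection test used by any-angle
    planners such as Theta*."""
    (x0, y0), (x1, y1) = a, b
    dx, dy = abs(x1 - x0), abs(y1 - y0)
    sx = 1 if x1 > x0 else -1
    sy = 1 if y1 > y0 else -1
    err = dx - dy
    x, y = x0, y0
    while True:
        if grid[y][x]:            # blocked cell intersects the segment
            return False
        if (x, y) == (x1, y1):    # reached the target cell unobstructed
            return True
        e2 = 2 * err
        if e2 > -dy:
            err -= dy
            x += sx
        if e2 < dx:
            err += dx
            y += sy
```

A planner that, like Theta*, tries to connect each expanded node directly to its parent's parent would call such a test on every expansion, producing paths that cut across open areas instead of following grid edges.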
Traditional algorithms are ill-equipped to solve these problems:\nAn any-angle path planning algorithm aims to produce optimal or near-optimal solutions while taking less time than the basic visibility graph approach. Fast any-angle algorithms take roughly the same time as a grid-based solution to compute.\nAlgorithms.\nA*-based.\nSo far, five main any-angle path planning algorithms that are based on the heuristic search algorithm A* have been developed, all of which propagate information along grid edges:\nThere are also A*-based algorithms distinct from the above family:\nRRT-based.\nIn addition, for search in high-dimensional spaces, such as when the configuration space of the system involves many degrees of freedom that need to be considered (see Motion planning), and/or momentum needs to be considered (which could effectively double the number of dimensions of the search space; this larger space including momentum is known as the phase space), variants of the rapidly-exploring random tree (RRT) have been developed that (almost surely) converge to the optimal path by increasingly finding shorter and shorter paths:\nApplications.\nAny-angle path planning algorithms are useful for robot navigation and real-time strategy games where shorter, more direct paths are desirable. Hybrid A*, for example, was used as an entry to a DARPA challenge. The steering-aware properties of some examples also translate to autonomous cars.", "Automation-Control": 0.7528229952, "Qwen2": "Yes"} {"id": "7886457", "revid": "5042921", "url": "https://en.wikipedia.org/wiki?curid=7886457", "title": "Metzler matrix", "text": "In mathematics, a Metzler matrix is a matrix in which all the off-diagonal components are nonnegative (equal to or greater than zero):\nIt is named after the American economist Lloyd Metzler.\nMetzler matrices appear in stability analysis of time-delayed differential equations and positive linear dynamical systems.
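The role of Metzler matrices in positive systems can be illustrated numerically: for a linear system driven by a Metzler matrix M, the state-transition matrix exp(Mt) is entrywise nonnegative, so nonnegative states stay nonnegative. A small numpy sketch, where the example matrix is arbitrary and the truncated Taylor series stands in for a proper matrix exponential routine:

```python
import numpy as np

def expm_taylor(A, terms=60):
    """Truncated Taylor series for the matrix exponential; adequate for this
    small, well-scaled example (not a production-quality expm)."""
    E = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ A / k
        E = E + term
    return E

# An arbitrary Metzler matrix: off-diagonal entries >= 0, diagonal unconstrained.
M = np.array([[-3.0, 1.0, 2.0],
              [0.5, -1.0, 0.0],
              [1.0, 4.0, -6.0]])

# Shift trick: M + aI is entrywise nonnegative once a >= -min(diag(M)),
# so results for nonnegative matrices transfer to M.
a = -M.diagonal().min()
assert (M + a * np.eye(3) >= 0).all()

# exp(M) = exp(-a) * exp(M + aI) is a positive scalar times the exponential
# of a nonnegative matrix, hence entrywise nonnegative.
assert (expm_taylor(M) >= -1e-10).all()
```

The same factorization exp(M) = exp(-a)·exp(M + aI) is the standard route for deriving spectral properties of Metzler matrices from Perron–Frobenius theory for nonnegative matrices.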
Their properties can be derived by applying the properties of nonnegative matrices to matrices of the form "M" + "aI", where "M" is a Metzler matrix.\nDefinition and terminology.\nIn mathematics, especially linear algebra, a matrix is called Metzler, quasipositive (or quasi-positive) or essentially nonnegative if all of its elements are non-negative except for those on the main diagonal, which are unconstrained. That is, a Metzler matrix is any matrix "A" which satisfies\nMetzler matrices are also sometimes referred to as formula_3-matrices, as a "Z"-matrix is equivalent to a negated quasipositive matrix.\nProperties.\nThe exponential of a Metzler (or quasipositive) matrix is a nonnegative matrix because of the corresponding property for the exponential of a nonnegative matrix. This is natural, once one observes that the generator matrices of continuous-time finite-state Markov processes are always Metzler matrices, and that probability distributions are always non-negative.\nA Metzler matrix has an eigenvector in the nonnegative orthant because of the corresponding property for nonnegative matrices.", "Automation-Control": 0.6916133165, "Qwen2": "Yes"} {"id": "33727695", "revid": "643450", "url": "https://en.wikipedia.org/wiki?curid=33727695", "title": "Organization for Machine Automation and Control", "text": "The Organization for Machine Automation and Control (OMAC) is a global organization that supports the machine automation and operational needs of manufacturing. OMAC has, in conjunction with ISA, created the PackML industry standard for describing the state and transitions of packaging machines. OMAC was formed by General Motors in the 1980s under the name Open Modular Architecture Controls to address the problem of each machine having different controls and/or software implementations.
In the late 1990s OMAC expanded into the packaging automation industry.", "Automation-Control": 0.9978887439, "Qwen2": "Yes"} {"id": "33731132", "revid": "10609380", "url": "https://en.wikipedia.org/wiki?curid=33731132", "title": "Gas metal arc welding", "text": "Gas metal arc welding (GMAW), sometimes referred to by its subtypes metal inert gas (MIG) welding and metal active gas (MAG) welding, is a welding process in which an electric arc forms between a consumable MIG wire electrode and the workpiece metal(s), which heats the workpiece metal(s), causing them to fuse (melt and join). Along with the wire electrode, a shielding gas feeds through the welding gun, which shields the process from atmospheric contamination.\nThe process can be semi-automatic or automatic. A constant voltage, direct current power source is most commonly used with GMAW, but constant current systems, as well as alternating current, can be used. There are four primary methods of metal transfer in GMAW, called globular, short-circuiting, spray, and pulsed-spray, each of which has distinct properties and corresponding advantages and limitations.\nOriginally developed in the 1940s for welding aluminium and other non-ferrous materials, GMAW was soon applied to steels because it provided faster welding times compared to other welding processes. The cost of inert gas limited its use in steels until several years later, when the use of semi-inert gases such as carbon dioxide became common. Further developments during the 1950s and 1960s gave the process more versatility, and as a result it became a highly used industrial process. Today, GMAW is the most common industrial welding process, preferred for its versatility, speed and the relative ease of adapting the process to robotic automation. Unlike welding processes that do not employ a shielding gas, such as shielded metal arc welding, it is rarely used outdoors or in other areas of moving air.
A related process, flux cored arc welding, often does not use a shielding gas, but instead employs an electrode wire that is hollow and filled with flux.\nDevelopment.\nThe principles of gas metal arc welding began to be understood in the early 19th century, after Humphry Davy discovered the short-pulsed electric arc in 1800. Vasily Petrov independently produced the continuous electric arc in 1802 (followed by Davy after 1808). It was not until the 1880s that the technology was developed for industrial use. At first, carbon electrodes were used in carbon arc welding. By 1890, metal electrodes had been invented by Nikolay Slavyanov and C. L. Coffin. In 1920, an early predecessor of GMAW was invented by P. O. Nobel of General Electric. It used direct current with a bare electrode wire, with arc voltage regulating the feed rate. It did not use a shielding gas to protect the weld, as developments in welding atmospheres did not take place until later that decade. In 1926 another forerunner of GMAW was released, but it was not suitable for practical use.\nIn 1948, GMAW was developed by the Battelle Memorial Institute. It used a smaller diameter electrode and a constant voltage power source developed by H. E. Kennedy. It offered a high deposition rate, but the high cost of inert gases limited its use to non-ferrous materials and prevented cost savings. In 1953, the use of carbon dioxide as a welding atmosphere was developed, and it quickly gained popularity in GMAW, since it made welding steel more economical. In 1958 and 1959, the short-arc variation of GMAW was released, which increased welding versatility and made the welding of thin materials possible while relying on smaller electrode wires and more advanced power supplies. It quickly became the most popular GMAW variation.\nThe spray-arc transfer variation was developed in the early 1960s, when experimenters added small amounts of oxygen to inert gases.
More recently, pulsed current has been applied, giving rise to a new method called the pulsed spray-arc variation.\nGMAW is one of the most popular welding methods, especially in industrial environments. It is used extensively by the sheet metal industry and the automobile industry. There, the method is often used for arc spot welding, replacing riveting or resistance spot welding. It is also popular for automated welding, where robots handle the workpieces and the welding gun to accelerate manufacturing. GMAW can be difficult to perform well outdoors, since drafts can dissipate the shielding gas and allow contaminants into the weld; flux cored arc welding is better suited for outdoor use such as in construction. Likewise, GMAW's use of a shielding gas does not lend itself to underwater welding, which is more commonly performed via shielded metal arc welding, flux cored arc welding, or gas tungsten arc welding.\nEquipment.\nTo perform gas metal arc welding, the basic necessary equipment is a welding gun, a wire feed unit, a welding power supply, a welding electrode wire, and a shielding gas supply.\nWelding gun and wire feed unit.\nThe typical GMAW welding gun has a number of key parts—a control switch, a contact tip, a power cable, a gas nozzle, an electrode conduit and liner, and a gas hose. The control switch, or trigger, when pressed by the operator, initiates the wire feed, electric power, and the shielding gas flow, causing an electric arc to be struck. The contact tip, normally made of copper and sometimes chemically treated to reduce spatter, is connected to the welding power source through the power cable and transmits the electrical energy to the electrode while directing it to the weld area. It must be firmly secured and properly sized, since it must allow the electrode to pass while maintaining electrical contact. 
On the way to the contact tip, the wire is protected and guided by the electrode conduit and liner, which help prevent buckling and maintain an uninterrupted wire feed. The gas nozzle directs the shielding gas evenly into the welding zone. Inconsistent flow may not adequately protect the weld area. Larger nozzles provide greater shielding gas flow, which is useful for high current welding operations that develop a larger molten weld pool. A gas hose from the tanks of shielding gas supplies the gas to the nozzle. Sometimes, a water hose is also built into the welding gun, cooling the gun in high heat operations.\nThe wire feed unit supplies the electrode to the work, driving it through the conduit and on to the contact tip. Most models provide the wire at a constant feed rate, but more advanced machines can vary the feed rate in response to the arc length and voltage. Some wire feeders can reach feed rates as high as 30 m/min (1200 in/min), but feed rates for semiautomatic GMAW typically range from 2 to 10 m/min (75 – 400 in/min).\nTool style.\nThe most common electrode holder is a semiautomatic air-cooled holder. Compressed air circulates through it to maintain moderate temperatures. It is used with lower current levels for welding lap or butt joints. The second most common type of electrode holder is semiautomatic water-cooled, where the only difference is that water takes the place of air. It uses higher current levels for welding T or corner joints. The third typical holder type is a water cooled automatic electrode holder—which is typically used with automated equipment.\nPower supply.\nMost applications of gas metal arc welding use a constant voltage power supply. As a result, any change in arc length (which is directly related to voltage) results in a large change in heat input and current. A shorter arc length causes a much greater heat input, which makes the wire electrode melt more quickly and thereby restore the original arc length. 
This helps operators keep the arc length consistent even when manually welding with hand-held welding guns. To achieve a similar effect, sometimes a constant current power source is used in combination with an arc voltage-controlled wire feed unit. In this case, a change in arc length makes the wire feed rate adjust to maintain a relatively constant arc length. In rare circumstances, a constant current power source and a constant wire feed rate unit might be coupled, especially for the welding of metals with high thermal conductivities, such as aluminum. This grants the operator additional control over the heat input into the weld, but requires significant skill to perform successfully.\nAlternating current is rarely used with GMAW; instead, direct current is employed and the electrode is generally positively charged. Since the anode tends to have a greater heat concentration, this results in faster melting of the feed wire, which increases weld penetration and welding speed. The polarity can be reversed only when special emissive-coated electrode wires are used, but since these are not popular, a negatively charged electrode is rarely employed.\nElectrode.\nThe electrode is a metallic alloy wire, called a MIG wire, whose selection, alloy and size, is based primarily on the composition of the metal being welded, the process variation being used, joint design, and the material surface conditions. Electrode selection greatly influences the mechanical properties of the weld and is a key factor of weld quality. In general the finished weld metal should have mechanical properties similar to those of the base material with no defects such as discontinuities, entrained contaminants or porosity within the weld. To achieve these goals a wide variety of electrodes exist. All commercially available electrodes contain deoxidizing metals such as silicon, manganese, titanium and aluminum in small percentages to help prevent oxygen porosity. 
Some contain denitriding metals such as titanium and zirconium to avoid nitrogen porosity. Depending on the process variation and base material being welded the diameters of the electrodes used in GMAW typically range from 0.7 to 2.4 mm (0.028 – 0.095 in) but can be as large as 4 mm (0.16 in). The smallest electrodes, generally up to 1.14 mm (0.045 in) are associated with the short-circuiting metal transfer process, while the most common spray-transfer process mode electrodes are usually at least 0.9 mm (0.035 in).\nShielding gas.\nShielding gases are necessary for gas metal arc welding to protect the welding area from atmospheric gases such as nitrogen and oxygen, which can cause fusion defects, porosity, and weld metal embrittlement if they come in contact with the electrode, the arc, or the welding metal. This problem is common to all arc welding processes; for example, in the older Shielded-Metal Arc Welding process (SMAW), the electrode is coated with a solid flux which evolves a protective cloud of carbon dioxide when melted by the arc. In GMAW, however, the electrode wire does not have a flux coating, and a separate shielding gas is employed to protect the weld. This eliminates slag, the hard residue from the flux that builds up after welding and must be chipped off to reveal the completed weld.\nThe choice of a shielding gas depends on several factors, most importantly the type of material being welded and the process variation being used. Pure inert gases such as argon and helium are only used for nonferrous welding; with steel they do not provide adequate weld penetration (argon) or cause an erratic arc and encourage spatter (with helium). Pure carbon dioxide, on the other hand, allows for deep penetration welds but encourages oxide formation, which adversely affects the mechanical properties of the weld. 
Its low cost makes it an attractive choice, but because of the reactivity of the arc plasma, spatter is unavoidable and welding thin materials is difficult. As a result, argon and carbon dioxide are frequently mixed in a 75%/25% to 90%/10% mixture. Generally, in short circuit GMAW, higher carbon dioxide content increases the weld heat and energy when all other weld parameters (volts, current, electrode type and diameter) are held the same. As the carbon dioxide content increases over 20%, spray transfer GMAW becomes increasingly problematic, especially with smaller electrode diameters.\nArgon is also commonly mixed with other gases: oxygen, helium, hydrogen and nitrogen. The addition of up to 5% oxygen (like the higher concentrations of carbon dioxide mentioned above) can be helpful in welding stainless steel; however, in most applications carbon dioxide is preferred. Increased oxygen makes the shielding gas oxidize the electrode, which can lead to porosity in the deposit if the electrode does not contain sufficient deoxidizers. Excessive oxygen, especially when used in applications for which it is not prescribed, can lead to brittleness in the heat affected zone. Argon-helium mixtures are extremely inert, and can be used on nonferrous materials. A helium concentration of 50–75% raises the required voltage and increases the heat in the arc, due to helium's higher ionization temperature. Hydrogen is sometimes added to argon in small concentrations (up to about 5%) for welding nickel and thick stainless steel workpieces. In higher concentrations (up to 25% hydrogen), it may be used for welding conductive materials such as copper. However, it should not be used on steel, aluminum or magnesium because it can cause porosity and hydrogen embrittlement.\nShielding gas mixtures of three or more gases are also available. Mixtures of argon, carbon dioxide and oxygen are marketed for welding steels. Other mixtures add a small amount of helium to argon-oxygen combinations.
These mixtures are claimed to allow higher arc voltages and welding speed. Helium also sometimes serves as the base gas, with small amounts of argon and carbon dioxide added. However, because it is less dense than air, helium is less effective at shielding the weld than argon—which is denser than air. It also can lead to arc stability and penetration issues, and increased spatter, due to its much more energetic arc plasma. Helium is also substantially more expensive than other shielding gases. Other specialized and often proprietary gas mixtures claim even greater benefits for specific applications.\nDespite being poisonous, trace amounts of nitric oxide can be used to prevent the even more troublesome ozone from being formed in the arc.\nThe desirable rate of shielding-gas flow depends primarily on weld geometry, speed, current, the type of gas, and the metal transfer mode. Welding flat surfaces requires higher flow than welding grooved materials, since gas disperses more quickly. Faster welding speeds, in general, mean that more gas must be supplied to provide adequate coverage. Additionally, higher current requires greater flow, and generally, more helium is required to provide adequate coverage than if argon is used. Perhaps most importantly, the four primary variations of GMAW have differing shielding gas flow requirements—for the small weld pools of the short circuiting and pulsed spray modes, about 10 L/min (20 ft3/h) is generally suitable, whereas for globular transfer, around 15 L/min (30 ft3/h) is preferred. The spray transfer variation normally requires more shielding-gas flow because of its higher heat input and thus larger weld pool. Typical gas-flow amounts are approximately 20–25 L/min (40–50 ft3/h).\nGMAW-based 3-D printing.\nGMAW has also been used as a low-cost method to 3-D print metal objects. Various open source 3-D printers have been developed to use GMAW. 
Such components fabricated from aluminum compete with more traditionally manufactured components on mechanical strength. By deliberately forming a poor weld on the first layer, GMAW 3-D printed parts can be removed from the substrate with a hammer.\nOperation.\nFor most of its applications, gas metal arc welding is a fairly simple welding process to learn, requiring no more than a week or two to master basic welding technique. Even when welding is performed by well-trained operators, weld quality can fluctuate, since it depends on a number of external factors. All GMAW is dangerous, though perhaps less so than some other welding methods, such as shielded metal arc welding.\nTechnique.\nGMAW's basic technique is uncomplicated, with most individuals able to achieve reasonable proficiency in a few weeks, assuming proper training and sufficient practice. As much of the process is automated, GMAW relieves the welder (operator) of the burden of maintaining a precise arc length, as well as feeding filler metal into the weld puddle, coordinated operations that are required in other manual welding processes, such as shielded metal arc. GMAW requires only that the welder guide the gun with proper position and orientation along the area being welded, as well as periodically clean the gun's gas nozzle to remove spatter buildup. Additional skill includes knowing how to adjust the welder so the voltage, wire feed rate and gas flow rate are correct for the materials being welded and the wire size being employed.\nMaintaining a relatively constant contact tip-to-work distance (the "stick-out" distance) is important. Excessive stick-out distance may cause the wire electrode to prematurely melt, causing a sputtering arc, and may also cause the shielding gas to rapidly disperse, degrading the quality of the weld. In contrast, insufficient stick-out may increase the rate at which spatter builds up inside the gun's nozzle and, in extreme cases, may cause damage to the gun's contact tip.
Stick-out distance varies for different GMAW weld processes and applications.\nThe orientation of the gun relative to the weldment is also important. It should be held so as to bisect the angle between the workpieces; that is, at 45 degrees for a fillet weld and 90 degrees for welding a flat surface. The travel angle, or lead angle, is the angle of the gun with respect to the direction of travel, and it should generally remain approximately vertical. However, the desirable angle changes somewhat depending on the type of shielding gas used—with pure inert gases, the bottom of the torch is often slightly in front of the upper section, while the opposite is true when the welding atmosphere is carbon dioxide.\nPosition welding, that is, welding vertical or overhead joints, may require the use of a weaving technique to assure proper weld deposition and penetration. In position welding, gravity tends to cause molten metal to run out of the puddle, resulting in cratering and undercutting, two conditions that produce a weak weld. Weaving constantly moves the fusion zone around so as to limit the amount of metal deposited at any one point. Surface tension then assists in keeping the molten metal in the puddle until it is able to solidify. Development of position welding skill takes some experience, but is usually soon mastered.\nQuality.\nTwo of the most prevalent quality problems in GMAW are dross and porosity. If not controlled, they can lead to weaker, less ductile welds. Dross is an especially common problem in aluminium GMAW welds, normally coming from particles of aluminium oxide or aluminum nitride present in the electrode or base materials. Electrodes and workpieces must be brushed with a wire brush or chemically treated to remove oxides on the surface. Any oxygen in contact with the weld pool, whether from the atmosphere or the shielding gas, causes dross as well. 
As a result, sufficient flow of inert shielding gases is necessary, and welding in moving air should be avoided.\nIn GMAW, the primary cause of porosity is gas entrapment in the weld pool, which occurs when the metal solidifies before the gas escapes. The gas can come from impurities in the shielding gas or on the workpiece, as well as from an excessively long or violent arc. Generally, the amount of gas entrapped is directly related to the cooling rate of the weld pool. Because of aluminum's higher thermal conductivity, its welds are especially susceptible to greater cooling rates and thus additional porosity. To reduce it, the workpiece and electrode should be clean, the welding speed diminished and the current set high enough to provide sufficient heat input and stable metal transfer but low enough that the arc remains steady. Preheating can also help reduce the cooling rate in some cases by reducing the temperature gradient between the weld area and the base metal.\nSafety.\nArc welding in any form can be dangerous if proper precautions are not taken. Since GMAW employs an electric arc, welders must wear suitable protective clothing, including heavy gloves and protective long sleeve jackets, to minimize exposure to the arc itself, as well as intense heat, sparks and hot metal. The intense ultraviolet radiation of the arc may cause sunburn-like damage to exposed skin, as well as a condition known as arc eye, an inflammation of the cornea, or, in cases of prolonged exposure, irreversible damage to the eye's retina. Conventional welding helmets contain dark face plates to prevent this exposure. Newer helmet designs feature a liquid crystal-type face plate that self-darkens upon exposure to the arc. Transparent welding curtains, made of a polyvinyl chloride plastic film, are often used to shield nearby workers and bystanders from exposure to the arc.\nWelders are often exposed to hazardous gases and airborne particulate matter.
GMAW produces smoke containing particles of various types of oxides, and the size of the particles tends to influence the toxicity of the fumes. Smaller particles present greater danger. Concentrations of carbon dioxide and ozone can prove dangerous if ventilation is inadequate. Other precautions include keeping combustible materials away from the workplace, and having a working fire extinguisher nearby.\nMetal transfer modes.\nThe three transfer modes in GMAW are globular, short-circuiting, and spray. There are a few recognized variations of these three transfer modes including modified short-circuiting and pulsed-spray.\nGlobular.\nGMAW with globular metal transfer is considered the least desirable of the three major GMAW variations, because of its tendency to produce high heat, a poor weld surface, and spatter. The method was originally developed as a cost efficient way to weld steel using GMAW, because this variation uses carbon dioxide, a less expensive shielding gas than argon. Adding to its economic advantage was its high deposition rate, allowing welding speeds of up to 110 mm/s (250 in/min). As the weld is made, a ball of molten metal from the electrode tends to build up on the end of the electrode, often in irregular shapes with a larger diameter than the electrode itself. When the droplet finally detaches either by gravity or short circuiting, it falls to the workpiece, leaving an uneven surface and often causing spatter. As a result of the large molten droplet, the process is generally limited to flat and horizontal welding positions, requires thicker workpieces, and results in a larger weld pool.\nShort-circuiting.\nFurther developments in welding steel with GMAW led to a variation known as short-circuit transfer (SCT) or short-arc GMAW, in which the current is lower than for the globular method. 
As a result of the lower current, the heat input for the short-arc variation is considerably reduced, making it possible to weld thinner materials while decreasing the amount of distortion and residual stress in the weld area. As in globular welding, molten droplets form on the tip of the electrode, but instead of dropping to the weld pool, they bridge the gap between the electrode and the weld pool as a result of the lower wire feed rate. This causes a short circuit and extinguishes the arc, but it is quickly reignited after the surface tension of the weld pool pulls the molten metal bead off the electrode tip. This process is repeated about 100 times per second, making the arc appear constant to the human eye. This type of metal transfer provides better weld quality and less spatter than the globular variation, and allows for welding in all positions, albeit with slower deposition of weld material. Setting the weld process parameters (volts, amps and wire feed rate) within a relatively narrow band is critical to maintaining a stable arc: generally between 100 and 200 amperes at 17 to 22 volts for most applications. Also, using short-arc transfer can result in lack of fusion and insufficient penetration when welding thicker materials, due to the lower arc energy and rapidly freezing weld pool. Like the globular variation, it can only be used on ferrous metals.\nCold Metal Transfer.\nFor thin materials, Cold Metal Transfer (CMT) is used by reducing the current when a short circuit is registered, producing many drops per second. CMT can be used for aluminum.\nSpray.\nSpray transfer GMAW was the first metal transfer method used in GMAW, and well-suited to welding aluminium and stainless steel while employing an inert shielding gas. In this GMAW process, the weld electrode metal is rapidly passed along the stable electric arc from the electrode to the workpiece, essentially eliminating spatter and resulting in a high-quality weld finish. 
As the current and voltage increase beyond the range of short circuit transfer, the weld electrode metal transfer transitions from larger globules through small droplets to a vaporized stream at the highest energies. Because this vaporized spray transfer variation of the GMAW weld process requires higher voltage and current than short circuit transfer, and because of the resulting higher heat input and larger weld pool area (for a given weld electrode diameter), it is generally used only on workpieces of thicknesses above about 6.4 mm (0.25 in).\nAlso, because of the large weld pool, it is often limited to flat and horizontal welding positions and is sometimes also used for vertical-down welds. It is generally not practical for root pass welds. When a smaller electrode is used in conjunction with lower heat input, its versatility increases. The maximum deposition rate for spray arc GMAW is relatively high—about 600 mm/s (1500 in/min).\nPulsed-spray.\nA variation of the spray transfer mode, pulse-spray is based on the principles of spray transfer but uses a pulsing current to melt the filler wire and allow one small molten droplet to fall with each pulse. The pulses allow the average current to be lower, decreasing the overall heat input and thereby decreasing the size of the weld pool and heat-affected zone while making it possible to weld thin workpieces. The pulse provides a stable arc and no spatter, since no short-circuiting takes place. This also makes the process suitable for nearly all metals, and thicker electrode wire can be used as well. The smaller weld pool gives the variation greater versatility, making it possible to weld in all positions. In comparison with short arc GMAW, this method has a somewhat slower maximum speed (85 mm/s or 200 in/min) and the process also requires that the shielding gas be primarily argon with a low carbon dioxide concentration.
Additionally, it requires a special power source capable of providing current pulses with a frequency between 30 and 400 pulses per second. However, the method has gained popularity, since it requires lower heat input and can be used to weld thin workpieces, as well as nonferrous materials.\nComparison with flux-cored wire-fed arc welding.\nFlux-cored, self-shielding or gasless wire-fed welding was developed for simplicity and portability. This avoids the gas system of conventional GMAW and uses a cored wire containing a solid flux. This flux vaporises during welding and produces a plume of shielding gas. Although described as a 'flux', this compound has little activity and acts mostly as an inert shield. The wire is of slightly larger diameter than for a comparable gas-shielded weld, to allow room for the flux. The smallest available is 0.8 mm diameter, compared to 0.6 mm for solid wire. The shield vapor is slightly active, rather than inert, so the process is always MAG (active gas shield) rather than MIG (inert gas shield). This limits the process to steel and excludes aluminium.\nThese gasless machines operate as DCEN, rather than the DCEP usually used for GMAW solid wire. DCEP, or DC Electrode Positive, makes the welding wire into the positively-charged anode, which is the hotter side of the arc. Provided that it is switchable from DCEN to DCEP, a gas-shielded wire-feed machine may also be used for flux-cored wire.\nFlux-cored wire is considered to have some advantages for outdoor welding on-site, as the shielding gas plume is less likely to be blown away in a wind than shield gas from a conventional nozzle.
A slight drawback is that, like SMAW (stick) welding, there may be some flux deposited over the weld bead, requiring more of a cleaning process between passes.\nFlux-cored welding machines are most popular at the hobbyist level, as the machines are slightly simpler but mainly because they avoid the cost of providing shield gas, either through a rented cylinder or with the high cost of disposable cylinders.", "Automation-Control": 0.6018926501, "Qwen2": "Yes"} {"id": "26848621", "revid": "10289486", "url": "https://en.wikipedia.org/wiki?curid=26848621", "title": "Cipher security summary", "text": "This article summarizes publicly known attacks against block ciphers and stream ciphers. Note that there may be attacks that are not publicly known, and not all entries may be up to date.\nBest attack.\nThis column lists the complexity of the attack:\nCommon ciphers.\nKey or plaintext recovery attacks.\nAttacks that lead to disclosure of the key or plaintext.\nDistinguishing attacks.\nAttacks that allow distinguishing ciphertext from random data.\nLess common ciphers.\nKey recovery attacks.\nAttacks that lead to disclosure of the key.\nDistinguishing attacks.\nAttacks that allow distinguishing ciphertext from random data.", "Automation-Control": 0.9744482636, "Qwen2": "Yes"} {"id": "7177594", "revid": "6803239", "url": "https://en.wikipedia.org/wiki?curid=7177594", "title": "WS-CAF", "text": "Web Services Composite Application Framework (WS-CAF) is an open framework developed by OASIS. Its purpose is to define a generic and open framework for applications that contain multiple services used together, which are sometimes referred to as composite applications.
WS-CAF characteristics include interoperability, ease of implementation and ease of use.\nScope.\nThe scope of WS-CAF includes:\nInput specifications.\nThe WS-CAF accepts the following Web services specifications as input:\nBenefits.\nThe benefits and results of CAF are intended to be standard and interoperable ways to:", "Automation-Control": 0.9851732254, "Qwen2": "Yes"} {"id": "13418907", "revid": "57939", "url": "https://en.wikipedia.org/wiki?curid=13418907", "title": "IEC 62379", "text": "IEC 62379 is a control engineering standard for the common control interface for networked digital audio and video products. IEC 62379 uses Simple Network Management Protocol to communicate control and monitoring information.\nIt is a family of standards that specifies a control framework for networked audio and video equipment and is published by the International Electrotechnical Commission. It has been designed to provide a means for entering a common set of management commands to control the transmission across the network as well as other functions within the interfaced equipment.\nOrganization.\nThe parts within this standard include:\nPart one is common to all equipment that conforms to IEC 62379, and a preview of the published document can be downloaded from the IEC web store, a section of the International Electrotechnical Commission web site.
More information is available at the project group web site.\nHistory.\n2 October 2008.\nPart 2, Audio, has now been published and a preview can be downloaded from the IEC web store, a section of the International Electrotechnical Commission web site.\n31 August 2011.\nA first edition of Part 3, Video, has been submitted to the IEC (International Electrotechnical Commission) technical committee for the commencement of the standardization process for this part.\nIt contains the video MIB required by Part 7.\nPart 7, Measurement, has been submitted to the IEC technical committee for the commencement of the standardization process for this part.\nThis part specifies those aspects that are specific to the measurement requirements of the EBU ECN-IPM Group, a member of the Expert Communities Networks. An associated document, EBU TECH 3345, has recently been published by the EBU (European Broadcasting Union).\n16 December 2011.\nPart 3 (Document 100/1896/NP) and Part 7 (Document 100/1897/NP) have been approved by IEC TC 100.\n3 April 2014.\nPart 5.2, Transmission over Networks - Signalling, has now been published and can be downloaded from the IEC web store.\n5 June 2015.\nIEC 62379-3:2015 Common control interface for networked digital audio and video products - Part 3: Video has now been published and can be downloaded from the IEC web store.\n16 June 2015.\nIEC 62379-7:2015 Common control interface for networked digital audio and video products - Part 7: Measurements has now been published and can be downloaded from the IEC web store.\nIEC 62379-7:2015 is the standardised (and extended) version of EBU TECH 3345 - End-to-End IP Network Measurement - MIB & Parameters, published by the EBU (European Broadcasting Union).", "Automation-Control": 0.799223721, "Qwen2": "Yes"} {"id": "9900080", "revid": "7903804", "url": "https://en.wikipedia.org/wiki?curid=9900080", "title": "Transversality condition", "text": "In
optimal control theory, a transversality condition is a boundary condition for the terminal values of the costate variables. Transversality conditions are among the necessary conditions for optimality in infinite-horizon optimal control problems without an endpoint constraint on the state variables.", "Automation-Control": 0.9999965429, "Qwen2": "Yes"} {"id": "23091750", "revid": "38132428", "url": "https://en.wikipedia.org/wiki?curid=23091750", "title": "JACK Intelligent Agents", "text": "JACK Intelligent Agents is a framework in Java for multi-agent system development. JACK Intelligent Agents was built by Agent Oriented Software Pty. Ltd. (AOS) and is a third-generation agent platform building on the experiences of the Procedural Reasoning System (PRS) and Distributed Multi-Agent Reasoning System (dMARS). JACK is one of the few multi-agent systems that uses the BDI software model and provides its own Java-based plan language and graphical planning tools.\nHistory.\nJACK Intelligent Agents was initially developed in 1997 by ex-members of the Australian Artificial Intelligence Institute (AAII or A2I2) who were involved in the design, implementation, and application of PRS at SRI International and/or dMARS at the AAII. The JACK platform was written for commercial application of the multi-agent paradigm (a COTS product) to complex problem solving and was the basis for starting the company Agent Oriented Software (AOS), where it remains the flagship product.\nFeatures.\nJACK Intelligent Agents is a mature commercial multi-agent platform that has been under active research, development, and domain-specific application for more than 10 years. The following provides a listing of the platform's key differentiating features.\nExtensions.\nThe JACK platform has been extended a number of times since its inception.
Most of the extensions, such as JACK Teams and CoJACK, were developed by or in collaboration with AOS.", "Automation-Control": 0.7902204394, "Qwen2": "Yes"} {"id": "26078323", "revid": "1076729208", "url": "https://en.wikipedia.org/wiki?curid=26078323", "title": "Multidimensional system", "text": "In mathematical systems theory, a multidimensional system or m-D system is a system in which there is not only one independent variable (such as time), but several independent variables.\nImportant problems such as factorization and stability of \"m\"-D systems (\"m\" > 1) have recently attracted the interest of many researchers and practitioners. The reason is that the factorization and stability are not straightforward extensions of the factorization and stability of 1-D systems because, for example, the fundamental theorem of algebra does not hold in the ring of \"m\"-D (\"m\" > 1) polynomials.\nApplications.\nMultidimensional systems or \"m\"-D systems are the necessary mathematical background for modern digital image processing with many applications in biomedicine, X-ray technology and satellite communications.\nThere are also some studies combining \"m\"-D systems with partial differential equations (PDEs).\nLinear multidimensional state-space model.\nA state-space model is a representation of a system in which the effect of all \"prior\" input values is contained by a state vector. In the case of an \"m\"-D system, each dimension has a state vector that contains the effect of prior inputs relative to that dimension. The collection of all such dimensional state vectors at a point constitutes the total state vector at the point.\nConsider a uniform discrete space linear two-dimensional (2d) system that is space invariant and causal.
It can be represented in matrix-vector form as follows:\nRepresent the input vector at each point formula_1 by formula_2, the output vector by formula_3, the horizontal state vector by formula_4 and the vertical state vector by formula_5. Then the operation at each point is defined by:\nwhere formula_7 and formula_8 are matrices of appropriate dimensions.\nThese equations can be written more compactly by combining the matrices:\nGiven input vectors formula_2 at each point and initial state values, the value of each output vector can be computed by recursively performing the operation above.\nMultidimensional transfer function.\nA discrete linear two-dimensional system is often described by a partial difference equation in the form:\nformula_11\nwhere formula_12 is the input and formula_3 is the output at point formula_1, and formula_15 and formula_16 are constant coefficients.\nTo derive a transfer function for the system, the 2d Z-transform is applied to both sides of the equation above.\nTransposing yields the transfer function formula_18:\nSo, given any pattern of input values, the 2d Z-transform of the pattern is computed and then multiplied by the transfer function formula_18 to produce the Z-transform of the system output.\nRealization of a 2d transfer function.\nOften an image processing or other m-D computational task is described by a transfer function that has certain filtering properties, but it is desired to convert it to state-space form for more direct computation. Such conversion is referred to as realization of the transfer function.\nConsider a 2d linear spatially invariant causal system having an input-output relationship described by:\nTwo cases are individually considered: 1) the bottom summation is simply the constant 1; 2) the top summation is simply a constant formula_22. Case 1 is often called the “all-zero” or “finite impulse response” case, whereas case 2 is called the “all-pole” or “infinite impulse response” case.
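The recursive evaluation of the 2-D state-space model described above can be sketched in code. The following is a minimal sketch of a causal Roesser-style recursion with scalar horizontal and vertical state blocks; the coefficient names and values are illustrative assumptions for the example, not taken from the article:

```python
def roesser_2d(u, a=(0.5, 0.1, 0.2, 0.4), b=(1.0, 1.0), c=(1.0, 1.0), d=0.0):
    """Evaluate a causal 2-D state-space (Roesser-style) model with scalar
    horizontal/vertical state blocks over an input grid u[i][j].
    Boundary states outside the grid are taken to be zero."""
    M, N = len(u), len(u[0])
    xh = [[0.0] * N for _ in range(M + 1)]      # horizontal state x^h(i, j)
    xv = [[0.0] * (N + 1) for _ in range(M)]    # vertical state  x^v(i, j)
    y = [[0.0] * N for _ in range(M)]
    a1, a2, a3, a4 = a
    b1, b2 = b
    c1, c2 = c
    for i in range(M):                          # raster-order scan (causal)
        for j in range(N):
            h, v = xh[i][j], xv[i][j]
            y[i][j] = c1 * h + c2 * v + d * u[i][j]
            xh[i + 1][j] = a1 * h + a2 * v + b1 * u[i][j]   # advance along i
            xv[i][j + 1] = a3 * h + a4 * v + b2 * u[i][j]   # advance along j
    return y
```

Applying the recursion to a unit impulse at the grid origin yields the model's 2-D impulse response over the grid, which is one way to check a realization against its transfer function.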
The general situation can be implemented as a cascade of the two individual cases. The solution for case 1 is considerably simpler than case 2 and is shown below.\nExample: all zero or finite impulse response.\nThe state-space vectors will have the following dimensions:\nEach term in the summation involves a negative (or zero) power of formula_26 and of formula_27, which correspond to a delay (or shift) along the respective dimension of the input formula_12. This delay can be effected by placing formula_29’s along the superdiagonal in the formula_30 and formula_31 matrices and the multiplying coefficients formula_32 in the proper positions in the formula_33. The value formula_34 is placed in the upper position of the formula_35 matrix, which will multiply the input formula_12 and add it to the first component of the formula_37 vector. Also, a value of formula_38 is placed in the formula_8 matrix, which will multiply the input formula_12 and add it to the output formula_41.\nThe matrices then appear as follows:\nformula_45", "Automation-Control": 0.9485242963, "Qwen2": "Yes"} {"id": "362565", "revid": "1703986", "url": "https://en.wikipedia.org/wiki?curid=362565", "title": "Optimal control", "text": "Optimal control theory is a branch of mathematical optimization that deals with finding a control for a dynamical system over a period of time such that an objective function is optimized. It has numerous applications in science, engineering and operations research. For example, the dynamical system might be a spacecraft with controls corresponding to rocket thrusters, and the objective might be to reach the moon with minimum fuel expenditure. Or the dynamical system could be a nation's economy, with the objective to minimize unemployment; the controls in this case could be fiscal and monetary policy.
A dynamical system may also be introduced to embed operations research problems within the framework of optimal control theory.\nOptimal control is an extension of the calculus of variations, and is a mathematical optimization method for deriving control policies. The method is largely due to the work of Lev Pontryagin and Richard Bellman in the 1950s, after contributions to calculus of variations by Edward J. McShane. Optimal control can be seen as a control strategy in control theory.\nGeneral method.\nOptimal control deals with the problem of finding a control law for a given system such that a certain optimality criterion is achieved. A control problem includes a cost functional that is a function of state and control variables. An optimal control is a set of differential equations describing the paths of the control variables that minimize the cost function. The optimal control can be derived using Pontryagin's maximum principle (a necessary condition also known as Pontryagin's minimum principle or simply Pontryagin's principle), or by solving the Hamilton–Jacobi–Bellman equation (a sufficient condition).\nWe begin with a simple example. Consider a car traveling in a straight line on a hilly road. The question is, how should the driver press the accelerator pedal in order to \"minimize\" the total traveling time? In this example, the term \"control law\" refers specifically to the way in which the driver presses the accelerator and shifts the gears. The \"system\" consists of both the car and the road, and the \"optimality criterion\" is the minimization of the total traveling time. Control problems usually include ancillary constraints. For example, the amount of available fuel might be limited, the accelerator pedal cannot be pushed through the floor of the car, speed limits, etc.\nA proper cost function will be a mathematical expression giving the traveling time as a function of the speed, geometrical considerations, and initial conditions of the system. 
Constraints are often interchangeable with the cost function.\nAnother related optimal control problem may be to find the way to drive the car so as to minimize its fuel consumption, given that it must complete a given course in a time not exceeding some amount. Yet another related control problem may be to minimize the total monetary cost of completing the trip, given assumed monetary prices for time and fuel.\nA more abstract framework goes as follows. Minimize the continuous-time cost functional\nformula_1\nsubject to the first-order dynamic constraints (the state equation)\nformula_2\nthe algebraic \"path constraints\"\nformula_3\nand the endpoint conditions\nformula_4\nwhere formula_5 is the \"state\", formula_6 is the \"control\", formula_7 is the independent variable (generally speaking, time), formula_8 is the initial time, and formula_9 is the terminal time. The terms formula_10 and formula_11 are called the \"endpoint cost \" and the \"running cost\" respectively. In the calculus of variations, formula_10 and formula_11 are referred to as the Mayer term and the \"Lagrangian\", respectively. Furthermore, it is noted that the path constraints are in general \"inequality\" constraints and thus may not be active (i.e., equal to zero) at the optimal solution. It is also noted that the optimal control problem as stated above may have multiple solutions (i.e., the solution may not be unique). Thus, it is most often the case that any solution formula_14 to the optimal control problem is \"locally minimizing\".\nLinear quadratic control.\nA special case of the general nonlinear optimal control problem given in the previous section is the \"linear quadratic\" (LQ) optimal control problem. The LQ problem is stated as follows. 
Minimize the \"quadratic\" continuous-time cost functional\nformula_15\nsubject to the \"linear\" first-order dynamic constraints\nformula_16\nand the initial condition\nformula_17\nA particular form of the LQ problem that arises in many control system problems is that of the \"linear quadratic regulator\" (LQR) where all of the matrices (i.e., formula_18, formula_19, formula_20, and formula_21) are \"constant\", the initial time is arbitrarily set to zero, and the terminal time is taken in the limit formula_22 (this last assumption is what is known as \"infinite horizon\"). The LQR problem is stated as follows. Minimize the infinite horizon quadratic continuous-time cost functional\nformula_23\nsubject to the \"linear time-invariant\" first-order dynamic constraints\nformula_24\nand the initial condition\nformula_17\nIn the finite-horizon case the matrices are restricted in that formula_20 and formula_21 are positive semi-definite and positive definite, respectively. In the infinite-horizon case, however, the matrices formula_20 and formula_21 are not only positive-semidefinite and positive-definite, respectively, but are also \"constant\". These additional restrictions on\nformula_20 and formula_21 in the infinite-horizon case are enforced to ensure that the cost functional remains positive. Furthermore, in order to ensure that the cost function is \"bounded\", the additional restriction is imposed that the pair formula_32 is \"controllable\". Note that the LQ or LQR cost functional can be thought of physically as attempting to minimize the \"control energy\" (measured as a quadratic form).\nThe infinite horizon problem (i.e., LQR) may seem overly restrictive and essentially useless because it assumes that the operator is driving the system to zero-state and hence driving the output of the system to zero. This is indeed correct. However, the problem of driving the output to a desired nonzero level can be solved \"after\" the zero-output problem is.
In fact, it can be proved that this secondary LQR problem can be solved in a very straightforward manner. It has been shown in classical optimal control theory that the LQ (or LQR) optimal control has the feedback form\nformula_33\nwhere formula_34 is a properly dimensioned matrix, given as\nformula_35\nand formula_36 is the solution of the differential Riccati equation. The differential Riccati equation is given as\nformula_37\nFor the finite horizon LQ problem, the Riccati equation is integrated backward in time using the terminal boundary condition\nformula_38\nFor the infinite horizon LQR problem, the differential Riccati equation is replaced with the \"algebraic\" Riccati equation (ARE) given as\nformula_39\nBecause the ARE arises from the infinite-horizon problem, the matrices formula_18, formula_19, formula_20, and formula_21 are all \"constant\". It is noted that there are in general multiple solutions to the algebraic Riccati equation and the \"positive definite\" (or positive semi-definite) solution is the one that is used to compute the feedback gain. The LQ (LQR) problem was elegantly solved by Rudolf E. Kálmán.\nNumerical methods for optimal control.\nOptimal control problems are generally nonlinear and therefore generally do not have analytic solutions (in contrast to, e.g., the linear-quadratic optimal control problem). As a result, it is necessary to employ numerical methods to solve optimal control problems. In the early years of optimal control (1950s to 1980s) the favored approach for solving optimal control problems was that of \"indirect methods\". In an indirect method, the calculus of variations is employed to obtain the first-order optimality conditions. These conditions result in a two-point (or, in the case of a complex problem, a multi-point) boundary-value problem. This boundary-value problem actually has a special structure because it arises from taking the derivative of a Hamiltonian.
Thus, the resulting dynamical system is a Hamiltonian system of the form\nformula_44\nwhere\nformula_45\nis the \"augmented Hamiltonian\", and in an indirect method the boundary-value problem is solved (using the appropriate boundary or \"transversality\" conditions). The beauty of using an indirect method is that the state and adjoint (i.e., formula_46) are solved for, and the resulting solution is readily verified to be an extremal trajectory. The disadvantage of indirect methods is that the boundary-value problem is often extremely difficult to solve (particularly for problems that span large time intervals or problems with interior point constraints). A well-known software program that implements indirect methods is BNDSCO.\nThe approach that has risen to prominence in numerical optimal control since the 1980s is that of so-called \"direct methods\". In a direct method, the state or the control, or both, are approximated using an appropriate function approximation (e.g., polynomial approximation or piecewise constant parameterization). Simultaneously, the cost functional is approximated as a \"cost function\". Then, the coefficients of the function approximations are treated as optimization variables and the problem is \"transcribed\" to a nonlinear optimization problem of the form:\nMinimize\nformula_47\nsubject to the algebraic constraints\nformula_48\nDepending upon the type of direct method employed, the size of the nonlinear optimization problem can be quite small (e.g., as in a direct shooting or quasilinearization method), moderate (e.g., pseudospectral optimal control) or may be quite large (e.g., a direct collocation method). In the latter case (i.e., a collocation method), the nonlinear optimization problem may have literally thousands to tens of thousands of variables and constraints.
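To make the idea of transcription concrete, consider a toy minimum-energy problem for the scalar system x[k+1] = x[k] + u[k] with x[0] = 0 and the endpoint target x[N] = 1. A direct, shooting-style transcription treats the N control values as the optimization variables and enforces the endpoint with a quadratic penalty; this is a minimal sketch under those stated assumptions, not any particular package's method:

```python
def direct_shooting(N=5, w=100.0, iters=5000):
    """Transcribe a toy minimum-energy optimal control problem into a
    finite-dimensional NLP and solve it by gradient descent.

    Dynamics: x[k+1] = x[k] + u[k], x[0] = 0   (assumed toy system)
    Cost:     sum(u[k]^2) + w * (x[N] - 1)^2   (quadratic endpoint penalty)
    """
    u = [0.0] * N
    step = 1.0 / (2.0 + 2.0 * w * N)   # 1 / Lipschitz constant of the gradient
    for _ in range(iters):
        xN = sum(u)                     # terminal state under these controls
        g = [2.0 * uk + 2.0 * w * (xN - 1.0) for uk in u]
        u = [uk - step * gk for uk, gk in zip(u, g)]
    return u, sum(u)
```

With these illustrative values the penalized optimum spreads the control effort evenly, u[k] close to 0.2 with x[N] close to 1; a hard-constrained solve of the same transcription would give exactly u[k] = 1/N.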
Given the size of many NLPs arising from a direct method, it may appear somewhat counter-intuitive that solving the nonlinear optimization problem is easier than solving the boundary-value problem. It is, however, the case that the NLP is easier to solve than the boundary-value problem. The reason for the relative ease of computation, particularly of a direct collocation method, is that the NLP is \"sparse\" and many well-known software programs exist (e.g., SNOPT) to solve large sparse NLPs. As a result, the range of problems that can be solved via direct methods (particularly direct \"collocation methods\", which are very popular these days) is significantly larger than the range of problems that can be solved via indirect methods. In fact, direct methods have become so popular that many people have written elaborate software programs that employ these methods. In particular, many such programs include \"DIRCOL\", SOCS, OTIS, GESOP/ASTOS, DITAN, and PyGMO/PyKEP. In recent years, due to the advent of the MATLAB programming language, optimal control software in MATLAB has become more common. Examples of academically developed MATLAB software tools implementing direct methods include \"RIOTS\", \"DIDO\", \"DIRECT\", FALCON.m, and \"GPOPS\", while an example of an industry-developed MATLAB tool is \"PROPT\". These software tools have significantly increased the opportunity for people to explore complex optimal control problems both for academic research and industrial problems. Finally, it is noted that general-purpose MATLAB optimization environments such as TOMLAB have made coding complex optimal control problems significantly easier than was previously possible in languages such as C and FORTRAN.\nDiscrete-time optimal control.\nThe examples thus far have shown continuous time systems and control solutions.
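In the discrete-time setting, the LQ solution has the same backward-Riccati structure as the continuous-time feedback law described above. A minimal scalar sketch follows; the system and weight values a = b = q = r = 1 are assumptions chosen for illustration:

```python
def lqr_backward(a=1.0, b=1.0, q=1.0, r=1.0, qf=1.0, T=50):
    """Backward Riccati recursion for a scalar discrete-time LQ problem.

    Dynamics: x[k+1] = a*x[k] + b*u[k]
    Cost:     sum(q*x[k]^2 + r*u[k]^2) + qf*x[T]^2
    Returns the stage feedback gains (u[k] = -K[k]*x[k]) and the final
    cost-to-go coefficient P, which approaches the ARE solution as T grows.
    """
    P = qf
    gains = []
    for _ in range(T):
        K = a * b * P / (r + b * b * P)     # feedback gain at this stage
        P = q + a * a * P - a * b * P * K   # Riccati update (backward in time)
        gains.append(K)
    gains.reverse()                         # gains[k] applies at stage k
    return gains, P
```

For these unit values the recursion converges to the stationary solution P = (1 + sqrt(5))/2 (the golden ratio), with steady-state gain K = 1/P, which is one way to sanity-check an implementation against the algebraic Riccati equation.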
In fact, as optimal control solutions are now often implemented digitally, contemporary control theory is now primarily concerned with discrete-time systems and solutions. The Theory of Consistent Approximations provides conditions under which solutions to a series of increasingly accurate discretized optimal control problems converge to the solution of the original, continuous-time problem. Not all discretization methods have this property, even seemingly obvious ones. For instance, using a variable step-size routine to integrate the problem's dynamic equations may generate a gradient which does not converge to zero (or point in the right direction) as the solution is approached. The direct method \"RIOTS\" is based on the Theory of Consistent Approximations.\nExamples.\nA common solution strategy in many optimal control problems is to solve for the costate (sometimes called the shadow price) formula_49. The costate summarizes in one number the marginal value of expanding or contracting the state variable next turn. The marginal value is not only the gains accruing next turn but also those associated with the remaining duration of the program. It is nice when formula_49 can be solved analytically, but usually, the most one can do is describe it sufficiently well that the intuition can grasp the character of the solution and an equation solver can solve numerically for the values.\nHaving obtained formula_49, the turn-t optimal value for the control can usually be solved as a differential equation conditional on knowledge of formula_49. Again, it is infrequent, especially in continuous-time problems, that one obtains the value of the control or the state explicitly. Usually, the strategy is to solve for thresholds and regions that characterize the optimal control and use a numerical solver to isolate the actual choice values in time.\nFinite time.\nConsider the problem of a mine owner who must decide at what rate to extract ore from their mine.
They own rights to the ore from date formula_53 to date formula_54. At date formula_53 there is formula_56 ore in the ground, and the time-dependent amount of ore formula_57 left in the ground declines at the rate formula_58 at which the mine owner extracts it. The mine owner extracts ore at cost formula_59 (the cost of extraction increasing with the square of the extraction speed and the inverse of the amount of ore left) and sells ore at a constant price formula_60. Any ore left in the ground at time formula_54 cannot be sold and has no value (there is no \"scrap value\"). The owner chooses the rate of extraction formula_58, varying with time, to maximize profits over the period of ownership with no time discounting.\nThe manager maximizes profit formula_65:\nformula_66\nwhere the state variable formula_57 evolves as follows:\nformula_68\nForm the Hamiltonian and differentiate:\nformula_69\nAs the mine owner does not value the ore remaining at time formula_54,\nformula_71\nUsing the above equations, it is easy to solve for the differential equations governing formula_58 and formula_49:\nformula_74\nand using the initial and turn-T conditions, the functions can be solved to yield\nformula_75", "Automation-Control": 0.866058588, "Qwen2": "Yes"} {"id": "363360", "revid": "11112747", "url": "https://en.wikipedia.org/wiki?curid=363360", "title": "Lyapunov stability", "text": "Various types of stability may be discussed for the solutions of differential equations or difference equations describing dynamical systems. The most important type is that concerning the stability of solutions near to a point of equilibrium. This may be discussed by the theory of Aleksandr Lyapunov. In simple terms, if the solutions that start out near an equilibrium point formula_1 stay near formula_1 forever, then formula_1 is Lyapunov stable.
More strongly, if formula_1 is Lyapunov stable and all solutions that start out near formula_1 converge to formula_1, then formula_1 is said to be asymptotically stable (see asymptotic analysis). The notion of \"exponential stability\" guarantees a minimal rate of decay, i.e., an estimate of how quickly the solutions converge. The idea of Lyapunov stability can be extended to infinite-dimensional manifolds, where it is known as structural stability, which concerns the behavior of different but \"nearby\" solutions to differential equations. Input-to-state stability (ISS) applies Lyapunov notions to systems with inputs.\nHistory.\nLyapunov stability is named after Aleksandr Mikhailovich Lyapunov, a Russian mathematician who defended the thesis \"The General Problem of Stability of Motion\" at Kharkov University in 1892. A. M. Lyapunov was a pioneer in successful endeavors to develop a global approach to the analysis of the stability of nonlinear dynamical systems, in contrast to the widespread local method of linearizing them about points of equilibrium. His work, initially published in Russian and then translated to French, received little attention for many years. The mathematical theory of stability of motion, founded by A. M. Lyapunov, considerably anticipated its eventual implementation in science and technology. Moreover, Lyapunov himself did not make applications in this field, his own interest being in the stability of rotating fluid masses with astronomical application. He had no doctoral students who followed his research in the field of stability, and his life ended tragically with his suicide in 1918. For several decades the theory of stability sank into complete oblivion. The Russian-Soviet mathematician and mechanician Nikolay Gur'yevich Chetaev, working at the Kazan Aviation Institute in the 1930s, was the first to realize the magnitude of the discovery made by A. M. Lyapunov.
The contribution to the theory made by N. G. Chetaev was so significant that many mathematicians, physicists and engineers consider him Lyapunov's direct successor and the next-in-line scientific descendant in the creation and development of the mathematical theory of stability.\nThe interest in it suddenly skyrocketed during the Cold War period when the so-called \"Second Method of Lyapunov\" (see below) was found to be applicable to the stability of aerospace guidance systems which typically contain strong nonlinearities not treatable by other methods. A large number of publications appeared then and since in the control and systems literature.\nMore recently the concept of the Lyapunov exponent (related to Lyapunov's First Method of discussing stability) has received wide interest in connection with chaos theory. Lyapunov stability methods have also been applied to finding equilibrium solutions in traffic assignment problems.\nDefinition for continuous-time systems.\nConsider an autonomous nonlinear dynamical system\nwhere formula_9 denotes the system state vector, formula_10 an open set containing the origin, and formula_11 is a continuous vector field on formula_10. Suppose formula_13 has an equilibrium at formula_1 so that formula_15 then\nConceptually, the meanings of the above terms are the following:\nThe trajectory \"formula_32\" is (locally) \"attractive\" if\nfor all trajectories formula_35 that start close enough to formula_36, and \"globally attractive\" if this property holds for all trajectories.\nThat is, if \"x\" belongs to the interior of its stable manifold, it is \"asymptotically stable\" if it is both attractive and stable. (There are examples showing that attractivity does not imply asymptotic stability. 
Such examples are easy to create using homoclinic connections.)\nIf the Jacobian of the dynamical system at an equilibrium happens to be a stability matrix (i.e., if the real part of each eigenvalue is strictly negative), then the equilibrium is asymptotically stable.\nSystem of deviations.\nInstead of considering stability only near an equilibrium point (a constant solution formula_37), one can formulate similar definitions of stability near an arbitrary solution formula_32. However, one can reduce the more general case to that of an equilibrium by a change of variables called a \"system of deviations\". Define formula_39, obeying the differential equation:\nThis is no longer an autonomous system, but it has a guaranteed equilibrium point at formula_41 whose stability is equivalent to the stability of the original solution formula_32. \nLyapunov's second method for stability.\nLyapunov, in his original 1892 work, proposed two methods for demonstrating stability. The first method developed the solution in a series which was then proved convergent within limits. The second method, which is now referred to as the Lyapunov stability criterion or the Direct Method, makes use of a \"Lyapunov function V(x)\" which has an analogy to the potential function of classical dynamics. It is introduced as follows for a system formula_43 having a point of equilibrium at formula_44. Consider a function formula_45 such that\nThen \"V(x)\" is called a Lyapunov function and the system is stable in the sense of Lyapunov. (Note that formula_54 is required; otherwise for example formula_55 would \"prove\" that formula_56 is locally stable.) An additional condition called \"properness\" or \"radial unboundedness\" is required in order to conclude global stability. Global asymptotic stability (GAS) follows similarly.\nIt is easier to visualize this method of analysis by thinking of a physical system (e.g. vibrating spring and mass) and considering the energy of such a system. 
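The linearization test stated earlier, that an equilibrium is asymptotically stable when every eigenvalue of the Jacobian there has strictly negative real part, is easy to check numerically. A minimal sketch, using an illustrative system of our own choosing (a damped pendulum linearized at the origin, not an example from the text):

```python
import numpy as np

# Damped pendulum x1' = x2, x2' = -sin(x1) - 0.5*x2 (illustrative system).
# Its Jacobian evaluated at the equilibrium (0, 0):
J = np.array([[ 0.0,  1.0],
              [-1.0, -0.5]])

eigvals = np.linalg.eigvals(J)
# Stability-matrix test: every eigenvalue has strictly negative real part.
is_asymptotically_stable = bool(np.all(eigvals.real < 0))
```

Here the eigenvalues have real part −0.25, so the origin of the nonlinear pendulum is (locally) asymptotically stable.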
If the system loses energy over time and the energy is never restored then eventually the system must grind to a stop and reach some final resting state. This final state is called the attractor. However, finding a function that gives the precise energy of a physical system can be difficult, and for abstract mathematical systems, economic systems or biological systems, the concept of energy may not be applicable.\nLyapunov's realization was that stability can be proven without requiring knowledge of the true physical energy, provided a Lyapunov function can be found to satisfy the above constraints.\nDefinition for discrete-time systems.\nThe definition for discrete-time systems is almost identical to that for continuous-time systems. The definition below provides this, using an alternate language commonly used in more mathematical texts.\nLet (\"X\", \"d\") be a metric space and \"f\" : \"X\" → \"X\" a continuous function. A point \"x\" in \"X\" is said to be Lyapunov stable, if,\nWe say that \"x\" is asymptotically stable if it belongs to the interior of its stable set, \"i.e.\" if,\nStability for linear state space models.\nA linear state space model\nwhere formula_60 is a finite matrix, is asymptotically stable (in fact, exponentially stable) if all real parts of the eigenvalues of formula_60 are negative. This condition is equivalent to the following one:\nis negative definite for some positive definite matrix formula_63. 
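The matrix condition just stated for linear systems can be verified with standard routines. A minimal sketch (the matrix A is an illustrative choice with eigenvalues −1 and −2): solve the Lyapunov equation AᵀP + PA = −Q for a chosen positive definite Q and confirm that P is positive definite, which certifies V(x) = xᵀPx as a quadratic Lyapunov function.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[ 0.0,  1.0],
              [-2.0, -3.0]])   # Hurwitz: eigenvalues -1 and -2
Q = np.eye(2)                  # any positive definite choice works

# scipy solves a·X + X·aᵀ = q, so pass a = Aᵀ and q = -Q to get AᵀP + PA = -Q.
P = solve_continuous_lyapunov(A.T, -Q)
P_is_positive_definite = bool(np.all(np.linalg.eigvalsh(P) > 0))
```

If A were not Hurwitz, the solution P would fail the positive definiteness check, which is exactly the equivalence asserted in the text.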
(The relevant Lyapunov function is formula_64.)\nCorrespondingly, a time-discrete linear state space model\nis asymptotically stable (in fact, exponentially stable) if all the eigenvalues of formula_60 have a modulus smaller than one.\nThis latter condition has been generalized to switched systems: a linear switched discrete time system (ruled by a set of matrices formula_67)\nis asymptotically stable (in fact, exponentially stable) if the joint spectral radius of the set formula_67 is smaller than one.\nStability for systems with inputs.\nA system with inputs (or controls) has the form\nwhere the (generally time-dependent) input u(t) may be viewed as a \"control\", \"external input\",\n\"stimulus\", \"disturbance\", or \"forcing function\". It has been shown that near to a point of equilibrium which is Lyapunov stable the system remains stable under small disturbances. For larger input disturbances the study of such systems is the subject of control theory and applied in control engineering. For systems with inputs, one must quantify the effect of inputs on the stability of the system. The main two approaches to this analysis are BIBO stability (for linear systems) and input-to-state stability (ISS) (for nonlinear systems)\nExample.\nThis example shows a system where a Lyapunov function can be used to prove Lyapunov stability but cannot show asymptotic stability.\nConsider the following equation, based on the Van der Pol oscillator equation with the friction term changed:\nLet\nso that the corresponding system is\nThe origin formula_74 is the only equilibrium point.\nLet us choose as a Lyapunov function\nwhich is clearly positive definite. Its derivative is\nIt seems that if the parameter formula_77 is positive, stability is asymptotic for formula_78 But this is wrong, since formula_79 does not depend on formula_80, and will be 0 everywhere on the formula_80 axis. 
The equilibrium is Lyapunov stable but not asymptotically stable.\nBarbalat's lemma and stability of time-varying systems.\nAssume that f is a function of time only.\nBarbalat's Lemma says:\nAn alternative version is as follows:\nIn the following form, the Lemma also holds in the vector-valued case:\nThe following example is taken from page 125 of Slotine and Li's book \"Applied Nonlinear Control\".\nConsider a non-autonomous system\nThis is non-autonomous because the input formula_114 is a function of time. Assume that the input formula_115 is bounded.\nTaking formula_116 gives formula_117\nThis says that formula_118 by the first two conditions, and hence formula_119 and formula_120 are bounded. But it says nothing about the convergence of formula_119 to zero. Moreover, the invariant set theorem cannot be applied, because the dynamics are non-autonomous.\nUsing Barbalat's lemma:\nThis is bounded because formula_119, formula_120 and formula_114 are bounded. This implies formula_126 as formula_84 and hence formula_128. This proves that the error converges.", "Automation-Control": 0.9864658713, "Qwen2": "Yes"} {"id": "40789", "revid": "76", "url": "https://en.wikipedia.org/wiki?curid=40789", "title": "Bilateral synchronization", "text": "In telecommunication, bilateral synchronization (or bilateral control) is a synchronization control system between exchanges A and B in which the clock at telephone exchange A controls the data received at exchange B and the clock at exchange B controls the data received at exchange A.
\nBilateral synchronization is usually implemented by deriving the timing from the incoming bitstream.\nSource: from Federal Standard 1037C in support of MIL-STD-188", "Automation-Control": 0.8362718821, "Qwen2": "Yes"} {"id": "40833", "revid": "12136076", "url": "https://en.wikipedia.org/wiki?curid=40833", "title": "Called-party camp-on", "text": "In telecommunication, a called-party camp-on is a communication system service feature that enables the system to complete an access attempt in spite of the issuance of a user blocking signal. This is most often found in a switchboard system at a company. Instead of going to voicemail or simply sitting on hold until the line is free, this feature places the caller in a queue and puts the call through as soon as the line clears.\nSystems that provide this feature monitor the busy user until the user blocking signal ends, and then proceed to complete the requested access. This feature permits holding an incoming telephone call until the called party is free.", "Automation-Control": 0.7935919166, "Qwen2": "Yes"} {"id": "40885", "revid": "22619", "url": "https://en.wikipedia.org/wiki?curid=40885", "title": "Closed-loop transfer function", "text": "In control theory, a closed-loop transfer function is a mathematical function describing the net result of the effects of a feedback control loop on the input signal to the plant under control.\nOverview.\nThe closed-loop transfer function is measured at the output. The output signal can be calculated from the closed-loop transfer function and the input signal.
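That calculation can be sketched symbolically. Assuming the standard negative-feedback arrangement, with forward path G(s), feedback path H(s), error signal Z = X − H·Y and output Y = G·Z, eliminating Z yields the closed-loop transfer function Y/X = G/(1 + G·H):

```python
import sympy as sp

G, H, X, Y, Z = sp.symbols('G H X Y Z')

eq_sum = sp.Eq(Z, X - H * Y)   # summing node: error = input - feedback
eq_fwd = sp.Eq(Y, G * Z)       # forward path: output = G * error

# Eliminate the intermediate signal Z and form the output/input ratio.
sol = sp.solve([eq_sum, eq_fwd], [Y, Z], dict=True)[0]
closed_loop = sp.simplify(sol[Y] / X)   # -> G/(G*H + 1)
```

The same two equations, written with explicit rational functions of s in place of G and H, give the numeric closed-loop response at any frequency.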
Signals may be waveforms, images, or other types of data streams.\nAn example of a closed-loop transfer function is shown below:\nThe summing node and the \"G\"(\"s\") and \"H\"(\"s\") blocks can all be combined into one block, which would have the following transfer function:\nformula_2 is called the feedforward transfer function, formula_3 is called the feedback transfer function, and their product formula_4 is called the open-loop transfer function.\nDerivation.\nWe define an intermediate signal Z (also known as the error signal) as follows:\nUsing this figure, we write:\nNow, plug the second equation into the first to eliminate Z(s):\nMove all the terms with Y(s) to the left-hand side, and keep the term with X(s) on the right-hand side:\nTherefore,", "Automation-Control": 0.9999573231, "Qwen2": "Yes"} {"id": "41053", "revid": "42522270", "url": "https://en.wikipedia.org/wiki?curid=41053", "title": "Distortion-limited operation", "text": "In telecommunication, distortion-limited operation is the condition prevailing when distortion of a received signal, rather than its attenuated amplitude (or power), limits performance under stated operational conditions and limits. \n\"Note:\" Distortion-limited operation is reached when the system distorts the shape of the waveform beyond specified limits. For linear systems, distortion-limited operation is equivalent to bandwidth-limited operation.", "Automation-Control": 0.7419685125, "Qwen2": "Yes"} {"id": "41096", "revid": "7852030", "url": "https://en.wikipedia.org/wiki?curid=41096", "title": "Electromagnetic interference control", "text": "In electrical systems such as telecommunication, power electronics, industrial electronics, and power engineering, electromagnetic interference \"(EMI)\" control is the control of radiated and conducted energy such that emissions that are unnecessary for system, subsystem, or equipment operation are reduced, minimized, or eliminated.
\n\"Note:\" Electromagnetic radiated and conducted emissions are controlled regardless of their origin within the system, subsystem, or equipment. Successful EMI control with effective susceptibility control leads to electromagnetic compatibility.", "Automation-Control": 0.9978055358, "Qwen2": "Yes"} {"id": "20926", "revid": "786259", "url": "https://en.wikipedia.org/wiki?curid=20926", "title": "Supervised learning", "text": "Supervised learning (SL) is a paradigm in machine learning where input objects (for example, a vector of predictor variables) and a desired output value (also known as human-labeled \"supervisory signal\") train a model. The training data is processed, building a function that maps new data on expected output values. An optimal scenario will allow for the algorithm to correctly determine output values for unseen instances. This requires the learning algorithm to generalize from the training data to unseen situations in a \"reasonable\" way (see inductive bias). This statistical quality of an algorithm is measured through the so-called generalization error.\nSteps to follow.\nTo solve a given problem of supervised learning, one has to perform the following steps:\nAlgorithm choice.\nA wide range of supervised learning algorithms are available, each with its strengths and weaknesses. There is no single learning algorithm that works best on all supervised learning problems (see the No free lunch theorem).\nThere are four major issues to consider in supervised learning:\nBias-variance tradeoff.\nA first issue is the tradeoff between \"bias\" and \"variance\". Imagine that we have available several different, but equally good, training data sets. A learning algorithm is biased for a particular input formula_1 if, when trained on each of these data sets, it is systematically incorrect when predicting the correct output for formula_1. 
A learning algorithm has high variance for a particular input formula_1 if it predicts different output values when trained on different training sets. The prediction error of a learned classifier is related to the sum of the bias and the variance of the learning algorithm. Generally, there is a tradeoff between bias and variance. A learning algorithm with low bias must be \"flexible\" so that it can fit the data well. But if the learning algorithm is too flexible, it will fit each training data set differently, and hence have high variance. A key aspect of many supervised learning methods is that they are able to adjust this tradeoff between bias and variance (either automatically or by providing a bias/variance parameter that the user can adjust).\nFunction complexity and amount of training data.\nThe second issue is the amount of training data available relative to the complexity of the \"true\" function (classifier or regression function). If the true function is simple, then an \"inflexible\" learning algorithm with high bias and low variance will be able to learn it from a small amount of data. But if the true function is highly complex (e.g., because it involves complex interactions among many different input features and behaves differently in different parts of the input space), then it will only be learnable from a large amount of training data paired with a \"flexible\" learning algorithm with low bias and high variance.\nDimensionality of the input space.\nA third issue is the dimensionality of the input space. If the input feature vectors have high dimension, learning the function can be difficult even if the true function only depends on a small number of those features. This is because the many \"extra\" dimensions can confuse the learning algorithm and cause it to have high variance. Hence, high-dimensional input data typically require tuning the classifier to have low variance and high bias.
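The variance half of this tradeoff can be made concrete with a small resampling experiment (all choices here, the sinusoidal target, the noise level, and the polynomial degrees 1 and 9, are illustrative): fit an inflexible and a flexible model to many independently drawn noisy training sets and compare how much their predictions at a fixed test point vary.

```python
import numpy as np

rng = np.random.default_rng(0)
x_train = np.linspace(0.0, 1.0, 15)
x_test = 0.5
true_f = lambda x: np.sin(2 * np.pi * x)

preds = {1: [], 9: []}            # degree 1 (inflexible) vs degree 9 (flexible)
for _ in range(200):              # 200 independently sampled training sets
    y = true_f(x_train) + rng.normal(0.0, 0.3, x_train.size)
    for degree in preds:
        coeffs = np.polyfit(x_train, y, degree)
        preds[degree].append(np.polyval(coeffs, x_test))

var_inflexible = np.var(preds[1])
var_flexible = np.var(preds[9])   # expected to be the larger of the two
```

The flexible model chases the noise in each resampled data set, so its predictions scatter far more from one training set to the next, which is precisely what "high variance" means here.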
In practice, if the engineer can manually remove irrelevant features from the input data, it will likely improve the accuracy of the learned function. In addition, there are many algorithms for feature selection that seek to identify the relevant features and discard the irrelevant ones. This is an instance of the more general strategy of dimensionality reduction, which seeks to map the input data into a lower-dimensional space prior to running the supervised learning algorithm.\nNoise in the output values.\nA fourth issue is the degree of noise in the desired output values (the supervisory target variables). If the desired output values are often incorrect (because of human error or sensor errors), then the learning algorithm should not attempt to find a function that exactly matches the training examples. Attempting to fit the data too carefully leads to overfitting. One can overfit even when there are no measurement errors (stochastic noise) if the function to be learned is too complex for the learning model. In such a situation, the part of the target function that cannot be modeled \"corrupts\" the training data - this phenomenon has been called deterministic noise. When either type of noise is present, it is better to go with a higher bias, lower variance estimator.\nIn practice, there are several approaches to alleviate noise in the output values, such as early stopping to prevent overfitting as well as detecting and removing the noisy training examples prior to training the supervised learning algorithm.
Several algorithms identify noisy training examples, and removing the suspected noisy training examples prior to training has been shown to decrease generalization error with statistical significance.\nOther factors to consider.\nOther factors to consider when choosing and applying a learning algorithm include the following:\nWhen considering a new application, the engineer can compare multiple learning algorithms and experimentally determine which one works best on the problem at hand (see cross validation). Tuning the performance of a learning algorithm can be very time-consuming. Given fixed resources, it is often better to spend more time collecting additional training data and more informative features than it is to spend extra time tuning the learning algorithms.\nAlgorithms.\nThe most widely used learning algorithms are: \nHow supervised learning algorithms work.\nGiven a set of formula_4 training examples of the form formula_5 such that formula_6 is the feature vector of the formula_7-th example and formula_8 is its label (i.e., class), a learning algorithm seeks a function formula_9, where formula_10 is the input space and formula_11 is the output space. The function formula_12 is an element of some space of possible functions formula_13, usually called the \"hypothesis space\". It is sometimes convenient to represent formula_12 using a scoring function formula_15 such that formula_12 is defined as returning the formula_17 value that gives the highest score: formula_18. Let formula_19 denote the space of scoring functions.\nAlthough formula_13 and formula_19 can be any space of functions, many learning algorithms are probabilistic models where formula_12 takes the form of a conditional probability model formula_23, or formula_24 takes the form of a joint probability model formula_25.
For example, naive Bayes and linear discriminant analysis are joint probability models, whereas logistic regression is a conditional probability model.\nThere are two basic approaches to choosing formula_24 or formula_12: empirical risk minimization and structural risk minimization. Empirical risk minimization seeks the function that best fits the training data. Structural risk minimization includes a \"penalty function\" that controls the bias/variance tradeoff.\nIn both cases, it is assumed that the training set consists of a sample of independent and identically distributed pairs, formula_28. In order to measure how well a function fits the training data, a loss function formula_29 is defined. For training example formula_30, the loss of predicting the value formula_31 is formula_32.\nThe \"risk\" formula_33 of function formula_12 is defined as the expected loss of formula_12. This can be estimated from the training data as\nEmpirical risk minimization.\nIn empirical risk minimization, the supervised learning algorithm seeks the function formula_12 that minimizes formula_33. Hence, a supervised learning algorithm can be constructed by applying an optimization algorithm to find formula_12.\nWhen formula_12 is a conditional probability distribution formula_41 and the loss function is the negative log likelihood: formula_42, then empirical risk minimization is equivalent to maximum likelihood estimation.\nWhen formula_13 contains many candidate functions or the training set is not sufficiently large, empirical risk minimization leads to high variance and poor generalization. The learning algorithm is able to memorize the training examples without generalizing well. This is called overfitting.\nStructural risk minimization.\nStructural risk minimization seeks to prevent overfitting by incorporating a regularization penalty into the optimization. 
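A minimal sketch of the two approaches for a linear model with squared loss (the synthetic data and penalty weight are illustrative assumptions): empirical risk minimization is ordinary least squares, while structural risk minimization with the squared-Euclidean penalty λ‖w‖² has the closed form w = (XᵀX + λI)⁻¹Xᵀy, which shrinks the weights as λ grows.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 5))
w_true = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
y = X @ w_true + rng.normal(0.0, 0.1, size=50)

def fit_linear(X, y, lam):
    """Minimize ||Xw - y||^2 + lam * ||w||^2 in closed form."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

w_erm = fit_linear(X, y, 0.0)     # empirical risk minimization (lam = 0)
w_srm = fit_linear(X, y, 100.0)   # heavy penalty: higher bias, lower variance
```

With λ = 0 the fit recovers the generating weights closely; with a large λ the weight vector is biased toward zero, trading accuracy on this training set for stability across training sets.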
The regularization penalty can be viewed as implementing a form of Occam's razor that prefers simpler functions over more complex ones.\nA wide variety of penalties have been employed that correspond to different definitions of complexity. For example, consider the case where the function formula_12 is a linear function of the form\nA popular regularization penalty is formula_46, which is the squared Euclidean norm of the weights, also known as the formula_47 norm. Other norms include the formula_48 norm, formula_49, and the formula_50 \"norm\", which is the number of non-zero formula_51s. The penalty will be denoted by formula_52.\nThe supervised learning optimization problem is to find the function formula_12 that minimizes\nThe parameter formula_55 controls the bias-variance tradeoff. When formula_56, this gives empirical risk minimization with low bias and high variance. When formula_55 is large, the learning algorithm will have high bias and low variance. The value of formula_55 can be chosen empirically via cross validation.\nThe complexity penalty has a Bayesian interpretation as the negative log prior probability of formula_12, formula_60, in which case formula_61 is the posterior probability of formula_12.\nGenerative training.\nThe training methods described above are \"discriminative training\" methods, because they seek to find a function formula_12 that discriminates well between the different output values (see discriminative model). For the special case where formula_25 is a joint probability distribution and the loss function is the negative log likelihood formula_65 a risk minimization algorithm is said to perform \"generative training\", because formula_24 can be regarded as a generative model that explains how the data were generated. Generative training algorithms are often simpler and more computationally efficient than discriminative training algorithms. 
In some cases, the solution can be computed in closed form as in naive Bayes and linear discriminant analysis.\nGeneralizations.\nThere are several ways in which the standard supervised learning problem can be generalized:", "Automation-Control": 0.7814962864, "Qwen2": "Yes"} {"id": "13342572", "revid": "1588193", "url": "https://en.wikipedia.org/wiki?curid=13342572", "title": "Single-input single-output system", "text": "In control engineering, a single-input and single-output (SISO) system is a simple single-variable control system with one input and one output. In radio, it is the use of only one antenna in both the transmitter and the receiver.\nDetails.\nSISO systems are typically less complex than multiple-input multiple-output (MIMO) systems. Usually, it is also easier to make order-of-magnitude or trend predictions \"on the fly\" or \"on the back of an envelope\". MIMO systems have too many interactions to trace through quickly, thoroughly, and effectively by inspection.\nFrequency domain techniques for analysis and controller design dominate SISO control system theory. The Bode plot, Nyquist stability criterion, Nichols plot, and root locus are the usual tools for SISO system analysis. Controllers can be designed through polynomial design or root locus methods, to name just two of the more popular approaches.
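A PID controller, one of the controller forms commonly used for SISO loops, can be sketched in a few lines. This is a minimal illustration, not from the article: the gains, the toy first-order plant x' = −x + u, and the Euler discretization are all assumptions of the sketch.

```python
class PID:
    """Textbook discrete-time PID controller."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt                      # I term state
        derivative = (error - self.prev_error) / self.dt      # D term
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Toy closed loop: first-order plant x' = -x + u, simulated with Euler steps.
dt = 0.01
pid = PID(kp=2.0, ki=1.0, kd=0.1, dt=dt)
x = 0.0
for _ in range(5000):             # 50 seconds of simulated time
    u = pid.update(1.0, x)        # regulate x toward the setpoint 1.0
    x += dt * (-x + u)
```

The integral term drives the steady-state error to zero; dropping it (a PD or plain P controller) would leave a constant offset on this plant.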
Often SISO controllers will be PI, PID, or lead-lag.", "Automation-Control": 0.9367051125, "Qwen2": "Yes"} {"id": "37686388", "revid": "4842600", "url": "https://en.wikipedia.org/wiki?curid=37686388", "title": "Sensors for arc welding", "text": "Sensors for arc welding are devices which, as part of fully mechanised welding equipment, are capable of acquiring information about the position and, if possible, the geometry of the intended weld at the workpiece, and of providing the respective data in a suitable form for the control of the weld torch position and, if possible, of the arc welding process parameters.\nIntroduction.\nThe quality of a weld depends not only on the weld parameters important for the welding process (e.g. voltage, current, wire feed and weld speed) but also on the type of process-energy input and the filler material used. The positioning of the torch exerts a direct influence on the material flow. The heat input for melting the component edges and the steady heat flow are, furthermore, directly connected with the torch guidance and exert a substantial influence on the weld quality and on the resulting residual stresses. In fully mechanised and automated shielded gas welding, inaccuracies of torch guidance, workpiece handling, groove preparation and thermal distortion add to the variations of the edge position and edge geometry. In fully mechanised welding, the information required for weld quality is detected via sensors. Sensors are applied for checking the position of the component (detection of weld start and weld end), for joint tracking, and for the adaptation of the process parameters to changes of the joints/grooves. Sensors can be used online (simultaneously with the welding process) or offline (in a separate working step before welding).
Sensors are mainly used for online joint tracking.\nPrinciples.\nAny physical principle capable of providing information about the position of an object can serve as the basis for a sensor function. However, the ambient conditions prevailing during arc welding and the requirements imposed by fully mechanised equipment entail many restrictions. Figure 1 depicts the system overview. The monitoring strategy of the sensor (process or geometry) has been chosen as the superordinate criterion; the further subdivision is oriented on the measuring principle. A further distinctive feature of sensor systems is their design. Leading sensors are thus marked by the fact that the measuring point and the joining point are not located in the same position; here, the measuring and joining processes mainly run in sequence. To make position-relevant statements about the welding process, such systems require calibration of the relative position. If process-oriented sensors are used, the measuring point and the joining point are identical.\nWhat the measuring principles all have in common is that evaluation of the sensor signal provides geometrical information about the joint and its position relative to the measuring head. The individual active principles allow different processing speeds for acquiring the information.\nGeometry-oriented.\nGeometry-oriented sensors acquire their signals from the geometry of the groove, or from an edge or area whose course follows the groove.\nTactile sensors.\nElectric contact sensors for joint tracking and/or workpiece measurement represent one type of tactile sensor. The sensor makes electric contact with the workpiece; the electrically conductive workpiece is included in the measuring circuit of the sensor.\nMechanical contact sensors form the second category of tactile sensors.
The mechanical deflection of a scanning element in contact with the workpiece is evaluated.\nElectric contact sensors.\nFollowing a defined search strategy, electric contact sensor systems locate the weld start or other track points by touching the workpiece with components of the welding equipment (shielding gas nozzle, welding electrode, stylus, or similar) to which a voltage has been applied (a direct voltage from several tens of volts up to 1 kV, depending on material and surface). This amounts to offline measurement of the weld start, part position or part geometry before welding. Knowing the planned path, the track points are transformed in accordance with the measured conditions. In this case, no corrective action is taken during the welding process itself.\nThermal.\nHere, the heat flow is measured with two thermocouples mounted on the welding torch, and this heat flow is used for lateral and height control of the torch. The orientation of the torch relative to the groove is detected by comparing the temperatures of the two thermocouples. If the torch is oriented symmetrically, the difference in the radiated heat flow is zero, and so is the temperature difference between the thermocouples. With lateral misalignment of the torch, the thermocouples are exposed to different heat flows, caused by the deformation of the arc and by the changed position of the molten pool.\nMechanical contact.\nMechanical contact systems transform the deflection of the scanning element directly into electrical control signals. The following transducer principles are distinguished:\nBecause the switching points must be separated by a certain distance in one plane, transducers equipped with micro-switches exhibit a control hysteresis at the working point, which limits the reproducible accuracy. Electrical displacement of the working point is not possible. 
The other transducer systems mentioned above (the use of optical systems is probably limited for design reasons) produce analogue signals proportional to the deflection of the scanning element. They therefore allow error-proportional weld head tracking as well as electrical displacement of the working point via the superordinate control, e.g. in multi-layer welding. The output signals of the most commonly used inductive measuring transducers lie between 0 and 10 V DC, depending on the deflection of the scanning element (Figure 2).\nBoundary conditions.\nFor electric contact sensors, anything that impairs the electrical contact between the scanning element and the workpiece is problematic, e.g. welding spatter on the shielding gas nozzle, scale and mill scale on the workpiece surface, or a wire electrode end that has melted into a sphere with adhering slag.\nWhen mechanical contact sensors are used, the scanning elements must be adapted to the respective groove shapes. Butt welds with square butt joint preparation must have a groove gap of more than 3 mm; in overlap joints, the top plate must be more than 3 mm thick.\nThe sensor must be mounted separately from the welding torch.\nThe groove is therefore mainly scanned in a leading position ahead of the torch. If the welds are mainly straight, this arrangement is unproblematic. It is also possible to use scanning element arrays (e.g. fork callipers, or separate scanning elements for height and lateral scanning) which allow scanning in the torch plane and thus weld scanning that is almost error-free. Apart from guiding the torch along a weld groove, mechanical contact sensors can also be used to detect the start and end of the weld.\nOptical.\nOptical sensors belong to the group of non-contact, geometry-oriented sensors (Figure 1). 
For information retrieval, the weld groove is scanned with a radiation detector which records the optical radiation emitted by the measured object. Semiconductor image sensors are used to detect the radiation. The optical measuring principles are divided into sensors with and without active structured lighting. Without active structured lighting, a camera is used for signal acquisition: the camera observes the workpiece and extracts the required information from the two-dimensional greyscale image. Active structured lighting means using a light source for defined illumination of specified regions of the part. For the subsequent acquisition, single photo elements, line sensors or arrays can be used, depending on the design.\nOperating mode.\nFor optical measurement without active structured lighting, a camera is aimed at the region of the weld groove and the scene of interest is observed directly. This is used, for example, in submerged arc welding processes to give the welder a live image of the weld groove on a monitor.\nTwo semiconductor technologies for image sensing are in common use. The CCD camera (CCD: charge-coupled device) is the best-known and most widespread camera type; it is also used in standard video cameras.\nIf a CMOS image sensor is used, its high dynamic range makes it possible to record a usable image of the weld groove even with a burning arc.\nOptical measurement with active structured lighting, mostly generated by a laser with a defined wavelength, is often used for the automation of welding processes. A distinction is made between 1-, 2- and 3-D measuring systems. Since measuring directly in the arc is not possible, a defined leading distance, which depends on the type and size of the arc, must be maintained.\nWith one-dimensional measuring systems, the distance from the sensor to the workpiece surface is determined. This can be carried out by measuring the running time (time of flight) of a light pulse. 
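The running-time (time-of-flight) distance measurement mentioned above can be illustrated with a minimal sketch; the pulse round-trip time used here is an illustrative value, not from the text:

```python
# Time-of-flight distance measurement: a light pulse travels to the
# workpiece and back, so the one-way distance is c * t / 2.
C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_distance_m(round_trip_time_s: float) -> float:
    """Distance to the target from the measured round-trip time."""
    return C * round_trip_time_s / 2.0

# A round trip of 1 ns corresponds to roughly 0.15 m standoff.
d = tof_distance_m(1e-9)
```

In practice the achievable resolution is limited by how precisely the round-trip time can be measured.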
A further, frequently used method is laser triangulation (Figure 4): the distance to the workpiece is determined from the known dimensions of the sensor and the triangulation angle α.\nThis type of one-dimensional optical distance measurement is widely used in industrial automation and is therefore offered by many companies. In automated welding, such sensors are often used to detect the part and/or groove position before the welding process starts.\nTwo-dimensional measuring sensor systems exist in various designs. The two-dimensional laser scanner can be derived from the 1-D triangulation sensor by adding an oscillating movement: the groove geometry is detected by a scanning movement transverse to the groove (Figure 5), mainly realised with a movable mirror unit integrated in the sensor head.\nAlternatively, the entire sensor head can be oscillated, although this is only a special application of a one-dimensional measuring system. An advantage of the laser scanner is that, given sufficient processing speed, the lighting conditions can be adapted for each individual point-wise distance measurement, which results in uniform illumination. Moreover, because the illumination is concentrated in a point, the laser spot is easier for the detector to distinguish from the interfering arc radiation, both through the concentrated laser power and through suitable optical filters.\nThe light-section sensor avoids the disadvantage of moving parts in the sensor head (Figure 6). Here the surface is not scanned point by point; instead, the entire geometry is captured in one image. For this purpose, the point-shaped laser beam is expanded by optics into a line which is projected onto the workpiece surface transverse to the groove, corresponding to the scan line of a scanner. 
The laser line is again acquired with a detector element according to the same geometrical principle of triangulation, this time, however, two-dimensionally. CCD and CMOS cameras with the above-mentioned properties can be used for the acquisition.\nAfter pre-processing of the sensor signals, the output of a laser scanner or light-section sensor is the so-called height profile of the measured groove geometry. It represents the surface of the workpiece along the section at the projected laser line.\n3-D measuring systems with active illumination mainly use the light-section method combined with the projection of several parallel laser lines. Each line generates a height profile; arranging several lines along the weld groove adds a further dimension which shows how the height profile of the groove geometry changes. Increasing the number of lines increases the resolution in the longitudinal direction of the groove, but it also increases the data processing effort. Analogously to the projection of several parallel lines, measurement via a circle or other geometrical figure projected onto the workpiece surface is possible.\nBoundary conditions.\nAll optical measuring methods have in common that the determined groove points must be transformed from the sensor coordinates of the cameras into machine and/or workpiece coordinates. To this end, the sensors must be calibrated on test workpieces before the welding process takes place, and calibration matrices must be provided. Moreover, for the application of image processing algorithms, information about the groove profile must also be provided in advance. This is done by teaching templates, entering geometrical parameters, or teach-in on test workpieces. 
More comprehensive image processing for 2- and 3-D sensor systems normally requires a PC system for the evaluation, which is why commercially available PC interfaces are used for data exchange; uniform sensor interfaces do not yet exist, however.\nApplication problems.\nIn optical sensor systems, the operating principle makes the scattered light of the open arc a source of problems. Measuring directly at the working point is therefore usually not possible with optical sensors; a certain leading distance must be maintained. Further process disturbance stems from weld spatter, which may degrade the detection results; screening systems between sensor and torch provide a remedy to a certain extent. Direct observation of the arc with special process-monitoring cameras remains an exception.\nRunning the sensor ahead of the arc limits the accessibility of corners in the parts. To reduce this problem, a design that is as compact as possible and a short leading distance are most important. The predefined orientation of the sensor also restricts the working space of the robot. For untroubled operation of the optical components, heavier soiling (dust and deposition of weld fume particles) should be avoided where possible; exchangeable protective glasses and safety screens in the form of compressed-air curtains provide a remedy. The quality of the surface to be measured has a substantial influence on the measuring result: strongly reflecting surfaces may cause unwanted reflections and faulty measurements, while matt surfaces are less difficult. Constantly changing surface qualities also lead to problems.\nSince optical systems contain semiconductor detectors and extensive electronics, careful attention must be paid to reliable electromagnetic screening. 
This applies to the sensor, the image processing unit and their connecting cables. Sensor systems with active laser illumination react particularly sensitively to strong temperature fluctuations, since the emitted wavelength of the laser diodes used depends on the temperature of the laser. If the ambient temperature, and thus the wavelength of the active illumination, changes, the light can no longer pass through the narrow-band optical filter to the photodetector. Appropriate screening against the welding process, or cooling of the sensor head, is therefore required. Depending on the laser power used, particular caution must be exercised when sensors with active illumination are applied: the wavelengths of the systems used are often in the visible range, which places them in laser hazard classes 3A and 3B. The respective accident prevention regulations must be strictly observed.\nThe application of optical sensors demands consideration of the following points:\nInductive.\nInductive sensors evaluate the attenuation of a high-frequency electromagnetic field caused by eddy currents in the workpiece. Single-coil designs allow lateral or height correction; multiple-coil sensors allow correction in two coordinate directions and, moreover, can influence the orientation of the weld torch.\nCapacitive.\nCapacitive sensors measure the capacitance between the workpiece and a small electrically conductive plate. 
They offer the possibility of distance measurement in media with a constant dielectric constant.\nProcess-oriented.\nProcess-oriented sensors acquire their signals from the primary or secondary process parameters.\nArc sensors use the primary process parameters (welding current and/or voltage) of one oscillating or two non-oscillating arcs to generate height and lateral correction signals.\nThese sensors, of course, also require a scannable groove geometry; in contrast to geometry-oriented sensors, however, the measuring point and the joining point are in the same position.\nArc.\nStable working points in arc welding develop at the intersection of the process characteristic and the power source characteristic (Figure 7). The process characteristic specifies the relation between a stable arc voltage and the corresponding current under constant boundary conditions. A family of characteristics is obtained by varying the arc length / torch distance.\nIn TIG welding.\nTIG welding is a welding process with a non-melting electrode; the process characteristic is therefore often called the arc characteristic. A change of the working distance is compensated directly by the length of the arc, and as a result the arc resistance changes: short arcs have a lower electrical resistance than long arcs.\nIn TIG welding, power sources with a steeply drooping characteristic are typically used. A change of the arc length therefore leads directly to a change of the process voltage, and a comparative measurement allows the distance to the workpiece to be determined.\nIn GMA welding.\nIn GMA welding, the process characteristic in the voltage-current diagram results from the interaction of the electrical properties of the wire stick-out and of the arc. 
In principle, stable working points are achieved through suitable power source characteristics or through superimposed control strategies.\nThere is a stable equilibrium at point 1 of Figure 8, where the energy input into the process is just sufficient to melt the continuously fed wire electrode. On a rapid change of distance, the arc compensates the change in length (point 2). The lower resistance of the shorter arc increases the current, which melts the wire stick-out faster until a stable working point is reached again (point 3). This compensation process takes approximately 100 to 200 ms. The arc sensor evaluates the remaining change in current between point 1 and point 3 to obtain a distance-proportional parameter. In principle, this evaluation concept is also applicable to pulsed arc welding. In most arc sensors, the concept described above is extended by transversal scanning of the groove geometry: deflecting the process across the fusion faces allows a comparative measurement of the torch distance. From the difference of the two distance values, the lateral position of the torch can be evaluated; the mean of the two distance values indicates the height of the torch above the groove. Different concepts are applied for the deflection (Figure 9). Mechanical oscillation is the most widespread and is frequently used, especially with robots. Fast deflection systems, e.g. with magnetic or rotary deflection, improve the signal rate and signal quality, but at a higher equipment cost. In the double-wire technique, both fusion faces are scanned simultaneously, one with each wire.\nBoundary conditions.\nArc sensors evaluate the stable working points in arc welding. 
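The comparative evaluation described above — difference of the two side-wall distance values for the lateral correction, mean value for the torch height — can be sketched as follows; the function name and the numeric values are illustrative:

```python
def arc_sensor_correction(d_left: float, d_right: float):
    """Evaluate the torch distances measured at the left and right
    fusion faces during transversal scanning of the groove.

    Returns (lateral_error, height): the difference of the two
    distance values indicates the lateral offset of the torch,
    and their mean indicates the torch height above the groove."""
    lateral_error = d_left - d_right
    height = (d_left + d_right) / 2.0
    return lateral_error, height

# Symmetric torch position: no lateral error, height equals either value.
err, h = arc_sensor_correction(12.0, 12.0)   # err = 0.0, h = 12.0
```

A real controller would feed `lateral_error` and `height` through filtering before acting on them, as the surrounding text notes.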
Disturbance variables of the process must be compensated by suitable filtering and evaluation strategies that are insensitive to disturbances.\nFor simultaneous height and lateral control, attention must be paid to the fact that only groove geometries which allow the lateral position to be determined by comparative measurement of the fusion faces are suitable for arc sensor systems. V-grooves and fillet welds are suitable without restriction; square butt welds without a gap are not suitable for lateral control. Commercially available arc sensors are, so far, not applicable to aluminium materials.\nSecondary process parameters.\nSensor types that observe the molten pool are restricted in their range of application by the fact that molten pool size and arc radiation depend on material factors, e.g. density or composition (alloying constituents). Optical observation of the molten pool region detects changes in the molten pool contour. A deviation from a contour defined as “ideal” is interpreted as a malposition or a change in process behaviour and is compensated accordingly.\nSpectral analysis.\nSpectral analysis of the process signals compares the emission spectra of the arc or of the molten pool with assumed ideal values. Deviations point to a changed chemical composition or to energetic changes in the process zone.", "Automation-Control": 0.722183466, "Qwen2": "Yes"} {"id": "201489", "revid": "28481209", "url": "https://en.wikipedia.org/wiki?curid=201489", "title": "Gradient descent", "text": "In mathematics, gradient descent (also often called steepest descent) is an iterative optimization algorithm for finding a local minimum of a differentiable function. The idea is to take repeated steps in the opposite direction of the gradient (or approximate gradient) of the function at the current point, because this is the direction of steepest descent. 
Conversely, stepping in the direction of the gradient will lead to a local maximum of that function; the procedure is then known as \"gradient ascent\".\nIt is particularly useful in machine learning for minimizing the cost or loss function. Gradient descent should not be confused with local search algorithms, although both are iterative methods for optimization.\nGradient descent is generally attributed to Augustin-Louis Cauchy, who first suggested it in 1847. Jacques Hadamard independently proposed a similar method in 1907. Its convergence properties for non-linear optimization problems were first studied by Haskell Curry in 1944, with the method becoming increasingly well-studied and used in the following decades.\nA simple extension of gradient descent, stochastic gradient descent, serves as the most basic algorithm used for training most deep networks today.\nDescription.\nGradient descent is based on the observation that if the multi-variable function formula_1 is defined and differentiable in a neighborhood of a point formula_2, then formula_1 decreases \"fastest\" if one goes from formula_2 in the direction of the negative gradient of formula_5 at formula_6. It follows that, if\nfor a small enough step size or learning rate formula_8, then formula_9. In other words, the term formula_10 is subtracted from formula_2 because we want to move against the gradient, toward the local minimum. With this observation in mind, one starts with a guess formula_12 for a local minimum of formula_5, and considers the sequence formula_14 such that\nWe have a monotonic sequence\nso, hopefully, the sequence formula_17 converges to the desired local minimum. Note that the value of the \"step size\" formula_18 is allowed to change at every iteration. 
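The iteration just described can be sketched for a one-dimensional function. The function f(x) = x⁴ − 3x³ + 2, the starting point and the fixed step size below are illustrative choices, not taken from the text:

```python
def gradient_descent(df, x0, gamma=0.01, tol=1e-6, max_iter=100_000):
    """Iterate x_{n+1} = x_n - gamma * f'(x_n) until the step is tiny."""
    x = x0
    for _ in range(max_iter):
        step = gamma * df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# f(x) = x**4 - 3*x**3 + 2 has a local minimum at x = 9/4 = 2.25.
df = lambda x: 4 * x**3 - 9 * x**2
x_min = gradient_descent(df, x0=6.0)   # converges near 2.25
```

A fixed step size works here only because the function is tame around the minimum; the text's discussion of line search and the Wolfe conditions addresses choosing the step size more robustly.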
With certain assumptions on the function formula_5 (for example, formula_5 convex and formula_21 Lipschitz) and particular choices of formula_18 (e.g., chosen either via a line search that satisfies the Wolfe conditions, or via the Barzilai–Borwein method),\nconvergence to a local minimum can be guaranteed. When the function formula_5 is convex, all local minima are also global minima, so in this case gradient descent can converge to the global solution.\nThis process is illustrated in the adjacent picture. Here, formula_5 is assumed to be defined on the plane, and its graph is assumed to have a bowl shape. The blue curves are the contour lines, that is, the regions on which the value of formula_5 is constant. A red arrow originating at a point shows the direction of the negative gradient at that point. Note that the (negative) gradient at a point is orthogonal to the contour line going through that point. We see that gradient \"descent\" leads us to the bottom of the bowl, that is, to the point where the value of the function formula_5 is minimal.\nAn analogy for understanding gradient descent.\nThe basic intuition behind gradient descent can be illustrated by a hypothetical scenario. A person is stuck in the mountains and is trying to get down (i.e., trying to find the global minimum). There is heavy fog such that visibility is extremely low. Therefore, the path down the mountain is not visible, so they must use local information to find the minimum. They can use the method of gradient descent, which involves looking at the steepness of the hill at their current position, then proceeding in the direction with the steepest descent (i.e., downhill). If they were trying to find the top of the mountain (i.e., the maximum), then they would proceed in the direction of steepest ascent (i.e., uphill). 
Using this method, they would eventually find their way down the mountain or possibly get stuck in some hole (i.e., local minimum or saddle point), like a mountain lake. However, assume also that the steepness of the hill is not immediately obvious with simple observation, but rather it requires a sophisticated instrument to measure, which the person happens to have at the moment. It takes quite some time to measure the steepness of the hill with the instrument, so they should minimize their use of the instrument if they want to get down the mountain before sunset. The difficulty then is choosing the frequency at which they should measure the steepness of the hill so as not to go off track.\nIn this analogy, the person represents the algorithm, and the path taken down the mountain represents the sequence of parameter settings that the algorithm will explore. The steepness of the hill represents the slope of the function at that point. The instrument used to measure steepness is differentiation. The direction they choose to travel in aligns with the gradient of the function at that point. The amount of time they travel before taking another measurement is the step size.\nChoosing the step size and descent direction.\nSince using a step size formula_18 that is too small would slow convergence, and a formula_18 too large would lead to overshoot and divergence, finding a good setting of formula_18 is an important practical problem. Philip Wolfe also advocated using \"clever choices of the [descent] direction\" in practice. Whilst using a direction that deviates from the steepest descent direction may seem counter-intuitive, the idea is that the smaller slope may be compensated for by being sustained over a much longer distance.\nTo reason about this mathematically, consider a direction formula_31 and step size formula_32 and consider the more general update:\nFinding good settings of formula_31 and formula_32 requires some thought. 
First of all, we would like the update direction to point downhill. Mathematically, letting formula_36 denote the angle between formula_37 and formula_31, this requires that formula_39 To say more, we need more information about the objective function that we are optimising. Under the fairly weak assumption that formula_5 is continuously differentiable, we may prove that:\nThis inequality implies that the amount by which we can be sure the function formula_5 is decreased depends on a trade-off between the two terms in square brackets. The first term in square brackets measures the angle between the descent direction and the negative gradient. The second term measures how quickly the gradient changes along the descent direction.\nIn principle, this inequality could be optimized over formula_31 and formula_32 to choose an optimal step size and direction. The problem is that evaluating the second term in square brackets requires evaluating formula_44, and extra gradient evaluations are generally expensive and undesirable. Some ways around this problem are:\nUsually by following one of the recipes above, convergence to a local minimum can be guaranteed. 
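One widely used recipe of this kind is backtracking line search with a sufficient-decrease (Armijo) condition: start with a large trial step and shrink it until the function value drops enough. The sketch below uses the steepest-descent direction and an illustrative test function; the constants are conventional defaults, not values from the text:

```python
def backtracking_step(f, grad, x, gamma0=1.0, beta=0.5, c=1e-4):
    """Shrink the trial step size until the Armijo sufficient-decrease
    condition f(x - g*grad) <= f(x) - c*g*||grad||^2 holds, then take
    the step in the steepest-descent direction."""
    g = grad(x)
    sq = sum(gi * gi for gi in g)          # ||grad||^2
    gamma = gamma0
    while f([xi - gamma * gi for xi, gi in zip(x, g)]) > f(x) - c * gamma * sq:
        gamma *= beta
    return [xi - gamma * gi for xi, gi in zip(x, g)]

# Illustrative ill-conditioned quadratic f(x, y) = x^2 + 10*y^2.
f = lambda v: v[0] ** 2 + 10 * v[1] ** 2
grad = lambda v: [2 * v[0], 20 * v[1]]
x = [1.0, 1.0]
for _ in range(50):
    x = backtracking_step(f, grad, x)
```

Because the condition is guaranteed to hold for sufficiently small steps of a continuously differentiable function, the inner loop always terminates.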
When the function formula_5 is convex, all local minima are also global minima, so in this case gradient descent can converge to the global solution.\nSolution of a linear system.\nGradient descent can be used to solve a system of linear equations\nreformulated as a quadratic minimization problem.\nIf the system matrix formula_66 is real symmetric and positive-definite, an objective function is defined as the quadratic function, with minimization of\nso that\nFor a general real matrix formula_66, linear least squares define\nIn traditional linear least squares for real formula_66 and formula_72 the Euclidean norm is used, in which case\nThe line search minimization, finding the locally optimal step size formula_18 on every iteration, can be performed analytically for quadratic functions, and explicit formulas for the locally optimal formula_18 are known.\nFor example, for a real symmetric and positive-definite matrix formula_66, a simple algorithm can be as follows,\nTo avoid multiplying by formula_66 twice per iteration,\nwe note that formula_79 implies formula_80, which gives the traditional algorithm,\nThe method is rarely used for solving linear equations, with the conjugate gradient method being one of the most popular alternatives. The number of gradient descent iterations is commonly proportional to the spectral condition number formula_82 of the system matrix formula_66 (the ratio of the maximum to minimum eigenvalues of formula_66), while the convergence of the conjugate gradient method is typically determined by the square root of the condition number, i.e., it is much faster. Both methods can benefit from preconditioning, where gradient descent may require fewer assumptions on the preconditioner.\nSolution of a non-linear system.\nGradient descent can also be used to solve a system of nonlinear equations. Below is an example that shows how to use gradient descent to solve for three unknown variables, \"x\"1, \"x\"2, and \"x\"3. 
This example shows one iteration of the gradient descent.\nConsider the nonlinear system of equations\nLet us introduce the associated function\nwhere\nOne might now define the objective function\nwhich we will attempt to minimize. As an initial guess, let us use\nWe know that\nwhere the Jacobian matrix formula_90 is given by\nWe calculate:\nThus\nand\nNow, a suitable formula_95 must be found such that\nThis can be done with any of a variety of line search algorithms. One might also simply guess formula_97 which gives\nEvaluating the objective function at this value yields\nThe decrease from formula_100 to the next step's value of\nis a sizable decrease in the objective function. Further steps would reduce its value until an approximate solution to the system was found.\nComments.\nGradient descent works in spaces of any number of dimensions, even infinite-dimensional ones. In the latter case, the search space is typically a function space, and one calculates the Fréchet derivative of the functional to be minimized to determine the descent direction.\nThat gradient descent works in any number of dimensions (finite number at least) can be seen as a consequence of the Cauchy-Schwarz inequality, which shows that the magnitude of the inner (dot) product of two vectors of any dimension is maximized when they are collinear. In the case of gradient descent, that would be when the vector of independent variable adjustments is proportional to the gradient vector of partial derivatives.\nGradient descent can take many iterations to compute a local minimum with a required accuracy if the curvature in different directions is very different for the given function. For such functions, preconditioning, which changes the geometry of the space to shape the function level sets like concentric circles, cures the slow convergence. 
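The worked example's equations appear above only as formula placeholders, so the following sketch repeats the same one-step procedure on an illustrative system G(x) = 0 of three equations: minimise F(x) = ½‖G(x)‖², whose gradient is Jᵀ G, and take a single step with a guessed step size. The system, initial guess and step size are all illustrative:

```python
import math

# Illustrative nonlinear system G(x) = 0 in three unknowns
# (the article's own equations are not reproduced here):
#   g1 = 3*x1 - cos(x2*x3) - 3/2
#   g2 = 4*x1**2 - 625*x2**2 + 2*x3 - 1
#   g3 = 20*x3 + exp(-x1*x2) + 9
def G(x):
    x1, x2, x3 = x
    return [3 * x1 - math.cos(x2 * x3) - 1.5,
            4 * x1**2 - 625 * x2**2 + 2 * x3 - 1,
            20 * x3 + math.exp(-x1 * x2) + 9]

def jacobian(x):
    # Analytic Jacobian of G.
    x1, x2, x3 = x
    e = math.exp(-x1 * x2)
    return [[3, x3 * math.sin(x2 * x3), x2 * math.sin(x2 * x3)],
            [8 * x1, -1250 * x2, 2],
            [-x2 * e, -x1 * e, 20]]

def F(x):
    # Objective F(x) = 0.5 * ||G(x)||^2; a root of G minimises F.
    return 0.5 * sum(g * g for g in G(x))

def gd_step(x, gamma=0.001):
    # One gradient descent step: grad F = J(x)^T G(x).
    g, jac = G(x), jacobian(x)
    grad = [sum(jac[i][k] * g[i] for i in range(3)) for k in range(3)]
    return [xk - gamma * gk for xk, gk in zip(x, grad)]

x0 = [0.0, 0.0, 0.0]      # initial guess
x_next = gd_step(x0)      # the objective decreases: F(x_next) < F(x0)
```

Repeating `gd_step` (ideally with a line search instead of the guessed step size) would continue to reduce F, as the text describes.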
Constructing and applying preconditioning can be computationally expensive, however.\nGradient descent can be combined with a line search, finding the locally optimal step size formula_18 on every iteration. Performing the line search can be time-consuming; conversely, using a fixed small formula_18 can yield poor convergence.\nMethods based on Newton's method and inversion of the Hessian using conjugate gradient techniques can be better alternatives. Generally, such methods converge in fewer iterations, but the cost of each iteration is higher. An example is the BFGS method, which consists of calculating on every step a matrix by which the gradient vector is multiplied to go in a \"better\" direction, combined with a more sophisticated line search algorithm, to find the \"best\" value of formula_104 For extremely large problems, where computer-memory issues dominate, a limited-memory method such as L-BFGS should be used instead of BFGS or steepest descent. \nWhile it is sometimes possible to substitute gradient descent for a local search algorithm, gradient descent is not in the same family: although it is an iterative method for local optimization, it relies on an objective function’s gradient rather than an explicit exploration of a solution space.\nGradient descent can be viewed as applying Euler's method for solving ordinary differential equations formula_105 to a gradient flow. In turn, this equation may be derived as an optimal controller for the control system formula_106 with formula_107 given in feedback form formula_108.\nIt can be shown that there is a correspondence between neuroevolution and gradient descent.\nModifications.\nGradient descent can converge to a local minimum and slow down in a neighborhood of a saddle point. Even for unconstrained quadratic minimization, gradient descent develops a zig-zag pattern of subsequent iterates as iterations progress, resulting in slow convergence. 
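For the unconstrained quadratic case, the analytic line search described in the linear-system section above takes a concrete form: with residual r = b − Ax, the locally optimal step size is γ = rᵀr / (rᵀAr). A minimal sketch, with an illustrative symmetric positive-definite system:

```python
def steepest_descent(A, b, x, iters=100):
    """Minimise 0.5*x^T A x - b^T x for symmetric positive-definite A
    by gradient descent with the analytic, locally optimal step size
    gamma = r^T r / (r^T A r), where r = b - A x is the residual."""
    n = len(b)
    for _ in range(iters):
        Ax = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        r = [b[i] - Ax[i] for i in range(n)]
        rr = sum(ri * ri for ri in r)
        if rr == 0.0:          # exact solution reached
            break
        Ar = [sum(A[i][j] * r[j] for j in range(n)) for i in range(n)]
        gamma = rr / sum(r[i] * Ar[i] for i in range(n))
        x = [x[i] + gamma * r[i] for i in range(n)]
    return x

# Illustrative SPD system; the exact solution is x = (1, 1).
A = [[4.0, 1.0], [1.0, 3.0]]
b = [5.0, 4.0]
x = steepest_descent(A, b, [0.0, 0.0])
```

On badly conditioned systems the iterates of this method exhibit exactly the zig-zag pattern mentioned above, which is why conjugate gradients are usually preferred in practice.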
Multiple modifications of gradient descent have been proposed to address these deficiencies.\nFast gradient methods.\nYurii Nesterov has proposed a simple modification that enables faster convergence for convex problems and has since been further generalized. For unconstrained smooth problems, the method is called the fast gradient method (FGM) or the accelerated gradient method (AGM). Specifically, if the differentiable function formula_5 is convex and formula_21 is Lipschitz, and it is not assumed that formula_5 is strongly convex, then the error in the objective value generated at each step formula_112 by the gradient descent method will be bounded by formula_113. Using the Nesterov acceleration technique, the error decreases at formula_114. It is known that the rate formula_115 for the decrease of the cost function is optimal for first-order optimization methods. Nevertheless, there is the opportunity to improve the algorithm by reducing the constant factor. The optimized gradient method (OGM) reduces that constant by a factor of two and is an optimal first-order method for large-scale problems.\nFor constrained or non-smooth problems, Nesterov's FGM is called the fast proximal gradient method (FPGM), an acceleration of the proximal gradient method.\nMomentum or \"heavy ball\" method.\nTrying to break the zig-zag pattern of gradient descent, the \"momentum or heavy ball method\" uses a momentum term in analogy to a heavy ball sliding on the surface of values of the function being minimized, or to mass movement in Newtonian dynamics through a viscous medium in a conservative force field. Gradient descent with momentum remembers the solution update at each iteration, and determines the next update as a linear combination of the gradient and the previous update. 
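The heavy-ball update just described can be written v ← βv − γ∇f(x), x ← x + v, where v is the remembered previous update. A minimal sketch; the test function and the coefficients β, γ are illustrative:

```python
def momentum_descent(grad, x0, gamma=0.1, beta=0.9, iters=300):
    """Gradient descent with momentum ('heavy ball'): the next update
    is a linear combination of the current gradient and the previous
    update, v <- beta*v - gamma*grad(x); x <- x + v."""
    x = list(x0)
    v = [0.0] * len(x0)
    for _ in range(iters):
        g = grad(x)
        v = [beta * vi - gamma * gi for vi, gi in zip(v, g)]
        x = [xi + vi for xi, vi in zip(x, v)]
    return x

# Illustrative ill-conditioned quadratic f(x, y) = 0.5*(x^2 + 25*y^2),
# a case where plain gradient descent zig-zags.
grad = lambda p: [p[0], 25.0 * p[1]]
x = momentum_descent(grad, [5.0, 1.0])
```

Setting `beta=0.0` recovers plain gradient descent, which makes the damping effect of the momentum term easy to compare.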
For unconstrained quadratic minimization, a theoretical convergence rate bound of the heavy ball method is asymptotically the same as that for the optimal conjugate gradient method.\nThis technique is used in stochastic gradient descent and as an extension to the backpropagation algorithms used to train artificial neural networks. Stochastic gradient descent adds a stochastic component to the update direction, since the derivatives with respect to the weights are computed from a random subset of the data; the momentum term smooths these stochastic updates across iterations.\nExtensions.\nGradient descent can be extended to handle constraints by including a projection onto the set of constraints. This method is feasible only when the projection can be computed efficiently. Under suitable assumptions, this method converges. This method is a specific case of the forward-backward algorithm for monotone inclusions (which includes convex programming and variational inequalities).\nGradient descent is a special case of mirror descent using the squared Euclidean distance as the given Bregman divergence.", "Automation-Control": 0.649500668, "Qwen2": "Yes"} {"id": "202094", "revid": "13263935", "url": "https://en.wikipedia.org/wiki?curid=202094", "title": "Process gain", "text": "In a spread-spectrum system, the process gain (or \"processing gain\") is the ratio of the spread (or RF) bandwidth to the unspread (or baseband) bandwidth. It is usually expressed in decibels (dB).\nFor example, if a 1 kHz signal is spread to 100 kHz, the process gain expressed as a numerical ratio would be 100 kHz / 1 kHz = 100. Or in decibels, 10 log10(100) = 20 dB.\nNote that process gain does not reduce the effects of wideband thermal noise. It can be shown that a direct-sequence spread-spectrum (DSSS) system has exactly the same bit error behavior as a non-spread-spectrum system with the same modulation format.
Thus, on an additive white Gaussian noise (AWGN) channel without interference, a spread system requires the same transmitter power as an unspread system, all other things being equal.\nUnlike a conventional communication system, however, a DSSS system does have a certain resistance against narrowband interference, as the interference is not subject to the process gain of the DSSS signal, and hence the signal-to-interference ratio is improved.\nIn frequency modulation (FM), the processing gain can be expressed as\nwhere:", "Automation-Control": 0.9408208728, "Qwen2": "Yes"} {"id": "2783055", "revid": "7903804", "url": "https://en.wikipedia.org/wiki?curid=2783055", "title": "IBM Spectrum LSF", "text": "IBM Spectrum LSF (LSF, originally Platform Load Sharing Facility) is a workload management platform, job scheduler, for distributed high performance computing (HPC) by IBM.\nDetails.\nIt can be used to execute batch jobs on networked Unix and Windows systems on many different architectures. LSF was based on the \"Utopia\" research project at the University of Toronto.\nIn 2007, Platform released \"Platform Lava\", which is a simplified version of LSF based on an old version of LSF release, licensed under GNU General Public License v2. The project was discontinued in 2011, succeeded by OpenLava.\nIn January, 2012, Platform Computing was acquired by IBM. The product is now called IBM Spectrum LSF. \nIBM Spectrum LSF Community Edition is a no-charge community edition of the IBM Spectrum LSF workload management platform.", "Automation-Control": 0.9454948902, "Qwen2": "Yes"} {"id": "16662777", "revid": "27015025", "url": "https://en.wikipedia.org/wiki?curid=16662777", "title": "Charge control", "text": "Charge control is a technology that lets an electric utility control, in real time, the charging of a \"gridable\" (plug-in) vehicle, such as a plug-in hybrid (PHEV) or a battery electric vehicle (BEV). 
Through charge control, the utility is able to postpone charging of the vehicle during time of peak demand. Additionally, this technology may enable the owner and the power company to track the vehicle's usage and performance, while on the road and while charging.\nComparison to V2G.\nIn both V2G and charge control, the electric utility can control the power flow between a plug-in vehicle and the power grid. However, in charge control power only flows from the grid to the vehicle, while in V2G power can flow in both directions.\nPeak load leveling.\nDisabling charging in charge control vehicles helps balance the loading on the power grid by \"valley filling\" (charging at night when demand is low) and \"peak shaving\" (not charging when demand is high). It can enable utilities new ways to provide regulation services (keeping voltage and frequency stable).", "Automation-Control": 0.9857923388, "Qwen2": "Yes"} {"id": "16684054", "revid": "35113335", "url": "https://en.wikipedia.org/wiki?curid=16684054", "title": "Algebraic Riccati equation", "text": "An algebraic Riccati equation is a type of nonlinear equation that arises in the context of infinite-horizon optimal control problems in continuous time or discrete time.\nA typical algebraic Riccati equation is similar to one of the following:\nthe continuous time algebraic Riccati equation (CARE):\nor the discrete time algebraic Riccati equation (DARE):\n\"P\" is the unknown \"n\" by \"n\" symmetric matrix and \"A\", \"B\", \"Q\", \"R\" are known real coefficient matrices.\nThough generally this equation can have many solutions, it is usually specified that we want to obtain the unique stabilizing solution, if such a solution exists.\nOrigin of the name.\nThe name Riccati is given to these equations because of their relation to the Riccati differential equation. Indeed, the CARE is verified by the time invariant solutions of the associated matrix valued Riccati differential equation. 
As for the DARE, it is verified by the time invariant solutions of the matrix valued Riccati difference equation (which is the analogue of the Riccati differential equation in the context of discrete time LQR).\nContext of the discrete-time algebraic Riccati equation.\nIn infinite-horizon optimal control problems, one cares about the value of some variable of interest arbitrarily far into the future, and one must optimally choose a value of a controlled variable right now, knowing that one will also behave optimally at all times in the future. The optimal current values of the problem's control variables at any time can be found using the solution of the Riccati equation and the current observations on evolving state variables. With multiple state variables and multiple control variables, the Riccati equation will be a matrix equation.\nThe algebraic Riccati equation determines the solution of the infinite-horizon time-invariant Linear-Quadratic Regulator problem (LQR) as well as that of the infinite horizon time-invariant Linear-Quadratic-Gaussian control problem (LQG). These are two of the most fundamental problems in control theory.\nA typical specification of the discrete-time linear quadratic control problem is to minimize\nsubject to the state equation\nwhere \"y\" is an \"n\" × 1 vector of state variables, \"u\" is a \"k\" × 1 vector of control variables, \"A\" is the \"n\" × \"n\" state transition matrix, \"B\" is the \"n\" × \"k\" matrix of control multipliers, \"Q\" (\"n\" × \"n\") is a symmetric positive semi-definite state cost matrix, and \"R\" (\"k\" × \"k\") is a symmetric positive definite control cost matrix.\nInduction backwards in time can be used to obtain the optimal control solution at each time,\nwith the symmetric positive definite cost-to-go matrix \"P\" evolving backwards in time from formula_6 according to\nwhich is known as the discrete-time dynamic Riccati equation of this problem. 
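The backward evolution of the cost-to-go matrix \"P\" can be sketched numerically. The recursion used below is the standard discrete-time dynamic Riccati equation, P ← Q + AᵀPA − AᵀPB(R + BᵀPB)⁻¹BᵀPA, iterated from a stand-in terminal cost; the system matrices are illustrative, not taken from the text.

```python
import numpy as np

# Iterate the discrete-time dynamic Riccati equation backwards from
# P = Q until it converges, yielding the steady-state (infinite-horizon)
# solution of the DARE.  A, B, Q, R are an illustrative double integrator.
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])

def riccati_step(P):
    # one backward step of the dynamic Riccati recursion
    gain_term = A.T @ P @ B @ np.linalg.inv(R + B.T @ P @ B) @ B.T @ P @ A
    return Q + A.T @ P @ A - gain_term

P = Q.copy()
for _ in range(500):
    P_next = riccati_step(P)
    if np.max(np.abs(P_next - P)) < 1e-12:   # converged to the fixed point
        P = P_next
        break
    P = P_next

# At convergence, P satisfies the algebraic Riccati equation:
residual = riccati_step(P) - P
print(np.max(np.abs(residual)))
```

Convergence of this iteration is exactly the steady-state characterization described in the following paragraph: the fixed point is obtained by dropping the time subscripts from the dynamic equation.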
The steady-state characterization of \"P\", relevant for the infinite-horizon problem in which \"T\" goes to infinity, can be found by iterating the dynamic equation repeatedly until it converges; then \"P\" is characterized by removing the time subscripts from the dynamic equation.\nSolution.\nUsually solvers try to find the unique stabilizing solution, if such a solution exists. A solution is stabilizing if using it for controlling the associated LQR system makes the closed loop system stable.\nFor the CARE, the control is\nand the closed loop state transfer matrix is\nwhich is stable if and only if all of its eigenvalues have strictly negative real part.\nFor the DARE, the control is\nand the closed loop state transfer matrix is\nwhich is stable if and only if all of its eigenvalues are strictly inside the unit circle of the complex plane.\nA solution to the algebraic Riccati equation can be obtained by matrix factorizations or by iterating on the Riccati equation. One type of iteration can be obtained in the discrete time case by using the \"dynamic\" Riccati equation that arises in the finite-horizon problem: in the latter type of problem each iteration of the value of the matrix is relevant for optimal choice at each period that is a finite distance in time from a final time period, and if it is iterated infinitely far back in time it converges to the specific matrix that is relevant for optimal choice an infinite length of time prior to a final period—that is, for when there is an infinite horizon.\nIt is also possible to find the solution by finding the eigendecomposition of a larger system. For the CARE, we define the Hamiltonian matrix\nSince formula_13 is Hamiltonian, if it does not have any eigenvalues on the imaginary axis, then exactly half of its eigenvalues have a negative real part. 
If we denote the formula_14 matrix whose columns form a basis of the corresponding subspace, in block-matrix notation, as\nthen\nis a solution of the Riccati equation; furthermore, the eigenvalues of formula_17 are the eigenvalues of formula_18 with negative real part.\nFor the DARE, when formula_19 is invertible, we define the symplectic matrix\nSince formula_21 is symplectic, if it does not have any eigenvalues on the unit circle, then exactly half of its eigenvalues are inside the unit circle. If we denote the formula_22 matrix whose columns form a basis of the corresponding subspace, in block-matrix notation, as\nwhere formula_24 and formula_25 result from the decomposition\nthen\nis a solution of the Riccati equation; furthermore, the eigenvalues of formula_28 are the eigenvalues of formula_29 which are inside the unit circle.", "Automation-Control": 0.9996529222, "Qwen2": "Yes"} {"id": "16696535", "revid": "7903804", "url": "https://en.wikipedia.org/wiki?curid=16696535", "title": "Backstepping", "text": "In control theory, backstepping is a technique developed 1990 by Petar V. Kokotovic and others for designing stabilizing controls for a special class of nonlinear dynamical systems. These systems are built from subsystems that radiate out from an irreducible subsystem that can be stabilized using some other method. Because of this recursive structure, the designer can start the design process at the known-stable system and \"back out\" new controllers that progressively stabilize each outer subsystem. The process terminates when the final external control is reached. Hence, this process is known as \"backstepping.\"\nBackstepping approach.\nThe backstepping approach provides a recursive method for stabilizing the origin of a system in strict-feedback form. That is, consider a system of the form\nwhere\nAlso assume that the subsystem\nis stabilized to the origin (i.e., formula_11) by some known control formula_12 such that formula_13. 
It is also assumed that a Lyapunov function formula_14 for this stable subsystem is known. That is, this subsystem is stabilized by some other method and backstepping extends its stability to the formula_15 shell around it.\nIn systems of this \"strict-feedback form\" around a stable subsystem,\nThe backstepping approach determines how to stabilize the subsystem using formula_21, and then proceeds with determining how to make the next state formula_22 drive formula_21 to the control required to stabilize . Hence, the process \"steps backward\" out of the strict-feedback form system until the ultimate control is designed.\nRecursive Control Design Overview.\nThis process is known as backstepping because it starts with the requirements on some internal subsystem for stability and progressively \"steps back\" out of the system, maintaining stability at each step. Because\nthen the resulting system has an equilibrium at the origin (i.e., where formula_64, formula_65, formula_66, ..., formula_67, and formula_68) that is globally asymptotically stable.\nIntegrator Backstepping.\nBefore describing the backstepping procedure for general strict-feedback form dynamical systems, it is convenient to discuss the approach for a smaller class of strict-feedback form systems. These systems connect a series of integrators to the input of a system with a known feedback-stabilizing control law, and so the stabilizing approach is known as \"integrator backstepping.\" With a small modification, the integrator backstepping approach can be extended to handle all strict-feedback form systems.\nSingle-integrator Equilibrium.\nConsider the dynamical system\n\\begin{cases}\n\\dot{\\mathbf{x}} = f_x(\\mathbf{x}) + g_x(\\mathbf{x}) z_1\\\\\n\\dot{z}_1 = u_1\n\\end{cases}\nwhere formula_2, and formula_21 is a scalar.
This system is a cascade connection of an integrator with the subsystem (i.e., the input enters an integrator, and the integral formula_21 enters the subsystem).\nWe assume that formula_72, and so if formula_73, formula_11 and formula_75, then\nSo the origin formula_77 is an equilibrium (i.e., a stationary point) of the system. If the system ever reaches the origin, it will remain there forever after.\nSingle-integrator Backstepping.\nIn this example, backstepping is used to stabilize the single-integrator system in Equation  around its equilibrium at the origin. To be less precise, we wish to design a control law formula_29 that ensures that the states formula_79 return to formula_80 after the system is started from some arbitrary initial condition.\ng_x(\\mathbf{x})-k_1(\\underbrace{z_1-u_x(\\mathbf{x})}_{e_1})}^{v_1} \\, + \\, \\overbrace{\\frac{\\partial u_x}{\\partial \\mathbf{x}}(\\underbrace{f_x(\\mathbf{x})+g_x(\\mathbf{x})z_1}_{\\dot{\\mathbf{x}} \\text{ (i.e., } \\frac{\\operatorname{d}\\mathbf{x}}{\\operatorname{d}t} \\text{)}})}^{\\dot{u}_x \\text{ (i.e., } \\frac{ \\operatorname{d}u_x }{\\operatorname{d}t} \\text{)}}\nSo because this system is feedback stabilized by formula_144 and has Lyapunov function formula_145 with formula_146, it can be used as the upper subsystem in another single-integrator cascade system.\nMotivating Example: Two-integrator Backstepping.\nBefore discussing the recursive procedure for the general multiple-integrator case, it is instructive to study the recursion present in the two-integrator case. That is, consider the dynamical system\n = f_x(\\mathbf{x}) + g_x(\\mathbf{x}) z_1\\\\\n\\dot{z}_1 = z_2\\\\\n\\dot{z}_2 = u_2\n\\end{cases}\nwhere formula_2 and formula_21 and formula_22 are scalars. 
This system is a cascade connection of the single-integrator system in Equation  with another integrator (i.e., the input formula_41 enters through an integrator, and the output of that integrator enters the system in Equation  by its formula_34 input).\nBy letting\nthen the two-integrator system in Equation  becomes the single-integrator system\n = f_y(\\mathbf{y}) + g_y(\\mathbf{y}) z_2 &\\quad \\text{( where this } \\mathbf{y} \\text{ subsystem is stabilized by } z_2 = u_1(\\mathbf{x},z_1) \\text{ )}\\\\\n\\dot{z}_2 = u_2.\n\\end{cases}\nBy the single-integrator procedure, the control law formula_155 stabilizes the upper formula_22-to- subsystem using the Lyapunov function formula_145, and so Equation  is a new single-integrator system that is structurally equivalent to the single-integrator system in Equation . So a stabilizing control formula_41 can be found using the same single-integrator procedure that was used to find formula_34.\nMany-integrator backstepping.\nIn the two-integrator case, the upper single-integrator subsystem was stabilized yielding a new single-integrator system that can be similarly stabilized. This recursive procedure can be extended to handle any finite number of integrators. This claim can be formally proved with mathematical induction. Here, a stabilized multiple-integrator system is built up from subsystems of already-stabilized multiple-integrator subsystems.\nHence, any system in this special many-integrator strict-feedback form can be feedback stabilized using a straightforward procedure that can even be automated (e.g., as part of an adaptive control algorithm).\nGeneric Backstepping.\nSystems in the special strict-feedback form have a recursive structure similar to the many-integrator system structure. Likewise, they are stabilized by stabilizing the smallest cascaded system and then \"backstepping\" to the next cascaded system and repeating the procedure. 
So it is critical to develop a single-step procedure; that procedure can be recursively applied to cover the many-step case. Fortunately, due to the requirements on the functions in the strict-feedback form, each single-step system can be rendered by feedback to a single-integrator system, and that single-integrator system can be stabilized using methods discussed above.\nSingle-step Procedure.\nConsider the simple strict-feedback system\n = f_x(\\mathbf{x}) + g_x(\\mathbf{x}) z_1\\\\\n\\dot{z}_1 = f_1(\\mathbf{x}, z_1) + g_1(\\mathbf{x}, z_1) u_1\n\\end{cases}\nwhere\nRather than designing feedback-stabilizing control formula_34 directly, introduce a new control formula_252 (to be designed \"later\") and use control law\nwhich is possible because formula_254. So the system in Equation  is\nwhich simplifies to\nThis new formula_252-to- system matches the \"single-integrator cascade system\" in Equation . Assuming that a feedback-stabilizing control law formula_12 and Lyapunov function formula_87 for the upper subsystem is known, the feedback-stabilizing control law from Equation  is\nwith gain formula_123. So the final feedback-stabilizing control law is\ng_x(\\mathbf{x})-k_1(z_1-u_x(\\mathbf{x})) + \\frac{\\partial u_x}{\\partial \\mathbf{x}}(f_x(\\mathbf{x})+g_x(\\mathbf{x})z_1)}^{u_{a1}(\\mathbf{x},z_1)} \\, - \\, f_1(\\mathbf{x}, z_1) \\right)\nwith gain formula_123. The corresponding Lyapunov function from Equation  is\nBecause this \"strict-feedback system\" has a feedback-stabilizing control and a corresponding Lyapunov function, it can be cascaded as part of a larger strict-feedback system, and this procedure can be repeated to find the surrounding feedback-stabilizing control.\nMany-step Procedure.\nAs in many-integrator backstepping, the single-step procedure can be completed iteratively to stabilize an entire strict-feedback system. 
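As a minimal illustration of the single-step design, consider a hypothetical worked case not taken from the source: upper subsystem f_x(x) = −x, g_x(x) = 1, already stabilized by u_x(x) = 0 with Lyapunov function V(x) = x²/2. The general single-integrator backstepping law then reduces to u₁ = −x − k₁z₁ (gain k₁ > 0), which the following forward-Euler simulation checks.

```python
# Forward-Euler simulation of single-integrator backstepping for the
# hypothetical case f_x(x) = -x, g_x(x) = 1, u_x(x) = 0, V(x) = x^2/2.
# The backstepping control law reduces to u1 = -x - k1 * z1 here.
k1 = 1.0                          # illustrative backstepping gain
dt, steps = 0.01, 3000
x, z1 = 2.0, -1.0                 # arbitrary initial condition
for _ in range(steps):
    u1 = -x - k1 * z1             # backstepping control
    x_dot = -x + z1               # subsystem:  x'  = f_x(x) + g_x(x) z1
    z1_dot = u1                   # integrator: z1' = u1
    x, z1 = x + dt * x_dot, z1 + dt * z1_dot

print(abs(x) + abs(z1))  # both states decay toward the origin
```

The closed loop here is linear with eigenvalues −1 ± i, so the origin of the cascade is asymptotically stable, as the general procedure guarantees.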
In each step,\nThat is, any \"strict-feedback system\"\nhas the recursive structure\nand can be feedback stabilized by finding the feedback-stabilizing control and Lyapunov function for the single-integrator formula_98 subsystem (i.e., with input formula_22 and output ) and iterating out from that inner subsystem until the ultimate feedback-stabilizing control is known. At iteration , the equivalent system is\nBy Equation , the corresponding feedback-stabilizing control law is\nwith gain formula_242. By Equation , the corresponding Lyapunov function is\nBy this construction, the ultimate control formula_244 (i.e., ultimate control is found at final iteration formula_245).\nHence, any strict-feedback system can be feedback stabilized using a straightforward procedure that can even be automated (e.g., as part of an adaptive control algorithm).", "Automation-Control": 0.9988571405, "Qwen2": "Yes"} {"id": "16696745", "revid": "41822090", "url": "https://en.wikipedia.org/wiki?curid=16696745", "title": "Small-gain theorem", "text": "In nonlinear systems, the formalism of input-output stability is an important tool in studying the stability of interconnected systems since the gain of a system directly relates to how the norm of a signal increases or decreases as it passes through the system. The small-gain theorem gives a sufficient condition for finite-gain formula_1 stability of the feedback connection. The small gain theorem was proved by George Zames in 1966. It can be seen as a generalization of the Nyquist criterion to non-linear time-varying MIMO systems (systems with multiple inputs and multiple outputs).\n\"Theorem\". Assume two stable systems formula_2 and formula_3 are connected in a feedback loop, then the closed loop system is input-output stable if formula_4 and both formula_2 and formula_3 are stable by themselves. (This norm is typically the formula_7-norm, the size of the largest singular value of the transfer function over all frequencies. 
Any induced norm will also lead to the same result.)", "Automation-Control": 0.9984131455, "Qwen2": "Yes"} {"id": "16697083", "revid": "8766034", "url": "https://en.wikipedia.org/wiki?curid=16697083", "title": "Lyapunov redesign", "text": "In nonlinear control, the technique of \"Lyapunov redesign\" refers to a design in which a stabilizing state feedback controller is constructed with knowledge of the Lyapunov function formula_1. Consider the system\nwhere formula_3 is the state vector and formula_4 is the vector of inputs. The functions formula_5, formula_6, and formula_7 are defined for formula_8, where formula_9 is a domain that contains the origin. A nominal model for this system can be written as\nand the control law\nstabilizes the system. The design of formula_12 is called \"Lyapunov redesign\".", "Automation-Control": 0.6554820538, "Qwen2": "Yes"} {"id": "25140222", "revid": "7903804", "url": "https://en.wikipedia.org/wiki?curid=25140222", "title": "Joint spectral radius", "text": "In mathematics, the joint spectral radius is a generalization of the classical notion of the spectral radius of a matrix to sets of matrices. In recent years this notion has found applications in a large number of engineering fields and is still a topic of active research.\nGeneral description.\nThe joint spectral radius of a set of matrices is the maximal asymptotic growth rate of products of matrices taken in that set. For a finite (or more generally compact) set of matrices formula_1 the joint spectral radius is defined as follows:\nIt can be proved that the limit exists and that the quantity actually does not depend on the chosen matrix norm (this is true for any norm but particularly easy to see if the norm is sub-multiplicative). The joint spectral radius was introduced in 1960 by Gian-Carlo Rota and Gilbert Strang, two mathematicians from MIT, but started attracting attention with the work of Ingrid Daubechies and Jeffrey Lagarias.
They showed that the joint spectral radius can be used to describe smoothness properties of certain wavelet functions. A wide range of applications have been proposed since then. It is known that the joint spectral radius is NP-hard to compute or to approximate, even when the set formula_3 consists of only two matrices whose nonzero entries are constrained to be equal. Moreover, the question \"formula_4\" is an undecidable problem. Nevertheless, in recent years much progress has been made on its understanding, and it appears that in practice the joint spectral radius can often be computed to satisfactory precision, and that it can moreover bring interesting insight into engineering and mathematical problems.\nComputation.\nApproximation algorithms.\nIn spite of the negative theoretical results on the computability of the joint spectral radius, methods have been proposed that perform well in practice. Algorithms are even known that can reach an arbitrary accuracy in an a priori computable amount of time. These algorithms can be seen as trying to approximate the unit ball of a particular vector norm, called the extremal norm. One generally distinguishes between two families of such algorithms: the first family, called polytope norm methods, constructs the extremal norm by computing long trajectories of points. An advantage of these methods is that in favorable cases they can find the exact value of the joint spectral radius and provide a certificate that this is the exact value.\nThe second family of methods approximates the extremal norm with modern optimization techniques, such as ellipsoid norm approximation, semidefinite programming, sum of squares, and conic programming.
The advantage of these methods is that they are easy to implement, and in practice they generally provide the best bounds on the joint spectral radius.\nThe finiteness conjecture.\nRelated to the computability of the joint spectral radius is the following conjecture:\n\"For any finite set of matrices formula_5 there is a product formula_6 of matrices in this set such that \nIn the above equation \"formula_8\" refers to the classical spectral radius of the matrix formula_9\nThis conjecture, proposed in 1995, was proven to be false in 2003. The counterexample provided in that reference uses advanced measure-theoretical ideas. Subsequently, many other counterexamples have been provided, including an elementary counterexample that uses simple combinatorial properties of matrices and a counterexample based on dynamical systems properties. Recently an explicit counterexample has been proposed. Many questions related to this conjecture are still open, such as the question of whether it holds for pairs of binary matrices.\nApplications.\nThe joint spectral radius was introduced for its interpretation as a stability condition for discrete-time switching dynamical systems. Indeed, the system defined by the equations\nis stable if and only if formula_11\nThe joint spectral radius became popular when Ingrid Daubechies and Jeffrey Lagarias showed that it governs the continuity of certain wavelet functions. Since then, it has found many applications, ranging from number theory and information theory to consensus of autonomous agents and combinatorics on words.\nRelated notions.\nThe joint spectral radius is the generalization of the spectral radius of a matrix to a set of several matrices. However, many more quantities can be defined when considering a set of matrices: the joint spectral subradius characterizes the minimal rate of growth of products in the semigroup generated by formula_3.
\nThe p-radius characterizes the rate of growth of the formula_13 average of the norms of the products in the semigroup.\nThe Lyapunov exponent of the set of matrices characterizes the rate of growth of the geometric average.", "Automation-Control": 0.9090464711, "Qwen2": "Yes"} {"id": "36430432", "revid": "43993798", "url": "https://en.wikipedia.org/wiki?curid=36430432", "title": "Raytheon AN/MSQ-18 Battalion Missile Operations System", "text": "The Raytheon AN/MSQ-18 Battalion Missile Operations System (AN/TSQ-38 for the helicopter-transportable variant) was a Project Nike command, control, and coordination system for \"each associated missile battery\" to control a Nike missile as directed from a Raytheon AN/MSQ-28 at the Army Air Defense Command Post. Raytheon Company constructed the AN/MSQ-18 as 2 separate subsystems:", "Automation-Control": 0.9892556071, "Qwen2": "Yes"} {"id": "30871845", "revid": "38627444", "url": "https://en.wikipedia.org/wiki?curid=30871845", "title": "Anticausal system", "text": "In systems theory, an anticausal system is a hypothetical system with outputs and internal states that depend \"solely\" on future input values. Some textbooks and published research literature might define an anticausal system to be one that does not depend on past input values, allowing also for the dependence on present input values.\nAn acausal system is a system that is not a causal system, that is one that depends on some future input values and possibly on some input values from the past or present. This is in contrast to a causal system which depends only on current and/or past input values. This is often a topic of control theory and digital signal processing (DSP).\nAnticausal systems are also acausal, but the converse is not always true. 
An acausal system that has any dependence on past input values is not anticausal.\nAn example of acausal signal processing is the production of an output signal from an input signal that was recorded by looking at input values both forward and backward in time (from a predefined time arbitrarily denoted as the \"present\" time). In reality, the \"present\" input value, as well as the \"future\" input values, were recorded at some time in the past, but conceptually they can be called the \"present\" or \"future\" input values in this acausal process. This type of processing cannot be done in real time, as future input values are not yet known; it is done after the input signal has been recorded, as post-processing.\nDigital room correction in some sound reproduction systems relies on acausal filters.", "Automation-Control": 0.7974876761, "Qwen2": "Yes"} {"id": "8426376", "revid": "997246618", "url": "https://en.wikipedia.org/wiki?curid=8426376", "title": "Implication table", "text": "An implication table is a tool used to facilitate the minimization of states in a state machine. The concept is to start by assuming that every state may be able to combine with every other state, then eliminate combinations that are not possible. When all the impossible combinations have been eliminated, the remaining state combinations are valid, and thus can be combined.\nThe procedure is as follows:", "Automation-Control": 0.8838371038, "Qwen2": "Yes"} {"id": "69524998", "revid": "28481209", "url": "https://en.wikipedia.org/wiki?curid=69524998", "title": "Spectral submanifold", "text": "In dynamical systems, a spectral submanifold (SSM) is the unique smoothest invariant manifold serving as the nonlinear extension of a spectral subspace of a linear dynamical system under the addition of nonlinearities.
SSM theory provides conditions for when invariant properties of eigenspaces of a linear dynamical system can be extended to a nonlinear system, and therefore motivates the use of SSMs in nonlinear dimensionality reduction.\nDefinition.\nConsider a nonlinear ordinary differential equation of the form\nwith constant matrix formula_2 and the nonlinearities contained in the smooth function formula_3.\nAssume that formula_4 for all eigenvalues formula_5 of formula_6, that is, the origin is an asymptotically stable fixed point. Now select a span formula_7 of formula_8 eigenvectors formula_9 of formula_6. Then, the eigenspace formula_11 is an invariant subspace of the linearized system\nUnder addition of the nonlinearity formula_13 to the linear system, formula_11 generally perturbs into infinitely many invariant manifolds. Among these invariant manifolds, the unique smoothest one is referred to as the spectral submanifold.\nAn equivalent result for unstable SSMs holds for formula_15. \nExistence.\nThe spectral submanifold tangent to formula_11 at the origin is guaranteed to exist provided that certain non-resonance conditions are satisfied by the eigenvalues formula_17 in the spectrum of formula_11. In particular, there can be no linear combination of formula_17 equal to one of the eigenvalues of formula_6 outside of the spectral subspace. If there is such an outer resonance, one can include the resonant mode into formula_11 and extend the analysis to a higher-dimensional SSM pertaining to the extended spectral subspace.\nNon-autonomous extension.\nThe theory on spectral submanifolds extends to nonlinear non-autonomous systems of the form\nwith formula_23 a quasiperiodic forcing term.\nSignificance.\nSpectral submanifolds are useful for rigorous nonlinear dimensionality reduction in dynamical systems. 
The reduction of a high-dimensional phase space to a lower-dimensional manifold can lead to major simplifications by allowing for an accurate description of the system's main asymptotic behaviour. For a known dynamical system, SSMs can be computed analytically by solving the invariance equations, and reduced models on SSMs may be employed for prediction of the response to forcing.\nFurthermore these manifolds may also be extracted directly from trajectory data of a dynamical system with the use of machine learning algorithms.", "Automation-Control": 0.6835774779, "Qwen2": "Yes"} {"id": "21573718", "revid": "7903804", "url": "https://en.wikipedia.org/wiki?curid=21573718", "title": "HyperNEAT", "text": "Hypercube-based NEAT, or HyperNEAT, is a generative encoding that evolves artificial neural networks (ANNs) with the principles of the widely used NeuroEvolution of Augmented Topologies (NEAT) algorithm developed by Kenneth Stanley. It is a novel technique for evolving large-scale neural networks using the geometric regularities of the task domain. It uses Compositional Pattern Producing Networks (CPPNs), which are used to generate the images for Picbreeder.org and shapes for EndlessForms.com . HyperNEAT has recently been extended to also evolve plastic ANNs and to evolve the location of every neuron in the network.", "Automation-Control": 0.9542204738, "Qwen2": "Yes"} {"id": "21598861", "revid": "46066408", "url": "https://en.wikipedia.org/wiki?curid=21598861", "title": "Neighbourhood components analysis", "text": "Neighbourhood components analysis is a supervised learning method for classifying multivariate data into distinct classes according to a given distance metric over the data. 
Functionally, it serves the same purposes as the K-nearest neighbors algorithm and makes direct use of a related concept termed \"stochastic nearest neighbours\".\nDefinition.\nNeighbourhood components analysis aims at \"learning\" a distance metric by finding a linear transformation of input data such that the average leave-one-out (LOO) classification performance is maximized in the transformed space. The key insight to the algorithm is that a matrix formula_1 corresponding to the transformation can be found by defining a differentiable objective function for formula_1, followed by the use of an iterative solver such as conjugate gradient descent. One of the benefits of this algorithm is that the number of classes formula_3 can be determined as a function of formula_1, up to a scalar constant. This use of the algorithm, therefore, addresses the issue of model selection.\nExplanation.\nIn order to define formula_1, we define an objective function describing classification accuracy in the transformed space and try to determine formula_6 such that this objective function is maximized.\nformula_7\nLeave-one-out (LOO) classification.\nConsider predicting the class label of a single data point by consensus of its formula_3-nearest neighbours with a given distance metric. This is known as \"leave-one-out\" classification. However, the set of nearest-neighbours formula_9 can be quite different after passing all the points through a linear transformation. Specifically, the set of neighbours for a point can undergo discrete changes in response to smooth changes in the elements of formula_1, implying that any objective function formula_11 based on the neighbours of a point will be \"piecewise-constant\", and hence \"not differentiable\".\nSolution.\nWe can resolve this difficulty by using an approach inspired by stochastic gradient descent. 
Rather than considering the formula_3-nearest neighbours at each transformed point in LOO-classification, we'll consider the entire transformed data set as \"stochastic nearest neighbours\". We define these using a softmax function of the squared Euclidean distance between a given LOO-classification point and each other point in the transformed space:\nformula_13\nThe probability of correctly classifying data point formula_14 is the probability of classifying the points of each of its neighbours with the same class formula_9:\nformula_16 where formula_17 is the probability of classifying neighbour formula_18 of point formula_14.\nDefine the objective function using LOO classification, this time using the entire data set as stochastic nearest neighbours:\nformula_20\nNote that under stochastic nearest neighbours, the consensus class for a single point formula_14 is the expected value of a point's class in the limit of an infinite number of samples drawn from the distribution over its neighbours formula_22 i.e.: formula_23. Thus the predicted class is an affine combination of the classes of every other point, weighted by the softmax function for each formula_24 where formula_25 is now the entire transformed data set.\nThis choice of objective function is preferable as it is differentiable with respect to formula_1 (denote formula_27):\nformula_28\nformula_29\nObtaining a gradient for formula_1 means that it can be found with an iterative solver such as conjugate gradient descent. Note that in practice, most of the innermost terms of the gradient evaluate to insignificant contributions due to the rapidly diminishing contribution of distant points from the point of interest. 
This means that the inner sum of the gradient can be truncated, resulting in reasonable computation times even for large data sets.\nAlternative formulation.\n\"Maximizing formula_11 is equivalent to minimizing the formula_32-distance between the predicted class distribution and the true class distribution (i.e. where the formula_33 induced by formula_1 are all equal to 1). A natural alternative is the KL-divergence, which induces the following objective function and gradient:\" (Goldberger 2005)\nformula_35\nformula_36\nIn practice, optimization of formula_1 using this function tends to give performance similar to that of the original.\nHistory and background.\nNeighbourhood components analysis was developed by Jacob Goldberger, Sam Roweis, Ruslan Salakhutdinov, and Geoff Hinton at the University of Toronto's department of computer science in 2004.", "Automation-Control": 0.8374905586, "Qwen2": "Yes"} {"id": "53587467", "revid": "6908984", "url": "https://en.wikipedia.org/wiki?curid=53587467", "title": "Outline of machine learning", "text": "The following outline is provided as an overview of and topical guide to machine learning. Machine learning is a subfield of soft computing within computer science that evolved from the study of pattern recognition and computational learning theory in artificial intelligence. In 1959, Arthur Samuel defined machine learning as a \"field of study that gives computers the ability to learn without being explicitly programmed\". Machine learning explores the study and construction of algorithms that can learn from and make predictions on data.
Such algorithms operate by building a model from an example training set of input observations in order to make data-driven predictions or decisions expressed as outputs, rather than following strictly static program instructions.\nMachine learning methods.\nDimensionality reduction.\nDimensionality reduction\nEnsemble learning.\nEnsemble learning\nMeta-learning.\nMeta-learning\nReinforcement learning.\nReinforcement learning\nSupervised learning.\nSupervised learning\nBayesian.\nBayesian statistics\nDecision tree algorithms.\nDecision tree algorithm\nLinear classifier.\nLinear classifier\nUnsupervised learning.\nUnsupervised learning\nArtificial neural networks.\nArtificial neural network\nAssociation rule learning.\nAssociation rule learning\nHierarchical clustering.\nHierarchical clustering\nCluster analysis.\nCluster analysis\nAnomaly detection.\nAnomaly detection\nSemi-supervised learning.\nSemi-supervised learning\nDeep learning.\nDeep learning\nHistory of machine learning.\nHistory of machine learning\nMachine learning projects.\nMachine learning projects\nMachine learning organizations.\nMachine learning organizations", "Automation-Control": 0.8315320015, "Qwen2": "Yes"} {"id": "3523316", "revid": "9784415", "url": "https://en.wikipedia.org/wiki?curid=3523316", "title": "Distributed parameter system", "text": "In control theory, a distributed-parameter system (as opposed to a lumped-parameter system) is a system whose state space is infinite-dimensional. Such systems are therefore also known as infinite-dimensional systems. 
Typical examples are systems described by partial differential equations or by delay differential equations.\nLinear time-invariant distributed-parameter systems.\nAbstract evolution equations.\nDiscrete-time.\nWith \"U\", \"X\" and \"Y\" Hilbert spaces and \"formula_1\" ∈ \"L\"(\"X\"), \"formula_2\" ∈ \"L\"(\"U\", \"X\"), \"formula_3\" ∈ \"L\"(\"X\", \"Y\") and \"formula_4\" ∈ \"L\"(\"U\", \"Y\") the following difference equations determine a discrete-time linear time-invariant system:\nwith \"formula_7\" (the state) a sequence with values in \"X\", \"formula_8\" (the input or control) a sequence with values in \"U\" and \"formula_9\" (the output) a sequence with values in \"Y\".\nContinuous-time.\nThe continuous-time case is similar to the discrete-time case but now one considers differential equations instead of difference equations:\nAn added complication, however, is that to include interesting physical examples such as partial differential equations and delay differential equations in this abstract framework, one is forced to consider unbounded operators. Usually \"A\" is assumed to generate a strongly continuous semigroup on the state space \"X\". Assuming \"B\", \"C\" and \"D\" to be bounded operators already allows for the inclusion of many interesting physical examples, but many other interesting physical examples force \"B\" and \"C\" to be unbounded as well.\nExample: a partial differential equation.\nThe partial differential equation with formula_12 and formula_13 given by\nfits into the abstract evolution equation framework described above as follows. The input space \"U\" and the output space \"Y\" are both chosen to be the set of complex numbers. The state space \"X\" is chosen to be \"L\"2(0, 1). The operator \"A\" is defined as\nIt can be shown that \"A\" generates a strongly continuous semigroup on \"X\".
The bounded operators \"B\", \"C\" and \"D\" are defined as\nExample: a delay differential equation.\nThe delay differential equation\nfits into the abstract evolution equation framework described above as follows. The input space \"U\" and the output space \"Y\" are both chosen to be the set of complex numbers. The state space \"X\" is chosen to be the product of the complex numbers with \"L\"2(−\"τ\", 0). The operator \"A\" is defined as\nIt can be shown that \"A\" generates a strongly continuous semigroup on X. The bounded operators \"B\", \"C\" and \"D\" are defined as\nTransfer functions.\nAs in the finite-dimensional case the transfer function is defined through the Laplace transform (continuous-time) or Z-transform (discrete-time). Whereas in the finite-dimensional case the transfer function is a proper rational function, the infinite-dimensionality of the state space leads to irrational functions (which are however still holomorphic).\nDiscrete-time.\nIn discrete-time the transfer function is given in terms of the state-space parameters by formula_24 and it is holomorphic in a disc centered at the origin. In case 1/\"z\" belongs to the resolvent set of \"A\" (which is the case on a possibly smaller disc centered at the origin) the transfer function equals formula_25. An interesting fact is that any function that is holomorphic in zero is the transfer function of some discrete-time system.\nContinuous-time.\nIf \"A\" generates a strongly continuous semigroup and \"B\", \"C\" and \"D\" are bounded operators, then the transfer function is given in terms of the state space parameters by formula_26 for \"s\" with real part larger than the exponential growth bound of the semigroup generated by \"A\". 
In more general situations this formula as it stands may not even make sense, but an appropriate generalization of this formula still holds.\nTo obtain an easy expression for the transfer function it is often better to take the Laplace transform of the given differential equation than to use the state space formulas, as illustrated below for the examples given above.\nTransfer function for the partial differential equation example.\nSetting the initial condition formula_27 equal to zero and denoting Laplace transforms with respect to \"t\" by capital letters we obtain from the partial differential equation given above\nThis is an inhomogeneous linear differential equation with formula_31 as the variable, \"s\" as a parameter and initial condition zero. The solution is formula_32. Substituting this in the equation for \"Y\" and integrating gives formula_33 so that the transfer function is formula_34.\nTransfer function for the delay differential equation example.\nProceeding similarly to the partial differential equation example, the transfer function for the delay equation example is formula_35.\nControllability.\nIn the infinite-dimensional case there are several non-equivalent definitions of controllability which in the finite-dimensional case collapse to the usual notion of controllability. The three most important controllability concepts are:\nControllability in discrete-time.\nAn important role is played by the maps formula_36 which map the set of all \"U\"-valued sequences into X and are given by formula_37. The interpretation is that formula_38 is the state that is reached by applying the input sequence \"u\" when the initial condition is zero. The system is called \nControllability in continuous-time.\nIn controllability of continuous-time systems the map formula_42 given by formula_43 plays the role that formula_36 plays in discrete-time. However, the space of control functions on which this operator acts now influences the definition.
The usual choice is \"L\"2(0, ∞;\"U\"), the space of (equivalence classes of) \"U\"-valued square integrable functions on the interval (0, ∞), but other choices such as \"L\"1(0, ∞;\"U\") are possible. The different controllability notions can be defined once the domain of formula_42 is chosen. The system is called\nObservability.\nAs in the finite-dimensional case, observability is the dual notion of controllability. In the infinite-dimensional case there are several different notions of observability which in the finite-dimensional case coincide. The three most important ones are:\nObservability in discrete-time.\nAn important role is played by the maps formula_50 which map \"X\" into the space of all \"Y\" valued sequences and are given by formula_51 if \"k\" ≤ \"n\" and zero if \"k\" > \"n\". The interpretation is that formula_52 is the truncated output with initial condition \"x\" and control zero. The system is called\nObservability in continuous-time.\nIn observability of continuous-time systems the map formula_56 given by formula_57 for \"s∈[0,t]\" and zero for \"s>t\" plays the role that formula_50 plays in discrete-time. However, the space of functions to which this operator maps now influences the definition. The usual choice is \"L\"2(0, ∞, \"Y\"), the space of (equivalence classes of) \"Y\"-valued square integrable functions on the interval \"(0,∞)\", but other choices such as \"L\"1(0, ∞, \"Y\") are possible. The different observability notions can be defined once the co-domain of formula_56 is chosen. The system is called\nDuality between controllability and observability.\nAs in the finite-dimensional case, controllability and observability are dual concepts (at least when for the domain of formula_63 and the co-domain of formula_64 the usual \"L\"2 choice is made). 
The correspondence under duality of the different concepts is:", "Automation-Control": 0.8249156475, "Qwen2": "Yes"} {"id": "5534558", "revid": "1893804", "url": "https://en.wikipedia.org/wiki?curid=5534558", "title": "Design for assembly", "text": "Design for assembly (DFA) is a process by which products are designed with ease of assembly in mind. If a product contains fewer parts it will take less time to assemble, thereby reducing assembly costs. In addition, if the parts are provided with features which make it easier to grasp, move, orient and insert them, this will also reduce assembly time and assembly costs. The reduction of the number of parts in an assembly has the added benefit of generally reducing the total cost of parts in the assembly. This is usually where the major cost benefits of the application of design for assembly occur. Critics of DFA from within industry argue that DFA/DFM is simply a new term for something that has existed as long as manufacturing itself, and is otherwise known as engineering design.\nApproaches.\nDesign for assembly can take different forms. In the 1960s and 1970s various rules and recommendations were proposed in order to help designers consider assembly problems during the design process. Many of these rules and recommendations were presented together with practical examples showing how assembly difficulty could be improved. However, it was not until the 1970s that numerical evaluation methods were developed to allow design for assembly studies to be carried out on existing and proposed designs.\nThe first evaluation method was developed at Hitachi and was called the Assembly Evaluation Method (AEM). This method is based on the principle of \"one motion for one part.\" For more complicated motions, a point-loss standard is used and the ease of assembly of the whole product is evaluated by subtracting points lost. 
The method was originally developed in order to rate assemblies for ease of automatic assembly.\nStarting in 1977, Geoff Boothroyd, supported by an NSF grant at the University of Massachusetts Amherst, developed the Design for Assembly method (DFA), which could be used to estimate the time for manual assembly of a product and the cost of assembling the product on an automatic assembly machine. Recognizing that the most important factor in reducing assembly costs was the minimization of the number of separate parts in a product, he introduced three simple criteria which could be used to determine theoretically whether any of the parts in the product could be eliminated or combined with other parts. These criteria, together with tables relating assembly time to various design factors influencing part grasping, orientation and insertion, could be used to estimate total assembly time and to rate the quality of a product design from an assembly viewpoint. For automatic assembly, tables of factors could be used to estimate the cost of automatic feeding and orienting and automatic insertion of the parts on an assembly machine.\nIn the 1980s and 1990s, variations of the AEM and DFA methods were proposed: the GE Hitachi method, which is based on the AEM and DFA; and the Lucas method, the Westinghouse method and several others based on the original DFA method. All methods are now referred to as \"design for assembly\" methods.\nImplementation.\nMost products are assembled manually, and the original DFA method for manual assembly is the most widely used method and has had the greatest industrial impact throughout the world.\nThe DFA method, like the AEM method, was originally made available in the form of a handbook where the user would enter data on worksheets to obtain a rating for the ease of assembly of a product.
Starting in 1981, Geoffrey Boothroyd and Peter Dewhurst developed a computerized version of the DFA method which allowed its implementation in a broad range of companies. For this work they were presented with many awards including the National Medal of Technology. There are many published examples of significant savings obtained through the application of DFA. For example, in 1981, Sidney Liebson, manager of manufacturing engineering for Xerox, estimated that his company would save hundreds of millions of dollars through the application of DFA. In 1988, Ford Motor Company credited the software with overall savings approaching $1 billion. In many companies DFA is a corporate requirement and DFA software is continually being adopted by companies attempting to obtain greater control over their manufacturing costs. There are many key principles in design for assembly.\nNotable examples.\nTwo notable examples of good design for assembly are the Sony Walkman and the Swatch watch. Both were designed for fully automated assembly. The Walkman line was designed for \"vertical assembly\", in which parts are inserted in straight-down moves only. The Sony SMART assembly system, used to assemble Walkman-type products, is a robotic system for assembling small devices designed for vertical assembly. \nThe IBM Proprinter used design for automated assembly (DFAA) rules. These DFAA rules help design a product that can be assembled automatically by robots, but they are useful even with products assembled by manual assembly.\nFurther information.\nFor more information on Design for Assembly and the subject of Design for Manufacture and Assembly see:", "Automation-Control": 0.9595777392, "Qwen2": "Yes"} {"id": "65271417", "revid": "107930", "url": "https://en.wikipedia.org/wiki?curid=65271417", "title": "Harley Davidson Motor Company Factory No. 7", "text": "Harley Davidson Motor Company Factory No. 
7 is a factory building of the Harley-Davidson company in Milwaukee listed on the Wisconsin State Register of Historic Places. It was at this plant that the company invented and refined the automated system for casting and milling engine parts and wheel hubs for their motorcycles, which helped to secure the company's position as a leader in motorcycle manufacturing. This factory building was added to the state register on August 14, 2020.", "Automation-Control": 0.96965307, "Qwen2": "Yes"} {"id": "2932246", "revid": "27015025", "url": "https://en.wikipedia.org/wiki?curid=2932246", "title": "Hybrid intelligent system", "text": "Hybrid intelligent system denotes a software system which employs, in parallel, a combination of methods and techniques from artificial intelligence subfields, such as:\nFrom the cognitive science perspective, every natural intelligent system is hybrid because it performs mental operations on both the symbolic and subsymbolic levels. For the past few years, there has been an increasing discussion of the importance of A.I. Systems Integration. This discussion rests on the notion that simple and specific AI systems (such as systems for computer vision or speech synthesis, or software that employs some of the models mentioned above) have already been created, and that the time has now come to integrate them into broad AI systems. Proponents of this approach are researchers such as Marvin Minsky, Ron Sun, Aaron Sloman, and Michael A. Arbib.\nAn example hybrid is a hierarchical control system in which the lowest, reactive layers are sub-symbolic.
The higher layers, having relaxed time constraints, are capable of reasoning from an abstract world model and performing planning.\nIntelligent systems usually rely on hybrid reasoning processes, which include induction, deduction, abduction and reasoning by analogy.", "Automation-Control": 0.713763833, "Qwen2": "Yes"} {"id": "50569499", "revid": "1461430", "url": "https://en.wikipedia.org/wiki?curid=50569499", "title": "Gated recurrent unit", "text": "Gated recurrent units (GRUs) are a gating mechanism in recurrent neural networks, introduced in 2014 by Kyunghyun Cho et al. The GRU is like a long short-term memory (LSTM) with a forget gate, but has fewer parameters than LSTM, as it lacks an output gate. \nGRU's performance on certain tasks of polyphonic music modeling, speech signal modeling and natural language processing was found to be similar to that of LSTM. GRUs showed that gating is indeed helpful in general, and Bengio's team came to no concrete conclusion on which of the two gating units was better.\nArchitecture.\nThere are several variations on the full gated unit, with gating done using the previous hidden state and the bias in various combinations, and a simplified form called minimal gated unit.\nThe operator formula_1 denotes the Hadamard product in the following.\nFully gated unit.\nInitially, for formula_2, the output vector is formula_3. \nVariables\nActivation functions\nAlternative activation functions are possible, provided that formula_15.\nAlternate forms can be created by changing formula_8 and formula_9\nMinimal gated unit.\nThe minimal gated unit (MGU) is similar to the fully gated unit, except the update and reset gate vector is merged into a forget gate. 
This also implies that the equation for the output vector must be changed:\nVariables\nLight gated recurrent unit.\nThe light gated recurrent unit (LiGRU) removes the reset gate altogether, replaces tanh with the ReLU activation, and applies batch normalization (BN):\nLiGRU has been studied from a Bayesian perspective. This analysis yielded a variant called light Bayesian recurrent unit (LiBRU), which showed slight improvements over the LiGRU on speech recognition tasks.", "Automation-Control": 0.7846158743, "Qwen2": "Yes"} {"id": "1496061", "revid": "1163112205", "url": "https://en.wikipedia.org/wiki?curid=1496061", "title": "List of Unified Modeling Language tools", "text": "This article compares UML tools. UML tools are software applications which support some functions of the Unified Modeling Language.", "Automation-Control": 0.9904583097, "Qwen2": "Yes"} {"id": "19824207", "revid": "20483999", "url": "https://en.wikipedia.org/wiki?curid=19824207", "title": "Sherline", "text": "Sherline is a machine tool builder founded in Australia and currently headquartered in Vista, California, USA. It builds miniature machine tools (microlathes and micromills) and a wide range of tooling to be used on them. Within the miniature segment of the machine tool industry, Sherline is one of the most widely known brands. According to Sherline, their line of OEM accessories (chucks, vises, rotary tables, and so on) is more comprehensive than that of any other builder of machine tools, regardless of machine size.\nSherline tools are often used by hobbyists for making nearly any kind of part that can be machined, as long as it fits within a miniature machine tool's limits of slide travel. Sherline's products are also used by industry. They provide an inexpensive way to build custom tooling using modular components (XY tables, machine slides, etc.).\nSherline's sales are global. 
Its product line helps to put machine tools in places where traditionally they would be unlikely to go by lowering the threshold for market entry. Its turnkey CNC systems are some of the least expensive CNC machine tools on the market, making it possible for individual hobbyists to enter a market that in past decades was almost entirely industrial.", "Automation-Control": 0.9827892184, "Qwen2": "Yes"} {"id": "19826297", "revid": "15951685", "url": "https://en.wikipedia.org/wiki?curid=19826297", "title": "Electrohydraulic servo valve", "text": "An electrohydraulic servo valve (EHSV) is an electrically-operated valve that controls how hydraulic fluid is sent to an actuator. Servo valves are often used to control powerful hydraulic cylinders with a very small electrical signal. Servo valves can provide precise control of position, velocity, pressure, and force with good post-movement damping characteristics.\nHistory of electrohydraulic servo valves.\nThe electrohydraulic servo valve first appeared in World War II. The EHSVs in use during the 1940s were characterized by poor accuracy and slow response times due to the inability to rapidly convert electrical signals into hydraulic flows. The first two-stage servo valve used a solenoid to actuate a first-stage spool which in turn drove a rotating main stage. The servo valves of the World War II era were similar to this, using a solenoid to drive a spool valve.\nAdvancement of EHSVs took off in the 1950s, largely due to the adoption of permanent magnet torque motors as the first stage (as opposed to solenoids). This resulted in greatly improved response times and a reduction in power used to control the valves.\nDescription.\nTypes.\nElectrohydraulic servo valves may consist of one or more stages. A single-stage servo valve uses a torque motor to directly position a spool valve. Single-stage servo valves suffer from limitations in flow capability and stability due to torque motor power requirements.
Two-stage servo valves may use flapper, jet pipe, or deflector jet valves as hydraulic amplifier first stages to position a second-stage spool valve. This design results in significant increases in servo valve flow capability, stability, and force output. Similarly, three-stage servo valves may use an intermediate-stage spool valve to position a larger third-stage spool valve. Three-stage servo valves are limited to very high power applications, where significant flows are required.\nFurthermore, two-stage servo valves may be classified by the type of feedback used for the second stage, which may be spool position, load pressure, or load flow feedback. Most commonly, two-stage servo valves use position feedback, which may further be classified as direct feedback, force feedback, or spring centering.\nControl.\nA servo valve receives pressurized hydraulic fluid from a source, typically a hydraulic pump. It then transfers the fluid to a hydraulic cylinder in a closely controlled manner. Typically, the valve moves the spool proportionally to an electrical signal that it receives, indirectly controlling flow rate. Simple hydraulic control valves are binary: they are either on or off. Servo valves are different in that they can continuously vary the flow they supply from zero up to their rated maximum flow, or until the output pressure reaches the supplied pressure. More complex servo valves can control other parameters. For instance, some have internal feedback so that the input signal effectively controls flow or output pressure, rather than spool position.\nServo valves are often used in a feedback control loop where the position or force on a hydraulic cylinder is measured and fed back into a controller that varies the signal sent to the servo valve.
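Such a position-feedback loop can be sketched with a toy discrete-time simulation. All numbers, the proportional controller, and the linear signal-to-flow valve model below are illustrative assumptions, not a characterization of any real valve:

```python
# Proportional position control of a hydraulic cylinder through a servo valve.
# The valve turns the electrical signal into flow; the piston integrates flow.
dt = 0.001            # time step, s
valve_gain = 2e-5     # flow per unit signal, m^3/s   (assumed)
piston_area = 1e-3    # m^2                            (assumed)
kp = 400.0            # proportional gain              (assumed)

target, pos = 0.05, 0.0   # commanded and actual piston position, m
for _ in range(5000):     # simulate 5 s
    signal = kp * (target - pos)       # controller: error -> valve signal
    flow = valve_gain * signal         # valve: signal -> flow rate
    pos += flow / piston_area * dt     # cylinder: flow -> velocity -> position

# pos has settled at the 0.05 m target
```

Real controllers typically add integral and derivative action, and valve dynamics are far richer than this linear model.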
This allows very precise control of the cylinder.\nExamples of usage.\nManufacturing.\nOne example of servo valve use is in blow molding where the servo valve controls the wall thickness of extruded plastic making up the bottle or container by use of a deformable die. The mechanical feedback has been replaced by an electric feedback with a position transducer. Integrated electronics close the position loop for the spool. These valves are suitable for electrohydraulic position, velocity, pressure or force control systems with extremely high dynamic response requirements.\nAircraft.\nServo valves are used to regulate the flow of fuel into a turbofan engine governed by FADEC. In fly-by-wire aircraft the control surfaces are often moved by servo valves connected to hydraulic cylinders. The signals to the servo valves are controlled by a flight control computer that receives commands from the pilot and monitors the flight of the aircraft.", "Automation-Control": 0.9348313808, "Qwen2": "Yes"} {"id": "3733220", "revid": "28481209", "url": "https://en.wikipedia.org/wiki?curid=3733220", "title": "Automated guided vehicle", "text": "An automated guided vehicle (AGV), different from an autonomous mobile robot (AMR), is a portable robot that follows along marked long lines or wires on the floor, or uses radio waves, vision cameras, magnets, or lasers for navigation. They are most often used in industrial applications to transport heavy materials around a large industrial building, such as a factory or warehouse. Application of the automatic guided vehicle broadened during the late 20th century.\nIntroduction.\nThe AGV can tow objects behind them in trailers to which they can autonomously attach. The trailers can be used to move raw materials or finished products. The AGV can also store objects on a bed. The objects can be placed on a set of motorized rollers (conveyor) and then pushed off by reversing them. 
AGVs are employed in nearly every industry, including pulp, paper, metals, newspaper, and general manufacturing. They are also used to transport materials such as food, linen or medicine in hospitals.\nAn AGV can also be called a laser guided vehicle (LGV). In Germany the technology is also called \"Fahrerloses Transportsystem\" (FTS) and in Sweden \"förarlösa truckar\". Lower-cost versions of AGVs are often called Automated Guided Carts (AGCs) and are usually guided by magnetic tape. The term AMR is sometimes used to differentiate mobile robots that do not rely on extra infrastructure in the environment (like magnetic strips or visual markers) for navigation from those that do; the latter are then called AGVs.\nAGVs are available in a variety of models and can be used to move products on an assembly line, transport goods throughout a plant or warehouse, and deliver loads.\nThe first AGV was brought to market in the 1950s, by Barrett Electronics of Northbrook, Illinois, and at the time it was simply a tow truck that followed a wire in the floor instead of a rail. Out of this technology came a new type of AGV, which follows invisible UV markers on the floor instead of being towed by a chain. The first such system was deployed at the Willis Tower (formerly Sears Tower) in Chicago, Illinois, to deliver mail throughout its offices.\nOver the years the technology has become more sophisticated, and today automated vehicles are mainly laser navigated, e.g. the LGV (laser guided vehicle). In an automated process, LGVs are programmed to communicate with other robots to ensure product is moved smoothly through the warehouse, whether it is being stored for future use or sent directly to shipping areas. Today, the AGV plays an important role in the design of new factories and warehouses, safely moving goods to their rightful destination.\nNavigation.\nWired.\nA slot is cut into the floor and a wire is placed approximately 1 inch below the surface.
This slot is cut along the path the AGV is to follow. The wire is used to transmit a radio signal, and a sensor is installed on the bottom of the AGV close to the ground. The sensor detects the relative position of the radio signal being transmitted from the wire, and this information is used to regulate the steering circuit, making the AGV follow the wire.
Guide tape.
AGVs (some known as automated guided carts, or AGCs) use tape for the guide path. The tape can be one of two styles: magnetic or colored. The AGV is fitted with the appropriate guide sensor to follow the path of the tape.
One major advantage of tape over wired guidance is that it can be easily removed and relocated if the course needs to change. Colored tape is initially less expensive, but lacks the advantage of being embeddable in high-traffic areas, where exposed tape may become damaged or dirty. A flexible magnetic bar can also be embedded in the floor like wire, but it operates on the same principle as magnetic tape and so remains unpowered, or passive. Another advantage of magnetic guide tape is its dual polarity: small pieces of magnetic tape may be placed to change states of the AGC based on the polarity and sequence of the tags.
Laser target navigation.
Navigation is done by mounting reflective tape on walls, poles, or fixed machines. The AGV carries a laser transmitter and receiver on a rotating turret; the laser is transmitted and received by the same sensor. The angle and (sometimes) distance to any reflectors that are in line of sight and in range are automatically calculated. This information is compared to the map of the reflector layout stored in the AGV's memory, which allows the navigation system to triangulate the current position of the AGV. The current position is compared to the path programmed into the reflector layout map, and the steering is adjusted accordingly to keep the AGV on track.
It can then navigate to a desired target using the constantly updated position.
Inertial (gyroscopic) navigation.
Another form of AGV guidance is inertial navigation. With inertial guidance, a computer control system directs and assigns tasks to the vehicles. Transponders are embedded in the floor of the workplace, and the AGV uses these transponders to verify that the vehicle is on course. A gyroscope is able to detect the slightest change in the direction of the vehicle and corrects it in order to keep the AGV on its path. The margin of error for the inertial method is ±1 inch.
Inertial navigation can operate in nearly any environment, including tight aisles or extreme temperatures, and can include the use of magnets embedded in the floor of the facility that the vehicle can read and follow.
Natural feature (natural targeting) navigation.
Navigation without retrofitting of the workspace is called natural features, or natural targeting, navigation. One method uses one or more range-finding sensors, such as a laser range-finder, as well as gyroscopes or inertial measurement units with Monte Carlo/Markov localization techniques, to understand where the vehicle is as it dynamically plans the shortest permitted path to its goal. The advantage of such systems is that they are highly flexible for on-demand delivery to any location. They can handle failure without bringing down the entire manufacturing operation, since AGVs can plan paths around the failed device. They are also quick to install, with less downtime for the factory.
Vision guidance.
Vision-guided AGVs can be installed with no modifications to the environment or infrastructure. They operate by using cameras to record features along the route, allowing the AGV to replay the route by using the recorded features to navigate. Vision-guided AGVs use Evidence Grid technology, an application of probabilistic volumetric sensing, which was invented and initially developed by Dr.
Hans Moravec at Carnegie Mellon University. The Evidence Grid technology uses probabilities of occupancy for each point in space to compensate for uncertainty in the performance of sensors and in the environment. The primary navigation sensors are specially designed stereo cameras. The vision-guided AGV uses 360-degree images to build a 3D map, which allows it to follow a trained route without human assistance or the addition of special features, landmarks, or positioning systems.
Geoguidance.
A geoguided AGV recognizes its environment to establish its location. Without any infrastructure, a forklift equipped with geoguidance technology detects and identifies columns, racks, and walls within the warehouse. Using these fixed references, it can position itself in real time and determine its route. There are no limitations on the distance covered or on the number of pick-up or drop-off locations, and routes are infinitely modifiable.
Steering control.
An AGV can use three different steering control systems to help it navigate. Differential speed control is the most common. In this method there are two independent drive wheels; each is driven at a different speed in order to turn, or at the same speed to allow the AGV to go forwards or backwards. The AGV turns in a similar fashion to a tank. This method of steering is the simplest, as it does not require additional steering motors or mechanisms. More often than not, it is seen on an AGV that is used to transport and turn in tight spaces, or when the AGV is working near machines. This wheel setup is not used in towing applications, because the AGV would cause the trailer to jackknife when it turned.
The second type of steering used is steered wheel control. This type of steering can be similar to a car's steering, but such an arrangement is not very manoeuvrable. It is more common to use a three-wheeled vehicle similar to a conventional three-wheeled forklift, in which the drive wheel is the turning wheel.
It is more precise in following the programmed path than the differential speed controlled method, and it turns more smoothly. Unlike differentially controlled AGVs, steered wheel control AGVs can be used in all applications. Steered wheel control is used for towing, and such a vehicle can also at times be placed under operator control.
The third type is a combination of differential and steered. Two independent steer/drive motors are placed on diagonal corners of the AGV and swivelling castors are placed on the other corners. It can turn like a car (rotating in an arc) in any direction, it can crab in any direction, and it can drive in differential mode in any direction.
Path decision.
AGVs have to make decisions on path selection. This is done through different methods: frequency select mode (wired navigation only), path select mode (wireless navigation only), or via magnetic tape on the floor, which not only guides the AGV but also issues steering and speed commands.
Frequency select mode.
Frequency select mode bases its decision on the frequencies being emitted from the floor. When an AGV approaches a point on the wire where it splits, the AGV detects the two frequencies and, using a table stored in its memory, decides on the best path. The different frequencies are required only at the decision point, and can change back to one set signal after this point. This method is not easily expandable and requires extra cutting of the floor, meaning more cost.
Path select mode.
An AGV using path select mode chooses a path from among preprogrammed paths. It uses measurements taken from its sensors and compares them to values given to it by programmers. When an AGV approaches a decision point, it only has to decide whether to follow path 1, 2, 3, etc. This decision is rather simple, since it already knows its path from its programming.
This method can increase the cost of an AGV, because a team of programmers is required to program the AGV with the correct paths and change the paths when necessary. On the other hand, this method is easy to change and set up.
Magnetic tape mode.
The magnetic tape is laid on the surface of the floor or buried in a 10 mm channel. Not only does it provide the path for the AGV to follow, but strips of tape in different combinations of polarity, sequence, and distance laid alongside the track also tell the AGV to change lane, speed up, slow down, and stop.
Traffic control.
Flexible manufacturing systems containing more than one AGV may require traffic control so that the AGVs do not run into one another. Traffic control can be carried out locally or by software running on a fixed computer elsewhere in the facility. Local methods include zone control, forward sensing control, and combination control; each method has its advantages and disadvantages.
Zone control.
Zone control is the favorite in most environments because it is simple to install and easy to expand. Zone control uses a wireless transmitter to transmit a signal in a fixed area. Each AGV contains a sensing device to receive this signal and transmit back to the transmitter. If the area is clear, the signal is set at "clear", allowing any AGV to enter and pass through the area. When an AGV is in the area, the "stop" signal is sent, and all AGVs attempting to enter the area stop and wait for their turn. Once the AGV in the zone has moved out beyond the zone, the "clear" signal is sent to one of the waiting AGVs. Another way to set up zone control traffic management is to equip each individual robot with its own small transmitter/receiver; the individual AGV then sends its own "do not enter" message to all AGVs getting too close to its zone. A problem with this method is that if one zone goes down, all the AGVs are at risk of colliding with any other AGV.
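The single-occupancy zone scheme described above can be sketched as a small simulation. This is a minimal illustration, not a real AGV controller API; the class and method names are hypothetical.

```python
from collections import deque

class ZoneController:
    """Sketch of zone control: at most one AGV occupies a zone at a time;
    others stop at the boundary and wait in arrival order (hypothetical API)."""

    def __init__(self):
        self.occupant = None    # AGV currently in the zone, or None
        self.waiting = deque()  # AGVs stopped at the zone boundary

    def request_entry(self, agv):
        """Return True ("clear" signal) if the AGV may enter; otherwise
        queue it and return False ("stop" signal)."""
        if self.occupant is None:
            self.occupant = agv
            return True
        self.waiting.append(agv)
        return False

    def exit_zone(self, agv):
        """The occupant leaves; the next waiting AGV, if any, is cleared."""
        assert self.occupant == agv, "only the occupant can exit"
        self.occupant = self.waiting.popleft() if self.waiting else None
        return self.occupant  # AGV now cleared to enter, or None

zone = ZoneController()
print(zone.request_entry("AGV-1"))  # True: zone was clear
print(zone.request_entry("AGV-2"))  # False: AGV-2 stops and waits
print(zone.exit_zone("AGV-1"))      # AGV-2: next in line receives "clear"
```

The same structure extends to the decentralized variant, where each AGV broadcasts its own "do not enter" message instead of relying on a fixed zone transmitter.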
Zone control is a cost-efficient way to control AGVs in an area.
Collision avoidance.
Forward sensing control uses collision avoidance sensors to avoid collisions with other AGVs in the area. These sensors include sonic sensors, which work like radar; optical sensors, which use infrared; and bumper sensors, which rely on physical contact. Most AGVs are equipped with a bumper sensor of some sort as a fail-safe. Sonic sensors send out a "chirp", or high-frequency signal, and then wait for a reply; from the outline of the reply, the AGV can determine whether an object is ahead of it and take the necessary actions to avoid a collision. The optical sensor uses an infrared transmitter/receiver and sends an infrared signal which is then reflected back, working on a concept similar to the sonic sensor. The problems with these sensors are that they can only protect the AGV from so many sides, and they are relatively hard to install and work with.
Combination control.
Combination control uses collision avoidance sensors as well as zone control sensors. The combination of the two helps to prevent collisions in any situation. In normal operation zone control is used, with collision avoidance as a fail-safe: for example, if the zone control system is down, the collision avoidance system will still prevent the AGV from colliding.
System management.
Industries with AGVs need some sort of control over them. There are three main ways to control an AGV: a locator panel, a CRT color graphics display, and central logging and reporting.
A locator panel is a simple panel used to see which area an AGV is in. If the AGV is in one area for too long, it could mean it is stuck or broken down. A CRT color graphics display shows in real time where each vehicle is; it also gives the status of the AGV, its battery voltage, and its unique identifier, and can show blocked spots. Central logging is used to keep track of the history of all the AGVs in the system.
Central logging stores all the data and history from these vehicles, which can be printed out for technical support or logged to check for uptime.
AGVs are often used in flexible manufacturing systems to keep up, transport, and connect smaller subsystems into one large production unit. AGVs employ a great deal of technology to ensure they do not hit one another and to make sure they reach their destinations. Loading and transporting materials from one area to another is the main task of the AGV. AGVs require a large initial investment, but they do their jobs with high efficiency. In places such as Japan, automation has increased, and factories there are now considered to be twice as efficient as factories in America. Despite the large initial cost, the total cost over time decreases.
Common applications.
Automated guided vehicles can be used in a wide variety of applications to transport many different types of material, including pallets, rolls, racks, carts, and containers. AGVs excel in applications with the following characteristics:
Handling raw materials.
AGVs are commonly used to transport raw materials such as paper, steel, rubber, metal, and plastic. This includes transporting materials from receiving to the warehouse and delivering materials directly to production lines.
Work-in-process movement.
Work-in-process movement is one of the first applications where automated guided vehicles were used, and includes the repetitive movement of materials throughout the manufacturing process. AGVs can be used to move material from the warehouse to production/processing lines or from one process to another.
Pallet handling.
Pallet handling is an extremely popular application for AGVs, as repetitive movement of pallets is very common in manufacturing and distribution facilities. AGVs can move pallets from the palletizer to stretch wrapping, to the warehouse/storage, or to the outbound shipping docks.
Finished product handling.
Moving finished goods from manufacturing to storage or shipping is the final movement of materials before they are delivered to customers. These movements often require the gentlest material handling, because the products are complete and subject to damage from rough handling. Because AGVs operate with precisely controlled navigation, acceleration, and deceleration, the potential for damage is minimized, making them an excellent choice for this type of application.
Trailer loading.
Automatic loading of trailers is a relatively new application for automated guided vehicles and is becoming increasingly popular. AGVs are used to transport and load pallets of finished goods directly into standard, over-the-road trailers without any special dock equipment. AGVs can pick up pallets from conveyors, racking, or staging lanes and deliver them into the trailer in the specified loading pattern. Some automatic trailer loading (ATL) AGVs utilize natural targeting to view the walls of the trailer for navigation. These types of ATL AGVs can be either completely driverless or hybrid vehicles.
Roll handling.
AGVs are used to transport rolls in many types of plant, including paper mills, converters, printers, newspapers, steel producers, and plastics manufacturers. AGVs can store and stack rolls on the floor or in racking, and can even automatically load printing presses with rolls of paper.
Container handling.
AGVs are used to move sea containers in some port container terminals. The main benefits are reduced labour costs and a more reliable (less variable) performance. This use of AGVs was pioneered in 1993 at the Port of Rotterdam in the Netherlands. By 2014 there were 20 automated or semi-automated port container terminals around the world using automated guided vehicles, automated stacking cranes, or both. The original AGVs used diesel power with either hydraulic or electric drives.
However, more AGVs now use battery power and automated battery swapping, which reduces emissions and lowers refueling costs, though these vehicles cost more to purchase and have a shorter range.
Primary application industries.
Efficient, cost-effective movement of materials is an important and common element in improving operations in many manufacturing plants and warehouses. Because automated guided vehicles can deliver efficient, cost-effective movement of materials, AGVs can be applied to various industries in standard or customized designs to best suit an industry's requirements. Industries currently utilizing AGVs include (but are not limited to):
Pharmaceutical.
AGVs are a preferred method of moving materials in the pharmaceutical industry. Because an AGV system tracks all movement provided by the AGVs, it supports process validation and cGMP (current Good Manufacturing Practice).
Chemical.
AGVs deliver raw materials, move materials to curing storage warehouses, and provide transportation to other processing cells and stations. Common industries include rubber, plastics, and specialty chemicals.
Manufacturing.
AGVs are often used in general manufacturing of products. AGVs can typically be found delivering raw materials, transporting work-in-process, moving finished goods, removing scrap materials, and supplying packaging materials.
Automotive.
AGV installations are found in stamping plants, power train (engine and transmission) plants, and assembly plants, delivering raw materials, transporting work-in-process, and moving finished goods.
AGVs are also used to supply specialized tooling which must be changed.
Paper and print.
AGVs can move paper rolls, pallets, and waste bins to provide all routine material movement in the production and warehousing (storage/retrieval) of paper, newspaper, printing, corrugating, converting, and plastic film.
Food and beverage.
AGVs can be applied to move materials in food processing (such as the loading of food or trays into sterilizers) and at the "end of line", linking the palletizer, stretch wrapper, and warehouse. AGVs can load standard, over-the-road trailers with finished goods, and unload trailers to supply raw materials or packaging materials to the plant. AGVs can also store and retrieve pallets in the warehouse.
Hospital.
AGVs are becoming increasingly popular in the healthcare industry for efficient transport, and are programmed to be fully integrated to automatically operate doors, elevators/lifts, cart washers, trash dumpers, etc. AGVs typically move linens, trash, regulated medical waste, patient meals, soiled food trays, and surgical case carts.
Warehousing.
AGVs used in warehouses and distribution centers logically move loads around the warehouse and prepare them for shipping/loading or receiving, or move them from an induction conveyor to logical storage locations within the warehouse. Often, this type of use is accompanied by customized warehouse management software. AGVs are preferred in warehouses that handle fragile items, since human error, and hence damage to the goods, is reduced to almost zero. Warehouses with hazardous goods have also widely adopted this technology, as AGVs can operate in extreme conditions, such as passing through freezers.
Theme parks.
In recent years, the theme park industry has begun using AGVs for rides. One of the earliest AGV ride systems was for Epcot's Universe of Energy, opened in 1982. The ride used wired navigation to drive the "Traveling Theatre" through the attraction.
Many rides use wired navigation, especially when employees must frequently walk over the ride path, such as at The Great Movie Ride (a now-closed attraction) at Disney's Hollywood Studios. Another ride at Hollywood Studios that uses wired navigation is The Twilight Zone Tower of Terror, a combined drop tower/dark ride. The elevator cars are AGVs that lock into place inside separate vertical motion cabs to move vertically. When a car reaches a floor requiring horizontal movement, the AGV unlocks from the vertical cab and drives itself out of the elevator.
A recent trend in theme parks is the so-called trackless ride system: AGV rides that use LPS, Wi-Fi, or RFID to move around. The advantage of this system is that the ride can execute seemingly random movements, giving a different ride experience each time.
Battery charging.
AGVs utilize a number of battery charging options. Each option is dependent on the user's preference.
Battery swap.
"Battery swap technology" requires an operator to manually remove the discharged battery from the AGV and place a fully charged battery in its place after approximately 8–12 hours (about one shift) of AGV operation. 5–10 minutes is required to perform this with each AGV in the fleet.
Automatic and opportunity charging.
"Automatic and opportunity battery charging" allows for continuous operation. On average, an AGV charges for 12 minutes every hour under automatic charging, and no manual intervention is required. If opportunity charging is being utilized, the AGV will receive a charge whenever the opportunity arises. When a battery pack gets to a predetermined level, the AGV will finish the current job it has been assigned before going to the charging station.
Automatic battery swap.
Automatic battery swap is an alternative to manual battery swap. It may require an additional piece of automation machinery, an automatic battery changer, in the overall AGV system.
AGVs pull up to the battery swap station and have their batteries automatically replaced with fully charged batteries. The automatic battery changer then places the removed batteries into a charging slot for automatic recharging. It keeps track of the batteries in the system and pulls them only when they are fully charged.
Other versions of automatic battery swap allow AGVs to change each other's batteries.
While a battery swap system reduces the manpower required to swap batteries, recent developments in battery charging technology allow batteries to be charged more quickly and efficiently, potentially eliminating the need to swap batteries at all.
Process Window Index.
Process Window Index (PWI) is a statistical measure that quantifies the robustness of a manufacturing process, e.g. one which involves heating and cooling, known as a thermal process. In the manufacturing industry, PWI values are used to calibrate the heating and cooling of soldering jobs (known as a thermal profile) while baked in a reflow oven.
PWI measures how well a process fits into a user-defined process limit known as the specification limit. The specification limit is the tolerance allowed for the process and may be statistically determined. Industrially, these specification limits are known as the "process window", and values plotted inside or outside this window determine the process window index.
Using PWI values, processes can be accurately measured, analyzed, compared, and tracked at the same level of statistical process control and quality control available to other manufacturing processes.
Statistical process control.
Process capability is the ability of a process to produce output within specified limits.
To help determine whether a manufacturing or business process is in a state of statistical control, process engineers use control charts, which help to predict the future performance of the process based on its current behavior.
To help determine the capability of a process, statistically determined upper and lower limits are drawn on either side of the process mean on the control chart. The control limits are set at three standard deviations on either side of the process mean and are known as the upper control limit (UCL) and lower control limit (LCL) respectively. If the process data plotted on the control chart remain within the control limits over an extended period, then the process is said to be stable.
The tolerance values specified by the end-user are known as specification limits: the upper specification limit (USL) and lower specification limit (LSL) respectively. If the process data plotted on a control chart remain within these specification limits, then the process is considered a capable process, denoted by formula_1.
Manufacturing industry has developed customized specification limits known as process windows. Values are plotted within this process window, and the values relative to the process mean of the window are known as the Process Window Index. By using PWI values, processes can be accurately measured, analyzed, compared, and tracked at the same level of statistical process control and quality control available to other manufacturing processes.
Control limits.
Control limits, also known as natural process limits, are horizontal lines drawn on a statistical process control chart, usually at a distance of ±3 standard deviations from the plotted statistic's mean, used to judge the stability of a process.
Control limits should not be confused with "tolerance limits" or "specifications", which are completely independent of the distribution of the plotted sample statistic.
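The ±3-standard-deviation control limits described above can be computed directly from process data. This is a minimal sketch using the plain sample mean and population standard deviation; real control charts usually estimate sigma from subgroup ranges, and the data values here are made up for illustration.

```python
import statistics

def control_limits(samples):
    """Return (LCL, mean, UCL) at +/- 3 standard deviations of the data.
    A sketch: practical charts estimate sigma from rational subgroups."""
    mean = statistics.mean(samples)
    sigma = statistics.pstdev(samples)  # population standard deviation
    return mean - 3 * sigma, mean, mean + 3 * sigma

# Hypothetical measurements of a process centered at 200.0
data = [199.8, 200.2, 200.1, 199.9, 200.0, 200.3, 199.7]
lcl, mean, ucl = control_limits(data)
in_control = all(lcl <= x <= ucl for x in data)
print(round(lcl, 2), round(ucl, 2), in_control)  # 199.4 200.6 True
```

A single point outside (LCL, UCL) would then count as a signal under the rule set described below the limits' definition.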
Control limits describe what a process is capable of producing (sometimes referred to as the "voice of the process"), while tolerances and specifications describe how the product should perform to meet the customer's expectations (referred to as the "voice of the customer").
Use.
Control limits are used to detect signals in process data that indicate that a process is not in control and, therefore, not operating predictably. A value in excess of a control limit indicates that a special cause is affecting the process.
To detect signals, one of several rule sets may be used. One specification defines a signal as any single point outside of the control limits. A process is also considered out of control if there are seven consecutive points that are still inside the control limits but all on one side of the mean.
For normally distributed statistics, the area bracketed by the control limits will on average contain 99.73% of all the plot points on the chart, as long as the process is and remains in statistical control. A false-detection rate of at least 0.27% is therefore expected.
It is often not known whether a particular process generates data that conform to a particular distribution, but Chebyshev's inequality and the Vysochanskij–Petunin inequality allow the inference that, for any unimodal distribution, at least 95% of the data will be encapsulated by limits placed at 3 sigma.
PWI in electronics manufacturing.
An example of a process to which the PWI concept may be applied is soldering. In soldering, a thermal profile is the set of time-temperature values for a variety of processes such as slope, thermal soak, reflow, and peak.
Each thermal profile is ranked on how well it fits within a process window (the specification or tolerance limit). Raw temperature values are normalized as a percentage relative to both the process mean and the window limits.
The center of the process window is defined as zero, and the extreme edges of the process window are ±99%. A PWI greater than or equal to 100% indicates that the profile does not process the product within specification, while a PWI of 99% indicates that the profile runs at the edge of the process window. For example, if the process mean is set at 200 °C, with the process window calibrated at 180 °C and 220 °C respectively, then a measured value of 188 °C translates to a process window index of −60%. A lower PWI value indicates a more robust profile. For maximum efficiency, separate PWI values are computed for the peak, slope, reflow, and soak processes of a thermal profile.
To avoid thermal shock affecting production, the steepest slope in the thermal profile is determined and leveled. Manufacturers use custom-built software to accurately determine and decrease the steepness of the slope. In addition, the software also automatically recalibrates the PWI values for the peak, slope, reflow, and soak processes. By setting PWI values, engineers can ensure that the reflow soldering work does not overheat or cool too quickly.
Formula.
The Process Window Index is calculated as the worst case (i.e. highest number) in the set of thermal profile data. For each profile statistic the percentage of the respective process window used is calculated, and the worst case (i.e. highest percentage) is the PWI.
For example, a thermal profile with three thermocouples, and four profile statistics logged for each thermocouple, would have a set of twelve statistics for that thermal profile. In this case, the PWI would be the highest value among the twelve percentages of the respective process windows.
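The normalization just described can be sketched in a few lines of code. The numbers below reproduce the 188 °C example above (a 180–220 °C window); the function names are illustrative, not a real profiling API, and the worst case is taken as the largest magnitude among the signed window percentages.

```python
def window_percentage(measured, low_limit, high_limit):
    """Signed percentage of the process window used: the window center
    maps to 0 and the window edges map to +/-100."""
    center = (low_limit + high_limit) / 2
    half_width = (high_limit - low_limit) / 2
    return 100 * (measured - center) / half_width

def pwi(stats):
    """PWI is the worst case: the largest magnitude among all
    window percentages for every (measured, low, high) statistic."""
    return max(abs(window_percentage(m, lo, hi)) for m, lo, hi in stats)

# 188 degC in a 180-220 degC window uses -60% of the window:
print(window_percentage(188, 180, 220))  # -60.0

# Two statistics for brevity (a full profile with three thermocouples
# and four statistics each would contribute twelve percentages):
print(pwi([(188, 180, 220), (1.5, 0.0, 2.0)]))  # 60.0
```

A PWI at or above 100 would mean at least one statistic falls outside its process window, matching the out-of-specification criterion stated above.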
The formula to calculate PWI is:
PWI = 100 × max[i = 1 to N; j = 1 to M] { |measured value [i,j] − specification mean [i,j]| / (specification range [i,j] / 2) }
where:
i = thermocouple number; N = number of thermocouples; j = profile statistic number; M = number of profile statistics per thermocouple; the specification mean is the midpoint of the process window for a statistic; and the specification range is the width of that process window.
Teaching dimension.
In computational learning theory, the teaching dimension of a concept class "C" is defined to be formula_1, where formula_2 is the minimum size of a witness set for "c" in "C". Intuitively, this measures the number of instances that are needed to identify a concept in the class, using supervised learning with examples provided by a helpful teacher who is trying to convey the concept as succinctly as possible. This definition was formulated in 1995 by Sally Goldman and Michael Kearns, based on earlier work by Goldman, Ron Rivest, and Robert Schapire.
The teaching dimension of a finite concept class can be used to give a lower and an upper bound on the membership query cost of the concept class.
In Stasys Jukna's book "Extremal Combinatorics", a lower bound is given for the teaching dimension in general: let "C" be a concept class over a finite domain "X". If the size of "C" is greater than
then the teaching dimension of "C" is greater than "k".
However, there are more specific teaching models that make assumptions about the teacher or learner and can give lower values for the teaching dimension. For instance, several such models are the classical teaching (CT) model, the optimal teacher (OT) model, recursive teaching (RT), preference-based teaching (PBT), and non-clashing teaching (NCT).
Die casting.
Die casting is a metal casting process that is characterized by forcing molten metal under high pressure into a mold cavity.
The mold cavity is created using two hardened tool steel dies which have been machined into shape and work similarly to an injection mold during the process. Most die castings are made from non-ferrous metals, specifically zinc, copper, aluminium, magnesium, lead, pewter, and tin-based alloys. Depending on the type of metal being cast, a hot- or cold-chamber machine is used.
The casting equipment and the metal dies represent large capital costs, and this tends to limit the process to high-volume production. Manufacture of parts using die casting is relatively simple, involving only four main steps, which keeps the incremental cost per item low. It is especially suited to a large quantity of small- to medium-sized castings, which is why die casting produces more castings than any other casting process. Die castings are characterized by a very good surface finish (by casting standards) and dimensional consistency.
History.
Die casting equipment was invented in 1838 for the purpose of producing movable type for the printing industry. The first die casting-related patent was granted in 1849 for a small hand-operated machine for the purpose of mechanized printing type production. In 1885 Ottmar Mergenthaler invented the Linotype machine, which cast an entire line of type as a single unit using a die casting process; it nearly completely replaced setting type by hand in the publishing industry. The Soss die-casting machine, manufactured in Brooklyn, NY, was the first machine to be sold on the open market in North America. Other applications grew rapidly, with die casting facilitating the growth of consumer goods and appliances by greatly reducing the production cost of intricate parts in high volumes. In 1966, General Motors released the "Acurad" process.
Cast metal.
The main die casting alloys are zinc, aluminium, magnesium, copper, lead, and tin; although uncommon, ferrous die casting is also possible.
Specific die casting alloys include: zinc aluminium; aluminium to, e.g. The Aluminum Association (AA) standards: AA 380, AA 384, AA 386, AA 390; and AZ91D magnesium. The following is a summary of the advantages of each alloy:\n, maximum weight limits for aluminium, brass, magnesium, and zinc castings are estimated at approximately , and , respectively. By late-2019, press machines capable of die casting single pieces over- were being used to produce aluminium chassis components for cars.\nThe material used defines the minimum section thickness and minimum draft required for a casting as outlined in the table below. The thickest section should be less than , but can be greater.\nDesign geometry.\nThere are a number of geometric features to be considered when creating a parametric model of a die casting:\nEquipment.\nThere are two basic types of die casting machines: \"hot-chamber machines\" and \"cold-chamber machines\". These are rated by how much clamping force they can apply. Typical ratings are between .\nHot-chamber die casting.\nHot-chamber die casting, also known as \"gooseneck machines\", rely upon a pool of molten metal to feed the die. At the beginning of the cycle the piston of the machine is retracted, which allows the molten metal to fill the \"gooseneck\". The pneumatic- or hydraulic-powered piston then forces this metal out of the gooseneck into the die. The advantages of this system include fast cycle times (approximately 15 cycles a minute) and the convenience of melting the metal in the casting machine. The disadvantages of this system are that it is limited to use with low-melting point metals and that aluminium cannot be used because it picks up some of the iron while in the molten pool. 
Therefore, hot-chamber machines are primarily used with zinc-, tin-, and lead-based alloys.\nCold-chamber die casting.\nThese are used when the casting alloy cannot be used in hot-chamber machines; these include aluminium, zinc alloys with a large composition of aluminium, magnesium and copper. The process for these machines start with melting the metal in a separate furnace. Then a precise amount of molten metal is transported to the cold-chamber machine where it is fed into an unheated shot chamber (or injection cylinder). This shot is then driven into the die by a hydraulic or mechanical piston. The biggest disadvantage of this system is the slower cycle time due to the need to transfer the molten metal from the furnace to the cold-chamber machine.\nMold or tooling.\nTwo dies are used in die casting; one is called the \"cover die half\" and the other the \"ejector die half\". Where they meet is called the parting line. The cover die contains the sprue (for hot-chamber machines) or shot hole (for cold-chamber machines), which allows the molten metal to flow into the dies; this feature matches up with the injector nozzle on the hot-chamber machines or the shot chamber in the cold-chamber machines. The ejector die contains the ejector pins and usually the runner, which is the path from the sprue or shot hole to the mould cavity. The cover die is secured to the stationary, or front, platen of the casting machine, while the ejector die is attached to the movable platen. The mould cavity is cut into two \"cavity inserts\", which are separate pieces that can be replaced relatively easily and bolt into the die halves.\nThe dies are designed so that the finished casting will slide off the cover half of the die and stay in the ejector half as the dies are opened. This assures that the casting will be ejected every cycle because the ejector half contains the \"ejector pins\" to push the casting out of that die half. 
The ejector pins are driven by an \"ejector pin plate\", which accurately drives all of the pins at the same time and with the same force, so that the casting is not damaged. The ejector pin plate also retracts the pins after ejecting the casting to prepare for the next shot. There must be enough ejector pins to keep the overall force on each pin low, because the casting is still hot and can be damaged by excessive force. The pins still leave a mark, so they must be located in places where these marks will not hamper the casting's purpose.\nOther die components include \"cores\" and \"slides\". Cores are components that usually produce holes or opening, but they can be used to create other details as well. There are three types of cores: fixed, movable, and loose. Fixed cores are ones that are oriented parallel to the pull direction of the dies (i.e. the direction the dies open), therefore they are fixed, or permanently attached to the die. Movable cores are ones that are oriented in any other way than parallel to the pull direction. These cores must be removed from the die cavity after the shot solidifies, but before the dies open, using a separate mechanism. Slides are similar to movable cores, except they are used to form undercut surfaces. The use of movable cores and slides greatly increases the cost of the dies. Loose cores, also called \"pick-outs\", are used to cast intricate features, such as threaded holes. These loose cores are inserted into the die by hand before each cycle and then ejected with the part at the end of the cycle. The core then must be removed by hand. Loose cores are the most expensive type of core, because of the extra labor and increased cycle time. Other features in the dies include water-cooling passages and vents along the parting lines. These vents are usually wide and thin (approximately ) so that when the molten metal starts filling them the metal quickly solidifies and minimizes scrap. 
No risers are used because the high pressure ensures a continuous feed of metal from the gate.\nThe most important material properties for the dies are thermal shock resistance and softening at elevated temperature; other important properties include hardenability, machinability, heat checking resistance, weldability, availability (especially for larger dies), and cost. The longevity of a die is directly dependent on the temperature of the molten metal and the cycle time. The dies used in die casting are usually made out of hardened tool steels, because cast iron cannot withstand the high pressures involved, therefore the dies are very expensive, resulting in high start-up costs. Metals that are cast at higher temperatures require dies made from higher alloy steels.\nThe main failure mode for die casting dies is wear or erosion. Other failure modes are \"heat checking\" and \"thermal fatigue\". Heat checking is when surface cracks occur on the die due to a large temperature change on every cycle. Thermal fatigue is when surface cracks occur on the die due to a large number of cycles.\nProcess.\nThe following are the four steps in \"traditional die casting\", also known as \"\", these are also the basis for any of the die casting variations: die preparation, filling, ejection, and shakeout. The dies are prepared by spraying the mould cavity with lubricant. The lubricant both helps control the temperature of the die and it also assists in the removal of the casting. The dies are then closed and molten metal is injected into the dies under high pressure; between . Once the mould cavity is filled, the pressure is maintained until the casting solidifies. The dies are then opened and the shot (shots are different from castings because there can be multiple cavities in a die, yielding multiple castings per shot) is ejected by the ejector pins. Finally, the shakeout involves separating the scrap, which includes the gate, runners, sprues and flash, from the shot. 
This is often done using a special trim die in a power press or hydraulic press. Other methods of shaking out include sawing and grinding. A less labor-intensive method is to tumble shots if gates are thin and easily broken; separation of gates from finished parts must follow. This scrap is recycled by remelting it. The yield is approximately 67%.\nThe high-pressure injection leads to a quick fill of the die, which is required so the entire cavity fills before any part of the casting solidifies. In this way, discontinuities are avoided, even if the shape requires difficult-to-fill thin sections. This creates the problem of air entrapment, because when the mould is filled quickly there is little time for the air to escape. This problem is minimized by including vents along the parting lines, however, even in a highly refined process there will still be some porosity in the center of the casting.\nMost die casters perform other secondary operations to produce features not readily castable, such as tapping a hole, polishing, plating, buffing, or painting.\nInspection.\nAfter the shakeout of the casting it is inspected for defects. The most common defects are misruns and cold shuts. These defects can be caused by cold dies, low metal temperature, dirty metal, lack of venting, or too much lubricant. Other possible defects are gas porosity, shrinkage porosity, hot tears, and flow marks. \"Flow marks\" are marks left on the surface of the casting due to poor gating, sharp corners, or excessive lubricant.\nLubricants.\nWater-based lubricants are the most used type of lubricant, because of health, environmental, and safety reasons. Unlike solvent-based lubricants, if water is properly treated to remove all minerals from it, it will not leave any by-product in the dies. 
If the water is not properly treated, then the minerals can cause surface defects and discontinuities.\nToday \"water-in-oil\" and \"oil-in-water\" emulsions are used, because, when the lubricant is applied, the water cools the die surface by evaporating, depositing the oil that helps release the shot. A common mixture for this type of emulsion is thirty parts water to one part oil; however, in extreme cases a ratio of one-hundred to one is used. Oils that are used include heavy residual oil (HRO), animal fat, vegetable fat, synthetic oil, and all sorts of mixtures of these. HROs are gelatinous at room temperature, but at the high temperatures found in die casting, they form a thin film. Other substances are added to control the viscosity and thermal properties of these emulsions, e.g. graphite, aluminium, mica. Other chemical additives are used to inhibit rusting and oxidation. In addition, emulsifiers are added to improve the emulsion manufacturing process, e.g. soap, alcohol esters, ethylene oxides.\nHistorically, solvent-based lubricants, such as diesel fuel and kerosene, were commonly used. These were good at releasing the part from the die, but a small explosion occurred during each shot, which led to a build-up of carbon on the mould cavity walls. However, they were easier to apply evenly than water-based lubricants.\nAdvantages.\nAdvantages of die casting:\nDisadvantages.\nThe main disadvantage to die casting is the very high capital cost. Both the casting equipment required and the dies and related components are very costly, as compared to most other casting processes. Therefore, to make die casting an economic process, a large production volume is needed. Other disadvantages are:\nVariants.\nAcurad.\nAcurad was a die casting process developed by General Motors in the late 1950s and 1960s. The name is an acronym for accurate, reliable, and dense. 
It was developed to combine a stable fill and directional solidification with the fast cycle times of the traditional die casting process. The process pioneered four breakthrough technologies for die casting: thermal analysis, flow and fill modeling, heat treatable and high integrity die castings, and indirect squeeze casting (explained below).\nThe thermal analysis was the first done for any casting process. This was done by creating an electrical analog of the thermal system. A cross-section of the dies were drawn on Teledeltos paper and then thermal loads and cooling patterns were drawn onto the paper. Water lines were represented by magnets of various sizes. The thermal conductivity was represented by the reciprocal of the resistivity of the paper.\nThe Acurad system employed a bottom fill system that required a stable flow-front. Logical thought processes and trial and error were used because computerized analysis did not exist yet; however this modeling was the precursor to computerized flow and fill modeling.\nThe Acurad system was the first die casting process that could successfully cast low-iron aluminium alloys, such as A356 and A357. In a traditional die casting process these alloys would solder to the die. Similarly, Acurad castings could be heat treated and meet the U.S. military specification .\nFinally, the Acurad system employed a patented double shot piston design. The idea was to use a second piston (located within the primary piston) to apply pressure after the shot had partially solidified around the perimeter of the casting cavity and shot sleeve. While the system was not very effective, it did lead the manufacturer of the Acurad machines, Ube Industries, to discover that it was just as effective to apply sufficient pressure at the right time later in the cycle with the primary piston; this is indirect squeeze casting.\nPore-free.\nWhen no porosity is allowed in a cast part then the pore-free casting process is used. 
It is identical to the standard process except oxygen is injected into the die before each shot to purge any air from the mould cavity. This causes small dispersed oxides to form when the molten metal fills the die, which virtually eliminates gas porosity. An added advantage to this is greater strength. Unlike standard die castings, these castings can be heat treated and welded. This process can be performed on aluminium, zinc, and lead alloys.\nVacuum-assisted high-pressure die casting.\nIn vacuum assisted high pressure die casting, a.k.a. vacuum high pressure die casting (VHPDC), a vacuum pump removes air and gases from die cavity and metal delivery system before and during injection. Vacuum die casting reduces porosity, allows heat treating and welding, improves surface finish, and can increase strength.\nHeated-manifold direct-injection.\nHeated-manifold direct-injection die casting, also known as direct-injection die casting or runnerless die casting, is a zinc die casting process where molten zinc is forced through a heated manifold and then through heated mini-nozzles, which lead into the moulding cavity. This process has the advantages of lower cost per part, through the reduction of scrap (by the elimination of sprues, gates, and runners) and energy conservation, and better surface quality through slower cooling cycles.\nSemi-solid.\n\"Semi-solid die casting\" uses metal that is heated between its liquidus and either solidus or eutectic temperature, so that it is in its \"mushy region\". This allows for more complex parts and thinner walls.\nLow Pressure Die Casting (LPDC) uses compressed air instead of a piston to inject molten metal into the die. The process begins with the preparation of the die. The die is preheated to a temperature that will ensure good metal flow and avoid premature solidification of the molten metal. A holding furnace (crucible) filled with molten metal is located beneath the sealed die. 
The furnace is pressurized to inject the molten metal into the die cavity through a tube. The pressure is usually around 7 to 15 psi. The molten metal is left to solidify in the die. This takes anywhere from a few seconds to a few minutes, depending on the size and complexity of the part being cast. Once the casting has solidified, the die is opened, and the casting is ejected or removed manually. ", "Automation-Control": 0.8422400951, "Qwen2": "Yes"} {"id": "44739552", "revid": "5042921", "url": "https://en.wikipedia.org/wiki?curid=44739552", "title": "Oracle Health Sciences", "text": "Oracle Health Sciences is a family of software developed by Oracle Corporation which is primarily used to create clinical trials and to conduct pharmacovigilance based on the database created with it.\nOracle Argus.\nOracle Argus is a pharmacovigilance product line that includes Oracle Argus Affiliate, Oracle Argus Analytics, Oracle Argus Dossier, Oracle Argus Insight, Oracle Argus Reconciliation, Oracle Argus Interchange, and Oracle Argus Safety.", "Automation-Control": 0.7581110001, "Qwen2": "Yes"} {"id": "23399606", "revid": "7903804", "url": "https://en.wikipedia.org/wiki?curid=23399606", "title": "Robocup Rescue Simulation", "text": "Robocup Rescue Simulation is an education and research project intended to promote the development of robotic agents for search and rescue. 
The project was initiated in reaction to the Great Hanshin earthquake, which hit Hyōgo Prefecture, Japan, on 17 January 1995, killing more than six thousand people, most of them in the city of Kobe.\nAccording to event organizers, \"The intention of the RoboCup Rescue project is to promote research and development in this socially significant domain at various levels involving multi-agent team work coordination, physical robotic agents for search and rescue, information infrastructures, personal digital assistants, a standard simulator and decision support systems, evaluation benchmarks for rescue strategies and robotic systems that [can] all [be] integrated into comprehensive systems in future.\"\nThe RoboCup Rescue Simulation Project challenges teams of researchers to design virtual robots to solve various challenges, or to build real, autonomous robots, which are evaluated in specially designed rescue simulations.\nThe project is one of several competitions operated by RoboCup, which is best known for the Robot Soccer World Cup.", "Automation-Control": 0.7511885166, "Qwen2": "Yes"} {"id": "7841263", "revid": "44120587", "url": "https://en.wikipedia.org/wiki?curid=7841263", "title": "Abrasive flow machining", "text": "Abrasive flow machining (AFM), also known as abrasive flow deburring or extrude honing, is an interior surface finishing process characterized by flowing an abrasive-laden fluid through a workpiece. This fluid is typically very viscous, having the consistency of putty, or dough. AFM smooths and finishes rough surfaces, and is specifically used to remove burrs, polish surfaces, form radii, and even remove material. The nature of AFM makes it ideal for interior surfaces, slots, holes, cavities, and other areas that may be difficult to reach with other polishing or grinding processes. 
Due to its low material removal rate, AFM is not typically used for large stock-removal operations, although it can be.\nAbrasive flow machining was first patented by the Extrude Hone Corporation in 1970.\nProcess.\nIn abrasive flow machining, the abrasive fluid flows through the workpiece, effectively performing erosion. Abrasive particles in the fluid contact raised features on the surface of the workpiece and remove them. The fluid is forced through the workpiece by a hydraulic ram, where it acts as a flexible file, or slug, molding itself precisely to the shape of the workpiece. The highest amount of material removal occurs in areas where the flow of the fluid is restricted; according to Bernoulli's Principle, the flow speed of the fluid increases and its pressure decreases in these areas, facilitating a higher material removal rate (MRR). The pressure exerted by the fluid on all contacting surfaces also results in a very uniform finish.\nAFM may be performed once, as a one-way flow process, or repeatedly as a two-way flow process. In the two-way flow process, a reservoir of medium exists at either end of the workpiece, and the medium flows back and forth through the workpiece from reservoir to reservoir.\nEquipment.\nAn abrasive flow machine normally includes two medium chambers equipped with hydraulic rams, a fixture for holding the workpiece, and a clamping system that holds all the components tightly together. Most machines allow for the loading of different types of abrasive medium, and include the capacity to adjust the pressure used in extruding the medium through the workpiece. They may be manually operated, or automated using CNC. 
For machines designed to accommodate high production volumes, accessories such as part-cleaning stations, unloading and reloading stations, media refeed devices, and media heat exchangers may be included.", "Automation-Control": 0.9809465408, "Qwen2": "Yes"} {"id": "2931020", "revid": "13157623", "url": "https://en.wikipedia.org/wiki?curid=2931020", "title": "Pendulum-and-hydrostat control", "text": "Pendulum-and-hydrostat control is a control mechanism developed originally for depth control of the Whitehead torpedo. It is an early example of what is now known as proportional and derivative control.\nThe hydrostat is a mechanism that senses pressure; the torpedo's depth is proportional to pressure. However, with only a hydrostat controlling the depth fins in a negative feedback loop, the torpedo tends to oscillate around the desired depth rather than settling to the desired depth. The addition of a pendulum allows the torpedo to sense the pitch of the torpedo. The pitch information is combined with the depth information to set the torpedo's depth control fins. The pitch information provides a damping term to the depth control response and suppresses the depth oscillations.\nOperation.\nIn control theory the effect of the addition of the pendulum can be explained as turning the simple proportional controller into a proportional-derivative controller since the depth keeping is not controlled by the depth alone anymore but also by the derivative (rate of change) of the depth which is roughly proportional to the angle of the machine. The relative gain of the proportional and derivative functions could be altered by adjusting the linkages. \nIt was mainly used to control the depth of torpedoes until the end of the Second World War, and it reduced depth errors from ±40 feet (12 meters) to as little as ±6 inches (0.15 m).\nThe pendulum and hydrostat control was invented by Robert Whitehead. 
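The damping role of the pendulum described above can be sketched in simulation. A minimal sketch, assuming an illustrative pitch/depth model and made-up gains (not Whitehead's actual linkage values):

```python
# Sketch of pendulum-and-hydrostat depth control as a PD loop.
# The torpedo model and gains below are illustrative assumptions,
# not historical values.

def simulate(kp, kd, target=10.0, steps=6000, dt=0.01):
    """Return final depth and peak-to-peak swing over the last third of the run."""
    depth, pitch, trace = 0.0, 0.0, []
    for _ in range(steps):
        error = target - depth            # hydrostat: pressure ~ depth
        fin = kp * error - kd * pitch     # pendulum supplies the damping term
        fin = max(-1.0, min(1.0, fin))    # fin deflection is mechanically limited
        pitch += fin * dt                 # fin deflection drives pitch
        depth += 5.0 * pitch * dt         # dive rate proportional to pitch
        trace.append(depth)
    tail = trace[len(trace) * 2 // 3:]
    return depth, max(tail) - min(tail)

depth_pd, swing_pd = simulate(kp=1.0, kd=2.0)  # hydrostat + pendulum
depth_p, swing_p = simulate(kp=1.0, kd=0.0)    # hydrostat alone
# With the pendulum term the depth settles near the target;
# without it the depth keeps oscillating around the target.
```

Because pitch is roughly proportional to the rate of change of depth, the pendulum term acts as the derivative part of a proportional-derivative controller, which is exactly the oscillation-suppressing effect described above.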
It was an important advance in torpedo technology, and it was nicknamed \"The Secret\".", "Automation-Control": 0.653106451, "Qwen2": "Yes"} {"id": "36992420", "revid": "16326754", "url": "https://en.wikipedia.org/wiki?curid=36992420", "title": "NiuTrans", "text": "NiuTrans is a machine translation system. It has a platform, an API, and two open-source translation systems.\nIt is developed by the Natural Language Processing Group at Northeastern University (China).\nTranslation systems.\nNiuTrans.SMT is an open-source statistical machine translation system jointly developed by the Natural Language Processing Laboratory of Northeastern University and Shenyang Yayi Network Technology Co., Ltd.\nNiuTrans.NMT is a lightweight and efficient Transformer-based neural machine translation system. It is implemented in pure C++ and heavily optimized for fast decoding. The system can run on various systems and devices.", "Automation-Control": 0.8945115209, "Qwen2": "Yes"} {"id": "65662229", "revid": "35936988", "url": "https://en.wikipedia.org/wiki?curid=65662229", "title": "Karen Rudie", "text": "Karen Gail Rudie (born 1963) is a Canadian control theorist and electrical engineer known for her work on the decentralized control of discrete event dynamic systems. She is a professor of electrical and computer engineering in Queen's University at Kingston.\nEducation and career.\nRudie majored in mathematics and engineering as an undergraduate at Queen's University, specializing in control and communication; she graduated in 1985. She has a Ph.D. 
from the University of Toronto, completed in 1992; her dissertation, \"Decentralized Control of Discrete-Event Systems\", was supervised by Walter Murray Wonham.\nShe returned to Queen's University as a faculty member in 1993, after postdoctoral research at the Institute for Mathematics and its Applications.\nRecognition.\nIn 2018, Rudie was named an IEEE Fellow, as a member of the IEEE Control Systems Society, \"for contributions to the supervisory control theory of discrete event systems\".", "Automation-Control": 0.631239295, "Qwen2": "Yes"} {"id": "12601888", "revid": "18872885", "url": "https://en.wikipedia.org/wiki?curid=12601888", "title": "Universal measuring machine", "text": "Universal measuring machines (UMM) are measurement devices used for objects in which geometric relationships are the most critical element, with dimensions specified from geometric locations (see GD&T) rather than absolute coordinates. The very first uses for these machines were the inspection of gauges and parts produced by jig grinding. While bearing some resemblance to a coordinate-measuring machine (CMM), its usage and accuracy envelope differ significantly. While CMMs typically move in three dimensions and measure with a touch probe, a UMM aligns a spindle (4th axis) with a part geometry using a continuous scanning probe.\nOriginally, universal measuring machines were created to fill a need to continuously measure geometric features in both an absolute and comparative capacity, rather than with a point-based coordinate measuring system. A CMM provides a rapid method for inspecting absolute points, but geometric relationships, such as runout, parallelism, perpendicularity, etc., must be calculated rather than measured directly. By aligning an accurate spindle carrying an electronic test indicator with a geometric feature of interest, rather than using a non-scanning Cartesian probe to estimate an alignment, a universal measuring machine fills this need. 
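The distinction drawn above, between geometric relationships calculated from discrete probed points and those read directly by a swept indicator, can be made concrete. A minimal sketch, using hypothetical probe readings rather than any real CMM interface:

```python
import math

def runout(points, center=(0.0, 0.0)):
    """Total indicated runout: spread of radial distances about an axis.

    A CMM computes this figure from discrete touch points; a UMM's
    indicator, swept around the spindle, reads the same spread directly.
    """
    cx, cy = center
    radii = [math.hypot(x - cx, y - cy) for x, y in points]
    return max(radii) - min(radii)

# Hypothetical points sampled around a nominally 10 mm-radius bore with
# a 3-lobe out-of-roundness of amplitude 0.01 mm:
pts = [((10.0 + 0.01 * math.cos(3 * t)) * math.cos(t),
        (10.0 + 0.01 * math.cos(3 * t)) * math.sin(t))
       for t in (2 * math.pi * k / 36 for k in range(36))]
# runout(pts) ≈ 0.02 mm (max radius 10.01 minus min radius 9.99)
```

The more densely the surface is sampled, the closer the calculated value approaches what a continuously swept indicator would show, which is why scanning probes narrowed the gap between the two machine types.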
The indicator can be accurately controlled and moved across a part, either along a linear axis or radially around the spindle, to continuously record profile and determine geometry. This gives the universal machine a very strong advantage over non-scanning measuring methods when profiling flats, radii, contours, and holes, as the detail of the feature can be captured at the resolution of the probe. More modern CMMs do have scanning probes and thus can determine geometry similarly. \nIn practice, the 1970s-era universal measuring machine is a very slow machine that requires a highly skilled and patient operator to use, and the accuracy built into these machines far outstripped the needs of most industries. As a result, the universal measuring machine today is uncommon, only found as a special-purpose machine in metrology laboratories. Because the machine can make comparative length measurements without moving linear axes, it is a valuable tool in comparing master gauges and length standards. Universal measuring machines were never a mass-produced item; today they are no longer available on a production basis, and are built to order, tailored to the needs of the metrology lab purchasing them. Manufacturers that perform work that must be measured on such a machine will frequently opt to subcontract the measurement to a laboratory which specializes in such.\nUniversal measuring machines placed under corrected interferometric control and using non-contact gauge heads can measure features to millionths of an inch across the entire machine's envelope, where other types of machine are limited either in number of axes or accuracy of the measurement. The measurement error contributed by the machine itself is negligible, as the environment in which the machine operates is the limiting factor on effective accuracy. 
The earlier mechanical machines were built to hold 10 to 20 millionths of an inch accuracy across the entire machine envelope, and due to incredible machine design and forethought, remain as accurate today without computer compensation.", "Automation-Control": 0.6963683367, "Qwen2": "Yes"} {"id": "12610283", "revid": "45148022", "url": "https://en.wikipedia.org/wiki?curid=12610283", "title": "Rotarex", "text": "The ROTAREX Group is a privately owned Luxembourgish group of companies who develop and manufacture high pressure valves, tube fittings and pressure regulators for almost all types of gas, in almost all application fields. Founded in 1922, under the name CEODEUX in Lintgen, Luxembourg, ROTAREX currently employs approximately 1600 people and is present on all continents with a broad range of products.\nStructure.\nThe ROTAREX Group is composed of 5 Divisions:\nCEODEUX Division\nCEODEUX Division is subdivided into 2 main parts:\nLPG Division (SRG)\nFiretec Division\nSolutions Division\nAutomotive Division\nSubcontracting Division\nApplication fields.\nROTAREX products are used in just about any application field where gas is used. Just a short list of examples: industrial gases, breathing and medical gases, food & beverage, fire protection, cryogenics & refrigeration, laboratories, semiconductor industry, aerospace, petrochemical, automotive, welding, transportation & storage, diving, leisure industry, paintball, barbecue and many more.\nPurity.\nROTAREX products can meet any standard of gas purity on the market, ranging from industrial gases and breathing quality to Ultra High Purity (UHP) gases (for the semiconductor industry, for example).", "Automation-Control": 0.8874559402, "Qwen2": "Yes"} {"id": "12617694", "revid": "88026", "url": "https://en.wikipedia.org/wiki?curid=12617694", "title": "Value function", "text": "The value function of an optimization problem gives the value attained by the objective function at a solution, while only depending on the 
parameters of the problem. In a controlled dynamical system, the value function represents the optimal payoff of the system over the interval [t, t1] when started at the time-t state variable x(t)=x. If the objective function represents some cost that is to be minimized, the value function can be interpreted as the cost to finish the optimal program, and is thus referred to as \"cost-to-go function.\" In an economic context, where the objective function usually represents utility, the value function is conceptually equivalent to the indirect utility function.\nIn a problem of optimal control, the value function is defined as the supremum of the objective function taken over the set of admissible controls. Given formula_1, a typical optimal control problem is to\nsubject to\nwith initial state variable formula_4. The objective function formula_5 is to be maximized over all admissible controls formula_6, where formula_7 is a Lebesgue measurable function from formula_8 to some prescribed arbitrary set in formula_9. The value function is then defined as \n V(t_{0}, x_{0}) = \max_{u} \int_{t_{0}}^{t_{1}} I(\tau,x(\tau), u(\tau)) \, \mathrm{d}\tau + \phi(x(t_{1}))\nwith formula_10, where formula_11 is the \"scrap value\". If the optimal pair of control and state trajectories is formula_12, then formula_13. The function formula_14 that gives the optimal control formula_15 based on the current state formula_16 is called a feedback control policy, or simply a policy function.\nBellman's principle of optimality roughly states that any optimal policy at time formula_17, formula_18 taking the current state formula_19 as \"new\" initial condition must be optimal for the remaining problem. If the value function happens to be continuously differentiable, this gives rise to an important partial differential equation known as the Hamilton–Jacobi–Bellman equation,\nwhere the maximand on the right-hand side can also be re-written as the Hamiltonian, formula_21, as\nwith formula_23 playing the role of the costate variables. 
Given this definition, we further have formula_24, and after differentiating both sides of the HJB equation with respect to formula_16,\nwhich after replacing the appropriate terms recovers the costate equation\nwhere formula_28 is Newton notation for the derivative with respect to time.\nThe value function is the unique viscosity solution to the Hamilton–Jacobi–Bellman equation. In an online closed-loop approximate optimal control, the value function is also a Lyapunov function that establishes global asymptotic stability of the closed-loop system.", "Automation-Control": 0.9882210493, "Qwen2": "Yes"} {"id": "40475850", "revid": "7903804", "url": "https://en.wikipedia.org/wiki?curid=40475850", "title": "Robotics Toolbox for MATLAB", "text": "The Robotics Toolbox is MATLAB toolbox software that supports research and teaching into arm-type and mobile robotics. While the Robotics Toolbox is free software, it requires the proprietary MATLAB environment in order to execute. The Toolbox forms the basis of the exercises in several textbooks.\nPurpose.\nThe Toolbox provides functions for manipulating and converting between datatypes such as vectors, homogeneous transformations, roll-pitch-yaw and Euler angles, axis-angle representation, unit-quaternions, and twists, which are necessary to represent 3-dimensional position and orientation. It also plots coordinate frames, supports Plücker coordinates to represent lines, and provides support for Lie group operations such as logarithm, exponentiation, and conversions to and from skew-symmetric matrix form.\nAs the basis of the exercises in several textbooks, the Toolbox is useful for the study and simulation of:\nThe Toolbox requires MATLAB, commercial software from MathWorks, in order to operate.\nRelationship to other toolboxes.\nThe Robotics System Toolbox for MATLAB is proprietary software published by MathWorks which includes support for robot manipulators and mobile robotics. 
Its functionality significantly overlaps that of the Robotics Toolbox for MATLAB but the programming model is quite different.\nThe Robotics Toolbox for Python is a reimplementation of the Robotics Toolbox for MATLAB for Python 3. Its functionality is a superset of the Robotics Toolbox for MATLAB, the programming model is similar, and it supports additional methods to define a serial link manipulator including URDF and elementary transform sequences.", "Automation-Control": 0.8885654211, "Qwen2": "Yes"} {"id": "40495190", "revid": "2304267", "url": "https://en.wikipedia.org/wiki?curid=40495190", "title": "Turret punch", "text": "A turret punch or turret press is a type of punch press used for metal forming by punching.\nPunching, and press work in general, is a process well suited to mass production. However the initial tooling costs, of both the machine and the job-specific press tool, are high. This limits punch work from being used for much small-volume and prototype work. A turret punch is one way of addressing this cost. The tooling of a turret punch uses a large number of standard punch tools: holes of varying sizes, straight edges, commonly-used notches or mounting holes. By using a large number of strokes, with several different tools in turn, a turret press may make a wide variety of parts without having to first make a specialised press tool for that task. This saves both time and money, allowing rapid prototyping or for low volume production to start without tooling delays.\nA typical CNC turret punch has a choice of up to 60 tools in a \"turret\" that can be rotated to bring any tool to the punching position. A simple shape (e.g., a square, circle, or hexagon) is cut directly from the sheet. A complex shape can be cut out by making many square or rounded cuts around the perimeter. As a press tool requires a matching punch and die set, there are two corresponding turrets, above and below the bed, for punch and die. 
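The datatype conversions described above can be sketched outside MATLAB as well. The following Python/NumPy fragment is an illustrative sketch, not the API of either toolbox; the ZYX (yaw-pitch-roll) convention is an assumption.

```python
import math
import numpy as np

def rpy2r(roll, pitch, yaw):
    """Rotation matrix from roll-pitch-yaw angles (ZYX convention assumed)."""
    cr, sr = math.cos(roll), math.sin(roll)
    cp, sp = math.cos(pitch), math.sin(pitch)
    cy, sy = math.cos(yaw), math.sin(yaw)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])   # yaw about z
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])   # pitch about y
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])   # roll about x
    return Rz @ Ry @ Rx

# A 90-degree yaw maps the x-axis onto the y-axis.
R = rpy2r(0.0, 0.0, math.pi / 2)
```

The resulting matrix is orthonormal, which is exactly the invariant such toolbox conversion routines maintain.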
These two turrets must rotate in precise synchronisation and with their alignment carefully maintained. Several punches of identical shape may be used in the turret, each one turned to a different angle, as there is usually no feature to rotate the sheet workpiece relative to the tool.\nA punch is less flexible than a laser for cutting compound shapes, but faster for repetitive shapes (for example, the grille of an air-conditioning unit). Some units combine both laser and punch features in one machine.\nMost turret punches are CNC-controlled, with automatic positioning of the metal sheet beneath the tool and programmed selection of particular tools. A CAM process first converts the CAD design for the finished item into the number of individual punch operations needed, depending on the tools available in the turret.\nThe precise load-out of tools may change according to a particular job's needs. The CAD stage is also optimised for turret punching: an operation such as rounding a corner may be much quicker with a single chamfered cut than a fully rounded corner requiring several strokes. Changing an unimportant dimension such as the width of a ventilation slot may match an available tool, requiring a single cut, rather than cutting each side separately. CAD support may also manage the selection of tools to be loaded into the turret before starting work.\nAs each tool in a turret press is relatively small, the press requires little power compared to a press manufacturing similar parts with a single press stroke. This allows the tool to be lighter and sometimes cheaper, although this is offset by the increased complexity of the turret and sheet positioning. Turret punches can operate faster per stroke than a heavier tool press, although of course many strokes are required. A turret punch can achieve 600 strokes per minute.\nThe most sophisticated recent machines may also add facilities for forming and bending, as well as punch cutting. 
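The stroke-count trade-offs above can be illustrated with a simple geometric estimate. The following Python sketch is a geometry-only approximation with hypothetical numbers; real CAM systems apply tool- and material-specific rules. It estimates how many strokes are needed to nibble a contour with a round punch, given an allowed scallop (cusp) height between overlapping hits.

```python
import math

def nibble_strokes(perimeter_mm, punch_dia_mm, scallop_mm=0.05):
    """Estimate punch strokes to nibble a contour with a round punch.
    The pitch between hits is the chord that leaves a cusp of height
    'scallop_mm' between successive circular cuts."""
    r = punch_dia_mm / 2.0
    h = min(scallop_mm, r)                          # cusp cannot exceed the radius
    pitch = 2.0 * math.sqrt(2.0 * r * h - h * h)    # chord giving cusp height h
    return math.ceil(perimeter_mm / pitch)
```

A tighter scallop tolerance shortens the pitch and so raises the stroke count, which is why a single chamfered cut can beat a fully rounded corner.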
Although unlikely to replace a press brake for box making, the ability to form even small lugs may turn a two-machine process into a one-machine process, reducing materials handling time.\nManual punches.\nManual turret punches have also been used. These are C-frame presses, usually with a rack-actuated ram. There is no CNC, for either sheet positioning or tool changing. Using such a manual press requires great familiarity, as the correct tool must be selected from the turret each time for every one of the many press operations performed. Such manual presses are rarely found, but they have their place in labour-intensive tasks such as hand-worked sheetmetal shops, making such products as custom car bodywork. They are often used in conjunction with other highly skilled artisan processes such as an English wheel.", "Automation-Control": 0.9706640244, "Qwen2": "Yes"} {"id": "20154492", "revid": "7226930", "url": "https://en.wikipedia.org/wiki?curid=20154492", "title": "Sequential minimal optimization", "text": "Sequential minimal optimization (SMO) is an algorithm for solving the quadratic programming (QP) problem that arises during the training of support-vector machines (SVM). It was invented by John Platt in 1998 at Microsoft Research. SMO is widely used for training support vector machines and is implemented by the popular LIBSVM tool. The publication of the SMO algorithm in 1998 has generated a lot of excitement in the SVM community, as previously available methods for SVM training were much more complex and required expensive third-party QP solvers.\nOptimization problem.\nConsider a binary classification problem with a dataset (\"x\"1, \"y\"1), ..., (\"x\"\"n\", \"y\"\"n\"), where \"x\"\"i\" is an input vector and \"y\"\"i\" is a binary label corresponding to it. 
A soft-margin support vector machine is trained by solving a quadratic programming problem, which is expressed in the dual form as follows:\nwhere \"C\" is an SVM hyperparameter and \"K\"(\"x\"\"i\", \"x\"\"j\") is the kernel function, both supplied by the user; and the variables formula_4 are Lagrange multipliers.\nAlgorithm.\nSMO is an iterative algorithm for solving the optimization problem described above. SMO breaks this problem into a series of smallest possible sub-problems, which are then solved analytically. Because of the linear equality constraint involving the Lagrange multipliers formula_4, the smallest possible problem involves two such multipliers. Then, for any two multipliers formula_6 and formula_7, the constraints are reduced to:\nand this reduced problem can be solved analytically: one needs to find a minimum of a one-dimensional quadratic function. formula_10 is the negative of the sum over the rest of the terms in the equality constraint, which is fixed in each iteration.\nThe algorithm proceeds as follows:\nWhen all the Lagrange multipliers satisfy the KKT conditions (within a user-defined tolerance), the problem has been solved. Although this algorithm is guaranteed to converge, heuristics are used to choose the pair of multipliers so as to accelerate the rate of convergence. This is critical for large data sets since there are formula_14 possible choices for formula_4 and formula_16.\nRelated work.\nThe first approach to splitting large SVM learning problems into a series of smaller optimization tasks was proposed by Bernhard Boser, Isabelle Guyon, and Vladimir Vapnik. It is known as the \"chunking algorithm\". The algorithm starts with a random subset of the data, solves this problem, and iteratively adds examples which violate the optimality conditions. One disadvantage of this algorithm is that it is necessary to solve QP-problems scaling with the number of SVs. 
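The two-multiplier update described above can be sketched in code. The following Python implementation follows the "simplified SMO" variant often used for teaching: the second multiplier is chosen at random rather than by Platt's heuristics, and the linear kernel, dataset, C, and tolerance values are assumptions for illustration.

```python
import random
import numpy as np

def simplified_smo(X, y, C=1.0, tol=1e-4, max_passes=20, seed=0):
    """Simplified SMO: repeatedly pick a KKT-violating multiplier and a random
    partner, then solve the two-variable sub-problem analytically."""
    rng = random.Random(seed)
    n = len(y)
    K = X @ X.T                                  # linear kernel matrix
    alpha = np.zeros(n)
    b = 0.0

    def f(i):                                    # decision value at point i
        return np.sum(alpha * y * K[:, i]) + b

    passes = 0
    while passes < max_passes:
        changed = 0
        for i in range(n):
            Ei = f(i) - y[i]
            # proceed only if alpha_i violates its KKT condition within tol
            if (y[i] * Ei < -tol and alpha[i] < C) or (y[i] * Ei > tol and alpha[i] > 0):
                j = rng.randrange(n - 1)
                if j >= i:
                    j += 1                       # random partner j != i
                Ej = f(j) - y[j]
                ai_old, aj_old = alpha[i], alpha[j]
                # feasible segment [L, H] from the equality and box constraints
                if y[i] != y[j]:
                    L, H = max(0.0, aj_old - ai_old), min(C, C + aj_old - ai_old)
                else:
                    L, H = max(0.0, ai_old + aj_old - C), min(C, ai_old + aj_old)
                if L == H:
                    continue
                eta = 2.0 * K[i, j] - K[i, i] - K[j, j]   # curvature (should be < 0)
                if eta >= 0:
                    continue
                # analytic optimum of the one-dimensional quadratic, clipped to [L, H]
                alpha[j] = np.clip(aj_old - y[j] * (Ei - Ej) / eta, L, H)
                if abs(alpha[j] - aj_old) < 1e-7:
                    continue
                alpha[i] = ai_old + y[i] * y[j] * (aj_old - alpha[j])
                # update the threshold b
                b1 = b - Ei - y[i] * (alpha[i] - ai_old) * K[i, i] \
                     - y[j] * (alpha[j] - aj_old) * K[i, j]
                b2 = b - Ej - y[i] * (alpha[i] - ai_old) * K[i, j] \
                     - y[j] * (alpha[j] - aj_old) * K[j, j]
                if 0 < alpha[i] < C:
                    b = b1
                elif 0 < alpha[j] < C:
                    b = b2
                else:
                    b = (b1 + b2) / 2.0
                changed += 1
        passes = passes + 1 if changed == 0 else 0
    w = (alpha * y) @ X                          # primal weights (linear kernel)
    return w, b

# Tiny linearly separable toy problem (assumed for illustration)
X = np.array([[2.0, 2.0], [1.5, 1.0], [-1.0, -1.5], [-2.0, -2.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])
w, b = simplified_smo(X, y)
```

The random partner choice is what full SMO replaces with the error-based heuristics mentioned above.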
On real-world sparse data sets, SMO can be more than 1000 times faster than the chunking algorithm.\nIn 1997, E. Osuna, R. Freund, and F. Girosi proved a theorem which suggests a whole new set of QP algorithms for SVMs. By virtue of this theorem, a large QP problem can be broken down into a series of smaller QP sub-problems. A sequence of QP sub-problems that always add at least one violator of the Karush–Kuhn–Tucker (KKT) conditions is guaranteed to converge. The chunking algorithm obeys the conditions of the theorem, and hence will converge. The SMO algorithm can be considered a special case of the Osuna algorithm, where the size of the optimization is two and both Lagrange multipliers are replaced at every step with new multipliers that are chosen via good heuristics.\nThe SMO algorithm is closely related to a family of optimization algorithms called Bregman methods or row-action methods. These methods solve convex programming problems with linear constraints. They are iterative methods where each step projects the current primal point onto each constraint.", "Automation-Control": 0.8879908919, "Qwen2": "Yes"} {"id": "20163518", "revid": "1053126", "url": "https://en.wikipedia.org/wiki?curid=20163518", "title": "Process flowsheeting", "text": "Process flowsheeting is the use of computer aids to perform steady-state heat and mass balancing, sizing and costing calculations for a chemical process. It is an essential and core component of process design.\nThe process design effort may be split into three basic steps:\nSynthesis.\nSynthesis is the step where the structure of the flowsheet is chosen. It is also in this step that one initializes values for variables which one is free to set.\nAnalysis.\nAnalysis is usually made up of three steps:\nOptimization.\nOptimization involves both structural optimization of the flow sheet itself as well as optimization of parameters in a given flowsheet. 
In the former one may alter the equipment used and/or its connections with other equipment. In the latter one can change the values of parameters such as temperature and pressure. Parameter Optimization is a more advanced stage of theory than process flowsheet optimization.\nPlant design project.\nThe first step in the sequence leading to the construction of a process plant and its use in the manufacture of a product is the conception of a process. The concept is embodied in the form of a \"flow sheet\". Process design then proceeds on the basis of the flow sheet chosen. Physical property data are the other component needed for process design apart from a flow sheet. The result of process design is a process flow diagram, PFD. Detailed engineering for the project and vessel specifications then begin. Process flowsheeting ends at the point of generation of a suitable PFD.\nGeneral purpose flowsheeting programs became usable and reliable around 1965-1970.", "Automation-Control": 0.6695102453, "Qwen2": "Yes"} {"id": "30919632", "revid": "1461430", "url": "https://en.wikipedia.org/wiki?curid=30919632", "title": "Product intelligence", "text": "Product intelligence is defined as an automated system for gathering and analyzing intelligence about the performance of a product being designed and manufactured, such that this data is automatically fed back to the product managers and engineers designing the product, to assist them in the development of the next iteration or version of that product. The goal of product intelligence is to accelerate the rate of product innovation, thereby making the product and its owners more competitive and increasing customer satisfaction. 
Product intelligence is often applied to electronic products, but it is not necessarily limited to them.\nKey points of this definition:\nProduct intelligence can also include two additional functions:", "Automation-Control": 0.9181229472, "Qwen2": "Yes"} {"id": "30921568", "revid": "2902776", "url": "https://en.wikipedia.org/wiki?curid=30921568", "title": "ORiN", "text": "ORiN (Open Robot/Resource interface for the Network) is a standard network interface for FA (factory automation) systems. The Japan Robot Association proposed ORiN in 2002, and the ORiN Forum develops and maintains the ORiN standard.\nBackground.\nThe installation of PC (Personal Computer) applications in the factory has increased dramatically in recent years. Various types of application software systems, such as production management systems, process management systems, operation monitoring systems and failure analysis systems, have become vital to factory operation. These software systems are becoming indispensable for the manufacturing system.\nHowever, most of these software systems are only compatible with specific models or specific manufacturers of the FA system. This is because each software system is “custom made” for a specific proprietary network or protocol. Once such an application is installed in a factory, if there are no resident software engineers for the system, improvement of the system stops, its cost-effectiveness worsens, and the total value of the system deteriorates.\nAnother recent problem in production is the rapid increase in product demand at the initial stage of a product's release. Manufacturers lose potential profit if they cannot meet this demand. 
To cope with this problem, the manufacturing industry is trying to achieve rapid ramp-up of production, and high re-usability of both hardware and software is key to this goal.\nTo solve these problems, ORiN was developed as a standard PC application platform.\nOutline.\nORiN was originally developed as a standard platform for robot applications. It has since become a manufacturing application platform for handling a wider range of resources, including robots and other FA devices such as programmable logic controllers (PLC) and numerical control (NC) systems, as well as more generic resources such as databases and local file systems. The ORiN specifications cover software only and are independent of hardware. Therefore, ORiN can be smoothly integrated with other existing technologies simply by developing software. By using ORiN, development of manufacturer-independent and model-independent applications becomes easy.\nThrough ORiN, the development of various application software and the construction of multi-vendor systems by third-party companies are expected. Economically, increased manufacturing competitiveness, expansion of the FA market, advancement of the FA software industry, and the creation of an FA engineering industry are also expected.\nFeatures.\nORiN is independent of hardware, and all ORiN specifications are for software. ORiN (Version 2) is composed of the following three key technology specifications.\nWith these three key standard technologies, ORiN provides the following features.", "Automation-Control": 0.7211655378, "Qwen2": "Yes"} {"id": "5987577", "revid": "88026", "url": "https://en.wikipedia.org/wiki?curid=5987577", "title": "Hamiltonian (control theory)", "text": "The Hamiltonian is a function used to solve a problem of optimal control for a dynamical system. It can be understood as an instantaneous increment of the Lagrangian expression of the problem that is to be optimized over a certain time period. 
Inspired by—but distinct from—the Hamiltonian of classical mechanics, the Hamiltonian of optimal control theory was developed by Lev Pontryagin as part of his maximum principle. Pontryagin proved that a necessary condition for solving the optimal control problem is that the control should be chosen so as to optimize the Hamiltonian.\nProblem statement and definition of the Hamiltonian.\nConsider a dynamical system of formula_1 first-order differential equations\nwhere formula_3 denotes a vector of state variables, and formula_4 a vector of control variables. Once initial conditions formula_5 and controls formula_6 are specified, a solution to the differential equations, called a \"trajectory\" formula_7, can be found. The problem of optimal control is to choose formula_6 (from some set formula_9) so that formula_10 maximizes or minimizes a certain objective function between an initial time formula_11 and a terminal time formula_12 (where formula_13 may be infinity). Specifically, the goal is to optimize a performance index formula_14 at each point in time,\nsubject to the above equations of motion of the state variables. The solution method involves defining an ancillary function known as the control Hamiltonian\nH(\\mathbf{x}(t),\\mathbf{u}(t),\\boldsymbol{\\lambda}(t),t) \\equiv I(\\mathbf{x}(t),\\mathbf{u}(t),t) + \\boldsymbol{\\lambda}^{\\mathsf{T}}(t) \\mathbf{f}(\\mathbf{x}(t),\\mathbf{u}(t),t)\nwhich combines the objective function and the state equations much like a Lagrangian in a static optimization problem, only that the multipliers formula_16—referred to as \"costate variables\"—are functions of time rather than constants.\nThe goal is to find an optimal control policy function formula_17 and, with it, an optimal trajectory of the state variable formula_18, which by Pontryagin's maximum principle are the arguments that maximize the Hamiltonian,\nThe first-order necessary conditions for a maximum are given by\nthe latter of which are referred to as the costate equations. 
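The first-order conditions above lend themselves to a numerical "shooting" solution: guess the initial costate, integrate the state and costate equations forward, and adjust the guess until the terminal condition is met. A minimal Python sketch follows (the problem, steering x' = u from x(0)=0 to x(1)=1 at minimum control energy, is an assumed textbook example, not from the article).

```python
# Assumed example: minimize the integral of u(t)^2/2 over [0, 1]
# subject to x' = u, x(0) = 0, x(1) = 1.
# Hamiltonian (maximum-principle form): H = -u^2/2 + lam*u.
# dH/du = 0 gives u = lam; costate equation: lam' = -dH/dx = 0 (lam constant).

def terminal_state(lam0, steps=1000):
    """Integrate x' = u = lam0 (constant costate) with forward Euler."""
    x, dt = 0.0, 1.0 / steps
    for _ in range(steps):
        u = lam0          # optimal control from dH/du = 0
        x += u * dt
    return x

def shoot(target=1.0, lo=-5.0, hi=5.0, iters=60):
    """Bisect on the initial costate until x(1) hits the target."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if terminal_state(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

lam0 = shoot()   # analytic solution: lam = 1, so u(t) = 1 and x(t) = t
```

Here the costate is constant, so bisection on a single scalar suffices; for general state equations the same scheme integrates the coupled state–costate system.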
Together, the state and costate equations describe the Hamiltonian dynamical system (again analogous to but distinct from the Hamiltonian system in physics), the solution of which involves a two-point boundary value problem, given that there are formula_26 boundary conditions involving two different points in time, the initial time (the formula_1 differential equations for the state variables), and the terminal time (the formula_1 differential equations for the costate variables; unless a final function is specified, the boundary conditions are formula_29, or formula_30 for infinite time horizons).\nA sufficient condition for a maximum is the concavity of the Hamiltonian evaluated at the solution, i.e.\nwhere formula_17 is the optimal control, and formula_18 is resulting optimal trajectory for the state variable. Alternatively, by a result due to Olvi L. Mangasarian, the necessary conditions are sufficient if the functions formula_14 and formula_35 are both concave in formula_10 and formula_6.\nDerivation from the Lagrangian.\nA constrained optimization problem as the one stated above usually suggests a Lagrangian expression, specifically \nwhere formula_16 compares to the Lagrange multiplier in a static optimization problem but is now, as noted above, a function of time. In order to eliminate formula_40, the last term on the right-hand side can be rewritten using integration by parts, such that\nwhich can be substituted back into the Lagrangian expression to give\nTo derive the first-order conditions for an optimum, assume that the solution has been found and the Lagrangian is maximized. Then any perturbation to formula_10 or formula_6 must cause the value of the Lagrangian to decline. Specifically, the total derivative of formula_45 obeys\nFor this expression to equal zero necessitates the following optimality conditions:\nIf both the initial value formula_48 and terminal value formula_49 are fixed, i.e. 
formula_50, no conditions on formula_51 and formula_52 are needed. If the terminal value is free, as is often the case, the additional condition formula_29 is necessary for optimality. The latter is called a transversality condition for a fixed horizon problem.\nIt can be seen that the necessary conditions are identical to the ones stated above for the Hamiltonian. Thus the Hamiltonian can be understood as a device to generate the first-order necessary conditions.\nThe Hamiltonian in discrete time.\nWhen the problem is formulated in discrete time, the Hamiltonian is defined as:\nand the costate equations are\n(Note that the discrete-time Hamiltonian at time formula_56 involves the costate variable at time formula_57. This small detail is essential so that when we differentiate with respect to formula_58 we get a term involving formula_59 on the right-hand side of the costate equations. Using a wrong convention here can lead to incorrect results, i.e. a costate equation which is not a backwards difference equation.)\nBehavior of the Hamiltonian over time.\nFrom Pontryagin's maximum principle, special conditions for the Hamiltonian can be derived. When the final time formula_60 is fixed and the Hamiltonian does not depend explicitly on time formula_61, then:\nor if the terminal time is free, then:\nFurther, if the terminal time tends to infinity, a transversality condition on the Hamiltonian applies.\nThe Hamiltonian of control compared to the Hamiltonian of mechanics.\nWilliam Rowan Hamilton defined the Hamiltonian for describing the mechanics of a system. 
It is a function of three variables:\nwhere formula_45 is the Lagrangian, the extremizing of which determines the dynamics (\"not\" the Lagrangian defined above), formula_67 is the state variable and formula_68 is its time derivative.\nformula_69 is the so-called \"conjugate momentum\", defined by\nHamilton then formulated his equations to describe the dynamics of the system as\nThe Hamiltonian of control theory describes not the \"dynamics\" of a system but conditions for extremizing some scalar function thereof (the Lagrangian) with respect to a control variable formula_73. As normally defined, it is a function of 4 variables\nwhere formula_67 is the state variable and formula_73 is the control variable with respect to that which we are extremizing.\nThe associated conditions for a maximum are\nThis definition agrees with that given by the article by Sussmann and Willems. (see p. 39, equation 14). Sussmann and Willems show how the control Hamiltonian can be used in dynamics e.g. for the brachistochrone problem, but do not mention the prior work of Carathéodory on this approach.\nCurrent value and present value Hamiltonian.\nIn economics, the objective function in dynamic optimization problems often depends directly on time only through exponential discounting, such that it takes the form\nwhere formula_81 is referred to as the instantaneous utility function, or felicity function. This allows a redefinition of the Hamiltonian as formula_82 where\nwhich is referred to as the current value Hamiltonian, in contrast to the present value Hamiltonian formula_84 defined in the first section. Most notably the costate variables are redefined as formula_85, which leads to modified first-order conditions. \nwhich follows immediately from the product rule. 
Economically, formula_88 represent current-valued shadow prices for the capital goods formula_10.\nExample: Ramsey–Cass–Koopmans model.\nIn economics, the Ramsey–Cass–Koopmans model is used to determine optimal savings behavior for an economy. The objective function formula_90 is the social welfare function,\nto be maximized by choice of an optimal consumption path formula_92. The function formula_93 indicates the utility the representative agent derives from consuming formula_94 at any given point in time. The factor formula_95 represents discounting. The maximization problem is subject to the following differential equation for capital intensity, describing the time evolution of capital per effective worker:\nwhere formula_92 is period t consumption, formula_98 is period t capital per worker (with formula_99), formula_100 is period t production, formula_1 is the population growth rate, formula_102 is the capital depreciation rate, the agent discounts future utility at rate formula_103, with formula_104 and formula_105.\nHere, formula_98 is the state variable which evolves according to the above equation, and formula_92 is the control variable. The Hamiltonian becomes\nThe optimality conditions are\nin addition to the transversality condition formula_111. If we let formula_112, then log-differentiating the first optimality condition with respect to formula_56 yields\nInserting this equation into the second optimality condition yields\nwhich is known as the Keynes–Ramsey rule; it gives a condition for consumption in every period which, if followed, ensures maximum lifetime utility.", "Automation-Control": 0.770019114, "Qwen2": "Yes"} {"id": "26350457", "revid": "7903804", "url": "https://en.wikipedia.org/wiki?curid=26350457", "title": "Electrode boiler", "text": "An electrode boiler (jet type) is a type of boiler that uses electricity flowing through streams of water to create steam. 
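The Keynes–Ramsey rule pins down a steady state in which consumption growth is zero. A small Python check follows; Cobb–Douglas technology f(k) = k^alpha, zero population growth, and the parameter values are all assumptions for illustration.

```python
# Steady state of the Keynes-Ramsey rule with f(k) = k**alpha and n = 0:
# consumption growth is zero when f'(k*) = rho + delta (modified golden rule).
alpha = 0.3    # capital share (assumed)
rho = 0.04     # discount rate (assumed)
delta = 0.06   # depreciation rate (assumed)

# f'(k) = alpha * k**(alpha - 1) = rho + delta  =>  solve for k*
k_star = (alpha / (rho + delta)) ** (1.0 / (1.0 - alpha))

# Steady-state consumption from the capital accumulation equation with k' = 0:
c_star = k_star ** alpha - delta * k_star
```

The first assertion below simply verifies that the closed-form k* satisfies the marginal-product condition it was derived from.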
The conductive and resistive properties of water are employed to carry electric current.\nTechnical principle.\nThe most common type of electrode boiler pumps water from the lower part of the vessel to an internal header that has nozzles that allow the water to flow to electrodes. Generally the working pressure is maintained at 10 bar. If more pressure (more steam) is needed, the controls speed up the pump to increase flow through additional nozzles. As the needed pressure is reached, the pump controls the flow of water to obtain the desired steam output (in kg per hour) at the desired pressure. On larger systems the pump can be controlled by a variable frequency drive so energy is not wasted. This control system can also control de-aerator pumps and controls.\nThe electrodes are connected to a medium voltage (1–35 kV) AC source. Electrode boilers can work on both single-phase and three-phase supplies. If DC voltage is used, electrolysis of water occurs, decomposing water into its elements H2 at the cathode (negative electrode) and O2 at the anode (positive electrode). The electrode boiler is 99.9% efficient with almost all the energy consumed producing steam. Losses are radiant heat from the vessel only.\nThe conductivity of the water and the voltage applied determine how much steam is generated in each stream of water.\nSafety measures.\nWhen evaporated into steam, deionized or distilled water leaves few or no ions behind in the boiler, so scale formation is reduced.", "Automation-Control": 0.7462926507, "Qwen2": "Yes"} {"id": "11450801", "revid": "42522270", "url": "https://en.wikipedia.org/wiki?curid=11450801", "title": "CONWIP", "text": "CONWIP (CONstant work in process) systems are pull-oriented production control systems. Such systems can be classified as pull and push systems (Spearman et al. 1990). In a push system, the production order is scheduled, and the material is pushed into the production line. 
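The relationship between electrical power and steam output described above follows from a simple energy balance. The Python sketch below uses approximate enthalpy figures assumed for saturated steam near 10 bar and feedwater near 20 °C; exact values should come from steam tables.

```python
def steam_output_kg_per_h(power_kw, efficiency=0.999,
                          h_steam=2778.0,   # kJ/kg, sat. steam near 10 bar (assumed)
                          h_feed=84.0):     # kJ/kg, feedwater near 20 C (assumed)
    """Steam mass flow from an energy balance: electrical energy in,
    enthalpy rise of the water out."""
    energy_kj_per_h = power_kw * efficiency * 3600.0
    return energy_kj_per_h / (h_steam - h_feed)

# A 1 MW electrode boiler under these assumptions:
rate = steam_output_kg_per_h(1000.0)
```

Under these assumed enthalpies, a 1 MW unit produces on the order of 1.3 tonnes of steam per hour.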
In a pull system, the start of each product assembly process is triggered by the completion of another at the end of the production line. This pull variant is known for its ease of implementation.\nCONWIP is a kind of single-stage kanban system and is also a hybrid push-pull system. While kanban systems maintain tighter control of system WIP through the individual cards at each workstation, CONWIP systems are easier to implement and adjust, since only one set of system cards is used to manage system WIP. CONWIP uses cards to control the amount of WIP. For example, no part is allowed to enter the system without a card (authority). After a finished part is completed at the last workstation, a card is transferred to the first workstation and a new part is pushed into the sequential process route. In their paper, Spearman et al. (1990) used a simulation to make a comparison among the CONWIP, kanban and push systems, and found that CONWIP systems can achieve a lower WIP level than kanban systems.\nCard control policy.\nIn a CONWIP system, a card is shared by all kinds of products. However, Duenyas (1994) proposed a dedicated card control policy in CONWIP and stated that this policy can be modeled as a multiple-chain closed queueing network.", "Automation-Control": 0.9006052613, "Qwen2": "Yes"} {"id": "21848374", "revid": "7903804", "url": "https://en.wikipedia.org/wiki?curid=21848374", "title": "I-mate 810-F", "text": "The i-mate 810-F is a quad-band Internet-enabled Windows Mobile smartphone. Its name comes from the US military standards for environment tests, MIL-STD-810. I-mate claims the 810-F can withstand temperature extremes of 60 and -20 degrees Celsius. It is also waterproofed to 1 metre, and shockproof.\nIt has a rubber exterior, with a filter over the earpiece to maintain waterproofing. Metal screws are exposed so that the factory seals can be checked, and there is a small plastic stylus located in a slot at the back of the phone. 
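The card mechanism described above can be illustrated with a toy discrete-time simulation. The station count, processing times, and tick-based model below are assumptions for illustration, not taken from the cited papers.

```python
def simulate_conwip(cards, ticks, service=(1, 2)):
    """Toy discrete-time CONWIP line: a job may enter only when one of the
    'cards' is free; a card is released when a job leaves the last station."""
    queues = [[] for _ in service]    # per-station FIFO of remaining times
    wip = completed = max_wip = 0
    for _ in range(ticks):
        if wip < cards:               # a free card authorises a new job
            queues[0].append(service[0])
            wip += 1
        for i in reversed(range(len(service))):   # process last station first
            if queues[i]:
                queues[i][0] -= 1
                if queues[i][0] == 0:
                    queues[i].pop(0)
                    if i + 1 < len(service):
                        queues[i + 1].append(service[i + 1])
                    else:
                        completed += 1
                        wip -= 1      # card returns to the start of the line
        max_wip = max(max_wip, wip)
    return completed, max_wip

completed, max_wip = simulate_conwip(cards=3, ticks=100)
```

Because a job can enter only when WIP is below the card count, total WIP can never exceed the number of cards, which is exactly the control property CONWIP provides.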
The 810-F features a QWERTY keyboard, a 2.45-inch 320×240 pixel touchscreen and a five-way clickable navigation pad.\nIt runs the Windows Mobile 6.1 Professional operating system, with Internet Explorer 6. It offers a digital compass, A-GPS features and a three-axis accelerometer. There is also a 2 MP fixed-focus camera and 2 GB of storage space. There is no microSD slot, in order to maintain the phone's environmental sealing. Connectivity includes GSM, GPRS, EDGE, UMTS and HSDPA, plus Wi-Fi and Bluetooth 2.0.\nI-mate provides a lifetime guarantee for the 810-F (subject to warranty terms and conditions, registration and annual service plan).", "Automation-Control": 0.7983161807, "Qwen2": "Yes"} {"id": "1455062", "revid": "45407429", "url": "https://en.wikipedia.org/wiki?curid=1455062", "title": "Empirical risk minimization", "text": "Empirical risk minimization (ERM) is a principle in statistical learning theory which defines a family of learning algorithms and is used to give theoretical bounds on their performance. The core idea is that we cannot know exactly how well an algorithm will work in practice (the true \"risk\") because we don't know the true distribution of data that the algorithm will work on, but we can instead measure its performance on a known set of training data (the \"empirical\" risk).\nBackground.\nConsider the following situation, which is a general setting of many supervised learning problems. We have two spaces of objects formula_1 and formula_2 and would like to learn a function formula_3 (often called \"hypothesis\") which outputs an object formula_4, given formula_5. 
To do so, we have at our disposal a \"training set\" of formula_6 examples formula_7 where formula_8 is an input and formula_9 is the corresponding response that we wish to get from formula_10.\nTo put it more formally, we assume that there is a joint probability distribution formula_11 over formula_1 and formula_2, and that the training set consists of formula_6 instances formula_7 drawn i.i.d. from formula_11. Note that the assumption of a joint probability distribution allows us to model uncertainty in predictions (e.g. from noise in data) because formula_17 is not a deterministic function of formula_19 but rather a random variable with conditional distribution formula_18 for a fixed formula_19.\nWe also assume that we are given a non-negative real-valued loss function formula_20 which measures how different the prediction formula_21 of a hypothesis is from the true outcome formula_22. The risk associated with hypothesis formula_23 is then defined as the expectation of the loss function:\nA loss function commonly used in theory is the 0–1 loss function: formula_25.\nThe ultimate goal of a learning algorithm is to find a hypothesis formula_26 among a fixed class of functions formula_27 for which the risk formula_28 is minimal:\nFor classification problems, the Bayes classifier is defined to be the classifier minimizing the risk defined with the 0–1 loss function.\nEmpirical risk minimization.\nIn general, the risk formula_28 cannot be computed because the distribution formula_11 is unknown to the learning algorithm (this situation is referred to as agnostic learning). 
However, we can compute an approximation, called \"empirical risk\", by averaging the loss function on the training set; more formally, computing the expectation with respect to the empirical measure:\nThe empirical risk minimization principle states that the learning algorithm should choose a hypothesis formula_33 which minimizes the empirical risk:\nThus the learning algorithm defined by the ERM principle consists in solving the above optimization problem.\nProperties.\nComputational complexity.\nEmpirical risk minimization for a classification problem with a 0-1 loss function is known to be an NP-hard problem even for a relatively simple class of functions such as linear classifiers. Nevertheless, it can be solved efficiently when the minimal empirical risk is zero, i.e., data is linearly separable.\nIn practice, machine learning algorithms cope with this issue either by employing a convex approximation to the 0–1 loss function (like hinge loss for SVM), which is easier to optimize, or by imposing assumptions on the distribution formula_11 (and thus stop being agnostic learning algorithms to which the above result applies).", "Automation-Control": 0.9817363024, "Qwen2": "Yes"} {"id": "48324135", "revid": "1163919407", "url": "https://en.wikipedia.org/wiki?curid=48324135", "title": "3D metal moulding", "text": "3D metal moulding, also referred to as metal injection moulding or (MIM), is used to manufacture components with complex geometries. The process uses a mixture of metal powders and polymer binders – also known as \"feedstock\" – which are then injection-moulded.\nAfter moulding, the parts are thermally processed in order to remove the binding agent. They are then sintered to a high-density metal component which has mechanical properties comparable to wrought materials.\n3D metal moulding is mainly used to achieve intricate and complex shapes that are very difficult or expensive to produce using conventional manufacturing methods. 
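The empirical-risk computation described above is easy to state in code. In the following Python sketch, the finite class of threshold classifiers and the toy dataset are assumptions chosen for illustration; ERM here is just a minimum over the class.

```python
def empirical_risk(h, data):
    """Average 0-1 loss of hypothesis h on the training set."""
    return sum(1 for x, y in data if h(x) != y) / len(data)

def erm(hypotheses, data):
    """Empirical risk minimization: pick the hypothesis with minimal
    empirical risk over a finite class."""
    return min(hypotheses, key=lambda h: empirical_risk(h, data))

# Toy training set (x, y) and a finite class of threshold classifiers
data = [(1, 0), (2, 0), (3, 0), (4, 1), (5, 1), (6, 1)]
thresholds = [1, 2, 3, 4, 5, 6]
hypotheses = [lambda x, t=t: int(x >= t) for t in thresholds]

best = erm(hypotheses, data)
```

On this separable toy set the minimizer attains zero empirical risk, the efficiently solvable case noted above.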
Applications.
3D metal moulding is used in aerospace, medical and other industries. Its popularity is due to the strength it provides in a custom shape or part. Thermoplastic and thermosetting polymers are more commonly found as 3D moulding materials. Both of these processes are used in the following industries:
3D metal printing.
3D metal printing builds components by delivering the powdered metal and binder in alternating layers through a nozzle controlled by a computer system, working to a CAD drawing. The initial process does not achieve the required strength, so parts must go through a secondary process which involves fusing another type of metal into the shape.
There are multiple methods used in 3D metal printing. Selective laser sintering, or SLS, uses heat from a powerful laser to fuse tiny ceramic, glass or plastic particles together, forming a 3D part. Carl Deckard and Joe Beaman of the University of Texas developed and patented the process in the 1980s.
Direct metal laser sintering, or DMLS, uses a laser to sinter powdered metal into a solid object in gradual layers built upon each other. Cooling channels can be printed to any shape in this process, which lessens time and waste and improves quality.
Selective laser melting, or SLM, completely melts the powder to form a homogeneous part. This process can only be used with a single material at a time, so it is not suitable for combining multiple materials in one part.
Demosthenis Teneketzis (Greek: Δημοσθένης Τενεκετζής) is a Greek-American electrical engineer specializing in systems science and engineering. He is Professor Emeritus in the Department of Electrical Engineering and Computer Science at the University of Michigan, Ann Arbor. His works are in the fields of control, decentralized systems, and networks.
His main research publications are in stochastic control (centralized and decentralized), scheduling and resource allocation in networks with strategic and non-strategic users, and fault diagnosis in discrete event systems. He is a Fellow of the IEEE for contributions to the theory of decentralized information systems and stochastic control.
Research.
Demosthenis Teneketzis' research is on stochastic control, decentralized decision-making with non-strategic decision-makers (teams) or strategic decision-makers (games), resource allocation in networks with centralized or decentralized information and strategic or non-strategic agents, and fault diagnosis in discrete event systems. In 2015 he received the George S. Axelby Award from the IEEE Control Systems Society for his paper "Decentralized Stochastic Control with Partial History Sharing: A Common Information Approach".
Numerical control (also computer numerical control, abbreviated CNC) is the automated control of machining tools (such as drills, lathes, mills, grinders, routers and 3D printers) by means of a computer. A CNC machine processes a piece of material (metal, plastic, wood, ceramic, stone, or composite) to meet specifications by following coded programmed instructions and without a manual operator directly controlling the machining operation.
A CNC machine is a motorized maneuverable tool and often a motorized maneuverable platform, which are both controlled by a computer, according to specific input instructions. Instructions are delivered to a CNC machine in the form of a sequential program of machine control instructions such as G-code and M-code, and then executed.
The program can be written by a person or, far more often, generated by graphical computer-aided design (CAD) or computer-aided manufacturing (CAM) software. In the case of 3D printers, the part to be printed is "sliced" before the instructions (or the program) are generated; 3D printers also use G-code.
CNC offers greatly increased productivity over non-computerized machining for repetitive production, where the machine must otherwise be manually controlled (e.g. using devices such as hand wheels or levers) or mechanically controlled by pre-fabricated pattern guides (see pantograph mill). However, these advantages come at significant cost in terms of both capital expenditure and job setup time. For some prototyping and small batch jobs, a good machine operator can have parts finished to a high standard whilst a CNC workflow is still in setup.
In modern CNC systems, the design of a mechanical part and its manufacturing program are highly automated. The part's mechanical dimensions are defined using CAD software and then translated into manufacturing directives by computer-aided manufacturing (CAM) software. The resulting directives are transformed (by "post processor" software) into the specific commands necessary for a particular machine to produce the component, and are then loaded into the CNC machine.
Since any particular component might require the use of several different tools – drills, saws, etc. – modern machines often combine multiple tools into a single "cell". In other installations, several different machines are used with an external controller and human or robotic operators that move the component from machine to machine.
In either case, the series of steps needed to produce any part is highly automated and produces a part that meets every specification in the original CAD drawing, where each specification includes a tolerance.
Description.
Motion control involves multiple axes, normally at least two (X and Y), plus a tool spindle that moves in the Z (depth) axis. The position of the tool is driven by direct-drive stepper motors or servo motors to provide highly accurate movements, or in older designs, by motors through a series of step-down gears. Open-loop control works as long as the forces are kept small enough and speeds are not too great. On commercial metalworking machines, closed-loop controls are standard and required to provide the accuracy, speed, and repeatability demanded.
Parts description.
As the controller hardware evolved, the mills themselves also evolved. One change has been to enclose the entire mechanism in a large box as a safety measure (with safety glass in the doors to permit the operator to monitor the machine's function), often with additional safety interlocks to ensure the operator is far enough from the working piece for safe operation. Most new CNC systems built today are 100% electronically controlled.
CNC-like systems are used for any process that can be described as a series of movements and operations. These include laser cutting, welding, friction stir welding, ultrasonic welding, flame and plasma cutting, bending, spinning, hole-punching, pinning, gluing, fabric cutting, sewing, tape and fiber placement, routing, picking and placing, and sawing.
History.
The first CNC machines were built in the 1940s and 1950s, based on existing tools that were modified with motors that moved the tool or part to follow points fed into the system on punched tape.
These early servomechanisms were rapidly augmented with analog and digital computers, creating the modern CNC machine tools that have revolutionized machining processes.
Other CNC tools.
Many other tools have CNC variants, including:
Tool/machine crashing.
In CNC, a "crash" occurs when the machine moves in a way that is harmful to the machine, tools, or parts being machined, sometimes resulting in bending or breakage of cutting tools, accessory clamps, vises, and fixtures, or causing damage to the machine itself by bending guide rails, breaking drive screws, or causing structural components to crack or deform under strain. A mild crash may not damage the machine or tools but may damage the part being machined so that it must be scrapped. Many CNC tools have no inherent sense of the absolute position of the table or tools when turned on. They must be manually "homed" or "zeroed" to have any reference to work from; these reference points serve only to locate the part being worked on and impose no hard motion limit on the mechanism. It is often possible to drive the machine outside the physical bounds of its drive mechanism, resulting in a collision with itself or damage to the drive mechanism. Many machines implement control parameters limiting axis motion past a certain limit, in addition to physical limit switches. However, these parameters can often be changed by the operator.
Many CNC tools also do not know anything about their working environment. Machines may have load sensing systems on spindle and axis drives, but some do not. They blindly follow the machining code provided, and it is up to an operator to detect if a crash is either occurring or about to occur, and to manually abort the active process. Machines equipped with load sensors can stop axis or spindle movement in response to an overload condition, but this does not prevent a crash from occurring. It may only limit the damage resulting from the crash.
Some crashes may never overload any axis or spindle drives.
If the drive system is weaker than the machine's structural integrity, then the drive system simply pushes against the obstruction and the drive motors "slip in place". The machine tool may not detect the collision or the slipping, so for example the tool should now be at 210 mm on the X-axis but is, in fact, at 32 mm, where it hit the obstruction and kept slipping. All of the next tool motions will be off by −178 mm on the X-axis, and all future motions are now invalid, which may result in further collisions with clamps, vises, or the machine itself. This is common in open-loop stepper systems but is not possible in closed-loop systems unless mechanical slippage between the motor and drive mechanism has occurred. Instead, in a closed-loop system, the machine will continue to attempt to move against the load until either the drive motor goes into an overload condition or a servo motor fails to reach the desired position.
Collision detection and avoidance are possible through the use of absolute position sensors (optical encoder strips or disks) to verify that motion occurred, or torque sensors or power-draw sensors on the drive system to detect abnormal strain when the machine should just be moving and not cutting, but these are not a common component of most hobby CNC tools. Instead, most hobby CNC tools simply rely on the assumed accuracy of stepper motors that rotate a specific number of degrees in response to magnetic field changes. It is often assumed the stepper is perfectly accurate and never missteps, so tool position monitoring simply involves counting the number of pulses sent to the stepper over time. An alternate means of stepper position monitoring is usually not available, so crash or slip detection is not possible.
Commercial CNC metalworking machines use closed-loop feedback controls for axis movement.
In a closed-loop system, the controller monitors the actual position of each axis with an absolute or incremental encoder. Proper control programming will reduce the possibility of a crash, but it is still up to the operator and programmer to ensure that the machine is operated safely. However, during the 2000s and 2010s, the software for machining simulation has been maturing rapidly, and it is no longer uncommon for the entire machine tool envelope (including all axes, spindles, chucks, turrets, tool holders, tailstocks, fixtures, clamps, and stock) to be modeled accurately with 3D solid models, which allows the simulation software to predict fairly accurately whether a cycle will involve a crash. Although such simulation is not new, its accuracy and market penetration are changing considerably because of computing advancements.
Numerical precision and equipment backlash.
Within the numerical systems of CNC programming, the code generator can assume that the controlled mechanism is always perfectly accurate, or that precision tolerances are identical for all cutting or movement directions. This is not always a true condition of CNC tools. CNC tools with a large amount of mechanical backlash can still be highly precise if the drive or cutting mechanism is only driven to apply cutting force from one direction, and all driving systems are pressed tightly together in that one cutting direction. However, a CNC device with high backlash and a dull cutting tool can lead to cutter chatter and possible workpiece gouging. The backlash also affects the precision of some operations involving axis movement reversals during cutting, such as the milling of a circle, where axis motion is sinusoidal.
However, this can be compensated for if the amount of backlash is precisely known, by linear encoders or manual measurement.
The high-backlash mechanism itself is not necessarily relied on to be repeatedly precise for the cutting process; instead, some other reference object or precision surface may be used to zero the mechanism, by tightly applying pressure against the reference and setting that as the zero reference for all following CNC-encoded motions. This is similar to the manual machine tool method of clamping a micrometer onto a reference beam and adjusting the vernier dial to zero using that object as the reference.
Positioning control system.
In numerical control systems, the position of the tool is defined by a set of instructions called the part program. Positioning control is handled using either an open-loop or a closed-loop system. In an open-loop system, communication takes place in one direction only: from the controller to the motor. In a closed-loop system, feedback is provided to the controller so that it can correct for errors in position, velocity, and acceleration, which can arise due to variations in load or temperature. Open-loop systems are generally cheaper but less accurate. Stepper motors can be used in both types of systems, while servo motors can only be used in closed-loop systems.
Cartesian coordinates.
The G- and M-code positions are all based on a three-dimensional Cartesian coordinate system, the same plane commonly used for graphing in mathematics. This system is required to map out the machine tool paths and any other actions that need to happen at a specific coordinate. Absolute coordinates are the convention more commonly used for machines; positions are measured from the (0,0,0) point on the plane.
This point is set on the stock material to give a starting point or "home position" before starting the actual machining.
Coding.
G-codes.
G-codes are used to command specific movements of the machine, such as machine moves or drilling functions. The majority of G-code programs start with a percent (%) symbol on the first line, followed by an "O" with a numerical name for the program (e.g. "O0001") on the second line, and end with another percent (%) symbol on the last line of the program. The format for a G-code is the letter G followed by two to three digits; for example G01. G-codes differ slightly between mill and lathe applications, for example:
M-codes.
M-codes are miscellaneous machine commands that do not command axis motion. The format for an M-code is the letter M followed by two to three digits; for example:
Example.
O0001
G20 G40 G80 G90 G94 G54 (Inch, cutter comp. cancel, deactivate all canned cycles, absolute positioning, feed per min., origin coordinate system)
M06 T01 (Tool change to tool 1)
G43 H01 (Tool length comp. in a positive direction, length compensation for the tool)
M03 S1200 (Spindle turns CW at 1200 RPM)
G00 X0. Y0. (Rapid traverse to X=0. Y=0.)
G00 Z.5 (Rapid traverse to Z=.5)
G00 X1. Y-.75 (Rapid traverse to X1. Y-.75)
G01 Z-.1 F10 (Plunge into part at Z-.1 at 10 in. per min.)
G03 X.875 Y-.5 I.1875 J-.75 (CCW arc cut to X.875 Y-.5 with radius origin at I.1875 J-.75)
G03 X.5 Y-.75 I0.0 J0.0 (CCW arc cut to X.5 Y-.75 with radius origin at I0.0 J0.0)
G03 X.75 Y-.9375 I0.0 J0.0 (CCW arc cut to X.75 Y-.9375 with radius origin at I0.0 J0.0)
G02 X1. Y-1.25 I.75 J-1.25 (CW arc cut to X1.
Y-1.25 with radius origin at I.75 J-1.25)
G02 X.75 Y-1.5625 I0.0 J0.0 (CW arc cut to X.75 Y-1.5625 with the same radius origin as the previous arc)
G02 X.5 Y-1.25 I0.0 J0.0 (CW arc cut to X.5 Y-1.25 with the same radius origin as the previous arc)
G00 Z.5 (Rapid traverse to Z.5)
M05 (Spindle stops)
G00 X0.0 Y0.0 (Mill returns to origin)
M30 (Program end)
Having the correct speeds and feeds in the program provides for a more efficient and smoother product run. Incorrect speeds and feeds will cause damage to the tool, machine spindle, and even the product. The quickest and simplest way to find these numbers is to use a calculator that can be found online. A formula can also be used to calculate the proper speeds and feeds for a material. These values can be found online or in Machinery's Handbook.
MaSMT is a free, lightweight multi-agent system development framework designed for the Java environment. The MaSMT3 framework provides three types of agents: ordinary agents, manager agents, and root agents. A manager agent handles a set of ordinary agents, and the root agent handles a set of manager agents. MaSMT 3.0 adds several features over the previous versions: a root agent to handle a swarm of agents, environment-handling features to dynamically store an agent's ontology, and a notice board for viewing required messages and events. In addition to these main features, an agent status monitor has been introduced to view messages in transit.
Multi-agent technology is a modern software paradigm capable of handling the complexity of a software system and providing intelligent solutions through the power of agent communication.
A framework is a useful tool for developing a multi-agent system: it saves a great deal of the programmer's time and provides standards for agent development.
About MaSMT.
MaSMT (Multi-Agent System for Machine Translation) is released as open source under the GNU General Public License (GPL). The license allows users to examine and modify the source code and to develop applications based on the platform. The framework is developed entirely in Java and is therefore cross-platform. There are no prerequisites for MaSMT agents.
The first open-source version of MaSMT was released on 2 March 2016 to provide general infrastructure for multi-agent system development. The new version of MaSMT includes modifications to achieve better performance. MaSMT 3.0 adds several features over the previous version, including a root agent to handle a swarm of agents, environment-handling features to store agents' actions, and a mail transport agent to send and receive messages as emails. In addition to these main features, an activity monitor has been introduced to view messages in transit.
Infrastructure.
This section briefly describes the structure of the MaSMT agents. The MaSMT framework provides three types of agents: ordinary agents, manager agents, and root agents. The ordinary agents are the action-holding agents of the framework; they carry out their tasks according to the messages they receive. A manager agent has a number of ordinary agents under its control. The root agent is the top-level agent in the system, which is capable of handling a set of manager agents.
Through the root agent, it is much easier to manage a swarm of agents. Further, manager agents can communicate directly with other manager agents as well as with their root agent. Each ordinary agent in the system is assigned to a particular manager agent, and each manager can be assigned to a particular root agent. A regular (ordinary) agent in a swarm can communicate directly only with the agents in its swarm and with its manager. The framework primarily implements the infrastructure of the agents, the actions required to handle an agent's environment, message-passing facilities for agent communication, and action-monitoring facilities to view an agent's actions, so that multi-agent systems can be implemented easily.
Agent Model.
MaSMT agents are built on an abstract model of an agent. Each agent in the system has a group, a role, and an agent id. According to the MaSMT architecture, any group consists of one-to-many roles, and any role includes some MaSMT agents with independent agent ids. Further, MaSMT agents are capable of changing their role or group at run-time. Therefore, an agent can appear in different swarms as required, but can occupy only one place at a time. Figure 1 shows the abstract agent model of the MaSMT system. The MaSMTAbstractAgent class is used to model the entire requirement of the abstract model.
MaSMTAbstractAgent.
MaSMTAbstractAgent is the abstract model of each agent in the MaSMT framework, and it is used to identify each agent independently. The abstract agent consists of a group, a role, and an agent id. In other words, each agent of the MaSMT system has an id and is assigned to a particular group and role. For instance, for an agent whose group is ‘communicate’, whose role is read_message, and whose id is 1, the complete agent name of the abstract agent is denoted as read_message.1@communicate.
MaSMT Agent.
MaSMTAgent is the ordinary agent of the framework, normally called an action agent.
A MaSMT agent consists of an abstract agent, two message queues (in-queue and out-queue), an access rule, a communication module, an environment controller, a status monitor, and a notice-board reader to handle the agent's functionalities.
MaSMT Agent's Life Cycle.
The MaSMT agent is a Java thread whose life cycle consists of three phases: active, live, and end. The active phase starts when the agent is activated. After completing the active phase, the agent moves to its live phase. The agent remains in the live phase while its live property is true; this can be ended through the method ‘setLive(false)’. All of the agent's actions, such as reading messages and replying to them, are done in the live phase. In addition, a developer can change the delay of the agent's live loop. The end phase of the MaSMT agent begins when the agent is about to die. Figure 3 shows the life cycle of the MaSMT agent.
MaSMT Manager.
The MaSMT Manager is a controlling agent of the system with additional features beyond those of ordinary MaSMT agents. According to the MaSMT architecture, the manager agent consists of an abstract agent, two message queues (in-queue and out-queue), an access rule, a communication module, an environment controller, a status monitor, a notice-board reader, a network access agent, and a message transport agent to achieve the manager's functionalities. The MaSMT manager can fully control its client agent(s). The manager agent in MaSMT creates all of its client agents automatically (as required) at the initialization stage or whenever they are used. The manager can directly access its client agents, send messages to them, or control them as required. Further, the manager agent can control the priority and the state of its agents. This facility removes unnecessary workload from its client agents. The in-queue of the manager is a queue used to store incoming messages. The manager adds messages to the out-queue when the messages need to be sent to other groups.
In addition to the basic modules above, the MaSMT manager agent consists of a net access agent and a message transport agent. The net access agent supports client-server communication for the system; it provides facilities to connect to any manager on the network through client-server networking. The message transport agent provides message transport facilities to the manager. Figure 4 shows the architecture of the MaSMT Manager. Compared with the MaSMT agent, the manager agent has full control of the notice board. Therefore, the message transport agent is capable of handling peer-to-peer, broadcast, and notice-board-based agent communications.
MaSMT Root Agent.
The MaSMT Root is a special type of manager agent (MaSMT Manager) that is capable of handling the managers in the multi-agent system. The MaSMT Root agent can fully control its client managers, in the same way a manager handles its client agents. Further, the root agent can create, remove, or control (e.g. set the priority of) its client manager(s). The in-queue of the root agent is a message queue used to store incoming messages (messages coming from another machine). The root agent also adds messages to the out-queue when messages need to be sent to other machines. Like the MaSMT managers, the root agent consists of a message transport agent and a net access agent to handle messages. In principle, the root agent is the top-level manager of a MaSMT system; therefore, only one root agent is available for a multi-agent system on a machine. However, the root agent is able to communicate with other root agents through the net access agent.
MaSMT Messages.
Multi-agent systems are distributed systems that normally run concurrently. Communication is the hidden factor behind the success of multi-agent systems, and this agent communication is done through messages. A number of standards are available for agent communication, including FIPA-ACL. Considering all of the above, MaSMT uses a message type named MaSMTMessage.
MaSMTMessage is the agent communication message type used for communication between agents. MaSMT messages are designed using the FIPA-ACL message standard, so MaSMT agents can communicate both with each other and with other agents that follow FIPA-ACL.
MaSMT Message Passing.
This section briefly describes the message-passing methods used in the MaSMT system. MaSMT supports peer-to-peer, broadcast, and notice-board methods of message passing.
Applications of MaSMT.
A number of multi-agent systems have been developed using the framework, including natural language processing applications such as an English-to-Sinhala agent-based machine translation system, a Sinhala ontology generator, and a multi-agent-based morphological analyzer. In addition, an intelligent chatbot, a communication platform for the agricultural domain, and a file-sharing application for distributed environments have been developed with MaSMT.
An alpha beta filter (also called alpha-beta filter, f-g filter or g-h filter) is a simplified form of observer for estimation, data smoothing and control applications. It is closely related to Kalman filters and to linear state observers used in control theory. Its principal advantage is that it does not require a detailed system model.
Filter equations.
An alpha beta filter presumes that a system is adequately approximated by a model having two internal states, where the first state is obtained by integrating the value of the second state over time. Measured system output values correspond to observations of the first model state, plus disturbances. This very low order approximation is adequate for many simple systems, for example, mechanical systems where position is obtained as the time integral of velocity.
Based on a mechanical system analogy, the two states can be called "position x" and "velocity v". Assuming that velocity remains approximately constant over the small time interval ΔT between measurements, the position state is projected forward to predict its value at the next sampling time using equation 1.
Since the velocity variable v is presumed constant, its projected value at the next sampling time equals the current value.
If additional information is known about how a driving function will change the v state during each time interval, equation 2 can be modified to include it.
The output measurement is expected to deviate from the prediction because of noise and dynamic effects not included in the simplified dynamic model. This prediction error r is also called the "residual" or "innovation", based on statistical or Kalman filtering interpretations.
Suppose that residual r is positive. This could result because the previous x estimate was low, the previous v was low, or some combination of the two. The alpha beta filter takes selected "alpha" and "beta" constants (from which the filter gets its name), uses alpha times the deviation r to correct the position estimate, and uses beta times the deviation r to correct the velocity estimate. An extra ΔT factor conventionally serves to normalize the magnitudes of the multipliers.
The corrections can be considered small steps along an estimate of the gradient direction. As these adjustments accumulate, error in the state estimates is reduced. For convergence and stability, the values of the alpha and beta multipliers should be positive and small:
Noise is suppressed only if formula_9; otherwise the noise is amplified.
Values of alpha and beta typically are adjusted experimentally.
In general, larger alpha and beta gains tend to produce faster response for tracking transient changes, while smaller alpha and beta gains reduce the level of noise in the state estimates. If a good balance between accurate tracking and noise reduction is found, and the algorithm is effective, filtered estimates are more accurate than the direct measurements. This motivates calling the alpha-beta process a "filter".
Algorithm summary.
Initialize.
Update. Repeat for each time step ΔT:
Sample program.
An alpha beta filter can be implemented in C as follows:
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    float dt = 0.5;
    float xk_1 = 0, vk_1 = 0, a = 0.85, b = 0.005;
    float xk, vk, rk;
    float xm;

    while (1) {
        xm = rand() % 100;        /* input signal */
        xk = xk_1 + (vk_1 * dt);  /* predict position */
        vk = vk_1;                /* predict velocity */
        rk = xm - xk;             /* residual */
        xk += a * rk;             /* correct position */
        vk += (b * rk) / dt;      /* correct velocity */
        xk_1 = xk;
        vk_1 = vk;
        printf("%f \t %f\n", xm, xk_1);
        sleep(1);
    }
    return 0;
}
Result.
The following images depict the outcome of the above program in graphical format. In each image, the blue trace is the input signal; the output is red in the first image, yellow in the second, and green in the third. For the first two images, the output signal is visibly smoother than the input signal and lacks the extreme spikes seen in the input. Also, the output moves in an estimate of the gradient direction of the input.
The higher the alpha parameter, the stronger the effect of the input and the less damping is seen. A low value of beta is effective in controlling sudden surges in velocity. Also, as alpha increases beyond unity, the output becomes rougher and more uneven than the input.
Relationship to general state observers.
More general state observers, such as the Luenberger observer for linear control systems, use a rigorous system model. Linear observers use a gain matrix to determine state estimate corrections from multiple deviations between measured variables and predicted outputs that are linear combinations of state variables.
In the case of alpha beta filters, this gain matrix reduces to two terms. There is no general theory for determining the best observer gain terms, and typically the gains are adjusted experimentally.
The linear Luenberger observer equations reduce to the alpha beta filter by applying the following specializations and simplifications.
Relationship to Kalman filters.
A Kalman filter estimates the values of state variables and corrects them in a manner similar to an alpha beta filter or a state observer. However, a Kalman filter does this in a much more formal and rigorous manner. The principal differences between Kalman filters and alpha beta filters are the following.
A Kalman filter designed to track a moving object using a constant-velocity target dynamics (process) model (i.e., constant velocity between measurement updates), with the process noise covariance and measurement covariance held constant, will converge to the same structure as an alpha-beta filter. However, a Kalman filter's gain is computed recursively at each time step using the assumed process and measurement error statistics, whereas the alpha-beta filter's gain is chosen ad hoc.
Choice of parameters.
The alpha-beta filter becomes a steady-state Kalman filter if the filter parameters are calculated from the sampling interval formula_10, the process variance formula_11 and the noise variance formula_12 as follows:
This choice of filter parameters minimizes the mean square error.
The steady-state innovation variance formula_17 can be expressed as:
Variations.
Alpha filter.
A simpler member of this family of filters is the alpha filter, which observes only one state:
with the optimal parameter calculated as follows:
This calculation is identical for a moving average and a low-pass filter. Exponential smoothing is mathematically identical to the proposed alpha filter.
Alpha beta gamma filter.
When the second state variable varies quickly, i.e.
when the acceleration of the first state is large, it can be useful to extend the states of the alpha beta filter by one level. In this extension, the second state variable "v" is obtained from integrating a third "acceleration" state, analogous to the way that the first state is obtained by integrating the second. An equation for the "a" state is added to the equation system. A third multiplier, "gamma", is selected for applying corrections to the new "a" state estimates. This yields the "alpha beta gamma" update equations.
Similar extensions to additional higher orders are possible, but most systems of higher order tend to have significant interactions among the multiple states, so approximating the system dynamics as a simple integrator chain is less likely to prove useful.
Calculating optimal parameters for the alpha-beta-gamma filter is a bit more involved than for the alpha-beta filter:
Drawer slides roll forming machine.
A drawer slide roll forming machine is a cold roll forming machine used to manufacture drawer slides. Such machines have similarities with those for roofing roll-formed products, but demand higher performance and greater skill in profile forming.
These machines are also known under various names such as slide rail making machine, slide making machine, and telescopic channel roll forming equipment.
Process of slide roll forming.
The basic production flow of a drawer slide machine is roll forming, punching, and cutting to length.
In drawer slide roll forming, a continuous cold-rolled steel strip passes through a series of upper and lower shaped rollers and is then punched, embossed, straightened, and cut to length. Straightening is an important step that prevents the material from twisting or curling.
A roll forming line is often provided with a straightening mechanism to make sure the material is formed into the predetermined shape of the original design.
Every slide rail varies in its detailed design, so a customized production line is required to produce the expected profile. This means that manufacturing different types of slides may require a different machine or a reconfiguration of the setup. Development takes time and money, especially for undermount drawer slides, whose profiles are harder to form than contoured profiles such as roofing; in other words, slide roll forming requires advanced manufacturing technology.
One disadvantage of drawer slide roll forming equipment is that a roller set can only form one profile design, so the rollers must be changed when making a different type or model of drawer slide profile. Complicated designs can incur large costs.
Types of drawer slides.
A drawer slide or drawer runner is the part of a drawer which allows the sliding movement. Examples of uses are in home furniture hardware, office appliances, and industrial equipment, including kitchen cabinets, oven slides, rails for sliding doors, and fridge slides (used for coolers).
There are various types of drawer slide on the market for different usages, price points, and features. A good slide rail is defined by smoothness, tight tolerances, and loading capacity.
Features which may be incorporated in a drawer slide include:
Service automation framework.
The Service Automation Framework (SAF) is a set of best practices for the automated delivery of services. The concept builds further on the self-service practices of ITIL and IT Service Management. In its current form, the SAF is published in a series of volumes, covering different processes of service automation.
The Service Automation Framework is maintained and updated by the Service Automation Framework Alliance, an independent body of knowledge for the advancement of service automation.
SAF describes processes, procedures, tasks, and checklists which are not organization-specific, but can be applied by an organization for establishing integration with the organization's strategy, delivering value, and maintaining a minimum level of competency. It allows the organization to establish a baseline from which it can plan, implement, and measure. It is used to demonstrate compliance and to measure improvement. Since December 2016, APMG-International has provided the examination for the SAF.
Nonlinear control.
Nonlinear control theory is the area of control theory which deals with systems that are nonlinear, time-variant, or both. Control theory is an interdisciplinary branch of engineering and mathematics that is concerned with the behavior of dynamical systems with inputs, and how to modify the output by changes in the input using feedback, feedforward, or signal filtering. The system to be controlled is called the "plant". One way to make the output of a system follow a desired reference signal is to compare the output of the plant to the desired output, and provide feedback to the plant to modify the output to bring it closer to the desired output.
Control theory is divided into two branches. Linear control theory applies to systems made of devices which obey the superposition principle. They are governed by linear differential equations. A major subclass is systems which in addition have parameters which do not change with time, called "linear time invariant" (LTI) systems.
These systems can be solved by powerful frequency domain mathematical techniques of great generality, such as the Laplace transform, Fourier transform, Z transform, Bode plot, root locus, and Nyquist stability criterion.
Nonlinear control theory covers a wider class of systems that do not obey the superposition principle. It applies to more real-world systems, because all real control systems are nonlinear. These systems are often governed by nonlinear differential equations. The mathematical techniques which have been developed to handle them are more rigorous and much less general, often applying only to narrow categories of systems. These include limit cycle theory, Poincaré maps, Lyapunov stability theory, and describing functions. If only solutions near a stable point are of interest, nonlinear systems can often be linearized by approximating them by a linear system obtained by expanding the nonlinear solution in a series, and then linear techniques can be used. Nonlinear systems are often analyzed using numerical methods on computers, for example by simulating their operation using a simulation language. Even if the plant is linear, a nonlinear controller can often have attractive features such as simpler implementation, faster speed, more accuracy, or reduced control energy, which justify the more difficult design procedure.
An example of a nonlinear control system is a thermostat-controlled heating system. A building heating system such as a furnace has a nonlinear response to changes in temperature; it is either "on" or "off", it does not have the fine control in response to temperature differences that a proportional (linear) device would have. Therefore, the furnace is off until the temperature falls below the "turn on" setpoint of the thermostat, when it turns on. Due to the heat added by the furnace, the temperature increases until it reaches the "turn off" setpoint of the thermostat, which turns the furnace off, and the cycle repeats.
This cycling of the temperature about the desired temperature is called a "limit cycle", and is characteristic of nonlinear control systems.
Properties of nonlinear systems.
Some properties of nonlinear dynamic systems are:
Analysis and control of nonlinear systems.
There are several well-developed techniques for analyzing nonlinear feedback systems:
Control design techniques for nonlinear systems also exist. These can be subdivided into techniques which attempt to treat the system as a linear system in a limited range of operation and use (well-known) linear design techniques for each region:
Those that attempt to introduce auxiliary nonlinear feedback in such a way that the system can be treated as linear for purposes of control design:
And Lyapunov based methods:
Nonlinear feedback analysis – The Lur'e problem.
An early nonlinear feedback system analysis problem was formulated by A. I. Lur'e.
Control systems described by the Lur'e problem have a forward path that is linear and time-invariant, and a feedback path that contains a memory-less, possibly time-varying, static nonlinearity.
The linear part can be characterized by four matrices ("A","B","C","D"), while the nonlinear part is Φ("y") with formula_1
Ackermann's formula.
In control theory, Ackermann's formula is a control system design method for solving the pole allocation problem for time-invariant systems, developed by Jürgen Ackermann.
One of the primary problems in control system design is the creation of controllers that will change the dynamics of a system by changing the eigenvalues of the matrix representing the dynamics of the closed-loop system. This is equivalent to changing the poles of the associated transfer function in the case that there is no cancellation of poles and zeros.
State feedback control.
Consider a linear, time-invariant, continuous-time system with a state-space representation
where "x" is the state vector, "u" is the input vector, and "A", "B" and "C" are matrices of compatible dimensions that represent the dynamics of the system. An input-output description of this system is given by the transfer function
Since the denominator of the right equation is given by the characteristic polynomial of "A", the poles of "G" are eigenvalues of "A" (note that the converse is not necessarily true, since there may be cancellations between terms of the numerator and the denominator). If the system is unstable, has a slow response, or has any other characteristic that does not meet the design criteria, it could be advantageous to make changes to it. The matrices "A", "B" and "C", however, may represent physical parameters of a system that cannot be altered. Thus, one approach to this problem might be to create a feedback loop with a gain "K" that will feed the state variable "x" into the input "u".
If the system is controllable, there is always an input formula_4 such that any state formula_5 can be transferred to any other state formula_6. With that in mind, a feedback loop can be added to the system with the control input formula_7, such that the new dynamics of the system will be
In this new realization, the poles will be dependent on the characteristic polynomial formula_10 of formula_11, that is
Ackermann's formula.
Computing the characteristic polynomial and choosing a suitable feedback matrix can be a challenging task, especially in larger systems.
One way to make computations easier is through Ackermann's formula. For simplicity's sake, consider a single input vector with no reference parameter formula_13, such as
where formula_16 is a feedback vector of compatible dimensions. Ackermann's formula states that the design process can be simplified by only computing the following equation:
in which formula_18 is the desired characteristic polynomial evaluated at matrix formula_19, and formula_20 is the controllability matrix of the system.
Proof.
This proof is based on the Encyclopedia of Life Support Systems entry on Pole Placement Control. Assume that the system is controllable. The characteristic polynomial of formula_21 is given by
Calculating the powers of formula_23 results in
Substituting the previous equations into formula_25 yields
formula_26
Rewriting the above equation as a matrix product and omitting terms in which formula_16 does not appear in isolation yields
From the Cayley–Hamilton theorem, formula_29, thus
formula_30
Note that formula_31 is the controllability matrix of the system. Since the system is controllable, formula_20 is invertible. Thus,
To find formula_34, both sides can be multiplied by the vector formula_35, giving
Thus,
Example.
Consider
formula_38
From the characteristic polynomial of formula_19, the system is unstable: since formula_40, the matrix formula_19 will only have positive eigenvalues. Thus, to stabilize the system we shall apply a feedback gain formula_42
From Ackermann's formula, we can find a matrix formula_43 that will change the system so that its characteristic equation will be equal to a desired polynomial. Suppose we want formula_44.
Thus, formula_45 and computing the controllability matrix yields
Also, we have that formula_48
Finally, from Ackermann's formula
State observer design.
Ackermann's formula can also be used for the design of state observers. Consider the linear discrete-time observed system
with observer gain "L".
Then Ackermann's formula for the design of state observers is noted as
with observability matrix formula_55. Here it is important to note that the observability matrix and the system matrix are transposed: formula_56 and formula_57.
Ackermann's formula can also be applied to continuous-time observed systems.
Statistical relational learning.
Statistical relational learning (SRL) is a subdiscipline of artificial intelligence and machine learning that is concerned with domain models that exhibit both uncertainty (which can be dealt with using statistical methods) and complex, relational structure.
Note that SRL is sometimes called Relational Machine Learning (RML) in the literature. Typically, the knowledge representation formalisms developed in SRL use (a subset of) first-order logic to describe relational properties of a domain in a general manner (universal quantification) and draw upon probabilistic graphical models (such as Bayesian networks or Markov networks) to model the uncertainty; some also build upon the methods of inductive logic programming.
Significant contributions to the field have been made since the late 1990s.
As is evident from the characterization above, the field is not strictly limited to learning aspects; it is equally concerned with reasoning (specifically probabilistic inference) and knowledge representation. Therefore, alternative terms that reflect the main foci of the field include "statistical relational learning and reasoning" (emphasizing the importance of reasoning) and "first-order probabilistic languages" (emphasizing the key properties of the languages with which models are represented).
Canonical tasks.
A number of canonical tasks are associated with statistical relational learning, the most common ones being:
Representation formalisms.
One of the fundamental design goals of the representation formalisms developed in SRL is to abstract away from concrete entities and to represent instead general principles that are intended to be universally applicable. Since there are countless ways in which such principles can be represented, many representation formalisms have been proposed in recent years. In the following, some of the more common ones are listed in alphabetical order:
Octuple-precision floating-point format.
In computing, octuple precision is a binary floating-point-based computer number format that occupies 32 bytes (256 bits) in computer memory. This 256-bit octuple precision is for applications requiring results in higher than quadruple precision.
This format is rarely (if ever) used and very few environments support it.
IEEE 754 octuple-precision binary floating-point format: binary256.
In its 2008 revision, the IEEE 754 standard specifies a binary256 format among the "interchange formats" (it is not a basic format), as having:
The format is written with an implicit lead bit with value 1 unless the exponent is all zeros. Thus only 236 bits of the significand appear in the memory format, but the total precision is 237 bits (approximately 71 decimal digits).
The bits are laid out as follows:
Exponent encoding.
The octuple-precision binary floating-point exponent is encoded using an offset binary representation, with the zero offset being 262143; this is also known as the exponent bias in the IEEE 754 standard.
Thus, as defined by the offset binary representation, in order to get the true exponent the offset of 262143 has to be subtracted from the stored exponent.
The stored exponents 00000₁₆ and 7FFFF₁₆ are interpreted specially.
The minimum strictly positive (subnormal) value is 2^−262378 and has a precision of only one bit.
The minimum positive normal value is 2^−262142 ≈ 2.4824 × 10^−78913.
The maximum representable value is 2^262144 − 2^261907 ≈ 1.6113 × 10^78913.
Octuple-precision examples.
These examples are given in bit "representation", in hexadecimal, of the floating-point value.
This includes the sign, (biased) exponent, and significand.
 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000₁₆ = +0
 8000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000₁₆ = −0
 7fff f000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000₁₆ = +infinity
 ffff f000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000₁₆ = −infinity
 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0001₁₆
 = 2^−262142 × 2^−236 = 2^−262378
 ≈ 2.24800708647703657297018614776265182597360918266100276294348974547709294462 × 10^−78984
 (smallest positive subnormal number)
 0000 0fff ffff ffff ffff ffff ffff ffff ffff ffff ffff ffff ffff ffff ffff ffff₁₆
 = 2^−262142 × (1 − 2^−236)
 ≈ 2.4824279514643497882993282229138717236776877060796468692709532979137875392 × 10^−78913
 (largest subnormal number)
 0000 1000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000₁₆
 = 2^−262142
 ≈ 2.48242795146434978829932822291387172367768770607964686927095329791378756168 × 10^−78913
 (smallest positive normal number)
 7fff efff ffff ffff ffff ffff ffff ffff ffff ffff ffff ffff ffff ffff ffff ffff₁₆
 = 2^262143 × (2 − 2^−236)
 ≈ 1.61132571748576047361957211845200501064402387454966951747637125049607182699 × 10^78913
 (largest normal number)
 3fff efff ffff ffff ffff ffff ffff ffff ffff ffff ffff ffff ffff ffff ffff ffff₁₆
 = 1 − 2^−237
 ≈ 0.999999999999999999999999999999999999999999999999999999999999999999999995472
 (largest number less than one)
 3fff f000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000₁₆
 = 1 (one)
 3fff f000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0001₁₆
 = 1 + 2^−236
 ≈ 1.00000000000000000000000000000000000000000000000000000000000000000000000906
 (smallest number larger than one)
By default, 1/3 rounds down like double precision, because of the odd number of bits in the significand.
So the
bits beyond the rounding point are codice_1 which is less than 1/2 of a unit in the last place.
Implementations.
Octuple precision is rarely implemented since it is very rarely needed. Apple Inc. had an implementation of addition, subtraction and multiplication of octuple-precision numbers with a 224-bit two's complement significand and a 32-bit exponent. One can use general arbitrary-precision arithmetic libraries to obtain octuple (or higher) precision, but specialized octuple-precision implementations may achieve higher performance.
Hardware support.
There is no known hardware implementation of octuple precision.
Outstar.
An outstar is an output from a neurode of the hidden layer of a neural network architecture which works as an input for the output layer: each neurode of the hidden layer provides input to a neurode of the output layer.
Gear cutting.
Gear cutting is any machining process for creating a gear. The most common gear-cutting processes include hobbing, broaching, milling, grinding, and skiving. Such cutting operations may occur either after or instead of forming processes such as forging, extruding, investment casting, or sand casting.
Gears are commonly made from metal, plastic, and wood. Although gear cutting is a substantial industry, many metal and plastic gears are made without cutting, by processes such as die casting or injection molding. Some metal gears made with powder metallurgy require subsequent machining, whereas others are complete after sintering.
Likewise, metal or plastic gears made with additive manufacturing may or may not require finishing by cutting, depending on application.
Processes.
Broaching.
For very large gears or splines, a vertical broach is used. It consists of a vertical rail that carries a single-tooth cutter formed to create the tooth shape. A rotary table and a Y axis are the customary axes available. Some machines will cut to a depth on the Y axis and index the rotary table automatically. The largest gears are produced on these machines.
Broaching also works particularly well for cutting teeth on the inside. The downside is that it is expensive, and different broach sticks are required to make different-sized gears. Therefore, it is mostly used in very high production runs.
Hobbing.
Hobbing is a method by which a "hob" is used to cut teeth into a blank. Gears, wheels, pinions, shafts, and worms can be hobbed with a master hob or index hob on CNC gear hobbing machines. The cutter and gear blank are rotated at the same time to transfer the profile of the hob onto the gear blank. Hobbing is used for all sizes of production runs, but works best for medium to high volumes. Hobbing can produce straight, helical, straight bevel, face, crowned, worm, and cylkro gears, as well as chamfering.
Milling or grinding.
Spur gears may be cut or ground on a milling machine or jig grinder utilizing a numbered gear cutter, and any indexing head or rotary table. The number of the gear cutter is determined by the tooth count of the gear to be cut.
To machine a helical gear on a manual machine, a true indexing fixture must be used. Indexing fixtures can disengage the drive worm and be attached via an external gear train to the machine table's handle (like a power feed). It then operates similarly to a carriage on a lathe. As the table moves on the X axis, the fixture will rotate in a fixed ratio with the table.
The indexing fixture itself receives its name from the original purpose of the tool: moving the table in precise, fixed increments. If the indexing worm is not disengaged from the table, one can move the table in a highly controlled fashion via the indexing plate to produce linear movement of great precision (such as with a vernier scale).
There are a few different types of cutters used when creating gears. One is a rack shaper. These cutters are straight and move in a direction tangent to the gear as the gear blank rotates. They have six to twelve teeth, and eventually have to be moved back to the starting point to begin another cut.
Shaping.
An older method of gear cutting is mounting a gear blank in a shaper and using a tool shaped in the profile of the tooth to be cut. This method also works for cutting internal splines.
Another cutter is a pinion-shaped cutter that is used in a gear shaper machine. A cutter that looks similar to a gear cuts the gear blank; the cutter and the blank must have rotating axes parallel to each other. This process works well for low and high production runs.
Finishing.
After being cut, the gear can be finished by shaving, burnishing, grinding, honing or lapping.
Further reading.
A guide to cutting; by Zuber Beekhory
Distributed multi-agent reasoning system.
In artificial intelligence, the distributed multi-agent reasoning system (dMARS) was a platform for intelligent software agents developed at the AAII that makes use of the belief–desire–intention software model (BDI). The design for dMARS was an extension of the intelligent agent cognitive architecture developed at SRI International called the procedural reasoning system (PRS).
The most recent incarnation of this framework is the JACK Intelligent Agents platform.
Overview.
dMARS was an agent-oriented development and implementation environment written in C++ for building complex, distributed, time-critical systems.
Peter J. Fleming.
Peter John Fleming is a professor of Industrial Systems and Control in the Department of Automatic Control and Systems Engineering at the University of Sheffield, and until June 2012 he was the director of the Rolls-Royce University Technology Centre for Control and Systems Engineering. He works in the field of control and systems engineering and is known for his work on evolutionary computation applied to systems engineering. Fleming is Editor-in-Chief of the "International Journal of Systems Science".
Research.
Fleming's primary area of research involves the development of evolutionary algorithms, including genetic algorithms for multi-objective optimization. He also works in the area of control and systems engineering. He has authored about 400 research publications, including six books. His research interests have led to the development of close links with a variety of industries in sectors such as automotive, aerospace, power generation, food processing, pharmaceuticals, and manufacturing. Two of his most cited articles are:
He has been a Fellow of the Royal Academy of Engineering since 2005, a Fellow of the International Federation of Automatic Control since 2009, a Fellow of the Institution of Engineering and Technology, and a Fellow of the Institute of Measurement and Control.
IranOpen.
Iranian teams have been active participants in RoboCup events since 1998.
The number of Iranian teams has increased considerably over the past years, so the need for a regional event became apparent. Furthermore, since the overall number of teams worldwide interested in RoboCup has increased, regional events can be a proper venue for the Technical Committees of the RoboCup leagues to assess team quality ahead of the RoboCup World Competitions. IranOpen is a place for teams willing to take part in the RoboCup World Competitions to show their quality and standards. It is also a place for fresh teams to gain experience and become ready to join the world's teams.
RoboCup.
RoboCup is an international joint project to promote AI, robotics, and related fields. It is an attempt to foster AI and intelligent robotics research by providing a standard problem where a wide range of technologies can be integrated and examined. RoboCup chose to use soccer games as a central topic of research, aiming at innovations to be applied to socially significant problems and industries. The ultimate goal of the RoboCup project is, by 2050, to develop a team of fully autonomous humanoid robots that can win against the human world champion team in soccer. In order for a robot team to actually perform a soccer game, various technologies must be incorporated, including: design principles of autonomous agents, multi-agent collaboration, strategy acquisition, real-time reasoning, robotics, and sensor fusion. RoboCup is a task for a team of multiple fast-moving robots in a dynamic environment. RoboCup also offers a software platform for research on the software aspects of RoboCup. One of the major applications of RoboCup technologies is search and rescue in large-scale disasters.
RoboCup initiated the RoboCupRescue project to specifically promote research in socially significant issues.
IranOpen 2006.
Azad University of Qazvin, the pioneer in organizing a RoboCup event in Iran, prepared a strong proposal for the RoboCup Federation, which the Federation then approved, authorizing the university to organize RoboCup IranOpen2006. RoboCup IranOpen2006 took place at the Tehran International Fair from April 7 to April 9. It was the first, and a successful, experience of organizing a RoboCup event in Iran. About 100 teams competed in 7 leagues and over 3000 visitors enjoyed watching the games.
IranOpen 2007.
Azad University of Qazvin, having the successful experience of organizing RoboCup IranOpen2006, decided to organize RoboCup IranOpen2007 on a broader scale. The proposal of the Azad University was approved by the Iranian RoboCup National Committee and sent to the RoboCup Federation for final approval, which was granted. RoboCup IranOpen2007 therefore took place at the Tehran International Fair from 5–7 April 2007. About 260 teams competed in 15 leagues and over 5000 visitors enjoyed watching the games.
IranOpen 2008.
For the third consecutive year, Azad University of Qazvin was approved by the Iranian RoboCup National Committee to organize RoboCup IranOpen2008. This was also approved by the RoboCup Federation. For the first time, the event will take place at Azad University of Qazvin's main campus from 3–5 April 2008. RoboCup IranOpen2008 welcomes all interested teams to join the event and compete in 16 leagues.
IranOpen 2009.
RoboCup IranOpen 2009, the fourth annual RoboCup IranOpen competition, will be held on April 4–6, 2009. The Iranian RoboCup National Committee and Azad University of Qazvin, as organizers of this event, wish all the participants luck and success.
We will try to make the 2009 competitions even more engaging and will put all the effort in our power into making the competition environment as comfortable as possible for the participants.
IranOpen 2011.
RoboCup IranOpen 2011 was held at the Tehran International Fair from 5–9 April 2011. RoboCup IranOpen 2011 hosted about 350 teams from 14 countries within 22 leagues.
IranOpen 2012.
RoboCup IranOpen 2012 was held at the Tehran International Fair from 3–7 April 2012. RoboCup IranOpen 2012 hosted 330 teams from 13 countries competing in 24 leagues.
The Arian team, led by Arash Poori, participated in the competition.
Vibration welding of thermoplastics.
Vibration welding (also known as linear or friction welding) refers to a process in which two workpieces are brought in contact under pressure, and a reciprocating motion (vibration) is applied along the common interface in order to generate heat. The resulting heat melts the workpieces, and they become welded when the vibration stops and the interface cools. Most machinery operates at 120 Hz, although equipment is available that runs between 100–240 Hz. Vibration can be achieved either through linear vibration welding, which uses a one-dimensional back and forth motion, or orbital vibration welding, which moves the pieces in small orbits relative to each other. Linear vibration welding is more common due to the simpler and relatively cheaper machinery required.
Vibration welding is often used for larger applications where the parts to be joined have relatively flat seams, although the process can accommodate some out-of-plane curvature.
Recently, the automotive industry has made extensive use of the process to produce parts like manifolds and lighting assemblies whose complex geometries prevent molding them as single components.\nAdvantages and disadvantages.\nVibration welding has numerous advantages over other conventional plastic welding processes. Since the heat is created at an interface, the molten polymers are not exposed to open air, preventing oxidation and contamination of the weld during the process. No filler material is required, and when welding components of the same material, the joint can be expected to be just as strong as the bulk material. Heating is localized to the interface, decreasing the chances of material degradation seen with other processes which require a heat source well above the melt temperature of the material. The process itself is cost effective, with no consumables and short cycle times. Vibration welding produces virtually no smoke or fumes, requires little surface preparation, and works well for a multitude of applications, making it well suited to mass production environments.\nVibration welding does have its drawbacks, however. The process does not lend itself well to low-modulus thermoplastics or to joints between plastics with relatively high differences in melting temperatures. Vibration welding requires part-specific fixturing and joint designs, and the part will be exposed to rigorous vibration during the welding cycle, which may damage sensitive or miniature components. The finished weld will be surrounded by a significant amount of flash, which must be removed if appearance is an issue. Alternatively, joint geometries which hide the excess flash can be used.
Lastly, the process is not well suited to welding anything other than relatively flat joints.\nVibration welding process.\nThe vibration welding process consists of four steps: solid friction, transient flow, steady state flow, and solidification.\nSolid friction.\nIn this first stage, vibration begins between two cold parts pressed together at a constant pressure. The frictional energy heats the polymers. In this stage there is no weld penetration, as melting has not yet occurred.\nTransient flow.\nIn the transient flow step, the polymer's surface begins to melt. The melt layer thickness quickly grows, causing the frictional forces to decrease. This decrease in friction decreases the heat input to the system, and a lateral flow of molten material begins to occur.\nSteady state flow.\nIn this phase the melting rate of the material matches the flow of material extruded at the lateral surfaces. The material flow and thickness of the melt layer become constant. This is the step that determines the quality of the weld. This step is maintained until the desired ‘melt down’ thickness (thickness of the molten material) is achieved. At that time the vibration is stopped and the weld is allowed to cool.\nSolidification.\nDuring solidification the vibration is stopped, while pressure is maintained on the workpieces until no more molten material remains. Once cooled to room temperature, the joint should have nearly the strength of the bulk material. Pressure is only relieved once the joint reaches an acceptable strength.\nEquipment.\nA vibration welding machine is essentially a vertical machine press in which one side has been modified to vibrate. The main components are the vibrating assembly, a lifting table, and a tooling fixture.\nVibrating assembly.\nThe vibrating assembly is a moving element driven either by hydraulics or, more commonly, by electromagnets.
In the electromagnetic version, the heart of this assembly is a tuned spring-mass system powered by electrical coils acting on oppositely charged lamination stacks. The frequency of the electrical charges is matched to the mechanical frequency of the system. Although the amplitude can be adjusted on the machine, the frequency can only be changed by changing the mass of the vibrating assembly. The moving portion of the tooling is affixed to the vibrating assembly.\nLifting table.\nThe lifting table is a hydraulic assembly attached to the fixed portion of the tooling. The lifting table brings the workpieces together, and applies pressure between the moving and stationary portions of the tooling.\nTooling.\nTooling refers to the fixtures which are attached to the vibrating assembly and lifting table that hold the workpieces in place. Tooling is application-specific, and must allow for workpieces to be quickly switched out after every welding cycle. It is imperative that the tooling matches the workpieces closely enough to prevent any relative motion between the tooling and the workpieces, as this would reduce the effective weld amplitude, lowering heat input and degrading dimensional tolerances.\nProcess variables.\nThe vibration welding process has five main variables: frequency, amplitude, pressure, time, and depth.\nFrequency.\nFrequency refers to how many times per second a vibration cycle is completed. Most machinery runs at 120 Hz, although machinery is available that runs from 100–240 Hz. Frequency is dependent on the mass of the vibrating assembly, and as such can only be changed by switching out components of the assembly.\nAmplitude.\nAmplitude refers to the distance traveled during each vibratory cycle. Higher amplitudes tend to be used with lower frequencies, and vice versa. Higher amplitudes increase heat input at the cost of cleanliness and dimensional tolerances, making them more useful for larger parts.
Lower amplitudes range from 0.7–1.8 mm, while higher amplitudes describe cycles that cover 2–4 mm.\nPressure.\nPressure is the primary controller of melt layer thickness, and must be kept within an optimal range in order to produce quality joints. Although pressure can vary between 0.5–20 MPa across different materials and geometries, the tolerances for a given application are quite tight. Too little pressure will prevent sufficient heat generation, while too much pressure can cause all of the molten material to squeeze out of the joint. Both scenarios will result in a weak weld. Pressure is controlled by the lifting table.\nTime.\nThe length of time that vibration is applied to the workpiece is another key factor. Time is directly proportional to heat generation and material loss to flash. Processes can be either time or depth controlled, with most modern processes being depth controlled. A depth-controlled process will have a variable time, and vice versa.\nDepth.\nDepth refers to the distance traveled by the workpieces after vibration is started. Sometimes referred to as displacement, it is directly related to the amount of material loss to flash. In general, depth should be kept close to or above the thickness of the melt layer at the beginning of the steady state stage. Beyond this value, more depth only results in loss of material without an accompanying rise in joint strength.\nWeld design.\nWeld design for vibration welding must include a relatively large flat surface, although some out-of-plane curvature can be accommodated. The most common type of joint is a butt joint, where two flat pieces with the same cross section are welded together. Variations on this joint can include u-flanges, tongue and groove joints, and even double tongue and groove joints. When appearances are important, flash traps can be used.
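The interplay of amplitude, frequency, and pressure described above can be made concrete with a simplified estimate of frictional heating in the solid-friction stage. This is a minimal sketch only: the mean-speed relation for sinusoidal motion is standard, but the coefficient of friction and the example parameter values are assumptions for illustration, not figures from the article.

```python
def friction_power_estimate(amplitude_m, frequency_hz, pressure_pa, area_m2, mu=0.3):
    """Rough heating-power estimate for the solid-friction stage of
    vibration welding, before any melt layer forms.

    For sinusoidal motion x(t) = a*sin(2*pi*f*t), the mean sliding
    speed is 4*a*f. Frictional power is then P = mu * N * v_mean,
    with normal force N = pressure * contact area.
    mu=0.3 is an assumed dry polymer-on-polymer friction coefficient.
    """
    v_mean = 4.0 * amplitude_m * frequency_hz   # mean |velocity|, m/s
    normal_force = pressure_pa * area_m2        # N
    return mu * normal_force * v_mean           # W

# Example: 120 Hz, 1.5 mm amplitude, 2 MPa over a 10 cm^2 joint
power_w = friction_power_estimate(1.5e-3, 120.0, 2e6, 10e-4)
```

The estimate shows why higher amplitude or pressure raises heat input, and why the two are traded against each other in practice.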
Flash traps refer to  hollow areas in the cross section next to the weld area that collect the flash and hide it from view.", "Automation-Control": 0.8418526053, "Qwen2": "Yes"} {"id": "45302060", "revid": "7034620", "url": "https://en.wikipedia.org/wiki?curid=45302060", "title": "Parametric programming", "text": "Parametric programming is a type of mathematical optimization, where the optimization problem is solved as a function of one or multiple parameters. Developed in parallel to sensitivity analysis, its earliest mention can be found in a thesis from 1952. Since then, there have been considerable developments for the cases of multiple parameters, presence of integer variables as well as nonlinearities. \nNotation.\nIn general, the following optimization problem is considered\nwhere formula_2 is the optimization variable, formula_3 are the parameters, formula_4 is the objective function and formula_5 denote the constraints. formula_6 denotes a function whose output is the optimal value of the objective function formula_7. The set formula_8 is generally referred to as parameter space.\nThe optimal value (i.e. result of solving the optimization problem) is obtained by evaluating the function with an argument formula_3.\nClassification.\nDepending on the nature of formula_4 and formula_5 and whether the optimization problem features integer variables, parametric programming problems are classified into different sub-classes:\nApplications.\nIn control theory generally and in process industries.\nThe connection between parametric programming and model predictive control for process manufacturing, established in 2000, has contributed to an increased interest in the topic. Parametric programming supplies the idea that optimization problems can be parametrized as functions that can be evaluated (similar to a lookup table). 
This in turn allows the optimization algorithms in optimal controllers to be implemented as pre-computed (off-line) mathematical functions, which may in some cases be simpler and faster to evaluate than solving a full optimization problem on-line. This also opens up the possibility of creating optimal controllers on chips (MPC on chip). However, the off-line parametrization of optimal solutions runs into the curse of dimensionality as the number of possible solutions grows with the dimensionality and number of constraints in the problem.\nIn CNC programming.\nParametric programming in the context of CNC (computer numerical control) is defining part-cutting cycles in terms of variables with reassignable values rather than via hardcoded/hardwired instances. An archetypically simple example is writing a G-code program to machine a family of washers: there is often no need to write 15 programs for 15 members of the family with various hole diameters, outer diameters, thicknesses, and materials, when it is practical instead to write 1 program that calls various variables and reads their current values from a table of assignments. The program then instructs the machine slides and spindles to move to various positions at various velocities, accordingly, addressing not only the sizes of the part (i.e., OD, ID, thickness) but also the speeds and feeds needed for any given material (e.g., low-carbon steel, high-carbon steel; stainless steel of whichever grade; bronze, brass, or aluminum of whichever grade; polymer of whichever type). Custom Macros are often used in such programming.", "Automation-Control": 0.9924701452, "Qwen2": "Yes"} {"id": "38682282", "revid": "7903804", "url": "https://en.wikipedia.org/wiki?curid=38682282", "title": "Dlib", "text": "Dlib is a general-purpose cross-platform software library written in the programming language C++. Its design is heavily influenced by ideas from design by contract and component-based software engineering.
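The washer-family idea above can be sketched in ordinary code: instead of fifteen hard-coded programs, one routine reads a part's parameters from an assignment table and emits the toolpath commands. This is an illustrative sketch only; the part names, table values, and G-code words below are hypothetical and are not taken from any specific controller's macro language.

```python
# Hypothetical washer family: each entry plays the role of one row in
# the table of assignments described above. All values are illustrative.
WASHERS = {
    "W-10": {"outer_dia": 20.0, "hole_dia": 10.0, "thickness": 2.0, "feed": 150.0},
    "W-12": {"outer_dia": 24.0, "hole_dia": 12.0, "thickness": 2.5, "feed": 120.0},
}

def washer_program(part_id):
    """Emit a (hypothetical) G-code fragment for one washer, with the
    diameters, thickness, and feed pulled from the parameter table
    rather than hard-coded into the program."""
    p = WASHERS[part_id]
    return [
        f"( washer {part_id} )",
        f"G01 Z-{p['thickness']:.1f} F{p['feed']:.0f}",  # plunge to depth
        f"G02 I{p['hole_dia'] / 2:.2f}",                 # cut inner hole
        f"G02 I{p['outer_dia'] / 2:.2f}",                # cut outer profile
        "M30",                                           # end of program
    ]

program = washer_program("W-12")
```

Adding a sixteenth washer then means adding one table row, not writing a sixteenth program.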
Thus it is, first and foremost, a set of independent software components. It is open-source software released under the Boost Software License.\nSince development began in 2002, Dlib has grown to include a wide variety of tools. As of 2016, it contains software components for dealing with networking, threads, graphical user interfaces, data structures, linear algebra, machine learning, image processing, data mining, XML and text parsing, numerical optimization, Bayesian networks, and many other tasks. In recent years, much of the development has been focused on creating a broad set of statistical machine learning tools, and in 2009 Dlib was published in the \"Journal of Machine Learning Research\". Since then it has been used in a wide range of domains.", "Automation-Control": 0.6178896427, "Qwen2": "Yes"} {"id": "1123698", "revid": "910180", "url": "https://en.wikipedia.org/wiki?curid=1123698", "title": "End-to-end delay", "text": "End-to-end delay or one-way delay (OWD) refers to the time taken for a packet to be transmitted across a network from source to destination. It is a common term in IP network monitoring, and differs from round-trip time (RTT) in that only the path in one direction, from source to destination, is measured.\nMeasurement.\nThe ping utility measures the RTT, that is, the time for a packet to travel to a host and back. Half the RTT is often used as an approximation of OWD, but this assumes that the forward and back paths are the same in terms of congestion, number of hops, or quality of service (QoS). This is not always a good assumption. To avoid such problems, the OWD may be measured directly.\nDirect.\nOWDs may be measured between two points \"A\" and \"B\" of an IP network through the use of synchronized clocks; \"A\" records a timestamp on the packet and sends it to \"B\", which notes the receiving time and calculates the OWD as their difference.
The transmitted packets need to be identified at source and destination so that packet loss or packet reordering can be detected. However, this method suffers from several limitations, such as requiring intensive cooperation between both parties, and the accuracy of the measured delay is subject to the synchronization precision.\nThe Minimum-Pairs Protocol is an example by which several cooperating entities, \"A\", \"B\", and \"C\", could measure OWDs between one of them and a fourth less cooperative one (e.g., between \"B\" and \"X\").\nEstimate.\nTransmission between two network nodes may be asymmetric, and the forward and reverse delays are not equal. Half the RTT value is the average of the forward and reverse delays and so may sometimes be used as an approximation to the end-to-end delay. The accuracy of such an estimate depends on the nature of the delay distribution in both directions. As delays in both directions become more symmetric, the accuracy increases.\nThe probability mass function (PMF) of absolute error, \"E\", between the smaller of the forward and reverse OWDs and their average (i.e., RTT/2) can be expressed as a function of the network delay distribution as follows:\nwhere \"a\" and \"b\" are the forward and reverse edges, and \"fy(z)\" is the PMF of delay of edge \"z\" (that is, \"fy(z) = Pr{delay on edge z = y}\").\nDelay components.\nEnd-to-end delay in networks comes from several sources including transmission delay, propagation delay, processing delay and queuing delay.", "Automation-Control": 0.9247368574, "Qwen2": "Yes"} {"id": "7256112", "revid": "44071894", "url": "https://en.wikipedia.org/wiki?curid=7256112", "title": "CRS Robotics", "text": "CRS Robotics Corporation (currently operating as Thermo CRS Limited) was a robotics company based in Burlington, Ontario, Canada. CRS Robotics designed, manufactured, distributed, and serviced human-scale articulated robots, and laboratory automation systems.
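The direct measurement and the RTT/2 approximation described above fit in a few lines of code. A minimal sketch, assuming perfectly synchronized clocks; the timestamp and delay values are invented for illustration.

```python
def one_way_delay(send_ts, recv_ts):
    """Direct OWD: B subtracts A's transmit timestamp from its own
    receive timestamp. Requires the two clocks to be synchronized;
    any clock offset appears directly as measurement error."""
    return recv_ts - send_ts

def rtt_half_estimate(fwd_delay, rev_delay):
    """RTT/2 estimate: half the round-trip time, i.e. the average of
    the forward and reverse delays. Exact only when the two
    directions are symmetric."""
    return (fwd_delay + rev_delay) / 2.0

# Asymmetric path: forward 30 ms, reverse 10 ms (illustrative values).
fwd, rev = 0.030, 0.010
estimate = rtt_half_estimate(fwd, rev)  # 20 ms
error = fwd - estimate                  # underestimates the forward OWD by 10 ms
```

The example shows the failure mode the article warns about: the more asymmetric the path, the further RTT/2 drifts from the true one-way delay.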
Human-scale robots have approximately the same reach, speed, range of motion, degree of articulation, and lifting capacity as a human being and are designed specifically to perform tasks that are hazardous, highly repetitive or generally unsuited for humans. Laboratory automation applications are used to speed the effort of drug discovery for pharmaceutical and biotechnology customers.\nCRS Robotics was notable in the field of automated lab systems due to its developments in high throughput and ultra-high throughput automated systems. Among other things, these developments included its advanced scheduling software, called POLARA, which was an open and extensible platform for the management and control of complex automated systems. As an example, a \"good portion of the work\" for \"the preliminary map of the human genetic code was performed on CRS Automated Lab Systems\".\nThe company commenced operations in 1982 as an engineering firm providing consulting services to Canadian machine tool manufacturers in the area of machine controls. The company sold its first robot, the M1 small robot system, in 1985. The company shipped its first laboratory automation system in 1997. In 1998, it introduced the F3 Robot, a 6-axis robotic arm.\nThe company changed its name to CRS Robotics Corporation in 1994. In 1995, the company completed an initial public offering. The company was listed on the Toronto Stock Exchange.\nIn May 2002, the company was acquired outright by the Thermo Electron Corporation (NYSE: TMO), and its name was subsequently changed to Thermo CRS. After the acquisition, the company continued to develop laboratory automation systems.", "Automation-Control": 0.9010363817, "Qwen2": "Yes"} {"id": "7327033", "revid": "1189543", "url": "https://en.wikipedia.org/wiki?curid=7327033", "title": "Charles L. Coffin", "text": "Charles L. Coffin of Detroit was awarded a patent for an arc welding process using a metal electrode.
This was the first time that metal melted from the electrode was carried across the arc to deposit filler metal in the joint and make a weld. Two years earlier, Nikolay Slavyanov presented the same idea of transferring metal across an arc, but to cast metal in a mold.", "Automation-Control": 0.6616959572, "Qwen2": "Yes"} {"id": "7333367", "revid": "1329099", "url": "https://en.wikipedia.org/wiki?curid=7333367", "title": "Industrial control system", "text": "An industrial control system (ICS) is an electronic control system and associated instrumentation used for industrial process control. Control systems can range in size from a few modular panel-mounted controllers to large interconnected and interactive distributed control systems (DCSs) with many thousands of field connections. Control systems receive data from remote sensors measuring process variables (PVs), compare the collected data with desired setpoints (SPs), and derive command functions that are used to control a process through the final control elements (FCEs), such as control valves.\nLarger systems are usually implemented by supervisory control and data acquisition (SCADA) systems, or DCSs, and programmable logic controllers (PLCs), though SCADA and PLC systems are scalable down to small systems with few control loops. Such systems are extensively used in industries such as chemical processing, pulp and paper manufacture, power generation, oil and gas processing, and telecommunications.\nDiscrete controllers.\nThe simplest control systems are based around small discrete controllers with a single control loop each. These are usually panel-mounted, which allows direct viewing of the front panel and provides means of manual intervention by the operator, either to manually control the process or to change control setpoints.
Originally these would be pneumatic controllers, a few of which are still in use, but nearly all are now electronic.\nQuite complex systems can be created with networks of these controllers communicating using industry-standard protocols. Networking allows the use of local or remote SCADA operator interfaces, and enables the cascading and interlocking of controllers. However, as the number of control loops increases for a system design, there is a point where the use of a programmable logic controller (PLC) or distributed control system (DCS) is more manageable or cost-effective.\nDistributed control systems.\nA distributed control system (DCS) is a digital process control system (PCS) for a process or plant, wherein controller functions and field connection modules are distributed throughout the system. As the number of control loops grows, a DCS becomes more cost-effective than discrete controllers. Additionally, a DCS provides supervisory viewing and management over large industrial processes. In a DCS, a hierarchy of controllers is connected by communication networks, allowing centralised control rooms and local on-plant monitoring and control.\nA DCS enables easy configuration of plant controls such as cascaded loops and interlocks, and easy interfacing with other computer systems such as production control. It also enables more sophisticated alarm handling, introduces automatic event logging, removes the need for physical records such as chart recorders and allows the control equipment to be networked and thereby located locally to the equipment being controlled to reduce cabling.\nA DCS typically uses custom-designed processors as controllers and uses either proprietary interconnections or standard protocols for communication. Input and output modules form the peripheral components of the system.\nThe processors receive information from input modules, process the information and decide control actions to be performed by the output modules.
The input modules receive information from sensing instruments in the process (or field) and the output modules transmit instructions to the final control elements, such as control valves.\nThe field inputs and outputs can be either continuously varying analog signals (e.g., a current loop) or two-state signals that switch either \"on\" or \"off\", such as relay contacts or a semiconductor switch.\nDistributed control systems can normally also support Foundation Fieldbus, PROFIBUS, HART, Modbus and other digital communication buses that carry not only input and output signals but also advanced messages such as error diagnostics and status signals.\nSCADA systems.\nSupervisory control and data acquisition (SCADA) is a control system architecture that uses computers, networked data communications and graphical user interfaces for high-level process supervisory management. The operator interfaces which enable monitoring and the issuing of process commands, such as controller setpoint changes, are handled through the SCADA supervisory computer system. However, the real-time control logic or controller calculations are performed by networked modules which connect to other peripheral devices such as programmable logic controllers and discrete PID controllers which interface to the process plant or machinery.\nThe SCADA concept was developed as a universal means of remote access to a variety of local control modules, which could be from different manufacturers, allowing access through standard automation protocols. In practice, large SCADA systems have grown to become very similar to distributed control systems in function, but using multiple means of interfacing with the plant. They can control large-scale processes that can include multiple sites, and work over large distances.
This is a commonly used architecture in industrial control systems; however, there are concerns about SCADA systems being vulnerable to cyberwarfare or cyberterrorism attacks.\nThe SCADA software operates on a supervisory level, as control actions are performed automatically by RTUs or PLCs. SCADA control functions are usually restricted to basic overriding or supervisory level intervention. A feedback control loop is directly controlled by the RTU or PLC, but the SCADA software monitors the overall performance of the loop. For example, a PLC may control the flow of cooling water through part of an industrial process to a set point level, but the SCADA system software will allow operators to change the set points for the flow. The SCADA also enables alarm conditions, such as loss of flow or high temperature, to be displayed and recorded.\nProgrammable logic controllers.\nPLCs can range from small modular devices with tens of inputs and outputs (I/O) in a housing integral with the processor, to large rack-mounted modular devices with thousands of I/O, which are often networked to other PLC and SCADA systems. They can be designed for multiple arrangements of digital and analog inputs and outputs, extended temperature ranges, immunity to electrical noise, and resistance to vibration and impact. Programs to control machine operation are typically stored in battery-backed-up or non-volatile memory.\nHistory.\nProcess control of large industrial plants has evolved through many stages. Initially, control was from panels local to the process plant. However, this required personnel to attend to these dispersed panels, and there was no overall view of the process. The next logical development was the transmission of all plant measurements to a permanently-staffed central control room.
Often the controllers were behind the control room panels, and all automatic and manual control outputs were individually transmitted back to plant in the form of pneumatic or electrical signals. Effectively this was the centralisation of all the localised panels, with the advantages of reduced manpower requirements and consolidated overview of the process.\nHowever, whilst providing a central control focus, this arrangement was inflexible as each control loop had its own controller hardware so system changes required reconfiguration of signals by re-piping or re-wiring. It also required continual operator movement within a large control room in order to monitor the whole process. With the coming of electronic processors, high-speed electronic signalling networks and electronic graphic displays it became possible to replace these discrete controllers with computer-based algorithms, hosted on a network of input/output racks with their own control processors. These could be distributed around the plant and would communicate with the graphic displays in the control room. The concept of \"distributed control\" was realised.\nThe introduction of distributed control allowed flexible interconnection and re-configuration of plant controls such as cascaded loops and interlocks, and interfacing with other production computer systems. It enabled sophisticated alarm handling, introduced automatic event logging, removed the need for physical records such as chart recorders, allowed the control racks to be networked and thereby located locally to plant to reduce cabling runs, and provided high-level overviews of plant status and production levels. 
For large control systems, the general commercial name \"distributed control system\" (DCS) was coined to refer to proprietary modular systems from many manufacturers which integrated high-speed networking and a full suite of displays and control racks.\nWhile the DCS was tailored to meet the needs of large continuous industrial processes, in industries where combinatorial and sequential logic was the primary requirement, the PLC evolved out of a need to replace racks of relays and timers used for event-driven control. The old controls were difficult to re-configure and debug, and PLC control enabled networking of signals to a central control area with electronic displays. PLCs were first developed for the automotive industry on vehicle production lines, where sequential logic was becoming very complex. They were soon adopted in a large number of other event-driven applications as varied as printing presses and water treatment plants.\nSCADA's history is rooted in distribution applications, such as power, natural gas, and water pipelines, where there is a need to gather remote data through potentially unreliable or intermittent low-bandwidth and high-latency links. SCADA systems use open-loop control with sites that are widely separated geographically. A SCADA system uses remote terminal units (RTUs) to send supervisory data back to a control centre. Most RTU systems always had some capacity to handle local control while the master station was not available. However, over the years RTU systems have grown more and more capable of handling local control.\nThe boundaries between DCS and SCADA/PLC systems are blurring as time goes on. The technical limits that drove the designs of these various systems are no longer as much of an issue. Many PLC platforms can now perform quite well as a small DCS, using remote I/O, and are sufficiently reliable that some SCADA systems actually manage closed-loop control over long distances.
With the increasing speed of today's processors, many DCS products have a full line of PLC-like subsystems that weren't offered when they were initially developed.\nIn 1993, with the release of IEC-1131, later to become IEC-61131-3, the industry moved towards increased code standardization with reusable, hardware-independent control software. For the first time, object-oriented programming (OOP) became possible within industrial control systems. This led to the development of both programmable automation controllers (PAC) and industrial PCs (IPC). These are platforms programmed in the five standardized IEC languages: ladder logic, structured text, function block, instruction list and sequential function chart. They can also be programmed in modern high-level languages such as C or C++. Additionally, they accept models developed in analytical tools such as MATLAB and Simulink. Unlike traditional PLCs, which use proprietary operating systems, IPCs utilize Windows IoT. IPCs have the advantage of powerful multi-core processors with much lower hardware costs than traditional PLCs and fit well into multiple form factors such as DIN rail mount, combined with a touch-screen as a panel PC, or as an embedded PC. New hardware platforms and technology have contributed significantly to the evolution of DCS and SCADA systems, further blurring the boundaries and changing definitions.\nSecurity.\nSCADA and PLCs are vulnerable to cyber attack. The U.S. Government Joint Capability Technology Demonstration (JCTD) known as MOSAICS (More Situational Awareness for Industrial Control Systems) is the initial demonstration of cybersecurity defensive capability for critical infrastructure control systems. MOSAICS addresses the Department of Defense (DOD) operational need for cyber defense capabilities to defend critical infrastructure control systems from cyber attack, such as power, water and wastewater, and safety controls, which affect the physical environment.
The MOSAICS JCTD prototype will be shared with commercial industry through Industry Days for further research and development, an approach intended to lead to innovative, game-changing capabilities for cybersecurity for critical infrastructure control systems.", "Automation-Control": 0.9497350454, "Qwen2": "Yes"} {"id": "7338650", "revid": "910180", "url": "https://en.wikipedia.org/wiki?curid=7338650", "title": "Process variable", "text": "In control theory, a process variable (PV; also process value or process parameter) is the current measured value of a particular part of a process which is being monitored or controlled. An example of this would be the temperature of a furnace. The current temperature is the process variable, while the desired temperature is known as the set-point (SP).\nControl system use.\nMeasurement of process variables is essential in control systems for controlling a process. The value of the process variable is continuously monitored so that control may be exerted.\nFour commonly measured variables that affect chemical and physical processes are pressure, temperature, level, and flow, but there are in fact a large number of measured quantities, which for international purposes are expressed in the International System of Units (SI).\nThe SP-PV error is used to exert control on a process so that the value of PV equals the value of the SP. A classic use of this is in the PID controller.", "Automation-Control": 0.998123467, "Qwen2": "Yes"} {"id": "63233310", "revid": "32990417", "url": "https://en.wikipedia.org/wiki?curid=63233310", "title": "Fuzzy relation", "text": "A fuzzy relation is the Cartesian product of mathematical fuzzy sets. Two fuzzy sets are taken as input; the fuzzy relation is then equal to the cross product of the sets, which is created by vector multiplication.
Usually, a rule base is stored in matrix notation, which allows the fuzzy controller to update its internal values.\nFrom a historical perspective, the first fuzzy relation was mentioned in 1971 by Lotfi A. Zadeh.\nA practical way to describe a fuzzy relation is a two-dimensional table. First, a table is created whose entries are fuzzy values in the range [0, 1]. The next step is to apply the if-then rules to the values. The resulting numbers are stored in the table as an array.\nFuzzy relations can be utilized in fuzzy databases.", "Automation-Control": 0.8472914696, "Qwen2": "Yes"} {"id": "34220486", "revid": "35252436", "url": "https://en.wikipedia.org/wiki?curid=34220486", "title": "IEC 62325", "text": "IEC 62325 is a set of standards related to deregulated energy market communications, based on the Common Information Model. IEC 62325 is a part of the International Electrotechnical Commission's (IEC) Technical Committee 57 (TC57) reference architecture for electric power systems, and is the responsibility of Working Group 16 (WG16).\nStandard documents.\nIEC 62325 consists of the following parts, detailed in separate IEC 62325 standard documents:", "Automation-Control": 0.9789178967, "Qwen2": "Yes"} {"id": "23608428", "revid": "43732327", "url": "https://en.wikipedia.org/wiki?curid=23608428", "title": "Electron-beam freeform fabrication", "text": "Electron-beam freeform fabrication (EBF3) is an additive manufacturing process that builds near-net-shape parts. It requires far less raw material and finish machining than traditional manufacturing methods.
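The two-dimensional table construction described above can be sketched directly. A minimal sketch: the min operator used here for combining membership values is a common convention assumed for illustration (elementwise multiplication, as in the vector-product description, is an alternative), and the membership values are invented.

```python
def fuzzy_relation(set_a, set_b, combine=min):
    """Build the relation matrix of two fuzzy sets: entry [i][j]
    combines the i-th membership value of A with the j-th of B.
    combine=min gives the classical Cartesian product of fuzzy sets;
    pass a product function for the multiplicative variant."""
    return [[combine(a, b) for b in set_b] for a in set_a]

# Membership values in [0, 1] (illustrative only)
slow = [0.2, 0.8, 1.0]
cold = [0.5, 0.9]
R = fuzzy_relation(slow, cold)
# R == [[0.2, 0.2], [0.5, 0.8], [0.5, 0.9]]
```

The resulting matrix is exactly the stored table the article describes: rows indexed by one set's memberships, columns by the other's.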
EBF3 is done in a vacuum chamber where an electron beam is focused on a constantly feeding source of metal, which is melted and applied as called for by a three-dimensional layered drawing, one layer at a time, on top of a rotating metallic substrate until the part is complete.\nHistory.\nThe use of electron beam welding for additive manufacturing was first developed by Vivek Davee in 1995 as part of his PhD thesis at MIT. The process was referred to as electron beam solid freeform fabrication (EBSFF). A team at NASA Langley Research Center (LaRC) led by Karen Taminger developed the process, calling it electron beam freeform fabrication (EBF3). EBF3 is a NASA-patented additive manufacturing process designed to build near-net-shape parts requiring less raw material and finish machining than traditional manufacturing methods. EBF3 is a process by which NASA plans to build metal parts in zero-gravity environments; this layer-additive process uses an electron beam and solid wire feedstock to fabricate metallic parts. Future astronauts stationed on the Moon or Mars may be able to employ EBF3 to produce replacement parts locally rather than relying on parts launched from Earth, possibly even mining feedstock from the surrounding soils. The aviation industry has the most potential for the procedure, say experts at the NASA LaRC, because there should be significant progress made in reducing machining waste byproducts. Typically, an aircraft maker would start with a 6,000-pound block of titanium and use thousands of litres of cutting fluid to reduce it to a 300-pound item, leaving 5,700 pounds of material that needed to be recycled. According to Taminger, \"With EBF3 you can build up the same part using only 350 pounds of titanium and machine away just 50 pounds to get the part into its final configuration.
And the EBF3 process uses much less electricity to create the same part.\"\nProcess.\nThe operational concept of EBF3 is to build a near-net-shape metal part directly from a computer-aided design (CAD) file. Current computer-aided machining practices start with a CAD model and use a post-processor to write the machining instructions (G-code) defining the cutting tool paths needed to make the part. EBF3 uses a similar process, starting with a CAD model, numerically slicing it into layers, then using a post-processor to write the G-code defining the deposition path and process parameters for the EBF3 equipment. It uses a focused electron beam in a vacuum environment to create a molten pool on a metallic substrate. The beam is translated across the surface of the substrate while the metal wire is fed into the molten pool. The deposit solidifies immediately after the electron beam has passed, having sufficient structural strength to support itself. The sequence is repeated in a layer-additive manner to produce a near-net-shape part needing only finish machining. The EBF3 process is scalable for components from fractions of an inch to tens of feet in size, limited mainly by the size of the vacuum chamber and the amount of wire feedstock available.", "Automation-Control": 0.8499031067, "Qwen2": "Yes"} {"id": "38604479", "revid": "27015025", "url": "https://en.wikipedia.org/wiki?curid=38604479", "title": "Israel Association for Automatic Control", "text": "The Israel Association for Automatic Control (IAAC) is a national member organization of the International Federation of Automatic Control (IFAC). IAAC was one of the first member organizations of IFAC, and was founded in the early sixties by control system researchers from the Technion – Israel Institute of Technology, and from Rafael Advanced Defense Systems Ltd.\nThe mission of IAAC is the advancement of science, applications, and education in the area of automatic control, with an emphasis on the needs of Israel.
To meet this goal, IAAC pro-actively organizes conferences, lectures, courses, and workshops; promotes information exchange between the various organizations involved in automatic control; and promotes co-operation between organizations involved in automatic control in Israel and abroad.", "Automation-Control": 1.0000053644, "Qwen2": "Yes"} {"id": "38605977", "revid": "45455882", "url": "https://en.wikipedia.org/wiki?curid=38605977", "title": "Adevs", "text": "adevs is a C++ library for building discrete event simulations. Adevs is based on the Discrete Event System Specification (DEVS) and Dynamic DEVS modeling formalisms; it supports parallel discrete event simulation and a runtime system for OpenModelica. Adevs is developed by Jim Nutaro.\nAdevs is free software; releases before 2.8 were published under the GNU LGPL 2.0, while version 2.8 is released under a BSD license.", "Automation-Control": 0.9236922264, "Qwen2": "Yes"} {"id": "6078504", "revid": "35465059", "url": "https://en.wikipedia.org/wiki?curid=6078504", "title": "Kharitonov's theorem", "text": "Kharitonov's theorem is a result used in control theory to assess the stability of a dynamical system when the physical parameters of the system are not known precisely. When the coefficients of the characteristic polynomial are known, the Routh–Hurwitz stability criterion can be used to check if the system is stable (i.e. if all roots have negative real parts). Kharitonov's theorem can be used in the case where the coefficients are only known to be within specified ranges. It provides a test of stability for a so-called interval polynomial, while Routh–Hurwitz is concerned with an ordinary polynomial.\nDefinition.\nAn interval polynomial is the family of all polynomials\nwhere each coefficient formula_2 can take any value in the specified intervals\nIt is also assumed that the leading coefficient cannot be zero: formula_4.\nTheorem.\nAn interval polynomial is stable (i.e.
all members of the family are stable) if and only if the four so-called Kharitonov polynomials\nare stable.\nWhat is somewhat surprising about Kharitonov's result is that although in principle we are testing an infinite number of polynomials for stability, in fact we need to test only four. This we can do using Routh–Hurwitz or any other method. So it only takes four times more work to be informed about the stability of an interval polynomial than it takes to test one ordinary polynomial for stability.\nKharitonov's theorem is useful in the field of robust control, which seeks to design systems that will work well despite uncertainties in component behavior due to measurement errors, changes in operating conditions, equipment wear and so on.", "Automation-Control": 0.9691771269, "Qwen2": "Yes"} {"id": "9733872", "revid": "29463730", "url": "https://en.wikipedia.org/wiki?curid=9733872", "title": "Map-based controller", "text": "In the field of control engineering, a map-based controller is a controller whose outputs are based on values derived from a pre-defined lookup table. The inputs to the controller are usually values taken from one or more sensors and are used to index the output values in the lookup table. By effectively placing the transfer function as discrete entries within a lookup table, engineers are free to modify smaller sections or update the whole list of entries as required.", "Automation-Control": 0.9998643994, "Qwen2": "Yes"} {"id": "42801684", "revid": "40561892", "url": "https://en.wikipedia.org/wiki?curid=42801684", "title": "Rockman Industries", "text": "Rockman Industries, formerly Rockman Cycles Limited, is an Indian auto components manufacturer, based in New Delhi, India. The company is one of India's largest auto component manufacturers. Rockman Industries is primarily engaged in the manufacturing of aluminum die casting components, machined and painted assemblies, auto chains and parts.
In January 2017, Rockman Industries entered the carbon composites sector with the acquisition of Moldex Composites, a UK-India carbon composite design and manufacturing company. Rockman was founded in 1960 and is led by Suman Kant Munjal, Chairman, and Ujjwal Munjal, Managing Director.\nHistory.\nA part of the Hero Group, Rockman Industries (formerly Rockman Cycles Limited) was set up in 1960 and started to manufacture bicycle chains and hubs for Hero Cycles. In 1999, it diversified into high pressure aluminium die cast components and automotive chains for Hero MotoCorp (erstwhile Hero Honda). In 2005 it closed the bicycle chains and hubs business, and since November 2005 it has manufactured only die casting components and auto parts. In 2008, Rockman Industries set up a new auto components plant in Uttaranchal. In February 2014, Rockman Industries acquired the Sargam Die Casting company and started its new facility at Bawal (Haryana). In January 2017, Rockman Industries acquired a majority stake in Moldex Composites to enter the aerospace, motorsport and high-end auto component manufacturing space. In 2019, the company inaugurated two new plants at Vadodara and Tirupati. At the Tirupati plant the company manufactures four-wheeler alloy wheels.\nManufacturing units.\nRockman Industries has eight manufacturing plants at Ludhiana, Haridwar, Chennai, Bawal, Surat, Vadodara and Tirupati. In Ludhiana they have two plants, one for chains and the other for aluminium die casting products. All other locations manufacture die casting components.
The Surat unit manufactures advanced composite parts.\nCustomers.\nRockman Industries supplies its products to Hero MotoCorp and various global automotive companies including: TVS, Honda, Royal Enfield, Revolt, Ather, Hyundai, Kia, Ford Motor, Mahindra, Tata, Bosch, Stanadyne, Dana, Denso, Nemak, PSA AVTEC, KSP Automotive, Continental, Magna, Hanon System, Wabco, Mando, Getrag, BorgWarner, iwis and others.", "Automation-Control": 0.7735803127, "Qwen2": "Yes"} {"id": "2850606", "revid": "27015025", "url": "https://en.wikipedia.org/wiki?curid=2850606", "title": "Hoist controller", "text": "A hoist controller is the controller for a hoist. The term is used primarily in the context of electrically operated hoists, but it is apparent that the control systems of many 20th century steam hoists also incorporated controllers of significant complexity. Consider the control system of the Quincy Mine No. 2 Hoist. This control system included interlocks to close the throttle valve at the end of a trip and to prevent opening the throttle again until the winding engine was reversed. The control system also incorporated a governor to control the speed of the hoist and indicator wheels to show the hoist operator the positions of the skips in the mine shaft.\nThe hoist controllers for modern electric mining hoists have long included such features as automatic starting of the hoist when the weight of coal or ore in the skip reaches a set point, automatic acceleration of the hoist to full speed and automatic deceleration at the end of travel.\nHoist controllers need both velocity and absolute position references, typically taken from the winding drum of the hoist.
Modern hoist controllers replace many of the mechanical analog mechanisms of earlier controllers with digital control systems.", "Automation-Control": 0.9774546027, "Qwen2": "Yes"} {"id": "25616444", "revid": "20483999", "url": "https://en.wikipedia.org/wiki?curid=25616444", "title": "TurboPrint", "text": "TurboPrint is a closed source printer driver system for Linux, AmigaOS and MorphOS. It supports a number of printers that don't yet have a free driver, and offers fuller printer functionality on some printer models. In recent versions, it integrates with the CUPS printing system.", "Automation-Control": 0.8660828471, "Qwen2": "Yes"} {"id": "372426", "revid": "154991", "url": "https://en.wikipedia.org/wiki?curid=372426", "title": "Production equipment control", "text": "Production equipment control involves production equipment that resides on the shop floor of a manufacturing company; its purpose is to produce goods of a desired quality when provided with production resources of a required quality. In modern production lines the production equipment is fully automated using industrial control methods and involves limited unskilled labour participation. Modern production equipment consists of mechatronic modules that are integrated according to a control architecture. The most widely known architectures involve hierarchy, polyarchy, heterarchy and hybrid. The methods for achieving a technical effect are described by control algorithms, which may or may not utilize formal methods in their design.", "Automation-Control": 0.9999412894, "Qwen2": "Yes"} {"id": "39010846", "revid": "41882264", "url": "https://en.wikipedia.org/wiki?curid=39010846", "title": "Lyman filament extruder", "text": " The Lyman filament extruder is a device for making 3-D printer filament suitable for use in 3-D printers like the RepRap.
It is named after its developer Hugh Lyman and was the winner of the Desktop Factory Competition.\nThe goal in the competition was to build an open source filament extruder for less than $250 in components that can take ABS or PLA resin pellets, mix them with colorant, and extrude enough 1.75 mm diameter ± 0.05 mm filament that can be wrapped on a 1 kg spool. The machine must use the Attribution-ShareAlike 3.0 Unported (CC BY-SA 3.0) license.\nThe use of DIY filament extruders like the Lyman can significantly reduce the cost of printing with 3-D printers. The Lyman filament extruder was designed to handle pellets, but can also be used to make filament from other sources of plastic such as post-consumer waste like other RecycleBots. Producing plastic filament from recycled plastic has a significant positive environmental impact.", "Automation-Control": 0.7267571092, "Qwen2": "Yes"} {"id": "39050173", "revid": "6046731", "url": "https://en.wikipedia.org/wiki?curid=39050173", "title": "Fixed position assembly", "text": "Fixed position assembly refers to an assembly system or situation in which the product does not move while being assembled; this configuration is usually contrasted in operations management and industrial engineering with assembly lines. Dimensioning this system is simple: with CP as the productive capacity and T as the average assembly time, the number of working stations N is given by N = CP*T.", "Automation-Control": 0.9998643398, "Qwen2": "Yes"} {"id": "50292266", "revid": "38132428", "url": "https://en.wikipedia.org/wiki?curid=50292266", "title": "Agent mining", "text": "Agent mining is an interdisciplinary area that synergizes multiagent systems with data mining and machine learning.\nThe interaction and integration between multiagent systems and data mining have a long history.
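The fixed position assembly sizing rule N = CP*T above amounts to a one-line calculation. The sketch below additionally rounds up to a whole number of stations, an assumption not stated in the text, and the numerical values are illustrative.

```python
import math

# Sizing a fixed position assembly system: N = CP * T working stations,
# where CP is the productive capacity (units per hour) and T is the
# average assembly time (hours per unit). Rounding up to an integer
# number of stations is an added assumption.

def stations_needed(cp, t):
    return math.ceil(cp * t)

n = stations_needed(3.0, 2.5)   # 3.0 * 2.5 = 7.5 -> 8 stations
```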
The very early work on agent mining focused on agent-based knowledge discovery, agent-based distributed data mining, and agent-based distributed machine learning, as well as on using data mining to enhance agent intelligence.\nThe International Workshop on Agents and Data Mining Interaction has been held more than ten times, co-located with the International Conference on Autonomous Agents and Multi-Agent Systems. Several proceedings are available from Springer Lecture Notes in Computer Science.", "Automation-Control": 0.967669785, "Qwen2": "Yes"} {"id": "37052063", "revid": "7903804", "url": "https://en.wikipedia.org/wiki?curid=37052063", "title": "Moving horizon estimation", "text": "Moving horizon estimation (MHE) is an optimization approach that uses a series of measurements observed over time, containing noise (random variations) and other inaccuracies, and produces estimates of unknown variables or parameters. Unlike deterministic approaches, MHE requires an iterative approach that relies on linear programming or nonlinear programming solvers to find a solution.\nMHE reduces to the Kalman filter under certain simplifying conditions. A critical evaluation of the extended Kalman filter and the MHE found that the MHE improved performance at the cost of increased computational expense. Because of the computational expense, MHE has generally been applied to systems where there are greater computational resources and moderate to slow system dynamics. However, some methods to accelerate it are available in the literature.\nOverview.\nThe application of MHE is generally to estimate measured or unmeasured states of dynamical systems. Initial conditions and parameters within a model are adjusted by MHE to align measured and predicted values. MHE is based on a finite horizon optimization of a process model and measurements.
The current process state is sampled and a minimizing strategy is computed (via a numerical minimization algorithm) over a relatively short time horizon in the past: formula_1. Specifically, an online or on-the-fly calculation is used to explore state trajectories that find (via the solution of Euler–Lagrange equations) an objective-minimizing strategy until time formula_2. Only the last step of the estimation strategy is used; then the process state is sampled again and the calculations are repeated starting from the time-shifted states, yielding a new state path and predicted parameters. The estimation horizon keeps being shifted forward, and for this reason the technique is called moving horizon estimation. Although this approach is not optimal, in practice it has given very good results when compared with the Kalman filter and other estimation strategies.\nPrinciples of MHE.\nMoving horizon estimation (MHE) is a multivariable estimation algorithm that uses:\nto calculate the optimum states and parameters.\nThe optimization estimation function is given by:\nformula_3\nwithout violating state or parameter constraints (low/high limits)\nWith:\nformula_4 = \"i\"-th model predicted variable (e.g. predicted temperature)\nformula_5 = \"i\"-th measured variable (e.g. measured temperature)\nformula_6 = \"i\"-th estimated parameter (e.g. heat transfer coefficient)\nformula_7 = weighting coefficient reflecting the relative importance of measured values formula_5\nformula_9 = weighting coefficient reflecting the relative importance of prior model predictions formula_10\nformula_11 = weighting coefficient penalizing relatively large changes in formula_6\nMoving horizon estimation uses a sliding time window. At each sampling time the window moves one step forward.
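The weighted objective just described can be sketched as a plain sum of squared terms. This is a sketch only: the exact form of formula_3 is not reproduced in the text, so the squared-error structure, the function name, and all numerical values below are assumptions.

```python
# Sketch of an MHE-style objective: weighted squared deviations of model
# predictions from measurements and from prior model predictions, plus a
# penalty on parameter changes. The squared-error form is an assumption.

def mhe_objective(y_meas, y_model, y_prior, d_params, w_meas, w_prior, w_dp):
    j = 0.0
    for ym, yh, yp, wm, wp in zip(y_meas, y_model, y_prior, w_meas, w_prior):
        j += wm * (ym - yh) ** 2   # fit to measured values
        j += wp * (yp - yh) ** 2   # stay close to prior model predictions
    for dp, wd in zip(d_params, w_dp):
        j += wd * dp ** 2          # penalize large parameter changes
    return j

# One-variable, one-parameter example with illustrative weights.
J = mhe_objective([1.0], [0.5], [0.8], [0.1], [1.0], [0.5], [2.0])
```

At each step of the sliding window, a solver would adjust the model predictions and parameters to minimize this value subject to the state and parameter constraints.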
It estimates the states in the window by analyzing the measured output sequence and uses the last estimated state out of the window, as the prior knowledge.", "Automation-Control": 0.9144655466, "Qwen2": "Yes"} {"id": "2618828", "revid": "1272505", "url": "https://en.wikipedia.org/wiki?curid=2618828", "title": "Rotary table", "text": "A rotary table is a precision work positioning device used in metalworking. It enables the operator to drill or cut work at exact intervals around a fixed (usually horizontal or vertical) axis. Some rotary tables allow the use of index plates for indexing operations, and some can also be fitted with dividing plates that enable regular work positioning at divisions for which indexing plates are not available. A rotary fixture used in this fashion is more appropriately called a dividing head (indexing head).\nConstruction.\nThe table shown is a manually operated type. Powered tables under the control of CNC machines are now available, and provide a fourth axis to CNC milling machines. Rotary tables are made with a solid base, which has provision for clamping onto another table or fixture. The actual table is a precision-machined disc to which the work piece is clamped (T slots are generally provided for this purpose). This disc can rotate freely, for indexing, or under the control of a worm (handwheel), with the worm wheel portion being made part of the actual table. High precision tables are driven by backlash compensating duplex worms.\nThe ratio between worm and table is generally 40:1, 72:1 or 90:1 but may be any ratio that can be easily divided exactly into 360°. This is for ease of use when indexing plates are available. A graduated dial and, often, a vernier scale enable the operator to position the table, and thus the work affixed to it with great accuracy.\nA through hole is usually machined into the table. 
Most commonly, this hole is machined to admit a Morse taper center or fixture.\nUse.\nRotary tables are most commonly mounted \"flat\", with the table rotating around a vertical axis, in the same plane as the cutter of a vertical milling machine. An alternate setup is to mount the rotary table on its end (or mount it \"flat\" on a 90° angle plate), so that it rotates about a horizontal axis. In this configuration a tailstock can also be used, thus holding the workpiece \"between centers.\"\nWith the table mounted on a secondary table, the workpiece is accurately centered on the rotary table's axis, which in turn is centered on the cutting tool's axis. All three axes are thus coaxial. From this point, the secondary table can be offset in either the X or Y direction to set the cutter the desired distance from the workpiece's center. This allows concentric machining operations on the workpiece. Placing the workpiece eccentrically a set distance from the center permits more complex curves to be cut. As with other setups on a vertical mill, the milling operation can be either drilling a series of concentric, and possibly equidistant holes, or face or end milling either circular or semicircular shapes and contours.\nA rotary table can be used:\nAdditionally, if converted to stepper motor operation, with a CNC milling machine and a tailstock, a rotary table allows many parts to be made on a mill that otherwise would require a lathe.\nApplications.\nRotary tables have many applications, including being used in the manufacture and inspection process of important elements in aerospace, automation and scientific industries. The use of rotary tables stretches as far as the film and animation industry, being used to obtain accuracy and precision in filming and photography. 
", "Automation-Control": 0.999409616, "Qwen2": "Yes"} {"id": "24466505", "revid": "7474875", "url": "https://en.wikipedia.org/wiki?curid=24466505", "title": "Sumitomo Precision Products", "text": "Sumitomo Precision Products Co., Ltd. is an integrated manufacturer of aerospace equipment, heat exchangers, hydraulic controls, wireless sensor networks, sensors, micro-electronics technology, and environmental systems. The aerospace division supplies its products to aerospace industries worldwide, including Boeing, Airbus, Bombardier, and Embraer.\nHistory.\nFormerly a division of Sumitomo Metal Industries, the aerospace business was spun off as Sumitomo Precision Products in 1961.", "Automation-Control": 0.6911299229, "Qwen2": "Yes"} {"id": "6707472", "revid": "45331202", "url": "https://en.wikipedia.org/wiki?curid=6707472", "title": "Elisra", "text": "Elisra Group is an Israeli manufacturer of high-tech electronic devices, mainly but not exclusively for military use. It makes equipment for electronic communication and surveillance, missile tracking and controlling systems, radar and lidar equipment. The group is composed of three companies: Elisra Electronic Systems, Tadiran Electronic Systems Ltd. and Tadiran Spectralink Ltd.\nHistory.\nPreviously, Elisra was owned 70% by Elbit Systems and 30% by Israel Aerospace Industries (IAI); however, Elbit later acquired IAI's share and Elisra is now a wholly owned subsidiary of Elbit.\nElisra Electronic Systems supplies missile warning systems, active jammer systems, laser and IR detection systems for several kinds of aircraft and ground vessels of the Israel Defense Forces (IDF). 
Tadiran Electronic Systems manufactured the Tadiran Mastiff reconnaissance UAV, used by the IDF during the 1982 Lebanon War.", "Automation-Control": 0.8209574223, "Qwen2": "Yes"} {"id": "28759479", "revid": "41840956", "url": "https://en.wikipedia.org/wiki?curid=28759479", "title": "DESA company", "text": "Iran Heavy Diesel Manufacturing Company (DESA) is an Iranian manufacturer of heavy diesel engines from 200 to 3500 kW for railway, marine and power generation purposes.\nHistory.\nThe \"Iran Heavy Diesel Engine Mfg Co.\" (DESA) company was established in 1991 in Amol with the aim of developing industrial production of diesel engines; a factory of over 10,000 m2 was constructed on an 80 ha site, with facilities for quality testing and research and design.\nThe company obtained licenses to manufacture Wärtsilä engines and Ruston RK 215 engines under license from MAN. Wärtsilä engine assembly began in 1996; RK 215 engine production began in 2000.\nIn 2009 the company unveiled Iran's first indigenous heavy diesel engine, the D87. A dual fuel (gas/diesel) version is also to be produced.\nProduction.\nThe company produces diesel generators for permanent and emergency electricity supply. 104 units have been supplied to the Iran Telecommunications Company.\nThe company is a supplier to both the Iranian Islamic Republic Railways and the Raja Passenger Train Company.\nDESA has manufactured the Ruston RK 215 diesel engine for the AD43C mainline diesel locomotive, of which 70 units were manufactured by Wagon Pars.\nThe company is also assembling 16V 4000 MTU type diesel engines for a contract for 150 IranRunner locomotives for passenger trains to be manufactured by Siemens and the MLC (Mapna Locomotive Engineering and Manufacturing Company).
The first 30 units will be supplied by Siemens; the remaining 120 will be manufactured primarily domestically, building up capacities and expertise over six years.\nThe company has also supplied 120 engines for trainsets, and 60 engines for railbuses.\nThe company also supplies engines for marine use, and dual fuel engines for powerplants.", "Automation-Control": 0.7400152683, "Qwen2": "Yes"} {"id": "4450467", "revid": "17959258", "url": "https://en.wikipedia.org/wiki?curid=4450467", "title": "Minimum energy control", "text": "In control theory, the minimum energy control is the control formula_1 that will bring a linear time invariant system to a desired state with a minimum expenditure of energy.\nLet the linear time invariant (LTI) system be\nwith initial state formula_4. One seeks an input formula_5 so that the system will be in the state formula_6 at time formula_7, and for any other input formula_8, which also drives the system from formula_9 to formula_6 at time formula_7, the energy expenditure would be larger, i.e., \nTo choose this input, first compute the controllability Gramian\nAssuming formula_14 is nonsingular (if and only if the system is controllable), the minimum energy control is then\nSubstitution into the solution\nverifies the achievement of state formula_6 at formula_7.", "Automation-Control": 1.0000091791, "Qwen2": "Yes"} {"id": "21171254", "revid": "1129476715", "url": "https://en.wikipedia.org/wiki?curid=21171254", "title": "Type-2 fuzzy sets and systems", "text": "Type-2 fuzzy sets and systems generalize standard Type-1 fuzzy sets and systems so that more uncertainty can be handled. From the beginning of fuzzy sets, criticism was directed at the fact that the membership function of a type-1 fuzzy set has no uncertainty associated with it, something that seems to contradict the word \"fuzzy\", since that word has the connotation of much uncertainty. So, what does one do when there is uncertainty about the value of the membership function?
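The minimum energy control construction above, Gramian followed by a Gramian-weighted input, has a convenient discrete-time analogue that is easy to verify numerically. The sketch below assumes the discrete-time system x[k+1] = A x[k] + B u[k]; the double-integrator matrices and horizon are illustrative.

```python
import numpy as np

# Discrete-time sketch of minimum energy control: build the finite-horizon
# controllability Gramian, then shape the input sequence through its
# inverse. The system matrices below are an illustrative double integrator.

def min_energy_input(A, B, x0, xf, N):
    """Return u[0..N-1] driving x0 to xf with minimum total input energy."""
    # Controllability Gramian over N steps: W = sum_k A^k B B^T (A^T)^k
    W = sum(np.linalg.matrix_power(A, k) @ B @ B.T
            @ np.linalg.matrix_power(A.T, k) for k in range(N))
    d = xf - np.linalg.matrix_power(A, N) @ x0   # state to be made up
    lam = np.linalg.solve(W, d)                  # assumes W nonsingular
    # u[k] = B^T (A^T)^(N-1-k) W^{-1} d
    return [B.T @ np.linalg.matrix_power(A.T, N - 1 - k) @ lam
            for k in range(N)]

A = np.array([[1.0, 1.0], [0.0, 1.0]])   # double integrator
B = np.array([[0.0], [1.0]])
x0 = np.zeros(2)
xf = np.array([1.0, 0.0])
u = min_energy_input(A, B, x0, xf, 3)

# Simulate forward to verify the target state is reached exactly.
x = x0
for uk in u:
    x = A @ x + B @ uk
```

Substituting the input back into the state recursion, as the text does for the continuous-time case, confirms that the Gramian cancels its own inverse and the target state is reached.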
The answer to this question was provided in 1975 by the inventor of fuzzy sets, Lotfi A. Zadeh, when he proposed more sophisticated kinds of fuzzy sets, the first of which he called a \"type-2 fuzzy set\". A type-2 fuzzy set lets us incorporate uncertainty about the membership function into fuzzy set theory, and is a way to address the above criticism of type-1 fuzzy sets head-on. And, if there is no uncertainty, then a type-2 fuzzy set reduces to a type-1 fuzzy set, which is analogous to probability reducing to determinism when unpredictability vanishes.\nType-1 fuzzy systems work with a fixed membership function, while in type-2 fuzzy systems the membership function is itself uncertain. A fuzzy set determines how input values are converted into fuzzy variables.\nOverview.\nIn order to symbolically distinguish between a type-1 fuzzy set and a type-2 fuzzy set, a tilde symbol is put over the symbol for the fuzzy set; so, A denotes a type-1 fuzzy set, whereas à denotes the comparable type-2 fuzzy set. When the latter is done, the resulting type-2 fuzzy set is called a \"general type-2 fuzzy set\" (to distinguish it from the special interval type-2 fuzzy set).\nZadeh didn't stop with type-2 fuzzy sets, because in that 1975 paper he also generalized all of this to type-\"n\" fuzzy sets. The present article focuses only on type-2 fuzzy sets because they are the \"next step\" in the logical progression from type-1 to type-\"n\" fuzzy sets, where \"n\" = 1, 2, ... . Although some researchers are beginning to explore fuzzy sets of type higher than 2, as of early 2009, this work is in its infancy.\nThe membership function of a general type-2 fuzzy set, Ã, is three-dimensional (Fig.
1), where the third dimension is the value of the membership function at each point on its two-dimensional domain that is called its \"footprint of uncertainty\" (FOU).\nFor an interval type-2 fuzzy set that third-dimension value is the same (e.g., 1) everywhere, which means that no new information is contained in the third dimension of an interval type-2 fuzzy set. So, for such a set, the third dimension is ignored, and only the FOU is used to describe it. It is for this reason that an interval type-2 fuzzy set is sometimes called a \"first-order uncertainty\" fuzzy set model, whereas a general type-2 fuzzy set (with its useful third dimension) is sometimes referred to as a \"second-order uncertainty\" fuzzy set model.\nThe FOU represents the blurring of a type-1 membership function, and is completely described by its two bounding functions (Fig. 2), a lower membership function (LMF) and an upper membership function (UMF), both of which are type-1 fuzzy sets! Consequently, it is possible to use type-1 fuzzy set mathematics to characterize and work with interval type-2 fuzzy sets. This means that engineers and scientists who already know type-1 fuzzy sets will not have to invest a lot of time learning about general type-2 fuzzy set mathematics in order to understand and use interval type-2 fuzzy sets.\nWork on type-2 fuzzy sets languished during the 1980s and early-to-mid 1990s, although a small number of articles were published about them. People were still trying to figure out what to do with type-1 fuzzy sets, so even though Zadeh proposed type-2 fuzzy sets in 1975, the time was not right for researchers to drop what they were doing with type-1 fuzzy sets to focus on type-2 fuzzy sets. This changed in the latter part of the 1990s as a result of Jerry Mendel and his students' work on type-2 fuzzy sets and systems.
Since then, more and more researchers around the world are writing articles about type-2 fuzzy sets and systems.\nInterval type-2 fuzzy sets.\nInterval type-2 fuzzy sets have received the most attention because the mathematics that is needed for such sets (primarily interval arithmetic) is much simpler than the mathematics that is needed for general type-2 fuzzy sets. So, the literature about interval type-2 fuzzy sets is large, whereas the literature about general type-2 fuzzy sets is much smaller. Both kinds of fuzzy sets are being actively researched by an ever-growing number of researchers around the world and have resulted in successful employment in a variety of domains such as robot control.\nFormally, the following have already been worked out for interval type-2 fuzzy sets:\nInterval type-2 fuzzy logic systems.\nType-2 fuzzy sets are finding very wide applicability in rule-based fuzzy logic systems (FLSs) because they let uncertainties be modeled, whereas such uncertainties cannot be modeled by type-1 fuzzy sets. A block diagram of a type-2 FLS is depicted in Fig. 3. This kind of FLS is used in fuzzy logic control, fuzzy logic signal processing, rule-based classification, etc., and is sometimes referred to as a \"function approximation\" application of fuzzy sets, because the FLS is designed to minimize an error function.\nThe following discussions, about the four components in the Fig. 3 rule-based FLS, are given for an interval type-2 FLS, because to date they are the most popular kind of type-2 FLS; however, most of the discussions are also applicable for a general type-2 FLS.\nRules, which are either provided by subject experts or are extracted from numerical data, are expressed as a collection of IF-THEN statements, e.g.,\nFuzzy sets are associated with the terms that appear in the antecedents (IF-part) or consequents (THEN-part) of rules, and with the inputs to and the outputs of the FLS.
Membership functions are used to describe these fuzzy sets, and in a type-1 FLS they are all type-1 fuzzy sets, whereas in an interval type-2 FLS at least one membership function is an interval type-2 fuzzy set.\nAn interval type-2 FLS lets any one or all of the following kinds of uncertainties be quantified:\nIn Fig. 3, measured (crisp) inputs are first transformed into fuzzy sets in the Fuzzifier block because it is fuzzy sets and not numbers that activate the rules which are described in terms of fuzzy sets and not numbers. Three kinds of fuzzifiers are possible in an interval type-2 FLS. When measurements are:\nIn Fig. 3, after measurements are fuzzified, the resulting input fuzzy sets are mapped into fuzzy output sets by the Inference block. This is accomplished by first quantifying each rule using fuzzy set theory, and by then using the mathematics of fuzzy sets to establish the output of each rule, with the help of an inference mechanism. If there are \"M\" rules then the fuzzy input sets to the Inference block will activate only a subset of those rules, where the subset contains at least one rule and usually way fewer than \"M\" rules. The inference is done one rule at a time. So, at the output of the Inference block, there will be one or more \"fired-rule fuzzy output sets\".\nIn most engineering applications of an FLS, a number (and not a fuzzy set) is needed as its final output, e.g., the consequent of the rule given above is \"Rotate the valve a bit to the right.\" No automatic valve will know what this means because \"a bit to the right\" is a linguistic expression, and a valve must be turned by numerical values, i.e. by a certain number of degrees. Consequently, the fired-rule output fuzzy sets have to be converted into a number, and this is done in the Fig. 3 Output Processing block.\nIn a type-1 FLS, output processing, called \"defuzzification\", maps a type-1 fuzzy set into a number. 
There are many ways for doing this, e.g., compute the union of the fired-rule output fuzzy sets (the result is another type-1 fuzzy set) and then compute the center of gravity of the membership function for that set; compute a weighted average of the centers of gravity of each of the fired rule consequent membership functions; etc.\nThings are somewhat more complicated for an interval type-2 FLS, because to go from an interval type-2 fuzzy set to a number (usually) requires two steps (Fig. 3). The first step, called \"type-reduction\", is where an interval type-2 fuzzy set is reduced to an interval-valued type-1 fuzzy set. There are as many type-reduction methods as there are type-1 defuzzification methods. An algorithm developed by Karnik and Mendel now known as the \"KM algorithm\" is used for type-reduction. Although this algorithm is iterative, it is very fast.\nThe second step of Output Processing, which occurs after type-reduction, is still called \"defuzzification\". Because a type-reduced set of an interval type-2 fuzzy set is always a finite interval of numbers, the defuzzified value is just the average of the two end-points of this interval.\nIt is clear from Fig. 3 that there can be two outputs to an interval type-2 FLS—crisp numerical values and the type-reduced set. The latter provides a measure of the uncertainties that have flowed through the interval type-2 FLS, due to the (possibly) uncertain input measurements that have activated rules whose antecedents or consequents or both are uncertain. Just as standard deviation is widely used in probability and statistics to provide a measure of unpredictable uncertainty about a mean value, the type-reduced set can provide a measure of uncertainty about the crisp output of an interval type-2 FLS.\nComputing with words.\nAnother application for fuzzy sets has also been inspired by Zadeh — \"Computing with Words\". Different acronyms have been used for \"computing with words,\" e.g., CW and CWW. 
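The two Output Processing steps just described, type-reduction followed by defuzzification, can be sketched numerically. A minimal sketch: the discretized domain and the LMF/UMF grades below are invented for illustration, and rather than running the iterative KM algorithm, the sketch enumerates every candidate switch point, which yields the same interval endpoints on a small grid.

```python
# Sketch of interval type-2 output processing: type-reduce a discretized
# interval type-2 set to its centroid interval [c_l, c_r], then defuzzify
# as the interval's midpoint. The Karnik-Mendel (KM) algorithm locates the
# optimal "switch point" iteratively; here every switch point is enumerated,
# which gives the same endpoints on a small grid. All grades are invented.

def type_reduce(xs, lmf, umf):
    """Centroid interval of an interval type-2 set sampled at points xs
    with lower (lmf) and upper (umf) membership grades."""
    n = len(xs)

    def centroid(thetas):
        return sum(x * t for x, t in zip(xs, thetas)) / sum(thetas)

    # Left endpoint: upper memberships up to switch point k, lower after.
    c_l = min(centroid([umf[i] if i <= k else lmf[i] for i in range(n)])
              for k in range(-1, n))
    # Right endpoint: lower memberships up to switch point k, upper after.
    c_r = max(centroid([lmf[i] if i <= k else umf[i] for i in range(n)])
              for k in range(-1, n))
    return c_l, c_r

xs  = [0.0, 1.0, 2.0, 3.0, 4.0]
lmf = [0.1, 0.4, 0.6, 0.4, 0.1]   # lower membership function (LMF)
umf = [0.3, 0.7, 1.0, 0.7, 0.3]   # upper membership function (UMF)

c_l, c_r = type_reduce(xs, lmf, umf)
crisp = (c_l + c_r) / 2.0         # defuzzified (crisp) output
```

Because the example FOU is symmetric about its center, the defuzzified value lands at that center, while the width of the type-reduced interval [c_l, c_r] reflects the uncertainty carried through the set.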
According to Zadeh:\nOf course, he did not mean that computers would actually compute using words—single words or phrases—rather than numbers. He meant that computers would be activated by words, which would be converted into a mathematical representation using fuzzy sets and that these fuzzy sets would be mapped by a CWW engine into some other fuzzy set after which the latter would be converted back into a word. A natural question to ask is: Which kind of fuzzy set—type-1 or type-2—should be used as a model for a word? Mendel has argued, on the basis of Karl Popper's concept of \"falsificationism\", that using a type-1 fuzzy set as a model for a word is scientifically incorrect. An interval type-2 fuzzy set should be used as a (first-order uncertainty) model for a word. Much research is underway about CWW.\nApplications.\nType-2 fuzzy sets were applied in the following areas:\nSoftware.\nFreeware MATLAB implementations, which cover general and interval type-2 fuzzy sets and systems, as well as type-1 fuzzy systems, are available at: http://sipi.usc.edu/~mendel/software.\nSoftware supporting discrete interval type-2 fuzzy logic systems is available at:\nDIT2FLS Toolbox - http://dit2fls.com/projects/dit2fls-toolbox/\nDIT2FLS Library Package - http://dit2fls.com/projects/dit2fls-library-package/\nJava libraries including source code for type-1, interval- and general type-2 fuzzy systems are available at: http://juzzy.wagnerweb.net/.\nPython library for type 1 and type 2 fuzzy sets is available at: https://github.com/carmelgafa/type2fuzzy\nPython library for interval type 2 fuzzy sets and systems is available at: https://github.com/Haghrah/PyIT2FLS\nAn open source Matlab/Simulink Toolbox for Interval Type-2 Fuzzy Logic Systems is available at: http://web.itu.edu.tr/kumbasart/type2fuzzy.htm\nExternal links.\nThere are two \"IEEE Expert Now\" multi-media modules that can be accessed from the IEEE at: http://www.ieee.org/web/education/Expert_Now_IEEE/Catalog/AI.html", 
"Automation-Control": 0.7916178703, "Qwen2": "Yes"} {"id": "39174308", "revid": "575347", "url": "https://en.wikipedia.org/wiki?curid=39174308", "title": "Solid ground curing", "text": "Solid ground curing (SGC) is a photo-polymer-based additive manufacturing (or 3D printing) technology used for producing models, prototypes, patterns, and production parts, in which the layer geometry is produced by means of a high-powered UV lamp shone through a mask. Because the basis of solid ground curing is the exposure of each layer of the model through a mask, the processing time for a layer is independent of the layer's complexity. SGC was developed and commercialized by Cubital Ltd. of Israel in 1986 under the alternative name Soldier System. While the method offered good accuracy and a very high fabrication rate, it suffered from high acquisition and operating costs due to system complexity, which led to poor market acceptance. While the company still exists, systems are no longer being sold. Nevertheless, SGC remains an interesting example of the many technologies other than stereolithography, its predecessor among rapid prototyping processes, that also utilize photo-polymer materials. Although Objet Geometries Ltd. of Israel has retained the intellectual property of the process since the closure of Cubital Ltd. in 2002, the technology is no longer being produced.\nTechnology.\nSolid ground curing follows the general process of hardening photopolymers, but it lights and hardens the entire surface of a layer at once, using specially prepared masks. In the SGC process, each layer of the prototype is cured by exposure to an ultraviolet (UV) lamp rather than by laser scanning, so every portion of a layer is cured simultaneously and no post-curing process is required. 
The process contains the following steps.\nAdvantages and disadvantages.\nThe primary advantage of the solid ground curing system is that it does not require a support structure, since wax is used to fill the voids; this also allows highly accurate products to be obtained. The model produced by the SGC process is comparatively accurate in the Z-direction because the layer is milled after each light-exposure step. Although the process offers good accuracy coupled with high throughput, it produces a great deal of waste and its operating costs are comparatively high due to system complexity.", "Automation-Control": 0.6179499626, "Qwen2": "Yes"} {"id": "50091315", "revid": "45839555", "url": "https://en.wikipedia.org/wiki?curid=50091315", "title": "Jawaharpur Super Thermal Power Station", "text": "Jawaharpur Super Thermal Power Station is an upcoming thermal power plant at Malawan town in Etah district, Uttar Pradesh, India. It is one of the coal-based power plants of Jawaharpur Vidyut Utpadan Nigam Ltd. (JVUNL), a 100% subsidiary of UP Rajya Vidyut Utpadan Nigam Limited (UPRVUNL).\nThe contract was awarded to Doosan Power Systems India (DPSI).\nCapacity.\nThe planned capacity of the power plant is 1320 MW (2 × 660 MW), with commissioning expected in 2021.", "Automation-Control": 0.7988624573, "Qwen2": "Yes"} {"id": "56722248", "revid": "892079", "url": "https://en.wikipedia.org/wiki?curid=56722248", "title": "Friction stir spot welding", "text": "Friction stir spot welding is a pressure welding process that operates below the melting point of the workpieces. It is a variant of friction stir welding.\nProcess description.\nIn friction stir spot welding, individual spot welds are created by pressing a rotating tool with high force onto the top surface of two sheets that overlap each other in the lap joint. The frictional heat and the high pressure plastify the workpiece material, so that the tip of the pin plunges into the joint area between the two sheets and stirs up the oxides. 
The pin of the tool is plunged into the sheets until the shoulder is in contact with the surface of the top sheet. The shoulder applies a high forging pressure, which bonds the components metallurgically without melting. After a short dwell time, the tool is pulled out of the workpieces again so that a spot weld can be made about every 5 seconds.\nThe tool consists of a rotating pin and a shoulder. The pin is the part of the tool that penetrates into the materials. Both the pin and the shoulder may be profiled to push the plasticized material in a particular direction and to efficiently break-up and disperse the oxide skins on the adjacent surfaces. After retracting the tool, a hole remains, when using one-piece tools, which have already proven themselves as very reliable in the automotive and the rail vehicle industry. Often the rotating tool is surrounded by a non-rotating clamping ring with which the workpieces are pressed firmly against each other before and during welding by applying a clamping force. The clamping ring can also be used to reduce the pressing out of plasticized material to avoid the formation of burrs or beads to apply inert gas or to cool the tool via compressed air.\nThe most important process parameters are the speed and contact pressure. This results in the plunge feed rate for a given workpiece material. Modern spot welding guns can be used either via position control or force control or via a product-specific programmed force-displacement control. Often, position control is used until a certain displacement is reached, and then the control system is switched to force control during the dwell time. 
Even during the force-controlled dwell time, certain position values can be specified, which should not be undershot or exceeded.\nSpot welding guns.\nFriction stir spot welding is performed with a spot welding gun, which is mounted on a console, flanged to an articulated robot, or manually guided to the component with a balancer.\nProcess advantages.\nFriction spot welding is characterized by a number of process advantages. Damage to the material caused by extreme heat, such as occurs in laser or arc welding, does not arise. In particular, in the case of artificially aged aluminum alloys, the strength in the weld seam and the heat-affected zone is much higher than in conventional welding methods.\nIndustrial use.\nFriction stir spot welds have a high strength, so they are even suitable for parts that are exposed to particularly high loads. In addition to automotive and rail vehicle construction, the aerospace industry is developing the process, e.g., for welding cockpit doors for helicopters. In the electrical industry, aluminum and copper can be friction stir spot welded. 
Other applications are in façade and furniture manufacture, where the low heat input, especially in anodized sheets, leads to excellent optical properties.", "Automation-Control": 0.953012228, "Qwen2": "Yes"} {"id": "4021739", "revid": "141808", "url": "https://en.wikipedia.org/wiki?curid=4021739", "title": "LaSalle's invariance principle", "text": "LaSalle's invariance principle (also known as the invariance principle, Barbashin-Krasovskii-LaSalle principle, or Krasovskii-LaSalle principle) is a criterion for the asymptotic stability of an autonomous (possibly nonlinear) dynamical system.\nGlobal version.\nSuppose a system is represented as\nwhere formula_2 is the vector of variables, with\nIf a formula_4(see Smoothness) function formula_5 can be found such that\nthen the set of accumulation points of any trajectory is contained in formula_8 where formula_8 is the union of complete trajectories contained entirely in the set formula_10. \nIf we additionally have that the function formula_11 is positive definite, i.e.\nand if formula_8 contains no trajectory of the system except the trivial trajectory formula_16 for formula_17, then the origin is asymptotically stable.\nFurthermore, if formula_11 is radially unbounded, i.e.\nthen the origin is globally asymptotically stable.\nLocal version.\nIf \nhold only for formula_24 in some neighborhood formula_25 of the origin, and the set\ndoes not contain any trajectories of the system besides the trajectory formula_27, then the local version of the invariance principle states that the origin is locally asymptotically stable.\nRelation to Lyapunov theory.\nIf formula_28 is negative definite, then the global asymptotic stability of the origin is a consequence of Lyapunov's second theorem. The invariance principle gives a criterion for asymptotic stability in the case when formula_29 is only negative semidefinite.\nExamples.\nSimple example.\nExample taken from.\nConsider the vector field formula_30 in the plane. 
The function formula_31 satisfies formula_32, and is radially unbounded, showing that the origin is globally asymptotically stable.\nPendulum with friction.\nThis section will apply the invariance principle to establish the local asymptotic stability of a simple system, the pendulum with friction. This system can be modeled with the differential equation\nwhere formula_34 is the angle the pendulum makes with the vertical normal, formula_35 is the mass of the pendulum, formula_36 is the length of the pendulum, formula_37 is the friction coefficient, and \"g\" is acceleration due to gravity.\nThis, in turn, can be written as the system of equations\nUsing the invariance principle, it can be shown that all trajectories that begin in a ball of certain size around the origin formula_40 asymptotically converge to the origin. We define formula_41 as\nThis formula_41 is simply the scaled energy of the system. Clearly, formula_44 is positive definite in an open ball of radius formula_45 around the origin. Computing the derivative,\nObserve that formula_47. If it were true that formula_48, we could conclude that every trajectory approaches the origin by Lyapunov's second theorem. Unfortunately, formula_49 and formula_50 is only negative semidefinite since formula_51 can be non-zero when formula_52. However, the set\nwhich is simply the set\ndoes not contain any trajectory of the system, except the trivial trajectory x = 0. Indeed, if at some time formula_55, formula_56, then because \nformula_51 must be less than formula_45 away from the origin, formula_59 and formula_60. As a result, the trajectory will not stay in the set formula_61.\nAll the conditions of the local version of the invariance principle are satisfied, and we can conclude that every trajectory that begins in some neighborhood of the origin will converge to the origin as formula_62.\nHistory.\nThe general result was independently discovered by J.P. LaSalle (then at RIAS) and N.N. 
Krasovskii, who published in 1960 and 1959 respectively. While LaSalle was the first author in the West to publish the general theorem in 1960, a special case of the theorem was communicated in 1952 by Barbashin and Krasovskii, followed by a publication of the general result in 1959 by Krasovskii.", "Automation-Control": 0.7337114811, "Qwen2": "Yes"} {"id": "2688537", "revid": "35246606", "url": "https://en.wikipedia.org/wiki?curid=2688537", "title": "Adaptive Binary Optimization", "text": "Adaptive Binary Optimization, (ABO), is a supposed lossless image compression algorithm by MatrixView Ltd. It uses a patented method to compress \"the high correlation found in digital content signals\" and additional compression with standard entropy encoding algorithms such as Huffman coding.", "Automation-Control": 0.98618716, "Qwen2": "Yes"} {"id": "6842104", "revid": "28903366", "url": "https://en.wikipedia.org/wiki?curid=6842104", "title": "Galahad library", "text": "The Galahad library is a thread-safe library of packages for the solution of mathematical optimization problems. The areas covered by the library are unconstrained and bound-constrained optimization, quadratic programming, nonlinear programming, systems of nonlinear equations and inequalities, and non-linear least squares problems. The library is mostly written in the Fortran 90 programming language.\nThe name of the library originates from its major package for general nonlinear programming, LANCELOT-B, the successor of the original augmented Lagrangian package LANCELOT of Conn, Gould and Toint.\nOther packages in the library include:\nPackages in the GALAHAD library accept problems modeled in either the Standard Input Format (SIF), or the AMPL modeling language. 
For problems modeled in the SIF, the GALAHAD library naturally relies upon the CUTEr package, an optimization toolbox providing all low-level functionalities required by solvers.\nThe library is available on several popular computing platforms, including Compaq (DEC) Alpha, Cray, HP, IBM RS/6000, Intel-like PCs, SGI and Sun. It is designed to be easily adapted to other platforms. Support is provided for many operating systems, including Tru64, Linux, HP-UX, AIX, IRIX and Solaris, and for a variety of popular Fortran 90 compilers on these platforms and operating systems.\nThe GALAHAD Library is authored and maintained by N.I.M. Gould, D. Orban and Ph.L. Toint.", "Automation-Control": 0.86267066, "Qwen2": "Yes"} {"id": "29251171", "revid": "1024557680", "url": "https://en.wikipedia.org/wiki?curid=29251171", "title": "Multiscanning", "text": "Multiscanning is running multiple anti-malware or antivirus engines concurrently. Traditionally, only a single engine can actively scan a system at a given time. Using multiple engines simultaneously can result in conflicts that lead to system freezes and application failures. However, a number of security applications and application suites have optimized multiple engines to work together.\nReason.\nTesting agencies published results showing that no single antivirus engine is 100% effective against every malware threat. Because each engine uses different scanning methodologies and updates their malware definition files at various frequencies, using multiple engines increases the likelihood of catching malware before it can affect a system or network.", "Automation-Control": 0.9259859324, "Qwen2": "Yes"} {"id": "38718692", "revid": "27823944", "url": "https://en.wikipedia.org/wiki?curid=38718692", "title": "Login VSI", "text": "Login VSI maximizes the end-user experience for all digital workspaces. 
Login VSI's flagship product, Login Enterprise, is an automated testing platform that predicts performance, ensures business continuity and reduces risk. Login Enterprise tests the desktop and applications as a whole, from pre-production through to production. Login Enterprise includes standard “out-of-the-box” application template workloads. The technology is used by large organizations using Citrix XenApp, Citrix XenDesktop, VMware Horizon and Microsoft RDS. Login VSI has over 400 customers in 50 countries.\nLogin VSI evolved from the consulting firm Login Consultants. The first product was commercially released in 2008 and was free. As large enterprises began adopting the product, the need to commercialize it became evident, and Login VSI was formed in 2012.\nIn 2016, Login VSI announced the public launch of its second product, Login PI. Login PI is an active monitoring tool that constantly runs a single virtual user to monitor and safeguard the performance and availability of virtual desktop infrastructures and associated business applications. Both products use virtual users (or synthetic users) to test systems, without the need for real users.\nIn 2018, Login VSI announced the Login VSI Enterprise Edition (a combination of Login VSI and Login PI), and the Login VSI Vendor Edition (focused on the industry-standard tests done by this audience).", "Automation-Control": 0.6680128574, "Qwen2": "Yes"} {"id": "33829132", "revid": "7611264", "url": "https://en.wikipedia.org/wiki?curid=33829132", "title": "Differential dynamic programming", "text": "Differential dynamic programming (DDP) is an optimal control algorithm of the trajectory optimization class. The algorithm was introduced in 1966 by Mayne and subsequently analysed in Jacobson and Mayne's eponymous book. The algorithm uses locally-quadratic models of the dynamics and cost functions, and displays quadratic convergence. 
It is closely related to Pantoja's step-wise Newton's method.\nFinite-horizon discrete-time problems.\nThe dynamics\ndescribe the evolution of the state formula_1 given the control formula_2 from time formula_3 to time formula_4. The \"total cost\" formula_5 is the sum of running costs formula_6 and final cost formula_7, incurred when starting from state formula_8 and applying the control sequence formula_9 until the horizon is reached:\nwhere formula_11, and the formula_12 for formula_13 are given by . The solution of the optimal control problem is the minimizing control sequence\nformula_14\n\"Trajectory optimization\" means finding formula_15 for a particular formula_16, rather than for all possible initial states.\nDynamic programming.\nLet formula_17 be the partial control sequence formula_18 and define the \"cost-to-go\" formula_19 as the partial sum of costs from formula_3 to formula_21:\nThe optimal cost-to-go or \"value function\" at time formula_3 is the cost-to-go given the minimizing control sequence:\nSetting formula_25, the dynamic programming principle reduces the minimization over an entire sequence of controls to a sequence of minimizations over a single control, proceeding backwards in time:\nThis is the Bellman equation.\nDifferential dynamic programming.\nDDP proceeds by iteratively performing a backward pass on the nominal trajectory to generate a new control sequence, and then a forward-pass to compute and evaluate a new nominal trajectory. We begin with the backward pass. 
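The Bellman backward recursion just described can be illustrated on a small discrete problem. In the following sketch the states, controls, costs and horizon are all invented for illustration; the recursion's optimal costs are checked against brute-force enumeration of every control sequence.

```python
from itertools import product

# Finite-horizon dynamic programming on a toy discrete problem:
# states 0..4, controls {-1, 0, +1}, dynamics x' = clip(x + u),
# running cost x^2 + |u|, final cost x^2. All values are illustrative.
STATES = range(5)
CONTROLS = (-1, 0, 1)
N = 4  # horizon

def step(x, u):
    return min(max(x + u, 0), 4)

def cost(x, u):
    return x * x + abs(u)

def final_cost(x):
    return x * x

def backward_pass():
    """Bellman recursion: V_N = final cost, then backwards in time
    V_i(x) = min_u [ cost(x, u) + V_{i+1}(x') ]."""
    V = {x: final_cost(x) for x in STATES}
    policy = []
    for _ in range(N):
        Q = {x: {u: cost(x, u) + V[step(x, u)] for u in CONTROLS} for x in STATES}
        policy.append({x: min(Q[x], key=Q[x].get) for x in STATES})
        V = {x: min(Q[x].values()) for x in STATES}
    policy.reverse()  # after reversal, policy[i] is the minimizer at time i
    return V, policy

def brute_force(x0):
    """Minimum total cost over all |CONTROLS|^N control sequences."""
    best = float("inf")
    for us in product(CONTROLS, repeat=N):
        x, total = x0, 0
        for u in us:
            total += cost(x, u)
            x = step(x, u)
        best = min(best, total + final_cost(x))
    return best

V0, policy = backward_pass()
```

The recursion evaluates only |STATES| × |CONTROLS| values per stage, while the brute-force check grows exponentially in the horizon, which is exactly the reduction the dynamic programming principle provides.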
If\nis the argument of the formula_27 operator in , let formula_28 be the variation of this quantity around the formula_3-th formula_30 pair:\nand expand to second order\nQ(\\delta\\mathbf{x},\\delta\\mathbf{u}) \\approx \\frac{1}{2}\\begin{bmatrix}1\\\\ \\delta\\mathbf{x}\\\\ \\delta\\mathbf{u}\\end{bmatrix}^{\\mathsf{T}}\\begin{bmatrix}0 & Q_{\\mathbf{x}}^{\\mathsf{T}} & Q_{\\mathbf{u}}^{\\mathsf{T}}\\\\ Q_{\\mathbf{x}} & Q_{\\mathbf{x}\\mathbf{x}} & Q_{\\mathbf{x}\\mathbf{u}}\\\\ Q_{\\mathbf{u}} & Q_{\\mathbf{u}\\mathbf{x}} & Q_{\\mathbf{u}\\mathbf{u}}\\end{bmatrix}\\begin{bmatrix}1\\\\ \\delta\\mathbf{x}\\\\ \\delta\\mathbf{u}\\end{bmatrix}\nThe formula_28 notation used here is a variant of the notation of Morimoto where subscripts denote differentiation in denominator layout.\nDropping the index formula_3 for readability, primes denoting the next time-step formula_34, the expansion coefficients are\nThe last terms in the last three equations denote contraction of a vector with a tensor. Minimizing the quadratic approximation with respect to formula_36 we have\n\\delta\\mathbf{u}^* = \\operatorname{argmin}\\limits_{\\delta \\mathbf{u}}Q(\\delta \\mathbf{x},\\delta\\mathbf{u})=-Q_{\\mathbf{u}\\mathbf{u}}^{-1}(Q_\\mathbf{u}+Q_{\\mathbf{u}\\mathbf{x}}\\delta \\mathbf{x}),\ngiving an open-loop term formula_37 and a feedback gain term formula_38. Plugging the result back into , we now have a quadratic model of the value at time formula_3:\nRecursively computing the local quadratic models of formula_41 and the control modifications formula_42, from formula_43 down to formula_44, constitutes the backward pass. As above, the value function is initialized with formula_25. Once the backward pass is completed, a forward pass computes a new trajectory:\nThe backward passes and forward passes are iterated until convergence.\nRegularization and line-search.\nDifferential dynamic programming is a second-order algorithm like Newton's method. It therefore takes large steps toward the minimum and often requires regularization and/or line-search to achieve convergence.\nRegularization in the DDP context means ensuring that the formula_47 matrix in is positive definite. Line-search in DDP amounts to scaling the open-loop control modification formula_48 by some formula_49.\nMonte Carlo version.\nSampled differential dynamic programming (SaDDP) is a Monte Carlo variant of differential dynamic programming. 
It is based on treating the quadratic cost of differential dynamic programming as the energy of a Boltzmann distribution. In this way the quantities of DDP can be matched to the statistics of a multidimensional normal distribution, and the statistics can be recomputed from sampled trajectories without differentiation.\nSampled differential dynamic programming has been extended to Path Integral Policy Improvement with Differential Dynamic Programming. This creates a link between differential dynamic programming and path integral control, which is a framework of stochastic optimal control.\nConstrained problems.\nInterior Point Differential dynamic programming (IPDDP) is an interior-point method generalization of DDP that can address the optimal control problem with nonlinear state and input constraints.", "Automation-Control": 0.8085710406, "Qwen2": "Yes"} {"id": "19892153", "revid": "37666339", "url": "https://en.wikipedia.org/wiki?curid=19892153", "title": "Online machine learning", "text": "In computer science, online machine learning is a method of machine learning in which data becomes available in a sequential order and is used to update the best predictor for future data at each step, as opposed to batch learning techniques which generate the best predictor by learning on the entire training data set at once. Online learning is a common technique used in areas of machine learning where it is computationally infeasible to train over the entire dataset, necessitating the use of out-of-core algorithms. 
It is also used in situations where it is necessary for the algorithm to dynamically adapt to new patterns in the data, or when the data itself is generated as a function of time, e.g., stock price prediction.\nOnline learning algorithms may be prone to catastrophic interference, a problem that can be addressed by incremental learning approaches.\nIntroduction.\nIn the setting of supervised learning, a function of formula_1 is to be learned, where formula_2 is thought of as a space of inputs and formula_3 as a space of outputs, that predicts well on instances that are drawn from a joint probability distribution formula_4 on formula_5. In reality, the learner never knows the true distribution formula_4 over instances. Instead, the learner usually has access to a training set of examples formula_7. In this setting, the loss function is given as formula_8, such that formula_9 measures the difference between the predicted value formula_10 and the true value formula_11. The ideal goal is to select a function formula_12, where formula_13 is a space of functions called a hypothesis space, so that some notion of total loss is minimised. Depending on the type of model (statistical or adversarial), one can devise different notions of loss, which lead to different learning algorithms.\nStatistical view of online learning.\nIn statistical learning models, the training sample formula_14 are assumed to have been drawn from the true distribution formula_4 and the objective is to minimize the expected \"risk\"\nA common paradigm in this situation is to estimate a function formula_17 through empirical risk minimization or regularized empirical risk minimization (usually Tikhonov regularization). 
The choice of loss function here gives rise to several well-known learning algorithms such as regularized least squares and support vector machines.\nA purely online model in this category would learn based on just the new input formula_18, the current best predictor formula_19 and some extra stored information (which is usually expected to have storage requirements independent of training data size). For many formulations, for example nonlinear kernel methods, true online learning is not possible, though a form of hybrid online learning with recursive algorithms can be used where formula_20 is permitted to depend on formula_21 and all previous data points formula_22. In this case, the space requirements are no longer guaranteed to be constant since it requires storing all previous data points, but the solution may take less time to compute with the addition of a new data point, as compared to batch learning techniques.\nA common strategy to overcome the above issues is to learn using mini-batches, which process a small batch of formula_23 data points at a time, this can be considered as pseudo-online learning for formula_24 much smaller than the total number of training points. Mini-batch techniques are used with repeated passing over the training data to obtain optimized out-of-core versions of machine learning algorithms, for example, stochastic gradient descent. When combined with backpropagation, this is currently the de facto training method for training artificial neural networks.\nExample: linear least squares.\nThe simple example of linear least squares is used to explain a variety of ideas in online learning. 
The ideas are general enough to be applied to other settings, for example, with other convex loss functions.\nBatch learning.\nConsider the setting of supervised learning with formula_25 being a linear function to be learned:\nwhere formula_27 is a vector of inputs (data points) and formula_28 is a linear filter vector.\nThe goal is to compute the filter vector formula_29.\nTo this end, a square loss function \nis used to compute the vector formula_29 that minimizes the empirical loss\nwhere\nLet formula_2 be the formula_35 data matrix and let formula_36 be the column vector of target values after the arrival of the first formula_37 data points.\nAssuming that the covariance matrix formula_38 is invertible (otherwise it is preferable to proceed in a similar fashion with Tikhonov regularization), the best solution formula_39 to the linear least squares problem is given by\nNow, calculating the covariance matrix formula_41 takes time formula_42, inverting the formula_43 matrix takes time formula_44, while the rest of the multiplication takes time formula_45, giving a total time of formula_46. When there are formula_47 total points in the dataset, to recompute the solution after the arrival of every datapoint formula_48, the naive approach will have a total complexity formula_49. Note that if the matrix formula_50 is stored, then updating it at each step needs only the addition of formula_51, which takes formula_52 time, reducing the total time to formula_53, but with an additional storage space of formula_52 to store formula_50.\nOnline learning: recursive least squares.\nThe recursive least squares (RLS) algorithm considers an online approach to the least squares problem. It can be shown that by initialising formula_56 and formula_57, the solution of the linear least squares problem given in the previous section can be computed by the following iteration:\nThe above iteration algorithm can be proved using induction on formula_60. The proof also shows that formula_61. 
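The RLS iteration can be sketched in a few lines of pure Python. The following minimal example solves the Tikhonov-regularised variant (the data points and the regularisation weight lam are illustrative choices); P plays the role of the regularised inverse covariance matrix, and because the recursion is algebraically exact, the final weights agree with the direct batch solve of the same problem.

```python
# Sketch of recursive least squares (RLS) for the regularised problem
# min_w  sum_i (x_i . w - y_i)^2 + lam * |w|^2,
# processing one data point at a time as in the online iteration above.

def rls(data, dim, lam=0.1):
    # P starts as (lam * I)^(-1); w starts at zero.
    P = [[(1.0 / lam if i == j else 0.0) for j in range(dim)] for i in range(dim)]
    w = [0.0] * dim
    for x, y in data:
        Px = [sum(P[i][j] * x[j] for j in range(dim)) for i in range(dim)]
        denom = 1.0 + sum(x[i] * Px[i] for i in range(dim))
        g = [v / denom for v in Px]                     # gain vector
        err = y - sum(x[i] * w[i] for i in range(dim))  # prediction error
        w = [w[i] + g[i] * err for i in range(dim)]
        # P <- P - g (x^T P); since P is symmetric, x^T P equals Px^T.
        P = [[P[i][j] - g[i] * Px[j] for j in range(dim)] for i in range(dim)]
    return w

data = [((1.0, 0.0), 1.0),
        ((0.0, 1.0), 2.0),
        ((1.0, 1.0), 3.1),
        ((2.0, -1.0), 0.2)]

w = rls(data, dim=2)
```

Each update costs a fixed amount of arithmetic in the dimension of w, independent of how many points have already been processed, which is the complexity advantage discussed above.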
\nOne can also view RLS in the context of adaptive filters (see RLS).\nThe complexity for formula_47 steps of this algorithm is formula_63, which is an order of magnitude faster than the corresponding batch learning complexity. The storage requirement at every step formula_37 is the matrix formula_65, which is constant at formula_45. For the case when formula_50 is not invertible, consider the regularised version of the problem with \nloss function formula_68. Then, it is easy to show that the same algorithm works with formula_69, and the iterations proceed to give formula_70.\nStochastic gradient descent.\nWhen this \nis replaced by\nor formula_73 by formula_74, this becomes the stochastic gradient descent algorithm. In this case, the complexity for formula_47 steps of this algorithm reduces to formula_76. The storage requirements at every step formula_37 are constant at formula_78.\nHowever, the stepsize formula_79 needs to be chosen carefully to solve the expected risk minimization problem, as detailed above. By choosing a decaying step size formula_80, one can prove the convergence of the average iterate formula_81. This setting is a special case of stochastic optimization, a well-known problem in optimization.\nIncremental stochastic gradient descent.\nIn practice, one can perform multiple stochastic gradient passes (also called cycles or epochs) over the data. The algorithm thus obtained is\ncalled the incremental gradient method and corresponds to an iteration\nThe main difference from the stochastic gradient method is that here a sequence formula_83 is chosen to decide which training point is visited in the formula_60-th step. Such a sequence can be stochastic or deterministic. The number of iterations is then decoupled from the number of points (each point can be considered more than once). The incremental gradient method can be shown to provide a minimizer of the empirical risk.
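A minimal sketch of the incremental (multi-pass) gradient method for one-dimensional least squares follows. The deterministic cyclic visiting order, step size, epoch count, and data are hypothetical; with a small constant step the iterate settles in a small neighbourhood of the empirical minimiser rather than exactly on it:

```python
def incremental_gradient(data, lr=0.005, epochs=200):
    """Multi-pass cyclic incremental gradient on the empirical risk
    (1/n) * sum of 0.5*(w*x - y)^2: each epoch visits every point in order."""
    w = 0.0
    for _ in range(epochs):
        for x, y in data:                 # deterministic cyclic sequence
            w -= lr * (w * x - y) * x     # gradient of a single summand
    return w

# for this set the empirical minimiser is w = (sum x*y)/(sum x^2) = 14/10 = 1.4
w = incremental_gradient([(1.0, 2.0), (3.0, 4.0)])
```

Because the number of updates is decoupled from the number of points, `epochs * len(data)` iterations are performed over just two training points.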
Incremental techniques can be advantageous when considering objective functions made up of a sum of many terms, e.g. an empirical error corresponding to a very large dataset.\nKernel methods.\nKernels can be used to extend the above algorithms to non-parametric models (or models where the parameters form an infinite-dimensional space). The corresponding procedure will no longer be truly online and instead involves storing all the data points, but is still faster than the brute force method.\nThis discussion is restricted to the case of the square loss, though it can be extended to any convex loss. It can be shown by an easy induction that if formula_85 is the data matrix and formula_86 is the output after formula_60 steps of the SGD algorithm, then\nwhere formula_89 and the sequence formula_90 satisfies the recursion:\nNotice that here formula_94 is just the standard kernel on formula_95, and the predictor is of the form \nNow, if a general kernel formula_97 is introduced instead and the predictor is \nthen the same proof shows that the predictor minimising the least squares loss is obtained by changing the above recursion to\nThe above expression requires storing all the data for updating formula_90. The total time complexity for the recursion when evaluating for the formula_101-th datapoint is formula_102, where formula_103 is the cost of evaluating the kernel on a single pair of points.\nThus, the use of the kernel has allowed a move from a finite-dimensional parameter space formula_104 to a possibly infinite-dimensional feature space represented by a kernel formula_97, by instead performing the recursion on the space of parameters formula_106, whose dimension is the same as the size of the training dataset. In general, this is a consequence of the representer theorem.
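The coefficient recursion for the kernelised predictor described above can be sketched as follows. This is a hedged illustration: the kernel, learning rate, and data are hypothetical, and each new point appends one coefficient computed from the current residual, so storage grows with the number of points seen:

```python
def kernel_online_sgd(stream, kernel, lr=0.5):
    """Online kernel least squares: one stored coefficient per seen point.
    The predictor is f(x) = sum over j of c[j] * kernel(xs[j], x)."""
    xs, c = [], []
    def predict(x):
        return sum(cj * kernel(xj, x) for cj, xj in zip(c, xs))
    for x, y in stream:
        # new coefficient proportional to the residual of the current predictor
        c.append(lr * (y - predict(x)))
        xs.append(x)
    return xs, c, predict

lin = lambda a, b: a * b   # the linear kernel recovers ordinary linear SGD
xs, c, f = kernel_online_sgd([(1.0, 2.0), (1.0, 2.0), (1.0, 2.0)], lin)
# the prediction at x = 1 climbs 1.0, 1.5, 1.75, ... toward the target 2
```

Evaluating `predict` costs one kernel call per stored point, matching the per-step cost quoted in the text.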
The framework is that of repeated game playing as follows:\nFor formula_107\nThe goal is to minimize regret, or the difference between the cumulative loss and the loss of the best fixed point formula_113 in hindsight.\nAs an example, consider the case of online least squares linear regression. Here, the weight vectors come from the convex set formula_114, and nature sends back the convex loss function formula_115. Note here that formula_116 is implicitly sent with formula_117.\nHowever, some online prediction problems cannot be fitted into the framework of OCO. For example, in online classification, the prediction domain and the loss functions are not convex. In such scenarios, two simple techniques for convexification are used: randomisation and surrogate loss functions.\nSome simple online convex optimisation algorithms are:\nFollow the leader (FTL).\nThe simplest learning rule to try is to select (at the current step) the hypothesis that has the least loss over all past rounds. This algorithm is called Follow the leader, and is simply given at round formula_118 by:\nThis method can thus be viewed as a greedy algorithm. For the case of online quadratic optimization (where the loss function is formula_120), one can show a regret bound that grows as formula_121. However, similar bounds cannot be obtained for the FTL algorithm for other important families of models, like online linear optimization. To obtain such bounds, one modifies FTL by adding regularisation.\nFollow the regularised leader (FTRL).\nThis is a natural modification of FTL that is used to stabilise the FTL solutions and obtain better regret bounds. A regularisation function formula_122 is chosen and learning is performed in each round as follows:\nAs a special example, consider the case of online linear optimisation, i.e. where nature sends back loss functions of the form formula_124. Also, let formula_114. Suppose the regularisation function formula_126 is chosen for some positive number formula_127.
Then, one can show that the regret-minimising iteration becomes \nNote that this can be rewritten as formula_129, which looks exactly like online gradient descent.\nIf the decision set is instead some convex subspace of formula_95, the iterate would need to be projected onto it, leading to the modified update rule\nThis algorithm is known as lazy projection, as the vector formula_132 accumulates the gradients. It is also known as Nesterov's dual averaging algorithm. In this scenario of linear loss functions and quadratic regularisation, the regret is bounded by formula_133, and thus the average regret goes to zero, as desired.\nOnline subgradient descent (OSD).\nThe above proved a regret bound for linear loss functions formula_134. To generalise the algorithm to any convex loss function, the subgradient formula_135 of formula_117 is used as a linear approximation to formula_117 near formula_109, leading to the online subgradient descent algorithm:\nInitialise parameter formula_139\nFor formula_107\nOne can use the OSD algorithm to derive formula_133 regret bounds for the online version of SVMs for classification, which use the hinge loss formula_150\nOther algorithms.\nQuadratically regularised FTRL algorithms lead to lazily projected gradient algorithms as described above. To use the above for arbitrary convex functions and regularisers, one uses online mirror descent.
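The online subgradient descent scheme described above can be sketched for the hinge loss used by online SVMs. The step size, the projection interval, and the data stream below are illustrative assumptions, not values from the text:

```python
def online_subgradient_descent(stream, lr=0.5, radius=10.0):
    """OSD for the hinge loss l_t(w) = max(0, 1 - y*w*x): take a step along
    a subgradient, then project back onto the interval [-radius, radius]."""
    w = 0.0
    for x, y in stream:
        # a subgradient of the hinge loss at the current w
        g = -y * x if y * w * x < 1.0 else 0.0
        w -= lr * g
        w = max(-radius, min(radius, w))   # Euclidean projection onto the set
    return w

# a separable stream labelled by sign(x): w grows until all margins exceed 1
w = online_subgradient_descent([(1.0, 1), (-1.0, -1), (2.0, 1), (-2.0, -1)])
```

Once every point satisfies the margin condition, the subgradient is zero and the iterate stops moving.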
The optimal regularization in hindsight can be derived for linear loss functions; this leads to the AdaGrad algorithm.\nFor the Euclidean regularisation, one can show a regret bound of formula_133, which can be improved further to formula_152 for strongly convex and exp-concave loss functions.\nContinual learning.\nContinual learning means constantly improving the learned model by processing continuous\nstreams of information.\nContinual learning capabilities are essential for software systems and autonomous agents interacting in an ever-changing real world.\nHowever, continual learning is a challenge for machine learning and neural network models since the continual acquisition of incrementally available information from non-stationary data distributions\ngenerally leads to catastrophic forgetting. \nInterpretations of online learning.\nThe paradigm of online learning has different interpretations depending on the choice of the learning model, each of which has distinct implications about the predictive quality of the sequence of functions formula_153. The prototypical stochastic gradient descent algorithm is used for this discussion. As noted above, its recursion is given by\nThe first interpretation considers the stochastic gradient descent method as applied to the problem of minimizing the expected risk formula_155 defined above. Indeed, in the case of an infinite stream of data, since the examples formula_156 are assumed to be drawn i.i.d. from the distribution formula_4, the sequence of gradients of formula_158 in the above iteration is an i.i.d. sample of stochastic estimates of the gradient of the expected risk formula_155, and therefore one can apply complexity results for the stochastic gradient descent method to bound the deviation formula_160, where formula_161 is the minimizer of formula_155.
This interpretation is also valid in the case of a finite training set; although with multiple passes through the data the gradients are no longer independent, complexity results can still be obtained in special cases.\nThe second interpretation applies to the case of a finite training set and considers the SGD algorithm as an instance of the incremental gradient descent method. In this case, one instead looks at the empirical risk:\nSince the gradients of formula_158 in the incremental gradient descent iterations are also stochastic estimates of the gradient of formula_165, this interpretation is also related to the stochastic gradient descent method, but applied to minimize the empirical risk as opposed to the expected risk. Since this interpretation concerns the empirical risk and not the expected risk, multiple passes through the data are readily allowed and actually lead to tighter bounds on the deviations formula_166, where formula_167 is the minimizer of formula_165.\nSee also.\nLearning paradigms\nGeneral algorithms\nLearning models", "Automation-Control": 0.9130781889, "Qwen2": "Yes"} {"id": "31146", "revid": "44938552", "url": "https://en.wikipedia.org/wiki?curid=31146", "title": "Transfer function", "text": "In engineering, a transfer function (also known as system function or network function) of a system, sub-system, or component is a mathematical function that models the system's output for each possible input. They are widely used in electronic engineering tools like circuit simulators and control systems. In some simple cases, this function can be represented as a two-dimensional graph of an independent scalar input versus the dependent scalar output, called a transfer curve or characteristic curve.
Transfer functions for components are used to design and analyze systems assembled from components, particularly using the block diagram technique, in electronics and control theory.\nThe dimensions and units of the transfer function model the output response of the device for a range of possible inputs. For example, the transfer function of a two-port electronic circuit like an amplifier might be a two-dimensional graph of the scalar voltage at the output as a function of the scalar voltage applied to the input; the transfer function of an electromechanical actuator might be the mechanical displacement of the movable arm as a function of electric current applied to the device; the transfer function of a photodetector might be the output voltage as a function of the luminous intensity of incident light of a given wavelength.\nThe term \"transfer function\" is also used in the frequency domain analysis of systems using transform methods such as the Laplace transform; here it means the amplitude of the output as a function of the frequency of the input signal. For example, the transfer function of an electronic filter is the voltage amplitude at the output as a function of the frequency of a constant amplitude sine wave applied to the input. For optical imaging devices, the optical transfer function is the Fourier transform of the point spread function (hence a function of spatial frequency).\nLinear time-invariant systems.\nTransfer functions are commonly used in the analysis of systems such as single-input single-output filters in the fields of signal processing, communication theory, and control theory. The term is often used exclusively to refer to linear time-invariant (LTI) systems. 
Most real systems have non-linear input/output characteristics, but many systems, when operated within nominal parameters (not \"over-driven\"), have behavior close enough to linear that LTI system theory is an acceptable representation of the input/output behavior.\nThe descriptions below are given in terms of a complex variable, formula_1, which bears a brief explanation. In many applications, it is sufficient to define formula_2 (thus formula_3), which reduces the Laplace transforms with complex arguments to Fourier transforms with real argument ω. The applications where this is common are ones where there is interest only in the steady-state response of an LTI system, not the fleeting turn-on and turn-off behaviors or stability issues. That is usually the case for signal processing and communication theory.\nThus, for a continuous-time input signal formula_4 and output formula_5, the transfer function formula_6 is the linear mapping of the Laplace transform of the input, formula_7, to the Laplace transform of the output formula_8:\nor\nIn discrete-time systems, the relation between an input signal formula_4 and output formula_5 is dealt with using the z-transform; the transfer function is then similarly written as formula_13, and this is often referred to as the pulse-transfer function.\nDirect derivation from differential equations.\nConsider a linear differential equation with constant coefficients\nwhere \"u\" and \"r\" are suitably smooth functions of \"t\", and \"L\" is the operator defined on the relevant function space that transforms \"u\" into \"r\". That kind of equation can be used to constrain the output function \"u\" in terms of the \"forcing\" function \"r\". The transfer function can be used to define an operator formula_15 that serves as a right inverse of \"L\", meaning that formula_16.\nSolutions of the \"homogeneous\", constant-coefficient differential equation formula_17 can be found by trying formula_18.
That substitution yields the characteristic polynomial\nThe inhomogeneous case can be easily solved if the input function \"r\" is also of the form formula_20. In that case, by substituting formula_21 one finds that formula_22 if we define\nTaking that as the definition of the transfer function requires careful disambiguation between complex vs. real values, which is traditionally influenced by the interpretation of abs(\"H\"(\"s\")) as the gain and −atan(\"H\"(\"s\")) as the phase lag. Other definitions of the transfer function are used: for example formula_24\nGain, transient behavior and stability.\nA general sinusoidal input to a system of frequency formula_25 may be written formula_26. The response of a system to a sinusoidal input beginning at time formula_27 will consist of the sum of the steady-state response and a transient response. The steady-state response is the output of the system in the limit of infinite time, and the transient response is the difference between the response and the steady state response (it corresponds to the homogeneous solution of the above differential equation). The transfer function for an LTI system may be written as the product:\nwhere \"sPi\" are the \"N\" roots of the characteristic polynomial and will therefore be the poles of the transfer function. Consider the case of a transfer function with a single pole formula_29 where formula_30. The Laplace transform of a general sinusoid of unit amplitude will be formula_31. The Laplace transform of the output will be formula_32 and the temporal output will be the inverse Laplace transform of that function:\nThe second term in the numerator is the transient response, and in the limit of infinite time it will diverge to infinity if \"σP\" is positive. In order for a system to be stable, its transfer function must have no poles whose real parts are positive. 
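The pole criterion just stated can be expressed directly in code; the pole values below are hypothetical examples:

```python
def is_strictly_stable(poles):
    """An LTI transfer function is strictly stable iff every pole has a
    negative real part, so every transient term decays to zero."""
    return all(p.real < 0 for p in poles)

# all poles in the open left half-plane: stable
stable = is_strictly_stable([-1 + 2j, -1 - 2j, -0.5])
# one pole with positive real part: the transient diverges
unstable = is_strictly_stable([-1.0, 0.5 + 1j])
```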
If the transfer function is strictly stable, the real parts of all poles will be negative, and the transient behavior will tend to zero in the limit of infinite time. The steady-state output will be:\nThe frequency response (or \"gain\") \"G\" of the system is defined as the absolute value of the ratio of the output amplitude to the steady-state input amplitude:\nwhich is just the absolute value of the transfer function formula_36 evaluated at formula_37. This result can be shown to be valid for any number of transfer function poles.\nSignal processing.\nLet formula_38 be the input to a general linear time-invariant system, formula_39 be the output, and the bilateral Laplace transform of formula_38 and formula_39 be\nThen the output is related to the input by the transfer function formula_36 as\nand the transfer function itself is therefore\nIn particular, if a complex harmonic signal with a sinusoidal component with amplitude formula_46, angular frequency formula_47 and phase formula_48, where arg is the argument\nis input to a linear time-invariant system, then the corresponding component in the output is:\nNote that, in a linear time-invariant system, the input frequency formula_52 has not changed; only the amplitude and the phase angle of the sinusoid have been changed by the system.
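The definition of the gain as the absolute value of the transfer function evaluated at formula_37 can be sketched numerically; the single-pole transfer function below is a hypothetical example:

```python
def gain(H, omega):
    """Steady-state gain G = |H(j*omega)|, per the definition above."""
    return abs(H(1j * omega))

# single-pole example H(s) = 1/(s + 2): pole at s = -2, hence stable
H = lambda s: 1.0 / (s + 2.0)
g = gain(H, 2.0)   # analytically 1/sqrt(2^2 + 2^2) = 1/sqrt(8)
```

Python's complex arithmetic handles the substitution of s = jω directly, so no transform machinery is needed for the steady-state gain.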
The frequency response formula_53 describes this change for every frequency formula_52 in terms of \"gain\":\nand \"phase shift\":\nThe phase delay (i.e., the frequency-dependent amount of delay introduced to the sinusoid by the transfer function) is:\nThe group delay (i.e., the frequency-dependent amount of delay introduced to the envelope of the sinusoid by the transfer function) is found by computing the derivative of the phase shift with respect to angular frequency formula_52,\nThe transfer function can also be expressed using the Fourier transform, which is a special case of the bilateral Laplace transform for the case where formula_60.\nCommon transfer function families.\nWhile any LTI system can be described by some transfer function or another, there are certain \"families\" of special transfer functions that are commonly used.\nSome common transfer function families and their particular characteristics are:\nControl engineering.\nIn control engineering and control theory, the transfer function is derived using the Laplace transform.\nThe transfer function was the primary tool used in classical control engineering. However, it has proven to be unwieldy for the analysis of multiple-input multiple-output (MIMO) systems, and has been largely supplanted by state space representations for such systems. In spite of this, a transfer matrix can always be obtained for any linear system, in order to analyze its dynamics and other properties: each element of a transfer matrix is a transfer function relating a particular input variable to an output variable.\nA useful representation bridging state space and transfer function methods was proposed by Howard H. Rosenbrock and is referred to as the Rosenbrock system matrix.\nOptics.\nIn optics, the modulation transfer function indicates the capability of optical contrast transmission.\nFor example, when observing a series of black-white-light fringes drawn with a specific spatial frequency, the image quality may decay.
White fringes fade while black ones turn brighter.\nThe modulation transfer function at a specific spatial frequency is defined by\nwhere modulation (M) is computed from the following image or light brightness:\nImaging.\nIn imaging, transfer functions are used to describe the relationship between the scene light, the image signal and the displayed light.\nNon-linear systems.\nTransfer functions do not properly exist for many non-linear systems. For example, they do not exist for relaxation oscillators; however, describing functions can sometimes be used to approximate such nonlinear time-invariant systems.", "Automation-Control": 0.950664103, "Qwen2": "Yes"} {"id": "17076044", "revid": "31364895", "url": "https://en.wikipedia.org/wiki?curid=17076044", "title": "Richard E. Bellman Control Heritage Award", "text": "The Richard E. Bellman Control Heritage Award is an annual award (since 1979) given by the American Automatic Control Council (AACC) for achievements in control theory, named after the applied mathematician Richard E. Bellman. The award is given for \"distinguished career contributions to the theory or applications of automatic control\", and it is the \"highest recognition of professional achievement for U.S. control systems engineers and scientists\".\nThe original name was Control Heritage Award, and it was initially only given for the engineering side of control.\nRecipients.\nThe following individuals have received the AACC Richard E. Bellman Control Heritage Award:", "Automation-Control": 1.0000098944, "Qwen2": "Yes"} {"id": "17076143", "revid": "31364895", "url": "https://en.wikipedia.org/wiki?curid=17076143", "title": "American Automatic Control Council", "text": "The American Automatic Control Council (AACC) is an organization founded in 1957 for research in control theory.
AACC is a member of the International Federation of Automatic Control (IFAC) and is an association of the control systems divisions of nine member societies:\nAmerican Control Conference.\nThe American Control Conference (ACC) is an annual research conference sponsored by the AACC and is one of the most prestigious conferences in the field of control theory. The conference dates back to 1960; its attendees are about 50% from the Americas and about 50% from other countries, consisting mostly of researchers, with a large portion being students.\nAwards.\nThe AACC issues five awards for achievements in control theory:", "Automation-Control": 0.9999880791, "Qwen2": "Yes"} {"id": "64004864", "revid": "36069288", "url": "https://en.wikipedia.org/wiki?curid=64004864", "title": "Reishauer", "text": " \nReishauer is a Swiss machine tool builder based in Wallisellen, which manufactures gear grinding machines.\nThe company was founded in 1788 by the toolmaker Hans Jakob Däniker as a craft enterprise in Zurich. In 1870, the company was officially registered as a tool factory. In 1945, the first continuous generating gear grinding machine, ZA, was launched on the market, introducing a form of gear grinding that is today known as the Reishauer process. Soon after this, production expanded to gear parts outside machine tool engineering and met the requirements of the aircraft industry and the automotive industry. Reishauer AG is a subsidiary of Reishauer Beteiligungen AG, of which the German Felsomat AG has been a part since 2010. The most important customers are the automotive industry and its suppliers.\nReishauer manufactures gear grinding machines, grinding and dressing tools, clamping systems, and automation solutions. All components are supplied from one source with more than 80% vertical integration.
Reishauer describes its performance system as a Circle of Competence, in which all machine components, tooling, and automation are manufactured in-house.\nHistory.\nFoundation as a toolmaker (from 1788).\nThe company was founded in 1788 by the toolmaker Hans Jakob Däniker as a craft enterprise in Zurich. Däniker's son Gottfried Reishauer trained as a toolmaker in the business and took over the management in 1824. In 1870, the company was officially registered as a tool factory. In 1882, the \"Aktiengesellschaft für Fabrikation Reishauer'scher Werkzeuge\" was founded, and the portfolio was expanded to include thread gauges in addition to thread cutting tools.\nThe step to mechanical engineering (from 1924).\nAs the thread grinding machines then available on the market did not meet Reishauer's requirements, the company designed its own thread grinding machine in 1924. The \"RK Gewinde\" started to work in the factory in 1928 and marked the step towards becoming a machine tool manufacturer. In 1931, the first in-house made machine for grinding taps was put into operation. Soon Reishauer began to produce the machines not only for its own needs but also to sell them to other companies. This enabled the company to bridge the declining demand for tools in the years after 1929.\nThe introduction of the continuous generating grinding process and the rise to become an international company (from 1945).\nIn 1945, the first continuous generating gear grinding machine, ZA, was launched on the market, introducing a form of gear grinding, today known as the Reishauer process. This machine had been preceded by a 15-year development period, as Reishauer wanted to find a more accurate, faster, and cheaper method of manufacturing gears. In 1968, the AZA, a new gear grinding machine, was produced. The AZA was based on the same continuous generating process but allowed one person to operate several grinders at the same time, thanks to streamlining the operating process.
Reishauer thus took the first step towards automating the gear grinding process. At the same time, production at Reishauer’s customers expanded to gear parts outside machine tool engineering, and included gears for printing machines, trucks, tractors, and pumps. The electronic generating gearbox, introduced in 1977 with the RZ300E, ensured a level of precision that met the requirements of the aircraft industry. In 1986, the RZ301S enhanced generating grinding with shift grinding, which enabled constant grinding forces and higher profile accuracy. In 1993, the RZ362A, the first high-performance gear grinding machine, made its entry into the automotive industry. With this machine, Reishauer introduced the Low Noise Shifting (LNS) process, which reduced unwanted gear noise. In 1998, the company started its own diamond tool production and laid the foundation for its performance system, the Circle of Competence.\nUniversal machine and technological development (from 2001).\nIn 2001, the RZ400, the first universal machine, was launched on the market. It included the electronic generating gearbox developed by Reishauer with interfering signal suppression and extremely high drive rigidity. Furthermore, the RZ400 featured a Windows user interface, safety monitoring of the drive axes, grinding at a cutting speed of 63 m/s, and dressing of and grinding with multi-start threaded grinding wheels. With the RZ150, developed in 2003, two-spindle technology was introduced, which achieved a further increase in productivity. The machine was specially designed for automotive transmission gears. 2006 saw the launch of the RZ1000, which, just like the RZ400, was particularly adapted to job shops.\nIn 2008, Reishauer started the production of vitrified grinding wheels and built a new fully automated plant for this purpose in Pfaffnau in the canton of Lucerne, Switzerland.
In 2009, the RZ60 series (RZ60, 160, 260) was designed, mainly for the automotive industry, but also for job shops, and further increased the productivity of the Reishauer process. In 2010, Reishauer started the development of clamping devices, which were launched in 2012. In 2014, Reishauer automation was introduced as part of the company's own performance system.\nCorporate structure.\nReishauer AG is a subsidiary of Reishauer Beteiligungen AG, of which the German Felsomat AG has been a part since 2010. The most important customers are the automotive industry and its suppliers. Reishauer has branches in Germany, France, Japan, China, and the USA.\nProducts.\nReishauer manufactures gear grinding machines, grinding and dressing tools, clamping systems, and automation solutions. All components are supplied from one source with more than 80% vertical integration. Machines are specifically customized for each customer. Reishauer offers complete systems for the production of high-quality gears, including loading and unloading systems for its gear grinding machines. Almost 100% of the products are exported. Reishauer describes its performance system as a Circle of Competence, in which all machine components, tooling, and automation are manufactured in-house.\nSee also.\n ", "Automation-Control": 0.9837591052, "Qwen2": "Yes"} {"id": "65939595", "revid": "35936988", "url": "https://en.wikipedia.org/wiki?curid=65939595", "title": "Maria Domenica Di Benedetto", "text": "Maria Domenica Di Benedetto (born 1953) is an Italian electrical engineer and control theorist whose interests include the control of hybrid systems, embedded control systems, automotive engine control, and aerospace flight control.
She is Professor of Automatic Control at the University of L'Aquila, president of the European Embedded Control Institute, and the former president of the Italian Society of researchers in Automatic Control.\nShe should be distinguished from Maria-Gabriella Di Benedetto, another Italian electrical engineer with similar career details.\nEducation and career.\nDi Benedetto earned a master's degree (Dr. Ing.) at Sapienza University of Rome in 1976. She earned a French doctorate of engineering at Paris-Sud University in 1981 and a state doctorate there in 1987.\nShe worked as a research engineer for IBM in Paris and Rome from 1979 to 1983, as an assistant professor at Sapienza University from 1983 to 1987, as an associate professor at the Parthenope University of Naples (then called the Istituto Universitario Navale) from 1987 to 1990, and again at Sapienza University from 1990 to 1993. She joined the Department of Information Engineering at the University of L'Aquila as Professor of Automatic Control in 1994. At L'Aquila, she directs the Center of Excellence for Research in Design methodologies of Embedded controllers, Wireless interconnect and Systems-on-chip (DEWS).\nShe has headed the European Embedded Control Institute (EECI) since 2009, and served as president of the Italian Society of researchers in Automatic Control (SIDRA) from 2013 to 2019.\nBook.\nWith Elena De Santis, Di Benedetto is the coauthor of the book \"Observability of Hybrid Dynamical Systems\" (Now Publishers, 2016).\nRecognition.\nDi Benedetto was named an IEEE Fellow in the class of 2002, affiliated with the IEEE Control Systems Society, \"for contributions to the theory of nonlinear and hybrid control system design\". 
She was named a Fellow of the International Federation of Automatic Control in 2019, \"for contributions to nonlinear and hybrid system theory and leadership in control research and education\".", "Automation-Control": 0.7308529615, "Qwen2": "Yes"} {"id": "49884670", "revid": "28481209", "url": "https://en.wikipedia.org/wiki?curid=49884670", "title": "Multiple models", "text": " \nIn control theory, multiple models is an approach to improve the efficiency of an adaptive or observer system. It uses a large number of models distributed in the region of uncertainty; based on the responses of the plant and the models, the model closest to the plant according to some metric is chosen at every instant. The method offers satisfactory performance when no restrictions are put on the number of available models. \nApproaches.\nThere are two multiple model methods:\nApplications.\nThe multiple model method can be used for:", "Automation-Control": 0.9985942841, "Qwen2": "Yes"} {"id": "49885521", "revid": "712163", "url": "https://en.wikipedia.org/wiki?curid=49885521", "title": "Armature Controlled DC Motor", "text": "An armature controlled DC motor is a direct current (DC) motor that uses a permanent magnet and is driven by the armature coils only.\nBasic operation of DC motor.\nA motor is an actuator, converting electrical energy into rotational mechanical energy. A motor requiring a DC power supply for operation is termed a DC motor. DC motors are widely used in control applications like robotics, tape drives, machines and many more. \nSeparately excited DC motors are suitable for control applications because of their separate field and armature circuits. Two ways to control separately excited DC motors are armature control and field control.\nA DC motor consists of two parts: a rotor and a stator. The stator consists of field windings, while the rotor (also called the armature) consists of an armature winding.
When both the armature and the field windings are excited by a DC supply, current flows through the windings and a magnetic flux proportional to the current is produced. When the flux from the field interacts with the flux from the armature, the rotor moves. Armature control is the most common control technique for DC motors. To implement this control, the stator flux must be kept constant; to achieve this, either the stator voltage is kept constant or the stator coils are replaced by a permanent magnet. In the latter case, the motor is said to be a permanent magnet DC motor and is driven by the armature coils only.\nEquations for motor operation.\nThe equations governing the operation of the motor are made linear by reducing the effects of the stator's magnetic field to its flux, formula_1, and a term, formula_2, that describes the effect of the stator field on the rotor. formula_2 is unlikely to be a constant and may be a function of formula_1:\nformula_5 (1)\nwhere formula_6 is the motor torque and formula_7 is the armature current. When the field flux is constant, equation (1) becomes\nformula_8 (2)\nwhere formula_9, as formula_1 is constant.\nIn addition, the motor has an intrinsic negative feedback structure; hence, at steady state, the speed ω is proportional to the reference input Va.\nThese two facts, together with the lower price of a permanent magnet motor relative to a standard DC motor (because only the rotor coils need to be wound), are the main reasons why armature controlled motors are widely used. However, this control technique has several disadvantages, the major one being the flow of large currents during transients.
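This transient-current disadvantage can be illustrated with a minimal numerical sketch. The relations used, back EMF e = K·ω and armature current i = (Va − e)/Ra, are the standard armature-circuit equations; all parameter values below are illustrative assumptions:

```python
# Illustration of the large starting (inrush) current in an armature-
# controlled DC motor. At standstill the back EMF is zero, so the full
# armature voltage drops across the armature resistance.

Va = 24.0    # armature voltage [V] (illustrative)
Ra = 0.5     # armature resistance [ohm] (illustrative)
K  = 0.05    # back-EMF constant [V*s/rad] (illustrative)

def armature_current(w):
    """i_a = (Va - K*w) / Ra; the back EMF K*w opposes the supply voltage."""
    back_emf = K * w
    return (Va - back_emf) / Ra

i_start = armature_current(0.0)    # w = 0 at startup -> no back EMF
i_run   = armature_current(450.0)  # near steady-state speed [rad/s]

print(i_start, i_run)  # startup current is an order of magnitude larger
```

The startup current of 48 A versus a running current of 3 A shows why the transient heating can endanger the winding insulation.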
For example, at startup the speed ω is initially zero, so the back EMF (electromotive force), governed by the following relation, is also zero:\nformula_11 (3)\nThe armature current is given by\nformula_12 (4)\nwhich is therefore very high at startup, increasing the heating of the machine and possibly damaging the insulation.\nEquations for transfer function.\nThe essential equations for the transfer function are:\nformula_11 in the Laplace domain: formula_14 (5)\nformula_12 in the Laplace domain: formula_16 (6)\nformula_5 in the Laplace domain: formula_18 (7)\nformula_19 in the Laplace domain: formula_20 (8)\nThe various parameters in the figure are described as \nThe transfer matrix of the system may be written as\nformula_28 (9)\nwhere formula_29 (10)\nformula_30 (11)", "Automation-Control": 0.6716313362, "Qwen2": "Yes"} {"id": "70081354", "revid": "44920599", "url": "https://en.wikipedia.org/wiki?curid=70081354", "title": "Belur Industrial Area", "text": "Belur Industrial Area (abbreviation: BIA) is an industrial area of the city of Dharwad in India and one of the biggest industrial areas in Karnataka, lying on the Dharwad-Belgaum Highway. It houses small, medium, and large-scale industries. The industrial area is known for engineering and electrical goods such as CNC machine tools, GDC dies and moulds, transformers, motors and generators, textiles (silk), hydraulics, machine tools, and rubber moulding.", "Automation-Control": 0.9824905396, "Qwen2": "Yes"} {"id": "19290483", "revid": "15996738", "url": "https://en.wikipedia.org/wiki?curid=19290483", "title": "Moldflow", "text": "Moldflow is a producer of simulation software for high-end plastic injection molding computer-aided engineering. It is owned by Autodesk.\nThe stable release is Moldflow 2023.\nMoldflow was founded in Melbourne, Australia as Moldflow Pty. Ltd. in 1978 by Colin Austin.
In 2008 Moldflow was acquired by Autodesk for $297M.\nProducts.\nMoldflow has two core products: Moldflow Adviser which provides manufacturability guidance and directional feedback for standard part and mold design, and Moldflow Insight which provides definitive results for flow, cooling, and warpage along with support for specialized molding processes. In addition, Autodesk produces Moldflow Design, Moldflow CAD Doctor, Moldflow synergy, Moldflow Magics STL Expert, and Moldflow Structural Alliance that serve as connectivity tools for other CAD and CAE software. They also have a free results viewer, Moldflow Communicator.", "Automation-Control": 0.998316884, "Qwen2": "Yes"} {"id": "9644681", "revid": "7903804", "url": "https://en.wikipedia.org/wiki?curid=9644681", "title": "Geometric programming", "text": "A geometric program (GP) is an optimization problem of the form\nwhere formula_2 are posynomials and formula_3 are monomials. In the context of geometric programming (unlike standard mathematics), a monomial is a function from formula_4 to formula_5 defined as\nwhere formula_7 and formula_8. A posynomial is any sum of monomials.\nGeometric programming is\nclosely related to convex optimization: any GP can be made convex by means of a change of variables. GPs have numerous applications, including component sizing in IC design, aircraft design, maximum likelihood estimation for logistic regression in statistics, and parameter tuning of positive linear systems in control theory. \nConvex form.\nGeometric programs are not in general convex optimization problems, but they can be transformed to convex problems by a change of variables and a transformation of the objective and constraint functions. 
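The change of variables can be sketched numerically. Substituting x_i = exp(y_i) and taking the log turns a posynomial into a log-sum-exp function of y, which is convex; the particular posynomial below (coefficients c and exponent matrix A) is an illustrative assumption:

```python
import numpy as np

# Numerical sketch of the GP convexifying transformation: a posynomial
# f(x) = sum_k c_k * prod_i x_i^{A[k,i]} becomes, after x = exp(y) and a
# log, the log-sum-exp function  log(sum_k exp(log c_k + A[k,:] @ y)),
# which is convex in y. The coefficients below are illustrative.

c = np.array([2.0, 3.0])     # positive posynomial coefficients
A = np.array([[1.0, -2.0],   # exponent vector of each monomial term
              [0.5,  1.0]])

def f_transformed(y):
    """log of the posynomial evaluated at x = exp(y): a log-sum-exp in y."""
    return np.log(np.sum(c * np.exp(A @ y)))

# Midpoint convexity check at two sample points (numerical evidence only).
y1, y2 = np.array([0.0, 1.0]), np.array([2.0, -1.0])
mid = f_transformed((y1 + y2) / 2)
avg = (f_transformed(y1) + f_transformed(y2)) / 2
print(mid <= avg + 1e-12)  # True: consistent with convexity
```

The same substitution turns each monomial constraint into an affine function of y, which is why the transformed problem is a convex program.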
In particular, after performing the change of variables formula_9 and taking the log of the objective and constraint functions, the functions formula_10, i.e., the posynomials, are transformed into log-sum-exp functions, which are convex, and the functions formula_11, i.e., the monomials, become affine. Hence, this transformation transforms every GP into an equivalent convex program. In fact, this log-log transformation can be used to convert a larger class of problems, known as log-log convex programming (LLCP), into an equivalent convex form. \nSoftware.\nSeveral software packages exist to assist with formulating and solving geometric programs.", "Automation-Control": 0.9941667914, "Qwen2": "Yes"} {"id": "38847195", "revid": "7903804", "url": "https://en.wikipedia.org/wiki?curid=38847195", "title": "TP model transformation in control theory", "text": "Baranyi and Yam proposed the TP model transformation as a new concept in quasi-LPV (qLPV) based control, which plays a central role in the highly desirable bridging between identification and polytopic systems theories. It is also used as a TS (Takagi-Sugeno) fuzzy model transformation. It is uniquely effective in manipulating the convex hull of polytopic forms (or TS fuzzy models), and, hence, has revealed and proved the fact that convex hull manipulation is a necessary and crucial step in achieving optimal solutions and decreasing conservativeness in modern linear matrix inequality based control theory. Thus, although it is a transformation in a mathematical sense, it has established a conceptually new direction in control theory and has laid the ground for further new approaches towards optimality.\nFor details please visit: TP model transformation.\nA free MATLAB implementation of the TP model transformation can be downloaded at or an old version of the toolbox is available at MATLAB Central . 
Be careful: in the MATLAB toolbox the dimensions of the core tensor are assigned in the opposite way from the notation used in the related literature. In some variants of the toolbox, the first two dimensions of the core tensor are assigned to the vertex systems; in the TP model literature it is the last two. A simple example is given below.\nRelated definitions.\nwith input formula_2, output formula_3 and state\nvector formula_4. The system matrix formula_5 is a parameter-varying object, where formula_6 is a time-varying formula_7-dimensional parameter vector which is an element of the\nclosed hypercube formula_8. As a matter of fact, further parameter dependent channels can be inserted into formula_9 that represent various control performance requirements.\nformula_10 in the above LPV model can also include some elements of the state vector\nformula_4; hence this model belongs to the class of non-linear systems and is also referred to as a quasi LPV (qLPV) model.\nwith input formula_2, output formula_3 and state\nvector formula_4. The system matrix formula_16 is a parameter-varying object, where formula_6 is a time-varying formula_7-dimensional parameter vector which is an element of the\nclosed hypercube formula_8, and the weighting functions formula_20 are the elements of vector formula_21. The core tensor contains the elements formula_22, which are the vertexes of the system.\nAs a matter of fact, further parameter dependent channels can be inserted into formula_9 that represent various control performance requirements.\nHere\nThis means that formula_26 is within the vertexes formula_22 of the system (within the convex hull defined by the vertexes) for all formula_28.
\nNote that the TP type polytopic model can always be given in the form\nwhere the vertexes are the same as in the TP type polytopic form and the multi-variable weighting functions are the products of the one-variable weighting functions according to the TP type polytopic form, and r is the linear index equivalent of the multi-linear indexing formula_30.\nAssume a given qLPV model formula_31, where formula_32, whose TP polytopic structure may be unknown (e.g. it is given by neural networks). The TP model transformation determines its TP polytopic structure as\nnamely it generates the core tensor formula_34 and the weighting functions formula_35 for all formula_36. Its free MATLAB implementation is downloadable at or at MATLAB Central .\nIf the given model does not have a (finite element) TP polytopic structure, then the TP model transformation determines its approximation:\nwhere the TP model transformation offers a trade-off between complexity (the number of vertexes stored in the core tensor, or the number of weighting functions) and approximation accuracy. The TP model can be generated subject to various constraints. Typical TP models generated by the TP model transformation are:\nTP model based control design.\nSince the TP type polytopic model is a subset of the polytopic model representations, the analysis and design methodologies developed for polytopic representations are applicable to TP type polytopic models as well. \nOne typical way is to search for the nonlinear controller in the form:\nwhere the vertexes formula_39 of the controller are calculated from formula_40. Typically, the vertexes formula_40 are substituted into linear matrix inequalities in order to determine formula_39.\nIn TP type polytopic form the controller is:\nwhere the vertexes formula_44 stored in the core tensor formula_45 are determined from the vertexes formula_46 stored in formula_34.
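The numerical core of the TP model transformation can be sketched in a one-parameter case: sample the system matrix over a grid and extract vertex systems and weighting functions with an SVD (the one-dimensional case of the higher-order SVD). The system A(p), the grid, and the tolerance below are illustrative assumptions, and the convex-hull (weight-normalization) step discussed above is omitted:

```python
import numpy as np

# One-parameter sketch of the numerical core of the TP model
# transformation: sample the parameter-dependent system matrix A(p) over a
# grid, then use an SVD of the sampled data to extract vertex systems and
# weighting functions such that A(p) = sum_r w_r(p) * A_r on the grid.

def A(p):
    # Illustrative qLPV system matrix, affine in the parameter p.
    return np.array([[0.0, 1.0],
                     [-1.0 - p, -0.5 * p]])

grid = np.linspace(0.0, 1.0, 50)                  # sampling grid over p
samples = np.stack([A(p).ravel() for p in grid])  # shape (50, 4)

U, s, Vt = np.linalg.svd(samples, full_matrices=False)
rank = int(np.sum(s > 1e-10))         # number of vertex systems kept

weights = U[:, :rank] * s[:rank]      # weighting functions w_r(p) on the grid
vertices = Vt[:rank].reshape(rank, 2, 2)  # vertex systems A_r

# Reconstruction check: A(p) = sum_r w_r(p) * A_r at every grid point.
recon = (weights @ Vt[:rank]).reshape(len(grid), 2, 2)
err = max(np.abs(recon[i] - A(p)).max() for i, p in enumerate(grid))
print(rank, err < 1e-9)  # A(p) is affine in p, so two vertices suffice
```

A full implementation would additionally transform the raw SVD weights into non-negative, sum-to-one weighting functions, which is exactly the convex-hull manipulation step the article emphasizes.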
Note that a polytopic observer or other components can be generated in a similar way; their vertexes are also generated from formula_48.\nThe polytopic representation of a given qLPV model is not invariant. That is, a given formula_49 has formula_50 different representations:\nwhere formula_52. In order to generate an optimal control of the given model formula_49 we apply, for instance, LMIs. Thus, if we apply the selected LMIs to the above polytopic model we arrive at:\nSince the LMIs realize a non-linear mapping between the vertexes in formula_48 and formula_56, we may find very different controllers for each formula_52. This means that we have formula_58 different \"optimal\" controllers for the same system formula_49. Thus, the question is: which one of the \"optimal\" controllers is really the optimal one? The TP model transformation lets us manipulate the weighting functions systematically, which is equivalent to manipulating the vertexes. The geometrical meaning of this manipulation is the manipulation of the convex hull defined by the vertexes. We can easily demonstrate the following facts:\nof a given model formula_49, then we can generate a controller as\nthen we have solved the control problem of all systems formula_63 that can be given by the same vertexes but with different weighting functions:\nwhere\nIf one of these systems is very hard to control (or even uncontrollable), then we arrive at a very conservative solution (or infeasible LMIs).
Therefore, we expect that by tightening the convex hull we exclude such problematic systems.", "Automation-Control": 0.9971492887, "Qwen2": "Yes"} {"id": "56447389", "revid": "20483999", "url": "https://en.wikipedia.org/wiki?curid=56447389", "title": "IEC 61360", "text": "IEC 61360, with the title \"Standard data element types with associated classification scheme\", is a series of standard documents defining a general purpose vocabulary in terms of a reference dictionary published by the International Electrotechnical Commission.\nIntended use.\nThe vocabulary specified in IEC 61360 may be used to define ontologies for use in the field of electrotechnology, electronics and related domains.\nStructure.\nThe IEC 61360 series is structured into different parts:\nIEC 61360-1 provides a detailed introduction to the structure of the dictionary and its use. \nIEC 61360-2 specifies the detailed dictionary data model and IEC 61360-6 stipulates quality criteria for the content of the dictionary.\nThe data model defined in IEC 61360-2 is also published in ISO 13584-42.\nThe IEC provides a technical dictionary for use in the electro-technical and electronic domain which is published as IEC 61360-4. This dictionary is called the IEC Common Data Dictionary (IEC CDD) and can be accessed as a web page (https://cdd.iec.ch).\nSee also.\nIEC 61360 also defines the base for other product taxonomies like eCl@ss.\nIndustrie 4.0 uses product property descriptions based on IEC 61360.", "Automation-Control": 0.7299352288, "Qwen2": "Yes"} {"id": "11360852", "revid": "286058", "url": "https://en.wikipedia.org/wiki?curid=11360852", "title": "Predictive state representation", "text": "In computer science, a predictive state representation (PSR) is a way to model the state of a controlled dynamical system from a history of actions taken and the resulting observations. PSR captures the state of a system as a vector of predictions for future tests (experiments) that can be done on the system.
A test is a sequence of action-observation pairs, and its prediction is the probability of the test's observation sequence occurring if the test's action sequence were executed on the system. One of the advantages of using PSR is that the predictions are directly related to observable quantities. This is in contrast to other models of dynamical systems, such as partially observable Markov decision processes (POMDPs), where the state of the system is represented as a probability distribution over unobserved nominal states.", "Automation-Control": 0.9507008791, "Qwen2": "Yes"} {"id": "23537962", "revid": "39166520", "url": "https://en.wikipedia.org/wiki?curid=23537962", "title": "Small control property", "text": "In applied mathematics, in nonlinear control theory, a non-linear system of the form formula_1 is said to satisfy the small control property if for every formula_2 there exists a formula_3 so that for all formula_4 there exists a formula_5 so that the time derivative of the system's Lyapunov function is negative definite at that point.\nIn other words, even if the control input is arbitrarily small, a starting configuration close enough to the origin of the system can be found that is asymptotically stabilizable by such an input.", "Automation-Control": 0.9941758513, "Qwen2": "Yes"} {"id": "61186819", "revid": "28481209", "url": "https://en.wikipedia.org/wiki?curid=61186819", "title": "Smart Metrology", "text": "Smart Metrology is a modern approach to industrial metrology. The name was introduced by Jean-Michel Pou, a French metrologist, and Laurent Leblond, a French statistician. The term was coined in their book \"La Smart Metrology: De la métrologie des instruments... à la métrologie des décisions\".
It was immediately adopted by Deltamu, a French company providing services in the field of industrial metrology, to promote its vision of metrology.\nThe modern approach promoted by Smart Metrology consists mainly in the full exploitation of all available data and information, including that provided by Big Data, to implement a correct, pertinent and efficient approach to the three pillars of metrology (uncertainty, calibration and traceability) in industrial applications.\nThe Smart Metrology approach.\nThe approach suggested by Smart Metrology is fully framed within the ISO 9001 recommendation that any industry using measuring instruments must keep them under control.\nThe traditional approach.\nThe traditional approach to industrial metrology tends to follow these steps:\nSo, the actual results of the calibration may not even be used in the decision-making process. This way, metrology is often regarded as a pure cost and does not actually follow the ISO 9001 quality standards.\nSmart metrology innovation.\nSmart Metrology follows a different approach to keeping instruments under control. This new approach is aimed at achieving higher efficiency according to the following steps:\nFollowing the above steps, metrology no longer represents a useless cost, incurred mainly to satisfy the standards. Instead, it can be regarded as an investment to enhance the quality of industrial production. It makes full use of the measurement results and of measurement uncertainty in the decision-making process.", "Automation-Control": 0.9472395182, "Qwen2": "Yes"} {"id": "61190139", "revid": "869314", "url": "https://en.wikipedia.org/wiki?curid=61190139", "title": "Mengchu Zhou", "text": "Mengchu Zhou (; born 31 October 1963) is a Chinese Distinguished Professor of electrical and computer engineering in the Helen and John C. Hartmann Dept.
of Electrical and Computer Engineering at New Jersey Institute of Technology (NJIT) and at Macau University of Science and Technology. He is the Chairman of IKAS Industries of Shenzhen in China and a Board Member of OneSmart Education Group, headquartered in China.\nHe is a Fellow of the Institute of Electrical and Electronics Engineers (IEEE), a Fellow of the International Federation of Automatic Control (IFAC), a Fellow of the American Association for the Advancement of Science (AAAS) and a Fellow of the Chinese Association of Automation (CAA). Zhou is the Founding Editor-in-Chief of the IEEE/Wiley Book Series on Systems Science and Engineering and the Editor-in-Chief of the IEEE/CAA Journal of Automatica Sinica. In 2015, he received the Norbert Wiener Award for \"fundamental contributions to the area of Petri net theory and applications to discrete event systems\" from the IEEE Systems, Man, and Cybernetics Society, which also awarded him the Franklin V. Taylor Memorial Award for Best Paper in 2010. In 2000, Zhou received the Humboldt Research Award for US Senior Scientists, Alexander von Humboldt Foundation, Germany. In 1994, he received the Society of Manufacturing Engineers Computer-Integrated Manufacturing UNIVERSITY-LEAD Award (Leadership and Excellence in the Application and Development of integrated manufacturing). The number of his publications receiving 200 or more citations is 24 according to Google Scholar. He is one of the world's Highly Cited Researchers in Web of Science and has a total of more than 34,000 citations with an h-index of 89.\nEducation.\nZhou earned his Ph.D. in Computer & Systems Engineering at Rensselaer Polytechnic Institute, Troy, NY, in 1990. He completed his M.S. in Automatic Control at Beijing Institute of Technology, Beijing, China, in 1986, following the completion of his B.S.
in Control Engineering at Nanjing University of Science & Technology, Nanjing, China, in 1983.\nCareer.\nIn 2000, Zhou joined the faculty of the New Jersey Institute of Technology, where in 2013 he became a Distinguished Professor in the Helen and John C. Hartmann Department of Electrical and Computer Engineering. There, he serves as a researcher of Petri nets and their applications, Director of the M.S. Program in Power and Energy Systems, Director of the M.S. Program in Computer Engineering, Director of the Discrete Event Systems Laboratory, Director of the CRRC-ZIC Laboratory for Rail System Network and Information Technologies, and Area Coordinator of Intelligent Systems. Before his career as a professor, Zhou worked at the Beijing Institute of Computer Applications, where he was an assistant engineer responsible for the development of CAD/CAM for vehicles.\nSignificant contributions.\nPetri nets are a modeling tool playing the same role in event-driven systems as differential/difference equations play in continuous dynamic systems. As the size and complexity of automated systems increase, ad hoc methods lose their effectiveness, and a strong need arises for systematic methods of analysis and design. Zhou's work concerns such methods, including the modeling, analysis, and synthesis of various automated systems.\nIn 1991, Zhou provided the theoretical basis for Petri net synthesis methods that model systems with shared resources. He formulated two new resource-sharing concepts: parallel mutual exclusion (PME) and sequential mutual exclusion (SME). PME models a resource shared by distinct independent processes. SME is a sequential composition of PMEs, modeling a resource shared by sequentially related processes. Zhou derived the conditions under which a net containing such structures will not have a total system shutdown (deadlock). His approach enabled flexible design of systems that met constraints and optimized performance.
The synthesized models could be converted to supervisory controllers for automated systems. To simplify optimal control design for any given automated system, his work introduced the elementary and dependent siphons of Petri nets, important structural objects for characterizing deadlocks. He also invented several deadlock control methods for automated systems. Their use reduced the structural complexity of supervisory controllers: they became linear with respect to system size. For certain systems, optimal controllers were developed with polynomially complex algorithms, thereby allowing, for the first time, on-line deployment of optimal control methods. Thousands of researchers and engineers use his methods in various applications for automated system design, analysis, and control. Factories which use Zhou’s methods are thereby able to both prevent deadlock and simultaneously operate at maximum productivity, a rare combination in complex automated systems.\nZhou was among the pioneers of Petri net-based methods for semiconductor manufacturing, in particular robotic cells called cluster tools, widely used in today’s semiconductor wafer fabrication plants. His survey-type papers popularized and greatly increased the acceptance of Petri nets in system designs and applications (e.g., work published in IEEE Transactions on Industrial Electronics, IEEE Transactions on Semiconductor Manufacturing, IEEE Transactions on Automation Science and Engineering, and IEEE Robotics and Automation Magazine). Several of his patents have been licensed to dozens of industrial firms and put into industrial use, generating significant economic outcomes.
By helping substantially improve the productivity of manufacturing semiconductor wafers and chips, this has a direct impact on lowering manufacturing costs and significantly increasing bottom-line profitability in the semiconductor manufacturing industry.\nZhou has made contributions to the advancement of Petri net theory and its applications in automated systems. He has published over 400 papers in IEEE Transactions and journals (the majority being regular papers) and over 100 in other journals. He has authored/coauthored 12 patents, 12 books, 29 book chapters and over 300 conference-proceeding papers. He is the world’s most cited researcher on Petri nets, and one of the overall leading researchers in automated manufacturing systems (Scopus). Web of Science ranked Zhou in 2012 as the number one most highly cited scholar in engineering worldwide and has listed him as a \"highly cited scholar\" in engineering since then. His recognition includes four Fellow designations: by IEEE, for contributions to Petri nets and their applications; by AAAS, for distinguished contributions to Petri nets, discrete event systems, and their applications to manufacturing, transportation, workflow, disassembly, web services, and software design; by the International Federation of Automatic Control (IFAC), for seminal contributions to the theory of Petri nets and their application in manufacturing, transportation, and web services; and by the Chinese Association of Automation, for contributions to the field of automation.", "Automation-Control": 0.8944317102, "Qwen2": "Yes"} {"id": "1853037", "revid": "703908", "url": "https://en.wikipedia.org/wiki?curid=1853037", "title": "Robot welding", "text": "Robot welding is the use of mechanized programmable tools (robots), which completely automate a welding process by both performing the weld and handling the part.
Processes such as gas metal arc welding, while often automated, are not necessarily equivalent to robot welding, since a human operator sometimes prepares the materials to be welded. Robot welding is commonly used for resistance spot welding and arc welding in high-production applications, such as the automotive industry.\nHistory.\nRobot welding is a relatively new application of robotics, even though robots were first introduced into U.S. industry during the 1960s. The use of robots in welding did not take off until the 1980s, when the automotive industry began using robots extensively for spot welding. Since then, both the number of robots used in industry and the number of their applications have grown greatly. In 2005, more than 120,000 robots were in use in North American industry, about half of them for welding. Growth is primarily limited by high equipment costs, and the resulting restriction to high-production applications. \nRobot arc welding has recently begun to grow quickly, and already commands about 20 percent of industrial robot applications. The major components of arc welding robots are the manipulator, or mechanical unit, and the controller, which acts as the robot's \"brain\". The manipulator is what makes the robot move, and the design of these systems can be categorized into several common types, such as the SCARA and cartesian coordinate robot, which use different coordinate systems to direct the arms of the machine.\nThe robot may weld a pre-programmed position, be guided by machine vision, or use a combination of the two methods. Robotic welding has proven to help many original equipment manufacturers increase accuracy, repeatability, and throughput. One welding robot can do the work of several human welders.
For example, in arc welding, which produces hot sparks and smoke, a human welder can keep the torch on the work for roughly thirty percent of the time; for robots, the percentage is about 90. \nThe technology of signature image processing has been developed since the late 1990s for analyzing, in real time, electrical data collected from automated robotic welding, thus enabling the optimization of welds.\nAdvantages.\nAdvantages of robot welding include: \nDisadvantages.\nDisadvantages of robot welding include:", "Automation-Control": 0.9310251474, "Qwen2": "Yes"} {"id": "21685018", "revid": "748937960", "url": "https://en.wikipedia.org/wiki?curid=21685018", "title": "Rate gyro", "text": "A rate gyro is a type of gyroscope which, rather than indicating direction, indicates the rate of change of angle with time. If a gyro has only one gimbal ring, and consequently only one plane of freedom, it can be adapted for use as a rate gyro to measure a rate of angular movement.\nRate gyros are used in rate integrating gyroscopes, in attitude control systems for vehicles, and, in combination with other sensors, in inertial navigation systems.\nThe advantages of rate gyros over other types of gyros are their fast response rate and relatively low cost.\nPrinciples.\nSpinning.\nThe traditional type of rate gyro employs a relatively conventional gyroscope with viscous couplings to transfer the spin rate so that it can be read.\nVibrating structure gyroscope.\nMEMS gyros are cheap and have no moving parts. They often work by sonic resonance effects driven by piezoelectric transducers, which provide a signal when a rotation occurs.", "Automation-Control": 0.9328736067, "Qwen2": "Yes"} {"id": "21718903", "revid": "23646674", "url": "https://en.wikipedia.org/wiki?curid=21718903", "title": "Hobby injection molding", "text": "Hobby injection molding machines, also known as benchtop injectors, hold molds on a smaller scale.
Benchtop injectors have become more common as inexpensive CNC milling machines have reduced the cost of producing molds in a home workshop.\nIn hobby injectors, injection pressure is generated manually by the operator, with a lever or gear translating the operator's effort into the required pressure. The most common hobby injection machine uses a handle to press down with. This enables the user to generate roughly of downward force through the use of leverage.\nHistory.\nIt is not known when the first hobby injection molder was constructed. Before the development of inexpensive CNC milling machines, producing a metal mold was prohibitively expensive for most hobbyists. With a small CNC mill and personal CAD tools, though, even complex shapes can be cut easily and accurately.\nApplications.\nHobby injection molding has a variety of applications, including the creation of low-cost prototypes, new inventions, and replications of lost or broken parts, and it gives homeowners the opportunity to make their own parts. Hobby injection molding is a low-cost method of repeatable production.\nMaterials.\nPolyethylene (both LDPE and HDPE), polypropylene, and polystyrene (including HIPS) have all been used successfully with lever-actuated benchtop injectors.\nEquipment.\nBenchtop injectors are smaller and simpler than their larger industrial counterparts because they rely on the operator to manually inject melted polymer into the mold and remove the finished part from the mold. Production injectors automatically inject melted polymer at a prescribed rate into the mold, cool the mold to rapidly solidify the polymer, then eject the part from the mold once it is cool. The two halves of the mold must be pressed together with great force to prevent flash in the part where the two halves meet, and the nozzle of the injector must be pressed tightly against the inlet port of the mold to prevent the escape of melted polymer and a defect in the finished part.
In a benchtop injector this is done manually by clamping or bolting the mold together and clamping the complete mold into the injector. In a production injector this is accomplished with hydraulic or pneumatic actuators, which increase the cost of the machine but dramatically reduce the labor required to produce a finished part.\nMolds.\nMetal molds.\nLow-cost benchtop CNC milling machines allow home enthusiasts to machine molds out of softer metals. Rather than P20 tool steel, most grades of aluminum can be machined into working molds capable of 1000-plus cycles. Mic 6 cast aluminum is more stable after machining and during cycles than hot-extruded grades like 6061 and is easy to machine; however, it has worse mechanical properties. 7000-series alloys like 7050 and 7075 are preferred for the best mechanical properties in aluminum; they are comparable to low-to-mid carbon steel molds. Molds of low-melting alloys, such as pewter or bismuth alloys, can be cast around a model to create strong molds with higher molding temperatures than epoxy molds. Casting around a model to create each mold part produces complex molds quickly. The parts can also capture detailed surface finishes.\nEpoxy molds.\nEpoxy molds typically mix epoxy with a metal powder (generally aluminum) to form a mold. Atomized aluminum allows heat to be distributed from the mold surface outward toward the edges. This typically preserves the surface quality for 50-100 cycles on a single epoxy mold.\nBecause air is entrapped in epoxy during the pouring and curing period, it is common to have distortions and cavitation in the final injection mold. Pressurizing the epoxy during the curing period helps retain surface quality: external pressure can be applied with a pressure pot connected to an air compressor to compress the air bubbles trapped inside the epoxy mold during curing. Over the 24-hour curing period the trapped bubbles cannot escape and are cured directly inside the mold.
With sufficient pressure these small cavities will be invisible to the naked eye.\nDegassing the epoxy during the curing period can also be done using a vacuum chamber, and requires a vacuum of about 100 kPa (29 inHg) below atmospheric pressure in order to create near-vacuum conditions. This can be achieved with a 2-stage vacuum pump capable of reaching 2 Pa (15 μmHg).\nSingle use molds.\nSingle use injection molds can be made from plaster of Paris. The mold breaks down after the first shot and will rarely allow for the injection of a second shot.", "Automation-Control": 0.9307224751, "Qwen2": "Yes"} {"id": "23265420", "revid": "27823944", "url": "https://en.wikipedia.org/wiki?curid=23265420", "title": "IEEE Robotics and Automation Award", "text": "The IEEE Robotics and Automation Award is a Technical Field Award of the Institute of Electrical and Electronics Engineers (IEEE) that was established by the IEEE Board of Directors in 2002. This award is presented for contributions in the field of robotics and automation.\nThis award may be presented to an individual or team of up to three people.\nRecipients of this award receive a bronze medal, certificate, and honorarium.", "Automation-Control": 0.9333928823, "Qwen2": "Yes"} {"id": "64465650", "revid": "7903804", "url": "https://en.wikipedia.org/wiki?curid=64465650", "title": "SOS-convexity", "text": "A multivariate polynomial is SOS-convex (or sum of squares convex) if its Hessian matrix H can be factored as H(\"x\") = \"S\"T(\"x\")\"S\"(\"x\") where \"S\" is a matrix (possibly rectangular) whose entries are polynomials in \"x\". In other words, the Hessian matrix is an SOS matrix polynomial.\nAn equivalent definition is that the form defined as \"g\"(\"x\",\"y\") = \"y\"TH(\"x\")\"y\" is a sum of squares of forms.\nConnection with convexity.\nIf a polynomial is SOS-convex, then it is also convex.
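The factorization H("x") = "S"T("x")"S"("x") can be spot-checked numerically for small examples. A minimal sketch, using the illustrative polynomial f(x, y) = x⁴ + y⁴ (an assumption for demonstration, not the counterexample discussed in this article; all function names are hypothetical):

```python
import math

# Illustrative polynomial (an assumption): f(x, y) = x**4 + y**4.
# Its Hessian is H(x, y) = diag(12*x^2, 12*y^2).
def hessian(x, y):
    return [[12 * x * x, 0.0], [0.0, 12 * y * y]]

# A polynomial matrix S with H = S^T S: S(x, y) = diag(sqrt(12)*x, sqrt(12)*y),
# which exhibits f as SOS-convex.
def S(x, y):
    c = math.sqrt(12.0)
    return [[c * x, 0.0], [0.0, c * y]]

def gram(A):
    """Compute A^T A for a 2x2 matrix A."""
    return [[sum(A[k][i] * A[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def factorization_holds(x, y, tol=1e-9):
    """Spot-check H(x, y) == S(x, y)^T S(x, y) at a sample point."""
    H, G = hessian(x, y), gram(S(x, y))
    return all(abs(H[i][j] - G[i][j]) < tol for i in range(2) for j in range(2))
```

In practice the search for such a polynomial matrix S is exactly what the semidefinite program mentioned below carries out; the sketch only verifies a factorization that is already known.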
Since establishing whether a polynomial is SOS-convex amounts to solving a semidefinite programming problem, SOS-convexity can be used as a proxy for establishing whether a polynomial is convex. In contrast, deciding whether a generic polynomial of degree larger than four is convex is an NP-hard problem.\nThe first counterexample of a polynomial which is convex but not SOS-convex was constructed by Amir Ali Ahmadi and Pablo Parrilo in 2009. The polynomial is a homogeneous polynomial that is sum-of-squares and given by:\nformula_1\nIn the same year, Grigoriy Blekherman proved in a non-constructive manner that there exist convex forms that are not representable as sums of squares. An explicit example of a convex form (with degree 4 and 272 variables) that is not a sum of squares was claimed by James Saunderson in 2021.\nConnection with non-negativity and sum-of-squares.\nIn 2013 Amir Ali Ahmadi and Pablo Parrilo showed that every convex homogeneous polynomial in \"n\" variables and degree 2\"d\" is SOS-convex if and only if either (a) \"n\" = 2 or (b) 2\"d\" = 2 or (c) \"n\" = 3 and 2\"d\" = 4. Impressively, the same relation is valid for non-negative homogeneous polynomials in \"n\" variables and degree 2\"d\" that can be represented as sums of squares polynomials (see Hilbert's seventeenth problem).", "Automation-Control": 0.9375015497, "Qwen2": "Yes"} {"id": "19883453", "revid": "6289403", "url": "https://en.wikipedia.org/wiki?curid=19883453", "title": "Spindle (tool)", "text": "In machine tools, a spindle is a rotating axis of the machine, which often has a shaft at its heart. The shaft itself is called a spindle, but also, in shop-floor practice, the word often is used metonymically to refer to the entire rotary unit, including not only the shaft itself, but its bearings and anything attached to it (chuck, etc.). Spindles are electrically or pneumatically powered and come in various sizes. They are versatile in terms of the materials they can work with.
Materials that spindles work with include foam, glass, wood, and metal, as well as fabric in embroidery machines.\nA machine tool may have several spindles, such as the headstock and tailstock spindles on a bench lathe. The main spindle is usually the biggest one. References to \"the spindle\" without further qualification imply the main spindle. Some machine tools that specialize in high-volume mass production have a group of 4, 6, or even more main spindles. These are called multispindle machines. For example, gang drills and many screw machines are multispindle machines. Although a bench lathe has more than one spindle (counting the tailstock), it is not called a multispindle machine; it has one main spindle.\nExamples of spindles include\nHigh speed spindle.\nHigh speed spindles are used strictly in machines designed for metal work, such as CNC mills. There are two types of high speed spindles, each with a different design:\nBelt-driven spindle.\nConsisting of spindle and bearing shafts held within the spindle housing, the belt-driven spindle is powered by an external motor connected via a belt-pulley system.\nIntegral motor spindle.\nA main component of this spindle is the motor, stored internally.\nBoth types, the belt-driven and the integral motor spindles, have advantages and disadvantages according to their design. Which one is more desirable depends on the purpose of the machine and the product(s) being produced.\nCNC machines used with spindles.\nThe type of CNC machine used with a spindle varies. Common CNC machines used are:\nBecause a variety of CNC machines are available, it is important to choose one that fits the required specifications.", "Automation-Control": 0.9994801283, "Qwen2": "Yes"} {"id": "1850694", "revid": "42362362", "url": "https://en.wikipedia.org/wiki?curid=1850694", "title": "Drawing (manufacturing)", "text": "Drawing is a metalworking process that uses tensile forces to elongate metal, glass, or plastic.
As the material is drawn (pulled), it stretches and becomes thinner, achieving a desired shape and thickness. Drawing is classified into two types: sheet metal drawing and wire, bar, and tube drawing. Sheet metal drawing is defined as a plastic deformation over a curved axis. For wire, bar, and tube drawing, the starting stock is drawn through a die to reduce its diameter and increase its length. Drawing is usually performed at room temperature, thus classified as a cold working process; however, drawing may also be performed at higher temperatures to hot work large wires, rods, or hollow tubes in order to reduce forces. \nDrawing differs from rolling in that pressure is not applied by the turning action of a mill but instead depends on force applied locally near the area of compression. This means the maximal drawing force is limited by the tensile strength of the material, a fact particularly evident when drawing thin wires.\nThe starting point of cold drawing is hot-rolled stock of a suitable size.\nMetal.\nSuccessful drawing depends on the flow and stretch of the material. Steels, copper alloys, and aluminium alloys are commonly drawn metals.\nIn sheet metal drawing, as a die forms a shape from a flat sheet of metal (the \"blank\"), the material is forced to move and conform to the die. The flow of material is controlled through pressure applied to the blank and lubrication applied to the die or the blank. If the form moves too easily, wrinkles will occur in the part. To correct this, more pressure or less lubrication is applied to the blank to limit the flow of material and cause the material to stretch or set thin. If too much pressure is applied, the part will become too thin and break. Drawing metal requires finding the correct balance between wrinkles and breaking to achieve a successful part.\nSheet metal drawing becomes deep drawing when the workpiece is longer than its diameter. 
It is common that the workpiece is also processed using other forming processes, such as piercing, ironing, necking, rolling, and beading. In shallow drawing, the depth of drawing is less than the smallest dimension of the hole.\nBar, tube, and wire drawing all work upon the same principle: the starting stock is drawn through a die to reduce its diameter and increase its length. Usually, the die is mounted on a draw bench. The starting end of the workpiece is narrowed or pointed to get the end through the die. The end is then placed in grips which pull the rest of the workpiece through the die.\nDrawing can also be used to cold form a shaped cross-section. Cold drawn cross-sections are more precise and have a better surface finish than hot extruded parts. Inexpensive materials can be used instead of expensive alloys for strength requirements, due to work hardening. Bars or rods that are drawn cannot be coiled; therefore, straight-pull draw benches are used. Chain drives are used to draw workpieces up to . Hydraulic cylinders are used for shorter length workpieces. The reduction in area is usually restricted to between 20% and 50%, because greater reductions would exceed the tensile strength of the material, depending on its ductility. To achieve a certain size or shape, multiple passes through progressively smaller dies and intermediate anneals may be required. Tube drawing is very similar to bar drawing, except the beginning stock is a tube. It is used to decrease the diameter, improve surface finish, and improve dimensional accuracy. A mandrel may or may not be used depending on the specific process used. A floating plug may also be inserted into the inside diameter of the tube to control the wall thickness. Wire drawing has long been used to produce flexible metal wire by drawing the material through a series of dies of decreasing size. 
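The restricted reduction per pass is why a series of dies is needed. A rough sketch of the resulting pass count, assuming a fixed fractional area reduction per pass within the 20%–50% range stated above (the function name and example diameters are illustrative assumptions, and intermediate anneals are ignored):

```python
import math

def passes_needed(d_start, d_final, area_reduction_per_pass):
    """Estimate the number of die passes to draw round stock from diameter
    d_start down to d_final, if each pass removes a fixed fraction of the
    cross-sectional area (area scales with diameter squared)."""
    area_ratio = (d_final / d_start) ** 2   # A_final / A_start
    return math.ceil(math.log(area_ratio) / math.log(1 - area_reduction_per_pass))
```

For example, under these assumptions, drawing 6 mm rod down to 1 mm wire at a 35% area reduction per pass takes about 9 passes.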
These dies are manufactured from a number of materials, the most common being tungsten carbide and diamond.\nThe cold drawing process for steel bars and wire is as follows:\nGlass.\nSimilar drawing processes are applied in glassblowing and in making glass and plastic optical fiber.\nPlastics.\nPlastic drawing, sometimes referred to as \"cold drawing\", is the same process as used on metal bars, applied to plastics. Plastic drawing is primarily used in manufacturing plastic fibers. The process was discovered by Julian W. Hill in 1930 while trying to make fibers from an early polyester.\nIt is performed after the material has been \"spun\" into filaments; by extruding the polymer melt through pores of a spinneret. During this process, the individual polymer chains tend to somewhat align because of viscous flow. These filaments still have an amorphous structure, so they are drawn to align the fibers further, thus increasing crystallinity, tensile strength, and stiffness. This is done on a draw twister machine. For nylon, the fiber is stretched to four times its spun length. The crystals formed during drawing are held together by hydrogen bonds between the amide hydrogens of one chain and the carbonyl oxygens of another chain. Polyethylene terephthalate (PET) sheet is drawn in two dimensions to make BoPET (biaxially-oriented polyethylene terephthalate) with improved mechanical properties.", "Automation-Control": 0.6325421929, "Qwen2": "Yes"} {"id": "1850994", "revid": "1144556309", "url": "https://en.wikipedia.org/wiki?curid=1850994", "title": "Western Electric rules", "text": "The Western Electric rules are decision rules in statistical process control for detecting out-of-control or non-random conditions on control charts. Locations of the observations relative to the control chart control limits (typically at ±3 standard deviations) and centerline indicate whether the process in question should be investigated for assignable causes. 
The Western Electric rules were codified by a specially-appointed committee of the manufacturing division of the Western Electric Company and appeared in the first edition of its 1956 handbook, which became a standard text of the field. Their purpose was to ensure that line workers and engineers interpret control charts in a uniform way.\nMotivation.\nThe rules attempt to distinguish unnatural patterns from natural patterns based on several criteria:\nThe absence of points near the centerline is identified as a mixture pattern.\nThe absence of points near the control limits is identified as a stratification pattern.\nThe presence of points outside the control limits is identified as an instability pattern.\nOther unnatural patterns are categorized as systematic (autocorrelative), repetition, or trend patterns.\nThis classification divides the chart of observations into zones, measured in units of standard deviation (σ) between the centerline and control limits, as follows:\nZones A, B, and C are sometimes called the \"three sigma zone\", \"two sigma zone\", and \"one sigma zone\", respectively.\nZone rules.\nAn important aspect of the Western Electric rules is the zone rules, designed to detect process instability and the presence of assignable causes.\nData sets of observations are appraised by four basic rules that categorize the occurrence of data samples in a set of zones defined by multiples of the standard deviation.\nThese rules are evaluated for one side of the center line (one half of the control band) at a time (e.g., first the centerline to the upper control limit, then the centerline to the lower control limit).\nData satisfying any of these conditions as indicated by the control chart provide the justification for investigating the process to discover whether assignable causes are present and can be removed.
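The rate at which such investigations are triggered by pure chance follows directly from the normal distribution; a minimal sketch for the 3σ rule (the variable names are illustrative):

```python
import math

# False-alarm behaviour of the 3-sigma rule under normality (illustrative).
# P(|Z| > 3) for a standard normal Z, via the complementary error function:
p_outside_3sigma = math.erfc(3 / math.sqrt(2))

# Average run length (ARL): the expected number of in-control observations
# between false alarms, i.e. roughly one alarm per 370 observations.
arl_rule1 = 1 / p_outside_3sigma
```

The computed ARL of about 370 matches the figure quoted for Rule 1 in the text that follows; combining all four rules shortens the run length considerably.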
Note that there is always a possibility of false positives: Assuming observations are normally distributed, one expects Rule 1 to be triggered by chance one out of every 370 observations on average. The false alarm rate rises to one out of every 91.75 observations when evaluating all four rules.\nAsymmetric control limits.\nThe zone rules presented above apply to control charts with symmetric control limits. The handbook provides additional guidelines for control charts where the control limits are not symmetrical, as for R charts and p-charts.\nFor formula_1 and R charts (which plot the behavior of the subgroup range), the Handbook recommends using the zone rules above for subgroups of sufficient size (five or more). For small sample subgroups, the Handbook recommends:\nFor other control charts based on skewed distributions, the Handbook recommends:\nOther unnatural patterns.\nThe handbook also identifies patterns that require consideration of both the upper and lower halves of the control chart together for identification:", "Automation-Control": 0.6256068945, "Qwen2": "Yes"} {"id": "24271629", "revid": "37787117", "url": "https://en.wikipedia.org/wiki?curid=24271629", "title": "Local tangent space alignment", "text": "Local tangent space alignment (LTSA) is a method for manifold learning, which can efficiently learn a nonlinear embedding into low-dimensional coordinates from high-dimensional data, and can also reconstruct high-dimensional coordinates from embedding coordinates. It is based on the intuition that when a manifold is correctly unfolded, all of the tangent hyperplanes to the manifold will become aligned. It begins by computing the \"k\"-nearest neighbors of every point. It computes the tangent space at every point by computing the \"d\"-first principal components in each local neighborhood. 
It then optimizes to find an embedding that aligns the tangent spaces, but it ignores the label information conveyed by data samples, and thus can not be used for classification directly.", "Automation-Control": 0.9986326098, "Qwen2": "Yes"} {"id": "37463545", "revid": "39166520", "url": "https://en.wikipedia.org/wiki?curid=37463545", "title": "Energy-shaping control", "text": "Energy-shaping control for energy systems considers the plant and its controller as energy-transformation devices. The control strategy is formulated in terms of interconnection (in a power-preserving manner) in order to achieve a desired behavior.", "Automation-Control": 0.9999799132, "Qwen2": "Yes"} {"id": "64243298", "revid": "42342156", "url": "https://en.wikipedia.org/wiki?curid=64243298", "title": "Tidepool (company)", "text": "Tidepool is a nonprofit company founded in 2013 which makes open-source tools to help people better manage diabetes. The company works together with Medtronic to create an interoperable automated insulin pump system.", "Automation-Control": 0.9266896248, "Qwen2": "Yes"} {"id": "20108242", "revid": "1215485", "url": "https://en.wikipedia.org/wiki?curid=20108242", "title": "Trailer stability assist", "text": "Trailer Stability Assist (TSA), also known as Electronic Trailer Sway Control, is designed to control individual wheel slip to correct potential trailer swing before there is an accident. Although similar to Electronic Stability Control (ESC), TSA is programmed differently and is designed to detect yaw in the tow-vehicle and take specific corrective actions to eliminate trailer sway. 
Most ESC systems are not designed to detect such movement nor take the correct actions to control both trailer and tow-vehicle, so not all ESC-equipped vehicles have TSA capabilities.\nTSA systems detect when a trailer is starting to oscillate while under tow and correct any dangerous trailer swing through torque reduction, individual wheel braking, or both, to bring the trailer and tow-vehicle back under control. While towing heavy trailers, such as travel trailers, an unwanted wallow of the whole assembly may occur. Without the help of electronics, regaining stability requires focused attention by the driver.", "Automation-Control": 0.9868424535, "Qwen2": "Yes"} {"id": "42478623", "revid": "5042921", "url": "https://en.wikipedia.org/wiki?curid=42478623", "title": "Sparse matrix–vector multiplication", "text": "Sparse matrix–vector multiplication (SpMV) of the form is a widely used computational kernel existing in many scientific applications. The input matrix is sparse. The input vector and the output vector are dense. In the case of a repeated operation involving the same input matrix but possibly changing numerical values of its elements, can be preprocessed to reduce both the parallel and sequential run time of the SpMV kernel.", "Automation-Control": 0.6155353785, "Qwen2": "Yes"} {"id": "5971657", "revid": "37061325", "url": "https://en.wikipedia.org/wiki?curid=5971657", "title": "Bang–bang control", "text": "In control theory, a bang–bang controller (hysteresis, 2-step, or on–off controller) is a feedback controller that switches abruptly between two states. These controllers may be realized in terms of any element that provides hysteresis. They are often used to control a plant that accepts a binary input, for example a furnace that is either completely on or completely off. Most common residential thermostats are bang–bang controllers. The Heaviside step function in its discrete form is an example of a bang–bang control signal.
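A residential thermostat of the kind just described can be sketched as a two-state controller with a hysteresis band (the function name and the width of the band are illustrative assumptions):

```python
def bang_bang(temp, setpoint, gap, heating_on):
    """On-off controller with a hysteresis band of width `gap`
    centered on `setpoint` (all units illustrative)."""
    if temp <= setpoint - gap / 2:
        return True    # too cold: switch the heater on
    if temp >= setpoint + gap / 2:
        return False   # too warm: switch the heater off
    return heating_on  # inside the band: keep the previous state (hysteresis)
```

The hysteresis band is what prevents rapid on/off chatter around the setpoint: within the band the controller simply holds its last decision.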
Due to the discontinuous control signal, systems that include bang–bang controllers are variable structure systems, and bang–bang controllers are thus variable structure controllers.\nBang–bang solutions in optimal control.\nIn optimal control problems, it is sometimes the case that a control is restricted to be between a lower and an upper bound. If the optimal control switches from one extreme to the other (i.e., is strictly never in between the bounds), then that control is referred to as a bang-bang solution.\nBang–bang controls frequently arise in minimum-time problems. For example, if it is desired for a car starting at rest to arrive at a certain position ahead of the car in the shortest possible time, the solution is to apply maximum acceleration until the unique \"switching point\", and then apply maximum braking to come to rest exactly at the desired position.\nA familiar everyday example is bringing water to a boil in the shortest time, which is achieved by applying full heat, then turning it off when the water reaches a boil. A closed-loop household example is most thermostats, wherein the heating element or air conditioning compressor is either running or not, depending upon whether the measured temperature is above or below the setpoint.\nBang–bang solutions also arise when the Hamiltonian is linear in the control variable; application of Pontryagin's minimum or maximum principle will then lead to pushing the control to its upper or lower bound depending on the sign of the coefficient of \"u\" in the Hamiltonian.\nIn summary, bang–bang controls are actually \"optimal\" controls in some cases, although they are also often implemented because of simplicity or convenience.\nPractical implications of bang-bang control.\nMathematically or within a computing context there may\nbe no problems, but the physical realization of bang-bang\ncontrol systems gives rise to several complications. 
\nFirst, depending on the width of the hysteresis gap and inertia in the process, there will be an oscillating error signal around the desired set point value (e.g., temperature), often saw-tooth shaped. Room temperature may become uncomfortable just before the next switch 'ON' event. Alternatively, a narrow hysteresis gap will lead to frequent\non/off switching, which is often undesirable (e.g. an electrically ignited gas heater).\nSecond, the onset of the step function may entail, for example, a high electrical current and/or sudden heating and expansion of metal vessels, ultimately leading to metal fatigue or other wear-and-tear effects. Where possible, continuous control, such as in PID control will avoid problems caused by the brisk state transitions that are the consequence of bang-bang control.\nPulse-width modulation.\nA PID controller can send pulse-width modulation control signals that reduce switching of motors, solenoids, etc. by setting \"minimum ON times\" and \"minimum OFF times\".", "Automation-Control": 0.9898260236, "Qwen2": "Yes"} {"id": "63686034", "revid": "19502098", "url": "https://en.wikipedia.org/wiki?curid=63686034", "title": "Giorgio Quazza Medal", "text": "The Giorgio Quazza Medal is an award given by the International Federation of Automatic Control (IFAC) to a distinguished control engineer, presented at each IFAC Triennial International World Congress. It was established in 1979, as a memorial to the late Giorgio Quazza, a leading Italian electrical and control engineer who served IFAC in many capacities in a most distinguished manner. 
The award is given for \"outstanding lifetime contributions of a researcher and/or engineer to conceptual foundations in the field of systems and control.\"", "Automation-Control": 0.9999858737, "Qwen2": "Yes"} {"id": "45059593", "revid": "38033597", "url": "https://en.wikipedia.org/wiki?curid=45059593", "title": "Iron roughneck", "text": "An iron roughneck is a piece of hydraulic machinery used to \"handle\" (connect and disconnect) segments of pipe in a modern drilling rig. The segments can be manipulated as they are hoisted into and out of a borehole. This type of work was previously performed manually by workers using tongs, and was one of the most dangerous jobs in a drilling operation. However, with iron roughnecks and modern technology, much of this can be done remotely with minimal manual handling.\nAutomated roughnecks became common in deep-water drilling and were later adopted by onshore rigs.", "Automation-Control": 0.7785117626, "Qwen2": "Yes"} {"id": "39335150", "revid": "9155723", "url": "https://en.wikipedia.org/wiki?curid=39335150", "title": "Bending machine (manufacturing)", "text": "A bending machine is a forming machine tool (DIN 8586). Its purpose is to assemble a bend on a workpiece. A bend is manufactured by using a bending tool during a linear or rotating move.\nThe detailed classification can be done with the help of the kinematics.\nCNC bending.\nCNC bending machines are developed for high flexibility and low setup times. Those machines are able to bend single pieces as well as small batches with the same precision and efficiency as series-produced parts in an economical way.\nUniversal bending machines – modular construction.\nUniversal bending machines consists of a basic machine that can be adjusted with little effort and used for a variety of bends. A simple plug-in system supports quick and easy exchange of tools.\nThe basic machine consists of a CNC-operated side stop, a work bench, and software for programming and operating. 
Its modular construction offers an affordable entry into bending technology, because after an initial investment the machine can be customized and extended later without any conversion. That means the basic machine delivers a bending stroke, and the tool determines the kind of bending.\nBending tools.\nBending tools are classified by the kind of bend they generate. They can be constructed to adjust the bending angle by reference, stroke measurement, or angle measurement.\nCNC machines usually dispense with a reference part; they achieve high bending accuracy starting with the first work piece.\nStandard bends.\nAll bends without an extraordinary geometry belong to standard bends. The distance between a bend and the material end is large enough to provide an adequate bearing area, as is the distance from one bend to the next.\nTypical tools are a so-called bending former combined with either a prism with electronic angular measurement or an ordinary prism.\nU-bending.\nFor U-bends where tight and narrow bends are necessary, the bending former is replaced by a bending mandrel, which has a narrow geometry.\nOffset bending.\nOffset bending tools are used to assemble two bends with a small distance between them in one step.\nEdgewise bending.\nEdge bending tools are used if the bending axis is placed parallel to the tight side of the work piece. Tools for bending on edge may include electronic angular measurement, allowing a high bending accuracy.\nTorsion bending.\nTorsion tools are able to rotate the workpiece on its longitudinal axis. Alternatives are complex assembly groups with standard bends.\nAngular measurement and spring back compensation.\nFor producing single pieces as well as small batches with the same precision and efficiency as series-produced parts, spring back compensation is helpful.
A bending accuracy of +/- 0.2° starting from the first work piece is achieved through calculated spring back compensation and the use of electronic tools.\nOperating mode angular measurement.\nBending prisms with electronic angular measurement technology are equipped with two flattened bending bolts. These bolts rotate while bending, supplying a signal to the angle measurement. The measuring accuracy is about 0.1°. The computer then calculates the required final stroke, and the spring back of every bend is compensated regardless of material type. A high angle accuracy of +/- 0.2° is achieved instantly with the first workpiece, without adjustments. Compared to adjustment by reference, material waste is decreased, because even inconsistencies within a single piece of material are automatically compensated.\nOperating mode stroke measurement.\nWherever bending prisms with electronic angular measurement are not suitable (a small distance between the bends might be a reason), bending prisms without electronic angle measurement are applied.\nIn that case the control unit can be switched from angular measurement to stroke measurement. This method allows the pre-selection of the stroke of the bending ram in mm, and therefore the immersion depth of the punch into the prism. Setting accuracy is +/- 0.1 mm. A final stroke is usually not required. A further development of the stroke system enables the user to specify an angle from which the stroke is calculated using stored stroke functions. Bending accuracy in that case depends on material properties such as thickness and hardness, which may differ from one work piece to another.\nProgramming and principle of operation.\nProgramming is done on a PC equipped with dedicated software, which is part of the machine or connected to an external workstation. For generating a new program, engineering data can be imported or entered via mouse and keyboard.
Thanks to a graphical, menu-driven user interface, no previous CNC programming skills are required. The software asks for all necessary values and checks all figures. Inputs can be corrected at any time, and minimum distances are checked instantly to guard against improper inputs. The software automatically calculates the flat length of each part being bent and determines the exact position of the side stop. The part is shown on a screen.\nIdeally each program is stored in one database, so programs are easy to recover with search and sort functions.\nNetworking with the whole production line.\nA lot of organizational effort and interface management is saved if the CNC bending machine is connected to the previous and subsequent processes. For a connection to other machines and external workstations, corporate interfaces have to be established.\nNetworking with a punching machine.\nIf a part is bent, in most cases a prior process inserted holes so that the part can be mounted in an assembly group.\nTherefore, a punching machine is an option. Some programs enable the operator to program both steps with one software tool.", "Automation-Control": 0.8224414587, "Qwen2": "Yes"} {"id": "38464460", "revid": "1142057972", "url": "https://en.wikipedia.org/wiki?curid=38464460", "title": "Lyapunov optimization", "text": "This article describes Lyapunov optimization for dynamical systems. It gives an example application to optimal control in queueing networks.\nIntroduction.\nLyapunov optimization refers to the use of a Lyapunov function to optimally control a dynamical system. Lyapunov functions are used extensively in control theory to ensure different forms of system stability. The state of a system at a particular time is often described by a multi-dimensional vector. A Lyapunov function is a nonnegative scalar measure of this multi-dimensional state. Typically, the function is defined to grow large when the system moves towards undesirable states.
System stability is achieved by taking control actions that make the Lyapunov function drift in the negative direction towards zero.\nLyapunov drift is central to the study of optimal control in queueing networks. A typical goal is to stabilize all network queues while optimizing some performance objective, such as minimizing average energy or maximizing average throughput. Minimizing the drift of a quadratic Lyapunov function leads to the\nbackpressure routing algorithm for network stability, also called the \"max-weight algorithm\".\nAdding a weighted penalty term to the Lyapunov drift and minimizing the sum leads to the drift-plus-penalty algorithm for joint network stability and penalty minimization. The drift-plus-penalty procedure can also be used to compute solutions to convex programs and linear programs.\nLyapunov drift for queueing networks.\nConsider a queueing network that evolves in discrete time with normalized time slots formula_1 Suppose there are formula_2 queues in the network, and define the vector of queue backlogs at time formula_3 by:\nQuadratic Lyapunov functions.\nFor each slot formula_5 define:\nThis function is a scalar measure of the total queue backlog in the network. It is called \"quadratic Lyapunov function\" on the queue state. 
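A minimal numerical sketch of this quadratic Lyapunov function on a two-queue example, together with a queue update of the common form Q_i(t+1) = max(Q_i(t) − b_i(t), 0) + a_i(t) (an assumption matching standard formulations; all the specific numbers are illustrative):

```python
def lyapunov(Q):
    """Quadratic Lyapunov function L(Q) = (1/2) * sum_i Q_i^2."""
    return 0.5 * sum(q * q for q in Q)

def queue_update(Q, arrivals, service):
    """One slot of the assumed backlog dynamics:
    Q_i(t+1) = max(Q_i(t) - b_i(t), 0) + a_i(t)."""
    return [max(q - b, 0.0) + a for q, a, b in zip(Q, arrivals, service)]

# Drift over one slot for an illustrative two-queue state: when service
# exceeds arrivals, the drift is negative and backlogs shrink toward zero.
Q0 = [3.0, 4.0]
Q1 = queue_update(Q0, arrivals=[1.0, 0.0], service=[2.0, 1.0])
drift = lyapunov(Q1) - lyapunov(Q0)
```

Here the drift is negative because total service outweighs total arrivals in this slot; control algorithms built on this idea choose actions each slot to make (a bound on) this drift as negative as possible.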
Define the \"Lyapunov drift\" as the change in this function from one slot to the next:\nBounding the Lyapunov drift.\nSuppose the queue backlogs change over time according to the following equation:\nwhere formula_9 and formula_10 are arrivals and service opportunities, respectively, in queue formula_11 on slot formula_12 This equation can be used to compute a bound on the Lyapunov drift for any slot t:\nRearranging this inequality, summing over all formula_14 and dividing by 2 leads to:\nwhere:\nSuppose the second moments of arrivals and service in each queue are bounded, so that there is a finite constant formula_17 such that for all formula_3 and all possible queue vectors formula_19 the following property holds:\nTaking conditional expectations of (Eq. 1) leads to the following bound on the \"conditional expected Lyapunov drift\":\nA basic Lyapunov drift theorem.\nIn many cases, the network can be controlled so that the difference between arrivals and service at each queue satisfies the following property for some real number formula_22:\nIf the above holds for the same epsilon for all queues formula_14 all slots formula_5 and all possible vectors formula_26 then (Eq. 2) reduces to the drift condition used in the following Lyapunov drift theorem. The theorem below can be viewed as a variation on Foster's theorem for Markov chains. However, it does not require a Markov chain structure.\nProof. Taking expectations of both sides of the drift inequality and using the law of iterated expectations yields:\nSumming the above expression over formula_34 and using the law of telescoping sums gives:\nUsing the fact that formula_36 is non-negative and rearranging the terms in the above expression proves the result.\nLyapunov optimization for queueing networks.\nConsider the same queueing network as in the above section. 
Now define formula_37 as a \"network penalty\" incurred on slot formula_12 Suppose the goal is to stabilize the queueing network while minimizing the time average of formula_39 For example, to stabilize the network while minimizing time average power, formula_37 can be defined as the total power incurred by the network on slot t. To treat problems of maximizing the time average of some desirable \"reward\" formula_41 the penalty can be defined as formula_42 This is useful for maximizing network throughput utility subject to stability.\nTo stabilize the network while minimizing the time average of the penalty formula_43 network algorithms can be designed to make control actions that greedily minimize a bound on the following drift-plus-penalty expression on each slot formula_3:\nwhere formula_46 is a non-negative weight that is chosen as desired to affect a performance tradeoff. A key feature of this approach is that it typically does not require knowledge of the probabilities of the random network events (such as random job arrivals or channel realizations). Choosing formula_47 reduces to minimizing a bound on the drift every slot and, for routing in multi-hop queueing networks, reduces to the backpressure routing algorithm developed by Tassiulas and Ephremides. Using formula_48 and defining formula_37 as the network power use on slot formula_3 leads to the drift-plus-penalty algorithm for minimizing average power subject to network stability developed by Neely. Using formula_48 and using formula_37 as the negative of an admission control utility metric leads to the drift-plus-penalty algorithm for joint flow control and network routing developed by Neely, Modiano, and Li.\nA generalization of the Lyapunov drift theorem of the previous section is important in this context. For simplicity of exposition, assume formula_37 is bounded from below:\nFor example, the above is satisfied with formula_55 in cases when the penalty formula_37 is always non-negative.
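A sketch of the greedy rule, under the common assumption that the bound minimized on each slot has the form V·p(a) + sum_i Q_i(t)·(arrival_i(a) − service_i(a)); the action set, penalty, and rate functions below are hypothetical placeholders:

```python
# Greedy drift-plus-penalty rule: on each slot, pick the control action
# minimizing V*penalty(a) + sum_i Q_i(t)*(arrival_i(a) - service_i(a)).
# The actions, penalty, and rate functions are hypothetical inputs.

def dpp_action(queues, actions, penalty, arrivals, services, V):
    def score(a):
        return V * penalty(a) + sum(
            q * (arr - srv)
            for q, arr, srv in zip(queues, arrivals(a), services(a)))
    return min(actions, key=score)
```

Raising V places more weight on the penalty and less on draining queues, which is the performance tradeoff described above.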
Let formula_57 represent a desired target for the time average of formula_39 Let formula_46 be a parameter used to weight the importance of meeting the target. The following theorem shows that if a drift-plus-penalty condition is met, then the time average penalty is at most O(1/V) above the desired target, while average queue size is O(V). The formula_46 parameter can be tuned to make time average penalty as close to (or below) the target as desired, with a corresponding queue size tradeoff.\nProof. Taking expectations of both sides of the posited drift-plus-penalty and using the law of iterated expectations we have:\nSumming the above over the first formula_3 slots and using the law of telescoping sums gives:\nDividing by formula_72 and rearranging terms proves the time average penalty bound. A similar argument proves the time average queue size bound.", "Automation-Control": 0.6061122417, "Qwen2": "Yes"} {"id": "26597035", "revid": "7918", "url": "https://en.wikipedia.org/wiki?curid=26597035", "title": "Pairwise summation", "text": "In numerical analysis, pairwise summation, also called cascade summation, is a technique to sum a sequence of finite-precision floating-point numbers that substantially reduces the accumulated round-off error compared to naively accumulating the sum in sequence. Although there are other techniques such as Kahan summation that typically have even smaller round-off errors, pairwise summation is nearly as good (differing only by a logarithmic factor) while having much lower computational cost—it can be implemented so as to have nearly the same cost (and exactly the same number of arithmetic operations) as naive summation.\nIn particular, pairwise summation of a sequence of \"n\" numbers \"xn\" works by recursively breaking the sequence into two halves, summing each half, and adding the two sums: a divide and conquer algorithm. 
Its worst-case roundoff errors grow asymptotically as at most \"O\"(ε log \"n\"), where ε is the machine precision (assuming a fixed condition number, as discussed below). In comparison, the naive technique of accumulating the sum in sequence (adding each \"xi\" one at a time for \"i\" = 1, ..., \"n\") has roundoff errors that grow at worst as \"O\"(ε\"n\"). Kahan summation has a worst-case error of roughly \"O\"(ε), independent of \"n\", but requires several times more arithmetic operations. If the roundoff errors are random, and in particular have random signs, then they form a random walk and the error growth is reduced to an average of formula_1 for pairwise summation.\nA very similar recursive structure of summation is found in many fast Fourier transform (FFT) algorithms, and is responsible for the same slow roundoff accumulation of those FFTs.\nThe algorithm.\nIn pseudocode, the pairwise summation algorithm for an array \"x\" of length \"n\" ≥ 0 can be written:\n \"s\" = pairwise(\"x\"[1…\"n\"])\n if \"n\" ≤ \"N\" \"base case: naive summation for a sufficiently small array\"\n \"s\" = 0\n for \"i\" = 1 to \"n\"\n \"s\" = \"s\" + \"x\"[\"i\"]\n else \"divide and conquer: recursively sum two halves of the array\"\n \"m\" = floor(\"n\" / 2)\n \"s\" = pairwise(\"x\"[1…\"m\"]) + pairwise(\"x\"[\"m\"+1…\"n\"])\n end if\nFor some sufficiently small \"N\", this algorithm switches to a naive loop-based summation as a base case, whose error bound is \"O\"(\"N\"ε). The entire sum has a worst-case error that grows asymptotically as \"O\"(ε log \"n\") for large \"n\", for a given condition number (see below).\nIn an algorithm of this sort (as for divide and conquer algorithms in general), it is desirable to use a larger base case in order to amortize the overhead of the recursion. If \"N\" = 1, then there is roughly one recursive subroutine call for every input, but more generally there is one recursive call for (roughly) every \"N\"/2 inputs if the recursion stops at exactly \"n\" = \"N\".
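The pseudocode above can be rendered directly in Python (the default base-case threshold N=128 is an arbitrary illustrative choice):

```python
# Recursive pairwise summation following the pseudocode above, with a
# naive-loop base case for blocks of at most N elements.

def pairwise_sum(x, N=128):
    n = len(x)
    if n <= N:                      # base case: naive summation
        s = 0.0
        for xi in x:
            s += xi
        return s
    m = n // 2                      # divide and conquer: sum two halves
    return pairwise_sum(x[:m], N) + pairwise_sum(x[m:], N)
```

Regardless of N, the routine performs the same n−1 additions as a naive loop; only the order of the additions changes.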
By making \"N\" sufficiently large, the overhead of recursion can be made negligible (precisely this technique of a large base case for recursive summation is employed by high-performance FFT implementations).\nRegardless of \"N\", exactly \"n\"−1 additions are performed in total, the same as for naive summation, so if the recursion overhead is made negligible then pairwise summation has essentially the same computational cost as for naive summation.\nA variation on this idea is to break the sum into \"b\" blocks at each recursive stage, summing each block recursively, and then summing the results, which was dubbed a \"superblock\" algorithm by its proposers. The above pairwise algorithm corresponds to \"b\" = 2 for every stage except for the last stage which is \"b\" = \"N\".\nAccuracy.\nSuppose that one is summing \"n\" values \"x\"\"i\", for \"i\" = 1, ..., \"n\". The exact sum is:\n(computed with infinite precision).\nWith pairwise summation for a base case \"N\" = 1, one instead obtains formula_3, where the error formula_4 is bounded above by:\nwhere ε is the machine precision of the arithmetic being employed (e.g. ε ≈ 10−16 for standard double precision floating point). Usually, the quantity of interest is the relative error formula_6, which is therefore bounded above by:\nIn the expression for the relative error bound, the fraction (Σ|\"xi\"|/|Σ\"xi\"|) is the condition number of the summation problem. Essentially, the condition number represents the \"intrinsic\" sensitivity of the summation problem to errors, regardless of how it is computed. The relative error bound of \"every\" (backwards stable) summation method by a fixed algorithm in fixed precision (i.e. not those that use arbitrary-precision arithmetic, nor algorithms whose memory and time requirements change based on the data), is proportional to this condition number. 
An \"ill-conditioned\" summation problem is one in which this ratio is large, and in this case even pairwise summation can have a large relative error. For example, if the summands \"xi\" are uncorrelated random numbers with zero mean, the sum is a random walk and the condition number will grow proportional to formula_8. On the other hand, for random inputs with nonzero mean the condition number asymptotes to a finite constant as formula_9. If the inputs are all non-negative, then the condition number is 1.\nNote that the formula_10 denominator is effectively 1 in practice, since formula_11 is much smaller than 1 until \"n\" becomes of order 21/ε, which is roughly 101015 in double precision.\nIn comparison, the relative error bound for naive summation (simply adding the numbers in sequence, rounding at each step) grows as formula_12 multiplied by the condition number. In practice, it is much more likely that the rounding errors have a random sign, with zero mean, so that they form a random walk; in this case, naive summation has a root mean square relative error that grows as formula_13 and pairwise summation has an error that grows as formula_1 on average.\nSoftware implementations.\nPairwise summation is the default summation algorithm in NumPy and the Julia technical-computing language, where in both cases it was found to have comparable speed to naive summation (thanks to the use of a large base case).\nOther software implementations include the HPCsharp library for the C Sharp language and the standard library summation in D.", "Automation-Control": 0.9257591367, "Qwen2": "Yes"} {"id": "2301590", "revid": "34412543", "url": "https://en.wikipedia.org/wiki?curid=2301590", "title": "NC-CAM", "text": "NC-CAM is a computer-aided manufacturing software program introduced in 1989, and used by printed circuit board manufacturers to create, modify, and optimize the CNC program files used by printed circuit board drilling and routing machines. 
In particular, NC-CAM is used to optimize the RS-274C Excellon format files used to program Excellon, Hitachi and other printed circuit board drilling and routing machines.\nNC-CAM was first developed for MS-DOS by Robert Henningsgard, and it is today developed and supplied for Microsoft Windows by FASTechnologies, Corp. of Big Lake, Minnesota, USA.", "Automation-Control": 0.9783972502, "Qwen2": "Yes"} {"id": "2057699", "revid": "39166520", "url": "https://en.wikipedia.org/wiki?curid=2057699", "title": "Stable polynomial", "text": "In the context of the characteristic polynomial of a differential equation or difference equation, a polynomial is said to be stable if either:\nall of its roots lie in the open left half-plane (every root has negative real part), or\nall of its roots lie in the open unit disk (every root has modulus less than one).\nThe first condition provides stability for continuous-time linear systems, and the second case relates to stability of discrete-time linear systems. A polynomial with the first property is called at times a Hurwitz polynomial and with the second property a Schur polynomial. Stable polynomials arise in control theory and in mathematical theory of differential and difference equations. A linear, time-invariant system (see LTI system theory) is said to be BIBO stable if every bounded input produces bounded output. A linear system is BIBO stable if its characteristic polynomial is stable. The denominator is required to be Hurwitz stable if the system is in continuous-time and Schur stable if it is in discrete-time. In practice, stability is determined by applying any one of several stability criteria.", "Automation-Control": 0.999972105, "Qwen2": "Yes"} {"id": "45574862", "revid": "36440400", "url": "https://en.wikipedia.org/wiki?curid=45574862", "title": "Additive state decomposition", "text": "Additive state decomposition occurs when a system is decomposed into two or more subsystems with the same dimension as that of the original system.
A commonly used decomposition in the control field is to decompose a system into two or more lower-order subsystems, called lower-order subsystem decomposition here. In contrast, additive state decomposition is to decompose a system into two or more subsystems with the same dimension as that of the original system.\nTaking a system for example, it is decomposed into two subsystems: and , where and , respectively. The lower-order subsystem decomposition satisfies\nBy contrast, the additive state decomposition satisfies\nOn a dynamical control system.\nConsider an 'original' system as follows:\nwhere formula_3.\nFirst, a 'primary' system is brought in, having the same dimension as the original system:\nwhere formula_4\nFrom the original system and the primary system, the following 'secondary' system is derived:\nNew variables formula_6 are defined as follows:\nThen the secondary system can be further written as follows:\nFrom the definition , it follows\nThe process is shown in this picture:\nExamples.\nExample 1.\nIn fact, the idea of the additive state decomposition has been implicitly mentioned in existing literature. An existing example is the tracking controller design, which often requires a reference system to derive error dynamics. The reference system (primary system) is assumed to be given as follows:\nBased on the reference system, the error dynamics (secondary system) are derived as follows:\nwhere formula_13\nThis is a commonly used step to transform a tracking problem to a stabilization problem when adaptive control is used.\nExample 2.\nConsider a class of systems as follows:\nChoose as the original system and design the primary system as follows:\nThen the secondary system is determined by the rule :\nBy additive state decomposition\nSince\nthe tracking error can be analyzed by and separately. If and are bounded and small, then so is . 
Fortunately, note that is a linear time-invariant system and is independent of the secondary system , for the analysis of which many tools such as the transfer function are available. By contrast, the transfer function tool cannot be directly applied to the original system as it is time-varying.\nExample 3.\nConsider a class of nonlinear systems as follows:\nwhere represent the state, output and input, respectively; the function is nonlinear. The objective is to design such that as . Choose as the original system and design the primary system as follows:\nThen the secondary system is determined by the rule :\nwhere . Then and\n. Here, the task is assigned to the linear time-invariant system (a linear time-invariant system being simpler than a nonlinear one). On the other hand, the task is assigned to the nonlinear system (a stabilizing control problem is simpler than a tracking problem). If the two tasks are accomplished, then . The basic idea is to decompose an original system into two subsystems in charge of simpler subtasks. Then one designs controllers for two subtasks, and finally combines them to achieve the original control task. The process is shown in this picture:\nComparison with superposition principle.\nA well-known example implicitly using additive state decomposition is the superposition principle, widely used in physics and engineering. The superposition principle states: For all linear systems, the net response at a given place and time caused by two or more stimuli is the sum of the responses which would have been caused by each stimulus individually. For a simple linear system:\nthe statement of the superposition principle means , where\nObviously, this result can also be derived from the additive state decomposition.
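For a concrete check of the superposition statement, consider a scalar linear system x[k+1] = a·x[k] + b·u[k] with zero initial state (the system and inputs below are a minimal hypothetical example):

```python
# Superposition check for a simple scalar linear system
# x[k+1] = a*x[k] + b*u[k], starting from zero initial state.

def response(u, a=0.5, b=1.0):
    x, ys = 0.0, []
    for uk in u:
        x = a * x + b * uk
        ys.append(x)
    return ys

u1 = [1.0, 0.0, 0.0, 0.0]
u2 = [0.0, 2.0, 0.0, 0.0]
combined = response([p + q for p, q in zip(u1, u2)])
summed = [p + q for p, q in zip(response(u1), response(u2))]
# combined equals summed: the response to u1 + u2 is the sum of the
# responses to u1 and u2 individually.
```

With a = 0.5 and these inputs all intermediate values are exact binary fractions, so the two sequences match exactly rather than merely up to rounding.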
Moreover, the superposition principle and additive state decomposition have the following relationship.\nFrom Table 1, additive state decomposition can be applied not only to linear systems but also nonlinear systems.\nApplications.\nAdditive state decomposition is used in stabilizing control, and can be extended to additive output decomposition.", "Automation-Control": 0.9999710917, "Qwen2": "Yes"} {"id": "552466", "revid": "7903804", "url": "https://en.wikipedia.org/wiki?curid=552466", "title": "System identification", "text": "The field of system identification uses statistical methods to build mathematical models of dynamical systems from measured data. System identification also includes the optimal design of experiments for efficiently generating informative data for fitting such models as well as model reduction. A common approach is to start from measurements of the behavior of the system and the external influences (inputs to the system) and try to determine a mathematical relation between them without going into many details of what is actually happening inside the system; this approach is called black box system identification.\nOverview.\nA dynamic mathematical model in this context is a mathematical description of the dynamic behavior of a system or process in either the time or frequency domain. Examples include:\nOne of the many possible applications of system identification is in control systems. For example, it is the basis for modern data-driven control systems, in which concepts of system identification are integrated into the controller design, and lay the foundations for formal controller optimality proofs.\nInput-output vs output-only.\nSystem identification techniques can utilize both input and output data (e.g. eigensystem realization algorithm) or can include only the output data (e.g. frequency domain decomposition). 
Typically an input-output technique would be more accurate, but the input data is not always available.\nOptimal design of experiments.\nThe quality of system identification depends on the quality of the inputs, which are under the control of the systems engineer. Therefore, systems engineers have long used the principles of the design of experiments. In recent decades, engineers have increasingly used the theory of optimal experimental design to specify inputs that yield maximally precise estimators.\nWhite- and black-box.\nOne could build a so-called white-box model based on first principles, e.g. a model for a physical process from the Newton equations, but in many cases, such models will be overly complex and possibly even impossible to obtain in reasonable time due to the complex nature of many systems and processes.\nA much more common approach is therefore to start from measurements of the behavior of the system and the external influences (inputs to the system) and try to determine a mathematical relation between them without going into the details of what is actually happening inside the system. This approach is called system identification. Two types of models are common in the field of system identification:\nIn the context of nonlinear system identification Jin et al. describe grey-box modeling by assuming a model structure a priori and then estimating the model parameters. Parameter estimation is relatively easy if the model form is known but this is rarely the case. Alternatively, the structure or model terms for both linear and highly complex nonlinear models can be identified using NARMAX methods. This approach is completely flexible and can be used with grey box models where the algorithms are primed with the known terms, or with completely black-box models where the model terms are selected as part of the identification procedure. 
Another advantage of this approach is that the algorithms will just select linear terms if the system under study is linear, and nonlinear terms if the system is nonlinear, which allows a great deal of flexibility in the identification.\nIdentification for control.\nIn control systems applications, the objective of engineers is to obtain a good performance of the closed-loop system, which is the one comprising the physical system, the feedback loop and the controller. This performance is typically achieved by designing the control law relying on a model of the system, which needs to be identified starting from experimental data. If the model identification procedure is aimed at control purposes, what really matters is not to obtain the best possible model that fits the data, as in the classical system identification approach, but to obtain a model satisfying enough for the closed-loop performance. This more recent approach is called identification for control, or I4C in short.\nThe idea behind I4C can be better understood by considering the following simple example. Consider a system with \"true\" transfer function formula_1:\nand an identified model formula_3:\nFrom a classical system identification perspective, formula_3 is \"not\", in general, a \"good\" model for formula_1. In fact, modulus and phase of formula_3 are different from those of formula_1 at low frequency. What is more, while formula_1 is an asymptotically stable system, formula_3 is a simply stable system. However, formula_3 may still be a model good enough for control purposes. In fact, if one wants to apply a purely proportional negative feedback controller with high gain formula_12, the closed-loop transfer function from the reference to the output is, for formula_1\nand for formula_3\nSince formula_12 is very large, one has that formula_18. Thus, the two closed-loop transfer functions are indistinguishable. 
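Since the transfer functions formula_1 and formula_3 are not reproduced in the text, the high-gain argument can be illustrated with hypothetical stand-in frequency-response values: for any moderate plant gain G, the closed loop k·G/(1 + k·G) approaches 1 as the proportional gain k grows, so two quite different plants yield nearly identical closed loops.

```python
# High-gain proportional feedback: closed-loop response k*G/(1 + k*G)
# approaches 1 for large k, regardless of moderate plant differences.
# The two plant gains below are hypothetical stand-ins for the "true"
# system and the identified model at some frequency.

def closed_loop(G, k):
    return k * G / (1 + k * G)

G_true, G_model = 2.0, 0.5   # hypothetical frequency-response values
k = 1000.0
# closed_loop(G_true, k) and closed_loop(G_model, k) both lie within a
# fraction of a percent of 1, despite the 4x open-loop mismatch.
```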
In conclusion, formula_3 is a \"perfectly acceptable\" identified model for the \"true\" system if such a feedback control law is to be applied. Whether or not a model is \"appropriate\" for control design depends not only on the plant/model mismatch but also on the controller that will be implemented. As such, in the I4C framework, given a control performance objective, the control engineer has to design the identification phase in such a way that the performance achieved by the model-based controller on the \"true\" system is as high as possible.\nSometimes, it is even more convenient to design a controller without explicitly identifying a model of the system, but directly working on experimental data. This is the case of \"direct\" data-driven control systems.\nForward model.\nA common understanding in Artificial Intelligence is that the controller has to generate the next move for a robot. For example, the robot starts in a maze and then decides to move forward. Model predictive control determines the next action indirectly. The term “model” refers to a forward model, which does not prescribe the correct action but simulates a scenario. A forward model is analogous to a physics engine used in game programming. The model takes an input and calculates the future state of the system.\nThe reason dedicated forward models are constructed is that they allow the overall control process to be divided. The first question is how to predict the future states of the system. That is, to simulate the plant over a timespan for different input values. The second task is to search for a sequence of input values which brings the plant into a goal state. This is called predictive control.\nThe forward model is the most important aspect of an MPC controller. It has to be created before the solver can be realized. If the behavior of the system is unclear, it is not possible to search for meaningful actions.
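A minimal sketch of this two-part division (simulate with the forward model, then search over action sequences), with a toy one-dimensional plant assumed purely for illustration:

```python
# Sketch of predictive control with a forward model: simulate each
# candidate action sequence, then pick the one whose predicted final
# state is closest to the goal. The plant here is a toy 1-D point that
# moves by the chosen action each step (an assumed example).

import itertools

def forward_model(state, actions):
    """Predict the state after applying a sequence of actions."""
    for a in actions:
        state = state + a
    return state

def plan(state, goal, horizon=3, choices=(-1, 0, 1)):
    best = min(itertools.product(choices, repeat=horizon),
               key=lambda seq: abs(forward_model(state, seq) - goal))
    return best[0]   # execute only the first action (receding horizon)
```

Only the first action of the best sequence is executed; the search is repeated at the next step, which is the receding-horizon pattern of model predictive control.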
The workflow for creating a forward model is called system identification. The idea is to formalize a system in a set of equations which will behave like the original system. The error between the real system and the forward model can be measured.\nThere are many techniques available to create a forward model: ordinary differential equations are the classical approach, used in physics engines such as Box2D; a more recent technique is to train a neural network as the forward model.", "Automation-Control": 0.9798692465, "Qwen2": "Yes"} {"id": "24355140", "revid": "7903804", "url": "https://en.wikipedia.org/wiki?curid=24355140", "title": "Multiaxis machining", "text": "Multiaxis machining is a manufacturing process that involves tools that move in 4 or more directions and are used to manufacture parts out of metal or other materials by milling away excess material, by water jet cutting or by laser cutting. This type of machining was originally performed mechanically on large complex machines. These machines operated on 4, 5, 6, and even 12 axes which were controlled individually via levers that rested on cam plates. The cam plates offered the ability to control the tooling device, the table on which the part is secured, as well as rotating the tooling or part within the machine. Due to the machines' size and complexity, it took extensive amounts of time to set them up for production. Once computer numerically controlled machining was introduced, it provided a faster, more efficient method for machining complex parts.\nTypical CNC tools support translation in 3 axes; multiaxis machines also support rotation around one or more axes.
5-axis machines are commonly used in industry; in these, the workpiece is translated linearly along three axes (typically x, y, and z) and the tooling spindle is capable of rotation about an additional 2 axes.\nThere are now many CAM (computer aided manufacturing) software systems available to support multiaxis machining including software that can automatically convert 3-axis toolpaths into 5-axis toolpaths. Prior to the advancement of Computer Aided Manufacturing, transferring information from design to production often required extensive manual labor, generating errors and resulting in wasted time and material.\nThere are three main components to multiaxis machines:\nMultiaxis machines offer several improvements over other CNC tools, at the cost of increased complexity and price of the machine:\nThe number of axes for multiaxis machines varies from 4 to 9. Each axis of movement is implemented either by moving the table (to which the workpiece is attached), or by moving the tool. The actual configuration of axes varies; therefore, machines with the same number of axes can differ in the movements that can be performed.\nApplications.\nMultiaxis CNC machines are used in many industries including:\nMultiaxis machining is also commonly used for rapid prototyping as it can create strong, high quality models out of metal, plastic, and wood while still being easily programmable.\nComputer-aided manufacturing (CAM) software.\nCAM software automates the process of converting 3D models into tool paths, the route the multiaxis machine takes to mill a part (Fig. 1). This software takes into account the different parameters of the tool head (in the case of a CNC router, this would be the bit size), dimensions of the blank, and any constraints the machine may have. The tool paths for multiple passes can be generated to produce a higher level of detail on the parts.
The first few passes remove large amounts of material, while the final, most important pass creates the surface finish. In the case of the CNC lathe, the CAM software will optimize the tool path to have the central axis of the part align with the rotary axis of the lathe. Once the tool paths have been generated, the CAM software will convert them into G-code, allowing the CNC machine to begin milling.\nCAM software is currently the limiting factor in the capabilities of multiaxis machines and remains under ongoing development. Recent breakthroughs in this space include:", "Automation-Control": 0.998963654, "Qwen2": "Yes"} {"id": "1611899", "revid": "20841863", "url": "https://en.wikipedia.org/wiki?curid=1611899", "title": "Enterprise messaging system", "text": "An enterprise messaging system (EMS) or messaging system in brief is a set of published enterprise-wide standards that allows organizations to send semantically precise messages between computer systems. EMS systems promote loosely coupled architectures that allow changes in the formats of messages to have minimum impact on message subscribers. EMS systems are facilitated by the use of structured messages (such as using XML or JSON), and appropriate protocols, such as DDS, MSMQ, AMQP or SOAP with web services.\nEMS usually takes into account the following considerations:\nEMS are also known as Message-Oriented Middleware (MOM).\nSeparation of message header and message body.\nThe design of an EMS is usually broken down into two sections:\nComparisons.\nThe commonalities between messaging systems (in terms of capabilities and architecture) have been captured in a platform-independent fashion as enterprise integration patterns (a.k.a.
messaging patterns).\nAlthough similar in concept to an enterprise service bus (ESB), an EMS places emphasis on design of messaging protocols (for instance, using DDS, MSMQ or AMQP), not the implementation of the services using a specific technology such as web services, DDS APIs for C/C++ and Java, .NET or Java Message Service (JMS).\nNote that an Enterprise Messaging System should not be confused with an electronic mail system used for delivering human readable text messages to individual people.\nAn example of a specific application programming interface (API) that implements an enterprise messaging system is the Java Message Service. Although this is an API it embodies many of the same issues involved in setting up a full EMS.\nPolicy statements may also be extracted from a centralized policy server. These policy statements can be expressed in the XML Access Control Markup Language (XACML).", "Automation-Control": 0.9224786162, "Qwen2": "Yes"} {"id": "40315420", "revid": "14383484", "url": "https://en.wikipedia.org/wiki?curid=40315420", "title": "ISO/IEC JTC 1/SC 38", "text": "ISO/IEC JTC 1/SC 38 Cloud Computing and Distributed Platforms is a standardization subcommittee, which is part of the Joint Technical Committee ISO/IEC JTC 1 of the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC). \nISO/IEC JTC 1/SC 38 serves as the focus, proponent, and systems integration entity on Cloud Computing, Distributed Platforms, and the application of these technologies. ISO/IEC JTC 1/SC 38 provides guidance to JTC 1, IEC, ISO and other entities developing standards in these areas. 
The Subcommittee is addressing the demand pull from users, especially governments, for standards to assist them in specifying, acquiring and applying Cloud Computing and distributed platform technologies and services.\nHistory.\nISO/IEC JTC 1/SC 38 was formed at the October 2009 ISO/IEC JTC 1 Plenary Meeting in Tel Aviv via approval of Resolution 36. The international secretariat of ISO/IEC JTC 1/SC 38 is the American National Standards Institute (ANSI), located in the United States. The first meeting of the subcommittee took place in Beijing, China in May 2010. ISO/IEC JTC 1/SC 38 approved its scope, established three working groups and developed terms of reference for each at this inaugural meeting.\nEstablished to address three related areas of technology - Web Services, Service Oriented Architecture (SOA), and Cloud Computing - ISO/IEC JTC 1/SC 38 was initially titled Distributed Application Platforms and Services (DAPS). Meeting in Plenary twice per year during its first 6 years, with interim electronic and face-to-face meetings of its Working Groups, ISO/IEC JTC 1/SC 38 completed work in Web Services and SOA and increased its focus on Cloud Computing. To reflect this evolution in focus, the JTC 1 2014 Plenary Meeting in Abu Dhabi approved a revised scope and new title for ISO/IEC JTC 1/SC 38, Cloud Computing and Distributed Platforms.\nScope and mission.\nThe scope of ISO/IEC JTC 1/SC 38 is the “Standardization in the area of Cloud Computing and Distributed Platforms”. This includes:\nStructure.\nISO/IEC JTC 1/SC 38 is made up of two working groups (WGs). Each working group carries out specific tasks in standards development within the field of Cloud Computing and Distributed Platforms, where the focus of each working group is described in the group’s terms of reference.
The two active working groups of ISO/IEC JTC 1/SC 38 are:\nTerms of Reference:\nTerms of Reference:\nCollaborations.\nISO/IEC JTC 1/SC 38 works closely with a number of other JTC 1 subcommittees, including ISO/IEC JTC 1/SC 7, Software and Systems Engineering, and ISO/IEC JTC 1/SC 27, IT Security Techniques. In addition, the subcommittee works with a number of external forums, including the Cloud Security Alliance, Distributed Management Task Force (DMTF), the Open Grid Forum and The Open Group. Together, ISO/IEC JTC 1/SC 38 (specifically WG 3: Cloud Computing) and ITU-T/SG 13 formed Collaborative Teams on Cloud Computing Overview and Vocabulary (CT-CCVOCAB) and Cloud Computing Reference Architecture (CT-CCRA) that developed two standards: ISO/IEC JTC 1 17788 – Cloud Computing Vocabulary and ISO/IEC JTC 1 17789 – Cloud Computing Reference Architecture.\nOrganizations internal to ISO or IEC that collaborate with or are in liaison to ISO/IEC JTC 1/SC 38 include:\nOrganizations external to ISO or IEC that collaborate with or are in liaison to ISO/IEC JTC 1/SC 38 include:\nMember countries.\nCountries pay a fee to ISO to be members of subcommittees.\nThe 29 \"P\" (participating) members of ISO/IEC JTC 1/SC 38 are: Australia, Austria, Belgium, Brazil, Canada, China, Denmark, Finland, France, Germany, India, Ireland, Israel, Italy, Japan, Republic of Korea, Luxembourg, Netherlands, Poland, Portugal, Russian Federation, Singapore, Slovakia, South Africa, Spain, Sweden, Switzerland, United Kingdom, and United States of America.\nThe 8 \"O\" (observing) members of ISO/IEC JTC 1/SC 38 are: Argentina, Bosnia and Herzegovina, Czech Republic, Hong Kong, New Zealand, Norway, Serbia, and Uruguay.\nStandards.\nAs of 2019, ISO/IEC JTC 1/SC 38 has 15 published standards. 
Some standards related to the work in ISO/IEC JTC 1/SC 38 include:", "Automation-Control": 0.6590918303, "Qwen2": "Yes"} {"id": "658183", "revid": "35498457", "url": "https://en.wikipedia.org/wiki?curid=658183", "title": "Industrial process control", "text": "Industrial process control, or simply process control, in continuous production processes is a discipline that uses industrial control systems and control theory to achieve a production level of consistency, economy and safety which could not be achieved purely by human manual control. It is implemented widely in industries such as automotive, mining, dredging, oil refining, pulp and paper manufacturing, chemical processing and power generating plants.\nProcess control systems vary widely in size, type and complexity, but they enable a small number of operators to manage complex processes to a high degree of consistency. The development of large industrial process control systems was instrumental in enabling the design of large high volume and complex processes, which could not be otherwise economically or safely operated.\nThe applications can range from controlling the temperature and level of a single process vessel, to a complete chemical processing plant with several thousand control loops.\nHistory.\nEarly process control breakthroughs came most frequently in the form of water control devices. Ktesibios of Alexandria is credited with inventing float valves to regulate the water level of water clocks in the 3rd century BC. In the 1st century AD, Heron of Alexandria invented a water valve similar to the fill valve used in modern toilets.\nLater process control inventions involved basic physics principles. In 1620, Cornelis Drebbel invented a bimetallic thermostat for controlling the temperature in a furnace. In 1681, Denis Papin discovered the pressure inside a vessel could be regulated by placing weights on top of the vessel lid. 
In 1745, Edmund Lee created the fantail to improve windmill efficiency; a fantail was a smaller windmill mounted at 90° to the larger sails to keep the face of the windmill pointed directly into the oncoming wind.\nWith the dawn of the Industrial Revolution in the 1760s, process control inventions aimed to replace human operators with mechanized processes. In 1784, Oliver Evans created a water-powered flourmill which operated using buckets and screw conveyors. Henry Ford applied the same theory in 1910 when the assembly line was created to decrease human intervention in the automobile production process.\nFor continuously variable process control it was not until 1922 that a formal control law for what we now call PID control or three-term control was first developed using theoretical analysis, by Russian American engineer Nicolas Minorsky. Minorsky was researching and designing automatic ship steering for the US Navy and based his analysis on observations of a helmsman. He noted the helmsman steered the ship based not only on the current course error, but also on past error, as well as the current rate of change; this was then given a mathematical treatment by Minorsky.\nHis goal was stability, not general control, which simplified the problem significantly. While proportional control provided stability against small disturbances, it was insufficient for dealing with a steady disturbance, notably a stiff gale (due to steady-state error), which required adding the integral term. Finally, the derivative term was added to improve stability and control.\nDevelopment of modern process control operations.\nProcess control of large industrial plants has evolved through many stages. Initially, control would be from panels local to the process plant. However, this required a large manpower resource to attend to these dispersed panels, and there was no overall view of the process. 
The next logical development was the transmission of all plant measurements to a permanently-staffed central control room. Effectively this was the centralisation of all the localised panels, with the advantages of lower manning levels and easier overview of the process. Often the controllers were behind the control room panels, and all automatic and manual control outputs were transmitted back to plant. However, whilst providing a central control focus, this arrangement was inflexible as each control loop had its own controller hardware, and continual operator movement within the control room was required to view different parts of the process.\nWith the coming of electronic processors and graphic displays it became possible to replace these discrete controllers with computer-based algorithms, hosted on a network of input/output racks with their own control processors. These could be distributed around plant, and communicate with the graphic display in the control room or rooms. The distributed control system (DCS) was born.\nThe introduction of DCSs allowed easy interconnection and re-configuration of plant controls such as cascaded loops and interlocks, and easy interfacing with other production computer systems. 
It enabled sophisticated alarm handling, introduced automatic event logging, removed the need for physical records such as chart recorders, allowed the control racks to be networked and thereby located locally to plant to reduce cabling runs, and provided high level overviews of plant status and production levels.\nHierarchy.\nThe accompanying diagram is a general model which shows functional manufacturing levels in a large process using processor and computer-based control.\nReferring to the diagram: Level 0 contains the field devices such as flow and temperature sensors (process value readings - PV), and final control elements (FCE), such as control valves; Level 1 contains the industrialised Input/Output (I/O) modules, and their associated distributed electronic processors; Level 2 contains the supervisory computers, which collate information from processor nodes on the system, and provide the operator control screens; Level 3 is the production control level, which does not directly control the process, but is concerned with monitoring production and targets; Level 4 is the production scheduling level.\nControl model.\nTo determine the fundamental model for any process, the inputs and outputs of the system are defined differently than for other chemical processes. The balance equations are defined by the control inputs and outputs rather than the material inputs. The control model is a set of equations used to predict the behavior of a system and can help determine what the response to change will be. The state variable (x) is a measurable variable that is a good indicator of the state of the system, such as temperature (energy balance), volume (mass balance) or concentration (component balance). Input variables (u) are specified variables that commonly include flow rates.\nIt is important to note that the entering and exiting flows are both considered control inputs. 
The control input can be classified as a manipulated, disturbance, or unmonitored variable. Parameters (p) are usually a physical limitation and something that is fixed for the system, such as the vessel volume or the viscosity of the material. Output (y) is the metric used to determine the behavior of the system. The control output can be classified as measured, unmeasured, or unmonitored.\nTypes.\nProcesses can be characterized as batch, continuous, or hybrid. Batch applications require that specific quantities of raw materials be combined in specific ways for a particular duration to produce an intermediate or end result. One example is the production of adhesives and glues, which normally require the mixing of raw materials in a heated vessel for a period of time to form a quantity of end product. Other important examples are the production of food, beverages and medicine. Batch processes are generally used to produce a relatively low to intermediate quantity of product per year (a few pounds to millions of pounds).\nA continuous physical system is represented through variables that are smooth and uninterrupted in time. The control of the water temperature in a heating jacket is an example of continuous process control. Some important continuous processes are the production of fuels, chemicals and plastics. Continuous processes in manufacturing are used to produce very large quantities of product per year (millions to billions of pounds). Such controls use feedback, as in the PID controller; a PID controller includes proportional, integral, and derivative control functions.\nApplications having elements of batch and continuous process control are often called hybrid applications.\nControl loops.\nThe fundamental building block of any industrial control system is the control loop, which controls just one process variable. 
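The PID feedback just described can be sketched as a discrete control loop driving a first-order process toward a setpoint. The process model, gains, and setpoint below are illustrative assumptions, not values from the article:

```python
# Minimal discrete PID loop: proportional, integral, and derivative terms
# acting on the error between a setpoint and a process value (PV).
# The first-order process model and all numbers here are hypothetical.

def simulate_pid(setpoint=50.0, kp=2.0, ki=0.8, kd=0.1, dt=0.1, steps=500):
    pv = 20.0                     # process value, e.g. a temperature reading
    integral = 0.0
    prev_error = setpoint - pv
    for _ in range(steps):
        error = setpoint - pv
        integral += error * dt
        derivative = (error - prev_error) / dt
        # Controller output combines the three PID terms.
        output = kp * error + ki * integral + kd * derivative
        # First-order process: pv relaxes toward the controller output.
        pv += (output - pv) * dt
        prev_error = error
    return pv
```

The integral term is what removes steady-state error: at equilibrium the error must be zero, so the process value settles at the setpoint.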
An example is shown in the accompanying diagram, where the flow rate in a pipe is controlled by a PID controller, assisted by what is effectively a cascaded loop in the form of a valve servo-controller to ensure correct valve positioning.\nSome large systems may have several hundred or even thousands of control loops. In complex processes the loops are interactive, so that the operation of one loop may affect the operation of another. The system diagram for representing control loops is a Piping and instrumentation diagram.\nCommonly used control systems include programmable logic controller (PLC), Distributed Control System (DCS) or SCADA.\nA further example is shown. If a control valve were used to hold level in a tank, the level controller would compare the equivalent reading of a level sensor to the level setpoint and determine whether more or less valve opening was necessary to keep the level constant. A cascaded flow controller could then calculate the change in the valve position.\nEconomic advantages.\nThe economic nature of many products manufactured in batch and continuous processes requires highly efficient operation due to thin margins. The competing factor in process control is that products must meet certain specifications in order to be satisfactory. These specifications can come in two forms: a minimum and maximum for a property of the material or product, or a range within which the property must lie. All loops are susceptible to disturbances and therefore a buffer must be used on process set points to ensure disturbances do not cause the material or product to go out of specifications. This buffer comes at an economic cost (e.g. additional processing, maintaining elevated or depressed process conditions, etc.).\nProcess efficiency can be enhanced by reducing the margins necessary to ensure product specifications are met. This can be done by improving the control of the process to minimize the effect of disturbances on the process. 
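The set-point buffer described above can be made concrete: narrowing the spread of disturbances lets the target sit closer to a specification limit. The spec limit, sigma values, and 3-sigma buffer below are hypothetical numbers for illustration only:

```python
# Sketch of a set-point buffer against disturbances: keep n_sigmas of
# process variation inside the specification limit. All numbers are made up.

def safe_setpoint(spec_limit, sigma, n_sigmas=3.0):
    """Highest set point that keeps n_sigmas of variation inside the limit."""
    return spec_limit - n_sigmas * sigma

before = safe_setpoint(spec_limit=100.0, sigma=4.0)   # looser control: 88.0
after = safe_setpoint(spec_limit=100.0, sigma=1.5)    # tighter control: 95.5
gain = after - before                                 # target shifts 7.5 units
```

Tighter control (smaller sigma) shrinks the required buffer, so the less conservative set point becomes safe to run.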
The efficiency is improved in a two-step method of narrowing the variance and shifting the target. Margins can be narrowed through various process upgrades (e.g. equipment upgrades, enhanced control methods, etc.). Once margins are narrowed, an economic analysis can be done on the process to determine how the set point target is to be shifted. Less conservative process set points lead to increased economic efficiency. Effective process control strategies increase the competitive advantage of manufacturers who employ them.", "Automation-Control": 0.8570261598, "Qwen2": "Yes"} {"id": "21008711", "revid": "41503339", "url": "https://en.wikipedia.org/wiki?curid=21008711", "title": "Dutch Institute of Systems and Control", "text": "The Dutch Institute of Systems and Control is an interuniversity institute and graduate school that unites nine university departments in the Netherlands active in systems and control theory and engineering. The graduate school offers courses covering a wide range of topics from mathematical system theory to control engineering. DISC provides a program in systems and control offered to PhD students of the participating departments, such as graduate courses, summer school and network events. The Institute also unites all academic research in the Netherlands in the field of systems and control. Examples of knowledge application are developing energy-efficient greenhouses, designing cars that drive by wire, autonomously walking or flying robots, and operational strategies in process industry.", "Automation-Control": 0.9958875179, "Qwen2": "Yes"} {"id": "7639687", "revid": "5984052", "url": "https://en.wikipedia.org/wiki?curid=7639687", "title": "Goal node (computer science)", "text": "In computer science, a goal node is a node in a graph that meets defined criteria for success or termination.\nHeuristic artificial intelligence algorithms, like A* and B*, attempt to reach such nodes in optimal time by defining the distance to the goal node. 
At the goal node itself the distance to the goal is defined as 0, while all other nodes have positive distances to the goal.", "Automation-Control": 0.9083554745, "Qwen2": "Yes"} {"id": "52467144", "revid": "38132428", "url": "https://en.wikipedia.org/wiki?curid=52467144", "title": "Swarm robotic platforms", "text": "Swarm robotic platforms apply swarm robotics in multi-robot collaboration. They take inspiration from nature (e.g. collective problem-solving mechanisms such as honey bee aggregation). The main goal is to control a large number of robots (with limited sensing/processing ability) to accomplish a common task/problem. Hardware limitations and the cost of robot platforms mean that current research in swarm robotics is mostly performed with simulation software (e.g. Stage, ARGoS). On the other hand, simulation of swarm scenarios that need large numbers of agents is extremely complex and often inaccurate due to poor modelling of external conditions and limitations of computation.\nComparison of platforms.\nSeveral mobile robot platforms have previously been developed to study swarm applications.", "Automation-Control": 0.9787544608, "Qwen2": "Yes"} {"id": "1029949", "revid": "6727347", "url": "https://en.wikipedia.org/wiki?curid=1029949", "title": "Printed circuit board milling", "text": "Printed circuit board milling (also: isolation milling) is the milling process used for removing areas of copper from a sheet of printed circuit board (PCB) material to recreate the pads, signal traces and structures according to patterns from a digital circuit board plan known as a \"layout file\". Similar to the more common and well-known chemical PCB etch process, the PCB milling process is subtractive: material is removed to create the electrical isolation and ground planes required. 
However, unlike the chemical etch process, PCB milling is typically a non-chemical process and as such it can be completed in a typical office or lab environment without exposure to hazardous chemicals. High-quality circuit boards can be produced using either process. In the case of PCB milling, the quality of a circuit board is chiefly determined by the system's true, or weighted, milling accuracy and control as well as the condition (sharpness, temper) of the milling bits and their respective feed/rotational speeds. By contrast, in the chemical etch process, the quality of a circuit board depends on the accuracy and/or quality of the mask used to protect the copper from the chemicals and the state of the etching chemicals.\nAdvantages.\nPCB milling has advantages for both prototyping and some special PCB designs. The biggest benefit is that one does not have to use chemicals to produce PCBs.\nWhen creating a prototype, outsourcing a board takes time. An alternative is to make a PCB in-house. Using the wet process, in-house production presents problems with chemicals and disposing thereof. High-resolution boards using the wet process are hard to achieve, and even when they are, one still has to drill and eventually cut the PCB out of the base material.\nCNC machine prototyping can provide a fast-turnaround board production process without the need for wet processing. If a CNC machine is already used for drilling, this single machine could carry out both parts of the process, drilling and milling. 
A CNC machine is used to perform the drilling, milling and cutting operations.\nMany boards that are simple for milling would be very difficult to process by wet etching and manual drilling afterward in a laboratory environment without using top-of-the-line systems that usually cost many times more than CNC milling machines.\nIn mass production, milling is unlikely to replace etching although the use of CNC is already standard practice for drilling the boards.\nHardware.\nA PCB milling system is a single machine that can perform all of the required actions to create a prototype board, with the exception of inserting \"vias\" and \"through hole plating\". Most of these machines require only a standard AC mains outlet and a shop-type vacuum cleaner for operation.\nSoftware.\nSoftware for milling PCBs is usually delivered by the CNC machine manufacturer. Most of the packages can be split into two main categories – raster and vector.\nSoftware that produces tool paths using a raster calculation method tends to have lower processing resolution than vector-based software, since it relies on the raster information it receives.\nMechanical system.\nThe mechanics behind a PCB milling machine are fairly straightforward and have their roots in CNC milling technology. A PCB milling system is similar to a miniature and highly accurate NC milling table. For machine control, positioning information and machine control commands are sent from the controlling software via a serial port or parallel port connection to the milling machine's on-board controller. The controller is then responsible for driving and monitoring the various positioning components which move the milling head and gantry and control the spindle speed. Spindle speeds can range from 30,000 RPM to 100,000 RPM depending on the milling system, with higher spindle speeds equating to better accuracy; in short, the smaller the tool diameter, the higher the RPM required. 
Typically, this drive system comprises non-monitored stepper motors for the X/Y axis, an on-off non-monitored solenoid, pneumatic piston or lead screw for the Z-axis, and a DC motor control circuit for spindle speed, none of which provide positional feedback. More advanced systems provide a monitored stepper motor Z-axis drive for greater control during milling and drilling as well as more advanced RF spindle motor control circuits that provide better control over a wider range of speeds.\nX and Y-axis control.\nFor the X and Y-axis drive systems most PCB milling machines use stepper motors that drive a precision lead screw. The lead screw is in turn linked to the gantry or milling head by a special precision machined connection assembly. To maintain correct alignment during milling, the gantry or milling head's direction of travel is guided along using linear or dovetailed bearing(s). Most X/Y drive systems provide user control, via software, of the milling speed, which determines how fast the stepper motors drive their respective axes.\nZ-axis control.\nZ-axis drive and control are handled in several ways. The first and most common is a simple solenoid that pushes against a spring. When the solenoid is energized, it pushes the milling head down against a spring stop that limits the downward travel. The rate of descent as well as the amount of force exerted on the spring stop must be manually set by mechanically adjusting the position of the solenoid's plunger. The second type of Z-axis control is through the use of a pneumatic cylinder and a software-driven gate valve. Due to the small cylinder size and the amount of air pressure used to drive it there is little range of control between the up and down stops. Neither the solenoid nor the pneumatic system can position the head anywhere other than at the endpoints, so both are useful only for simple 'up/down' milling tasks. 
The final type of Z-axis control uses a stepper motor that allows the milling head to be moved in small accurate steps up or down. Further, the speed of these steps can be adjusted to allow tool bits to be eased into the board material rather than hammered into it. The depth (number of steps required) as well as the downward/upward speed is under user control via the controlling software.\nOne of the major challenges with milling PCBs is handling variations in flatness. Since conventional etching techniques rely on optical masks that sit right on the copper layer, they can conform to any slight bends in the material, so all features are replicated faithfully.\nWhen milling PCBs, however, any minute height variations encountered when milling will cause conical bits to either sink deeper (creating a wider cut) or rise off the surface, leaving an uncut section. Before cutting, some systems probe points across the board to map height variations and adjust the Z values in the G-code beforehand.\nTooling.\nPCBs may be machined with conventional endmills, conical d-bit cutters, and spade mills. D-bits and spade mills are cheap and, as they have a small point, allow the traces to be close together. Taylor's equation, VcT^n = C, can predict tool life T for a given surface speed Vc, where n and C are empirical constants.\nAlternatives.\nA method with similar advantages to mechanical milling is laser etching and laser drilling. Etching PCBs with lasers offers the same advantages as mechanical milling with regard to quick turnaround times, but the nature of the laser etching process is preferable to both milling and chemical etching when it comes to physical variations exerted on the object. Whereas mechanical milling and chemical etching exact physical stress on the board, laser etching offers non-contact surface removal, making it a superior option for PCBs where precision and geometric accuracy are at a premium, such as RF and microwave designs. 
Laser drilling is more precise, has extremely low power consumption compared with other techniques, requires less maintenance, uses no lubricants, drill bits or abrasive materials, has low rates of wear, does not ruin the boards, is more eco-friendly and, in the most high-powered machines, is instant; however, it is expensive. An additional emerging alternative to milling and laser etching is an additive approach based upon printing the conductive trace. Such PCB printers come at a range of price points and with differing features but also offer rapid in-house circuit manufacture, with very little to no waste. An example of such a technology that produces simpler, low layer count PCBs is Voltera. A system at the higher layer-count end of the additive manufacturing approach is Nano Dimension's DragonFly technology which prints complex high layer count circuits as well as electro-mechanical parts.", "Automation-Control": 0.7047129273, "Qwen2": "Yes"} {"id": "20560973", "revid": "41503339", "url": "https://en.wikipedia.org/wiki?curid=20560973", "title": "Energy functional", "text": " The energy functional is the total energy of a certain system, as a functional of the system's state.\nIn the energy methods of simulating the dynamics of complex structures, a state of the system is often described as an element of an appropriate function space. To be in this state, the system pays a certain cost in terms of energy required by the state. This energy is a scalar quantity, a function of the state, hence the term \"functional\". The system tends to develop from the state with higher energy (higher cost) to the state with lower energy, thus local minima of this functional are usually related to the stable stationary states. 
Studying such states is part of optimization, where the terms "energy functional" or "cost functional" are often used to describe the objective function.", "Automation-Control": 0.9775595665, "Qwen2": "Yes"} {"id": "7415899", "revid": "46103694", "url": "https://en.wikipedia.org/wiki?curid=7415899", "title": "Sequential quadratic programming", "text": "Sequential quadratic programming (SQP) is an iterative method for constrained nonlinear optimization which may be considered a quasi-Newton method. SQP methods are used on mathematical problems for which the objective function and the constraints are twice continuously differentiable.\nSQP methods solve a sequence of optimization subproblems, each of which optimizes a quadratic model of the objective subject to a linearization of the constraints. If the problem is unconstrained, then the method reduces to Newton's method for finding a point where the gradient of the objective vanishes. If the problem has only equality constraints, then the method is equivalent to applying Newton's method to the first-order optimality conditions, or Karush–Kuhn–Tucker conditions, of the problem.\nAlgorithm basics.\nConsider a nonlinear programming problem of the form:\nThe Lagrangian for this problem is\nwhere formula_3 and formula_4 are Lagrange multipliers.\nThe standard Newton's Method searches for the solution formula_5 by iterating the following equation:\nformula_6.\nHowever, because the matrix formula_7 is generally singular (and therefore non-invertible), the Newton step formula_8 cannot be calculated directly. Instead the basic sequential quadratic programming algorithm defines an appropriate search direction formula_9 at an iterate formula_10, as a solution to the quadratic programming subproblem\nNote that the term formula_12 in the expression above may be left out for the minimization problem, since it is constant under the formula_13 operator. 
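For the equality-constrained case described above, the Newton step obtained from the KKT conditions can be sketched on a toy quadratic problem. The problem data, fixed iteration count, and function name below are illustrative assumptions, not part of the article:

```python
import numpy as np

# Minimal Newton-KKT iteration for the equality-constrained toy problem
#   minimize x1^2 + x2^2  subject to  x1 + x2 - 1 = 0.
# Each step solves the linear KKT system [[H, A^T], [A, 0]] [dx; lam] = [-g; -h],
# which for this problem is exactly the QP subproblem of an SQP method.

def sqp_toy(x0, iters=10):
    x = np.asarray(x0, dtype=float)
    lam = 0.0
    for _ in range(iters):
        g = 2.0 * x                        # gradient of the objective
        H = 2.0 * np.eye(2)                # Hessian of the Lagrangian
        A = np.array([[1.0, 1.0]])         # constraint Jacobian
        h = np.array([x[0] + x[1] - 1.0])  # constraint residual
        kkt = np.block([[H, A.T], [A, np.zeros((1, 1))]])
        rhs = np.concatenate([-g, -h])
        sol = np.linalg.solve(kkt, rhs)
        x = x + sol[:2]                    # step in the primal variables
        lam = sol[2]                       # updated Lagrange multiplier
    return x, lam
```

Because the objective is quadratic and the constraint linear, a single Newton-KKT step already lands on the solution (0.5, 0.5) with multiplier -1; a general SQP method repeats this on successive quadratic models.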
\nIn summary, the SQP algorithm starts by choosing the initial iterate formula_14, then calculating formula_15 and formula_16. The QP subproblem is then built and solved to find the Newton step direction formula_17, which is used to update the parent problem iterate using formula_18. This process is repeated for formula_19 until the parent problem satisfies a convergence test.\nImplementations.\nSQP methods have been implemented in well-known numerical environments such as MATLAB and GNU Octave. There also exist numerous software libraries, including open source:", "Automation-Control": 0.6980185509, "Qwen2": "Yes"} {"id": "48844125", "revid": "10289486", "url": "https://en.wikipedia.org/wiki?curid=48844125", "title": "Structured sparsity regularization", "text": "Structured sparsity regularization is a class of methods, and an area of research in statistical learning theory, that extend and generalize sparsity regularization learning methods. Both sparsity and structured sparsity regularization methods seek to exploit the assumption that the output variable formula_1 (i.e., response, or dependent variable) to be learned can be described by a reduced number of variables in the input space formula_2 (i.e., the domain, space of features or explanatory variables). \"Sparsity regularization methods\" focus on selecting the input variables that best describe the output. \"Structured sparsity regularization methods\" generalize and extend sparsity regularization methods, by allowing for optimal selection over structures like groups or networks of input variables in formula_2.\nCommon motivations for the use of structured sparsity methods are model interpretability, high-dimensional learning (where dimensionality of formula_2 may be higher than the number of observations formula_5), and reduction of computational complexity. 
Moreover, structured sparsity methods allow one to incorporate prior assumptions on the structure of the input variables, such as overlapping groups, non-overlapping groups, and acyclic graphs. Examples of uses of structured sparsity methods include face recognition, magnetic resonance image (MRI) processing, socio-linguistic analysis in natural language processing, and analysis of genetic expression in breast cancer.\nDefinition and related concepts.\nSparsity regularization.\nConsider the linear kernel regularized empirical risk minimization problem with a loss function formula_6 and the formula_7 \"norm\" as the regularization penalty:\nwhere formula_9, and formula_10 denotes the formula_7 \"norm\", defined as the number of nonzero entries of the vector formula_12. formula_13 is said to be sparse if formula_14, which means that the output formula_15 can be described by a small subset of input variables.\nMore generally, assume a dictionary formula_16 with formula_17 is given, such that the target function formula_18 of a learning problem can be written as:\nThe formula_7 norm formula_22 is defined as the number of non-zero components of formula_12.\nformula_27 is said to be sparse if formula_28.\nHowever, while using the formula_7 norm for regularization favors sparser solutions, it is computationally difficult to use and additionally is not convex. A computationally more feasible norm that favors sparser solutions is the formula_30 norm; this has been shown to still favor sparser solutions and is additionally convex.\nStructured sparsity regularization.\nStructured sparsity regularization extends and generalizes the variable selection problem that characterizes sparsity regularization. 
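Before turning to structured penalties, the convex formula_30 (l1) relaxation described above can be illustrated with proximal gradient descent (ISTA), whose soft-thresholding step is what produces exactly-zero coefficients; the random data, regularization strength, and iteration count below are made up for illustration:

```python
import numpy as np

# Proximal gradient (ISTA) for the l1-relaxed sparse regression problem
#   min_w (1/2) ||X w - y||^2 + alpha * ||w||_1.
# All problem data here are synthetic; this is a sketch, not a library API.

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 -- this is where sparsity comes from."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(X, y, alpha, iters=500):
    w = np.zeros(X.shape[1])
    step = 1.0 / np.linalg.norm(X, 2) ** 2   # 1/L, L = Lipschitz const of grad
    for _ in range(iters):
        grad = X.T @ (X @ w - y)             # gradient of the smooth part
        w = soft_threshold(w - step * grad, step * alpha)
    return w

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 20))
w_true = np.zeros(20)
w_true[:3] = [2.0, -3.0, 1.5]                # sparse ground truth
y = X @ w_true
w_hat = ista(X, y, alpha=0.5)                # recovers a sparse estimate
```

With noiseless data and a small alpha, the estimate is close to the sparse ground truth, with most other coefficients driven exactly to zero by the thresholding step.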
Consider the above regularized empirical risk minimization problem with a general kernel and associated feature map formula_16 with formula_17.\nThe regularization term formula_34 penalizes each formula_35 component independently, which means that the algorithm will suppress input variables independently from each other.\nIn several situations we may want to impose more structure in the regularization process, so that, for example, input variables are suppressed according to predefined groups. Structured sparsity regularization methods allow one to impose such structure by adding structure to the norms defining the regularization term.\nStructures and norms.\nNon-overlapping groups: group Lasso.\nThe non-overlapping group case is the most basic instance of structured sparsity. In it, an \"a priori\" partition of the coefficient vector formula_12 into formula_37 non-overlapping groups is assumed. Let formula_38 be the vector of coefficients in group formula_39; we can then define a regularization term and its group norm as\nwhere formula_41 is the group formula_42 norm formula_43, formula_44 is group formula_39, and formula_46 is the \"j-th\" component of group formula_44.\nThe above norm is also referred to as group Lasso. This regularizer will force entire coefficient groups towards zero, rather than individual coefficients. As the groups are non-overlapping, the set of non-zero coefficients can be obtained as the union of the groups that were not set to zero, and conversely for the set of zero coefficients.\nOverlapping groups.\nOverlapping groups is the structured sparsity case in which a variable can belong to more than one group formula_39. 
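The block-zeroing behaviour of the non-overlapping group Lasso described above can be seen in its proximal operator, block soft-thresholding: a group whose formula_42 (l2) norm falls below the threshold is set to zero as a whole, while a surviving group is shrunk as a whole. The groups and values below are made up:

```python
import numpy as np

# Block soft-thresholding: proximal operator of the non-overlapping
# group-Lasso penalty. Groups and coefficients here are illustrative.

def group_soft_threshold(w, groups, t):
    """Apply the group-Lasso prox with threshold t to each index group."""
    out = np.array(w, dtype=float)
    for g in groups:
        norm = np.linalg.norm(out[g])
        if norm <= t:
            out[g] = 0.0                    # whole group suppressed
        else:
            out[g] *= (1.0 - t / norm)      # whole group shrunk together
    return out

w = np.array([0.1, -0.2, 3.0, 4.0])
groups = [[0, 1], [2, 3]]
w_new = group_soft_threshold(w, groups, t=1.0)
# Group [0, 1] (l2 norm ~0.22) is zeroed; group [2, 3] (norm 5.0) keeps
# its direction but has its norm reduced from 5.0 to 4.0.
```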
This case is often of interest as it can represent a more general class of relationships among variables than non-overlapping groups can, such as tree structures or other types of graphs.\nThere are two types of overlapping group sparsity regularization approaches, which are used to model different types of input variable relationships:\nIntersection of complements: group Lasso.\nThe \"intersection of complements\" approach is used in cases when we want to select only those input variables that have positive coefficients in all groups they belong to. Consider again the group Lasso for a regularized empirical risk minimization problem:\nwhere formula_41 is the group formula_42 norm, formula_44 is group formula_39, and formula_46 is the \"j-th\" component of group formula_44.\nAs in the non-overlapping groups case, the \"group Lasso\" regularizer will potentially set entire groups of coefficients to zero. Selected variables are those with coefficients formula_56. However, as in this case groups may overlap, we take the intersection of the complements of those groups that are not set to zero.\nThis \"intersection of complements\" selection criterion implies the modeling choice that we allow some coefficients within a particular group formula_39 to be set to zero, while others within the same group formula_39 may remain positive. In other words, coefficients within a group may differ depending on the several group memberships that each variable within the group may have.\nUnion of groups: latent group Lasso.\nA different approach is to consider the union of groups for variable selection. This approach captures the modeling situation where variables can be selected as long as they belong to at least one group with positive coefficients. 
This modeling perspective implies that we want to preserve group structure.\nThe formulation of the union of groups approach is also referred to as latent group Lasso, and requires modifying the group formula_42 norm considered above and introducing the following regularizer\nwhere formula_61, formula_62 is the vector of coefficients of group g, and formula_63 is a vector with coefficients formula_46 for all variables formula_65 in group formula_39, and formula_67 for all others, i.e., formula_68 if formula_65 is in group formula_39 and formula_71 otherwise.\nThis regularizer can be interpreted as effectively replicating variables that belong to more than one group, therefore conserving group structure. As intended by the union of groups approach, requiring formula_72 produces a vector of weights w that effectively sums up the weights of all variables across all groups they belong to.\nIssues with Group Lasso regularization and alternative approaches.\nThe objective function using group Lasso consists of an error function, which is generally required to be convex but not necessarily strongly convex, and a group formula_30 regularization term. Because this objective function is convex but not necessarily strongly convex, it generally does not lead to unique solutions.\nAn example of a way to fix this is to introduce the squared formula_42 norm of the weight vector as an additional regularization term while keeping the formula_30 regularization term from the group Lasso approach. If the coefficient of the squared formula_42 norm term is greater than formula_67, then because the squared formula_42 norm term is strongly convex, the resulting objective function will also be strongly convex. 
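The strong-convexity fix just described can be written down directly; a sketch, assuming a quadratic loss and illustrative regularization weights (the names are not from any library):

```python
import numpy as np

def objective(w, X, y, groups, lam_group, lam_ridge):
    # Quadratic loss + group-Lasso term + squared l2 term.
    # For lam_ridge > 0 the last term is strongly convex, so the whole
    # objective is strongly convex and its minimizer is unique.
    loss = 0.5 * np.sum((X @ w - y) ** 2)
    group_term = lam_group * sum(np.linalg.norm(w[g]) for g in groups)
    ridge_term = lam_ridge * np.dot(w, w)
    return loss + group_term + ridge_term
```

With lam_ridge = 0 the objective can be flat along some directions and admit many minimizers; any positive lam_ridge removes that degeneracy at the cost of a small bias toward smaller weights.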
Provided that the coefficient of the squared formula_42 norm term is suitably small but still positive, the weight vector minimizing the resulting objective function is generally very close to a weight vector that minimizes the objective function that would result from removing the squared formula_42 norm term altogether from the objective function; the latter scenario corresponds to the group Lasso approach. Thus this approach allows for simpler optimization while maintaining sparsity.\nNorms based on the structure over input variables.\n\"See: Submodular set function\"\nBesides the norms discussed above, other norms used in structured sparsity methods include hierarchical norms and norms defined on grids. These norms arise from submodular functions and allow the incorporation of prior assumptions on the structure of the input variables. In the context of hierarchical norms, this structure can be represented as a directed acyclic graph over the variables, while in the context of grid-based norms, the structure can be represented using a grid.\nHierarchical Norms.\n\"See:\" Unsupervised learning\nUnsupervised learning methods are often used to learn the parameters of latent variable models. Latent variable models are statistical models in which, in addition to the observed variables, there exists a set of latent variables that are not observed. Often in such models, \"hierarchies\" are assumed between the variables of the system; this system of hierarchies can be represented using directed acyclic graphs.\nHierarchies of latent variables have emerged as a natural structure in several applications, notably to model text documents. Hierarchical models using Bayesian non-parametric methods have been used to learn topic models, which are statistical models for discovering the abstract \"topics\" that occur in a collection of documents. Hierarchies have also been considered in the context of kernel methods. 
Hierarchical norms have been applied to bioinformatics, computer vision and topic models.\nNorms defined on grids.\nIf the structure assumed over variables is in the form of a 1D, 2D or 3D grid, then submodular functions based on overlapping groups can be considered as norms, leading to stable sets equal to rectangular or convex shapes. Such methods have applications in computer vision.\nAlgorithms for computation.\nBest subset selection problem.\nThe problem of choosing the best subset of input variables can be naturally formulated under a penalization framework as:\nwhere formula_10 denotes the formula_7 \"norm\", defined as the number of nonzero entries of the vector formula_12.\nAlthough this formulation makes sense from a modeling perspective, it is computationally infeasible, as it is equivalent to an exhaustive search evaluating all possible subsets of variables.\nTwo main approaches for solving the optimization problem are: 1) greedy methods, such as step-wise regression in statistics, or matching pursuit in signal processing; and 2) convex relaxation formulation approaches and proximal gradient optimization methods.\nConvex relaxation.\nA natural approximation for the best subset selection problem is the formula_30 norm regularization:\nSuch a scheme is called basis pursuit or the Lasso, which replaces the formula_7 \"norm\" with the convex, non-differentiable formula_30 norm.\nProximal gradient methods.\nProximal gradient methods, also called forward-backward splitting, are optimization methods useful for minimizing functions with a convex and differentiable component, and a convex, potentially non-differentiable component.\nAs such, proximal gradient methods are useful for solving sparsity and structured sparsity regularization problems of the following form:\nwhere formula_90 is a convex and differentiable loss function like the quadratic loss, and formula_91 is a convex, potentially non-differentiable regularizer such as the formula_30 
norm.\nConnections to Other Areas of Machine Learning.\nConnection to Multiple Kernel Learning.\nStructured sparsity regularization can be applied in the context of multiple kernel learning. Multiple kernel learning refers to a set of machine learning methods that use a predefined set of kernels and learn an optimal linear or non-linear combination of kernels as part of the algorithm.\nIn the algorithms mentioned above, a whole space was taken into consideration at once and was partitioned into groups, i.e. subspaces. A complementary point of view is to consider the case in which distinct spaces are combined to obtain a new one. It is useful to discuss this idea considering finite dictionaries. Finite dictionaries with linearly independent elements - these elements are also known as atoms - refer to finite sets of linearly independent basis functions, the linear combinations of which define hypothesis spaces. Finite dictionaries can be used to define specific kernels, as will be shown. Assume for this example that rather than only one dictionary, several finite dictionaries are considered.\nFor simplicity, the case in which there are only two dictionaries formula_93 and formula_94, where formula_95 and formula_96 are integers, will be considered. The atoms in formula_26 as well as the atoms in formula_98 are assumed to be linearly independent. Let formula_99 be the union of the two dictionaries. Consider the linear space of functions formula_100 given by linear combinations of the form\nformula_101\nfor some coefficient vectors formula_102, where formula_103. Assume the atoms in formula_104 to still be linearly independent, or equivalently, that the map formula_105 is one-to-one. The functions in the space formula_100 can be seen as the sums of two components, one in the space formula_107, the linear combinations of atoms in formula_26, and one in formula_109, the linear combinations of the atoms in formula_98.\nOne choice of norm on this space is formula_111. 
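Returning to the proximal gradient methods introduced above: for the formula_30 (Lasso) regularizer the proximal step is soft-thresholding, so forward-backward splitting takes a particularly simple form. A minimal sketch under these assumptions, not a production solver:

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1 (the backward step).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(X, y, lam, step, n_iter=500):
    # Forward-backward splitting for 0.5 * ||Xw - y||^2 + lam * ||w||_1.
    # `step` should not exceed 1 / L, with L the largest eigenvalue of
    # X.T @ X (the Lipschitz constant of the smooth part's gradient).
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        grad = X.T @ (X @ w - y)                          # forward step
        w = soft_threshold(w - step * grad, step * lam)   # backward step
    return w

# With X = I the solution is soft_threshold(y, lam): small entries vanish.
print(ista(np.eye(2), np.array([3.0, 0.1]), lam=1.0, step=1.0))  # [2. 0.]
```

The same scheme handles the group norms discussed earlier by replacing soft_threshold with block-wise shrinkage of each group's sub-vector.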
Note that we can now view formula_100 as a function space in which formula_107, formula_109 are subspaces. In view of the linear independence assumption, formula_100 can be identified with formula_116 and formula_117 with formula_118 respectively. The norm mentioned above can be seen as the group norm in formula_100 associated to the subspaces formula_107, formula_109, providing a connection to structured sparsity regularization.\nHere, formula_107, formula_109 and formula_100 can be seen to be the reproducing kernel Hilbert spaces with corresponding feature maps formula_125, given by formula_126, formula_127, given by formula_128, and formula_129, given by the concatenation of formula_130, respectively.\nIn the structured sparsity regularization approach to this scenario, the relevant groups of variables which the group norms consider correspond to the subspaces formula_107 and formula_109. This approach promotes setting the groups of coefficients corresponding to these subspaces to zero, as opposed to only individual coefficients, promoting sparse multiple kernel learning.\nThe above reasoning directly generalizes to any finite number of dictionaries, or feature maps. It can be extended to feature maps inducing infinite-dimensional hypothesis spaces.\nWhen Sparse Multiple Kernel Learning is useful.\nConsidering sparse multiple kernel learning is useful in several situations, including the following:\nGenerally, sparse multiple kernel learning is particularly useful when there are many kernels and model selection and interpretability are important.\nAdditional uses and applications.\nStructured sparsity regularization methods have been used in a number of settings where it is desired to impose an \"a priori\" input variable structure on the regularization process. 
Some such applications are:", "Automation-Control": 0.743532896, "Qwen2": "Yes"} {"id": "2748600", "revid": "36932059", "url": "https://en.wikipedia.org/wiki?curid=2748600", "title": "Laser engineered net shaping", "text": "Laser powder forming, also known by the proprietary name laser engineered net shaping (LENS), is an additive manufacturing technology developed for fabricating metal parts directly from a computer-aided design (CAD) solid model by using a metal powder injected into a molten pool created by a focused, high-powered laser beam. This technique is also equivalent to several trademarked techniques that go by the monikers direct metal deposition (DMD) and laser consolidation (LC). Compared to processes that use powder beds, such as selective laser melting (SLM), objects created with this technology can be substantially larger, even up to several feet long.\nMethod.\nA high power laser is used to melt metal powder supplied coaxially to the focus of the laser beam through a deposition head. The laser beam typically travels through the center of the head and is focused to a small spot by one or more lenses. The X-Y table is moved in raster fashion to fabricate each layer of the object. The head is moved up vertically after each layer is completed.\nMetal powders are delivered and distributed around the circumference of the head either by gravity, or by using a pressurized carrier gas. An inert shroud gas is often used to shield the melt pool from atmospheric oxygen for better control of properties, and to promote layer-to-layer adhesion by providing better surface wetting.\nOther techniques.\nThis process is similar to other 3D fabrication technologies in its approach in that it forms a solid component by the layer additive method. The LENS process can go from metal and metal oxide powder to metal parts, in many cases without any secondary operations. 
LENS is similar to selective laser sintering, but the metal powder is applied only where material is being added to the part at that moment. It can produce parts in a wide range of alloys, including titanium, stainless steel, aluminum, and other specialty materials, as well as composite and functionally graded materials. Primary applications for LENS technology include repair and overhaul, rapid prototyping, rapid manufacturing, and limited-run manufacturing for aerospace, defense, and medical markets. Microscopy studies show the LENS parts to be fully dense with no compositional degradation. Mechanical testing reveals outstanding as-fabricated mechanical properties.\nThe process can also make \"near\" net shape parts when it is not possible to make an item to exact specifications. In these cases, post-production processes such as light machining, surface finishing, or heat treatment may be applied as finishing operations to achieve final compliance.", "Automation-Control": 0.9854684472, "Qwen2": "Yes"} {"id": "6851511", "revid": "7852030", "url": "https://en.wikipedia.org/wiki?curid=6851511", "title": "High stock removal", "text": "High stock removal is a technological process with the goal of removing large amounts of material. The quantity of material which can be removed by a specific process depends on the material properties and the machining tool used.\nMaterials.\nThe stock removal rate is largely a function of the material's properties. This is expressed as the machinability of a material: the ease or difficulty of machining a particular material. 
The machinability of materials varies greatly; for instance, aluminium and magnesium have high machinability compared to titanium and other special metals.\nSpecific energy.\nOne way of quantifying the machinability of a material is to measure specific energy (e): this is the amount of energy required to cut a given volume of work material (kWh/mm3), and varies with material properties.\nNew materials.\nNew materials are continuously developed to address the extreme demands of market segments such as petrochemical and aerospace. Metallurgical advances have produced a wide range of high-performance materials (e.g. titanium and high-nickel alloys), but a consequence of their attractive properties is often that they are difficult to machine.\nTemperature rise.\nThe specific cutting energy needed for ‘difficult to machine’ materials can be extremely high. Especially in high stock removal applications, there are problems with thermal load in the work material. An increase in the work material temperature can lead to deterioration of the work material surface integrity, resulting in metallurgical damage such as micro-cracks, residual stresses and work hardening. Excessive heat also dramatically shortens tool life.\nHigh stock removal machine tools.\nThe energy required to remove large amounts of material depends on the properties of the working material (specific energy) as well as the technological process used.\nTechnologies.\nSeveral technologies are capable of removing substantial amounts of material. Among them are: sawing, turning, broaching, milling and grinding. Turning and milling are the most popular machining technologies; turning is mainly used for round products (though a specialized variant called whirling can modulate the turning axis to produce non-round shapes), whereas milling has a broad range of applications. 
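The specific energy figure translates directly into a power requirement: the power needed to sustain a given material removal rate is the product of the two. A small sketch with illustrative (not measured) values, assuming specific energy expressed in J/mm3 (equivalently W·s/mm3):

```python
def required_power_watts(specific_energy_j_per_mm3, removal_rate_mm3_per_s):
    # Power [W] = specific energy [J/mm^3] x removal rate [mm^3/s].
    # Materials with high specific energy need far more power (and shed
    # more heat) at the same stock removal rate.
    return specific_energy_j_per_mm3 * removal_rate_mm3_per_s

# Illustrative: removing 1000 mm^3/s of a material with e = 3 J/mm^3
print(required_power_watts(3.0, 1000.0))  # 3000.0 W, i.e. 3 kW
```

This is why 'difficult to machine' materials, with their high specific energy, both demand high machine power and concentrate heat at the cutting zone in high stock removal applications.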
Certain ‘difficult to machine’ materials like titanium, stainless steels, and exotic high-nickel alloys can be challenging to process when high stock removal is the goal, due to local heat generation at the cutting edge and the difficulty in removing it. These challenges can be mitigated, however, by strategies such as high-volume flood coolant, specialized cutting tool geometries, optimized speed and feed settings, and tool coatings like AlTiCN which tend to divert heat into the chip, away from the cutting tool.\nGrinding.\nTraditionally, bonded abrasives are used for stock removal. To remove substantial amounts of material in a grinding process, vertical segment grinders are used. These machines work with a rotating disc with abrasive segments, against which the work material is pressed with the aid of a rotating or reciprocating table. These technologies require significantly greater power than other grinding methods, up to . Some major manufacturers of these machines are Blanchard, Mattison, Göckel and Reform.\nBelt grinding.\nGrinding with coated abrasives has recently become a viable alternative for high stock removal through developments in machine tool and grinding belt technology.\nBelt grinding with coated abrasives can be an attractive process because the large surface area of the recirculating belt tends to carry away heat and prevent local hot spots. The productivity of this technology is, in many cases, three times that of rotary or reciprocating vertical grinders. As a result, belt grinding is replacing traditional grinding technologies in the field of specialty metal processing. ", "Automation-Control": 0.9849234223, "Qwen2": "Yes"} {"id": "6852657", "revid": "21857263", "url": "https://en.wikipedia.org/wiki?curid=6852657", "title": "Belt grinding", "text": "Belt grinding is an abrasive machining process used on metals and other materials. It is typically used as a finishing process in industry. 
A belt, coated in abrasive material, is run over the surface to be processed in order to remove material or produce the desired finish.\nApplications.\nBelt grinding is a versatile process suitable for many different applications. There are three different applications of the belt grinding technology:\nGrinding methods.\nWide belt grinding is a familiar process in industry as well as in home applications. There are several basic methods for belt grinding:\nIn general there are three basic elements of the belt-grinding machine: a work rest support, a grinding head and a regulating head. These components differ among the methods, but in general the workpiece is pressed between the grinding head and the rest support. The objective of the regulating head is to coordinate the belt pressure.\nWide belt grinding.\nOne of the most common methods is wide belt grinding.\nThe belt grinding process is variable by adjusting certain parameters such as belt speed, grinding pressure, feed speed, durometer of the contact drum, size of the contact drum and the abrasive belt that is used. The machines can be made for wet or dry operation. Furthermore, a wide belt grinding machine can be constructed with single or multiple heads. The first head is used for coarse grinding and the next heads gradually make a finer finish. Wide belt grinding is also used as a high stock removal method for special metals (e.g. stainless steel, titanium, and nickel alloys).\nChanging variables.\nThere are several possible objectives for grinding with coated abrasives. Among them are the right application (e.g. finish or stock removal), time saving and efficiency of the abrasive tool.\nTo achieve the above objectives, it is essential to look in more detail at the variables which affect them. 
These include the work material properties, the grit and abrasive type of the grinding belt, belt speed, belt sequences, contact wheel hardness and diameter, serration, type of lubricant (or dry operation) and grinding pressure. Changing these variables will affect the performance of the belt grinding process.\nIn the wide belt method, a contact wheel supports the abrasive belt. The selection of the contact wheel and abrasive to match the grinding parameters required for a specific operation is critical. Stock removal generally requires a harder, serrated rubber contact wheel, and coarse grade ceramic abrasives. Finishing generally requires the use of a smooth faced contact wheel and fine grade abrasives.", "Automation-Control": 0.972048223, "Qwen2": "Yes"} {"id": "66574331", "revid": "7903804", "url": "https://en.wikipedia.org/wiki?curid=66574331", "title": "Bellman filter", "text": "The Bellman filter is an algorithm that estimates the value sequence of hidden states in a state-space model. It is a generalization of the Kalman filter, allowing for nonlinearity in both the state and observation equations. The principle behind the Bellman filter is an approximation of the maximum a posteriori estimator, which makes it robust to heavy-tailed noise. It is in general a very fast method, since at each iteration only the most recent state value is estimated. The algorithm owes its name to the Bellman equation, which plays a central role in the derivation of the algorithm.", "Automation-Control": 1.0000097752, "Qwen2": "Yes"} {"id": "4521831", "revid": "7903804", "url": "https://en.wikipedia.org/wiki?curid=4521831", "title": "Supervisory control", "text": "Supervisory control is a general term for control of many individual controllers or control loops, such as within a distributed control system. 
It refers to a high level of overall monitoring of individual process controllers, which is not necessary for the operation of each controller, but gives the operator an overall plant process view, and allows integration of operation between controllers.\nA more specific use of the term is for a Supervisory Control and Data Acquisition system or SCADA, which refers to a specific class of system for use in process control, often on fairly small and remote applications such as pipeline transport, water distribution, or wastewater utility stations.\nForms.\nSupervisory control often takes one of two forms. In one, the controlled machine or process continues autonomously. It is observed from time to time by a human who, when deeming it necessary, intervenes to modify the control algorithm in some way. In the other, the process accepts an instruction, carries it out autonomously, reports the results and awaits further commands. With manual control, the operator interacts directly with a controlled process or task using switches, levers, screws, valves etc., to control actuators. This concept was incorporated in the earliest machines which sought to extend the physical capabilities of man. In contrast, with automatic control, the machine adapts to changing circumstances and makes decisions in pursuit of some goal which can be as simple as switching a heating system on and off to maintain a room temperature within a specified range. Sheridan defines supervisory control as follows: \"in the strictest sense, supervisory control means that one or more human operators are intermittently programming and continually receiving information from a computer that itself closes an autonomous control loop through artificial effectors to the controlled process or task environment.\"\nOther points.\nRobotics applications have traditionally aimed for automatic control. 
Automatic control requires sensing and responding appropriately to all combinations of circumstances, which can present problems of overwhelming complexity. A supervisory control scheme offers the prospect of solving the automation problem incrementally, leaving unsolved problems to be handled by the human supervisor.\nCommunications delay does not have the same impact on this control scheme. All time-critical feedback occurs at the slave, where the delays are negligible. Instability is thus avoided without modifying the feedback loop. Communications delay, in this case, slows the rate at which an operator can assign tasks to the slave and determine whether those tasks have been successfully carried out.", "Automation-Control": 0.9925700426, "Qwen2": "Yes"} {"id": "4521890", "revid": "35498457", "url": "https://en.wikipedia.org/wiki?curid=4521890", "title": "Control loop", "text": "A control loop is the fundamental building block of control systems in general, and industrial control systems in particular. It consists of the process sensor, the controller function, and the final control element (FCE), which together automatically adjust the value of a measured process variable (PV) to equal the value of a desired set-point (SP).\nThere are two common classes of control loop: open loop and closed loop. In an open-loop control system, the control action from the controller is independent of the process variable. An example of this is a central heating boiler controlled only by a timer. The control action is the switching on or off of the boiler. The process variable is the building temperature. This controller operates the heating system for a constant time regardless of the temperature of the building.\nIn a closed-loop control system, the control action from the controller is dependent on the desired and actual process variable. 
In the case of the boiler analogy, this would utilize a thermostat to monitor the building temperature, and feed back a signal to ensure the controller output maintains the building temperature close to that set on the thermostat. A closed-loop controller has a feedback loop which ensures the controller exerts a control action to control a process variable at the same value as the setpoint. For this reason, closed-loop controllers are also called feedback controllers.\nOpen-loop and closed-loop.\nFundamentally, there are two types of control loop: \"open-loop control\" (feedforward), and \"closed-loop control\" (feedback).\nIn open-loop control, the control action from the controller is independent of the \"process output\" (or \"controlled process variable\"). A good example of this is a central heating boiler controlled only by a timer, so that heat is applied for a constant time, regardless of the temperature of the building. The control action is the switching on/off of the boiler. The controlled variable should be the building temperature, but it is not, because this is open-loop control of the boiler, which does not give closed-loop control of the temperature.\nIn closed loop control, the control action from the controller is dependent on the process output. In the case of the boiler analogy this would include a thermostat to monitor the building temperature, and thereby feed back a signal to ensure the controller maintains the building at the temperature set on the thermostat. A closed loop controller therefore has a feedback loop which ensures the controller exerts a control action to give a process output the same as the \"reference input\" or \"set point\". 
For this reason, closed loop controllers are also called feedback controllers.\nThe definition of a closed loop control system according to the British Standards Institution is \"a control system possessing monitoring feedback, the deviation signal formed as a result of this feedback being used to control the action of a final control element in such a way as to tend to reduce the deviation to zero.\"\nLikewise: \"A \"Feedback Control System\" is a system which tends to maintain a prescribed relationship of one system variable to another by comparing functions of these variables and using the difference as a means of control.\"\nOther examples.\nAn example of a control system is a car's cruise control, which is a device designed to maintain vehicle speed at a constant \"desired\" or \"reference\" speed provided by the driver. The \"controller\" is the cruise control, the \"plant\" is the car, and the \"system\" is the car and the cruise control. The system output is the car's speed, and the control itself is the engine's throttle position, which determines how much power the engine delivers.\nA primitive way to implement cruise control is simply to lock the throttle position when the driver engages cruise control. However, if the cruise control is engaged on a stretch of non-flat road, then the car will travel slower going uphill and faster when going downhill. This type of controller is called an \"open-loop controller\" because there is no feedback; no measurement of the system output (the car's speed) is used to alter the control (the throttle position). As a result, the controller cannot compensate for changes acting on the car, such as a change in the slope of the road.\nIn a \"closed-loop control system\", data from a sensor monitoring the car's speed (the system output) enters a controller which continuously compares the quantity representing the speed with the reference quantity representing the desired speed. 
The difference, called the error, determines the throttle position (the control). The result is to match the car's speed to the reference speed (maintain the desired system output). Now, when the car goes uphill, the difference between the input (the sensed speed) and the reference continuously determines the throttle position. As the sensed speed drops below the reference, the difference increases, the throttle opens, and engine power increases, speeding up the vehicle. In this way, the controller dynamically counteracts changes to the car's speed. The central idea of these control systems is the \"feedback loop\": the controller affects the system output, which in turn is measured and fed back to the controller.\nApplication.\nThe accompanying diagram shows a control loop with a single PV input, a control function, and the control output (CO) which modulates the action of the final control element (FCE) to alter the value of the manipulated variable (MV). In this example, a flow control loop is shown, but it could equally be level, temperature, or any one of many process parameters which need to be controlled. The control function shown is an \"intermediate type\" such as a PID controller, which means it can generate a full range of output signals anywhere between 0 and 100%, rather than just an on/off signal.\nIn this example, the value of the PV is always the same as the MV, as they are in series in the pipeline. However, if the feed from the valve were to a tank, and the controller's function were to control the level using the fill valve, the PV would be the tank level, and the MV would be the flow to the tank.\nThe controller function can be a discrete controller or a function block in a computerised control system such as a distributed control system or a programmable logic controller. In all cases, a control loop diagram is a very convenient and useful way of representing the control function and its interaction with the plant. 
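The cruise-control feedback loop described above can be simulated in a few lines; a toy sketch with a proportional controller and a one-dimensional car model (all gains and dynamics are illustrative):

```python
def simulate_cruise(setpoint, slope_drag, kp, steps=200, dt=0.1):
    # Closed loop: at every step the sensed speed is compared with the
    # reference, and the error sets the throttle (a P controller).
    speed = setpoint
    for _ in range(steps):
        error = setpoint - speed       # compare system output to reference
        throttle = kp * error          # control action from the error
        accel = throttle - slope_drag  # plant: net force changes the speed
        speed += accel * dt
    return speed

# On a hill (constant drag) the loop holds speed near the setpoint, with
# the small steady-state offset characteristic of pure P control.
print(simulate_cruise(setpoint=30.0, slope_drag=2.0, kp=4.0))  # ~29.5
```

The residual offset, setpoint minus slope_drag / kp, is why practical controllers add integral action, as in the PID controller mentioned above; with kp = 0 (open loop) the same model simply slows down indefinitely on the hill.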
In practice at a process control level, control loops are normally abbreviated using standard symbols in a Piping and instrumentation diagram, which shows all elements of the process measurement and control based on a process flow diagram.\nAt a detailed level, the control loop connection diagram is created to show the electrical and pneumatic connections. This greatly aids diagnostics and repair, as all the connections for a single control function are on one diagram.\nLoop and control equipment tagging.\nTo aid unique identification of equipment, each loop and its elements are identified by a \"tagging\" system, and each element has a unique tag identification.\nBased on the standards ANSI/ISA S5.1 and ISO 14617-6, the identifications consist of up to 5 letters.\nThe first identification letter is for the measured value, the second is a modifier, the third indicates the passive/readout function, the fourth the active/output function, and the fifth is the function modifier. This is followed by the loop number, which is unique to that loop.\nFor instance, FIC045 means it is the Flow Indicating Controller in control loop 045. This is also known as the \"tag\" identifier of the field device, which normally reflects the location and function of the instrument. The same loop may have FT045 - which is the flow transmitter in the same loop.\nFor reference designation of any equipment in industrial systems the standard IEC 61346 (\"Industrial systems, installations and equipment and industrial products — Structuring principles and reference
There are three types of compensators: lag, lead and lag-lead compensators.\nAdjusting a control system in order to improve its performance might lead to unexpected behaviour (e.g. poor stability or even instability by increasing the gain value). In order to make the system behave as desired, it is necessary to redesign the system and add a compensator, a device which compensates for the deficient performance of the original system.", "Automation-Control": 0.8265905976, "Qwen2": "Yes"} {"id": "53416632", "revid": "754619", "url": "https://en.wikipedia.org/wiki?curid=53416632", "title": "Cold Metal Transfer", "text": "Cold Metal Transfer (abbreviated CMT) is a welding method that is usually performed by a welding robot. When the CMT machine detects a short circuit, it sends a signal that retracts the welding filler material, giving the weld time to cool before each drop is placed. This leaves a smooth weld that is stronger than that of a hotter weld. This works well on thin metal that is prone to warping and burn-through. This type of welding is more efficient than other GMAW methods when the metal is thinner than 10 mm; for anything thicker, the expense begins to exceed that of traditional welding. Welding wire is fed through a computer-controlled system; the computer adjusts parameters such as wire feed, welding speed, and the current through the wire. This allows precise welding of materials like steel and aluminum, with very little slag and spatter, resulting in a cleaner finished weld.\nDefinition.\nCMT is a subset of gas metal arc welding. It works by reducing the weld current and retracting the weld wire when detecting a short circuit, resulting in a drop-by-drop deposit of weld material. 
Developed for thin materials, CMT requires strict control of weld parameters.\nHistory.\nCMT was originally intended for joining sheet metal in the automotive industry, but has expanded to thicker materials.\nApplication.\nCold metal transfer is used to weld different types of metal of various thicknesses. This low-voltage, low-heat welding works well on thin sheet metal. It is also used for thicker material where the integrity of the weld is important. When metal is overheated, its structural properties are affected; CMT welding keeps the heat to a minimum, resulting in little change to the structure of the metal and providing a stronger weld. Thin metal has a greater possibility of distorting when heated. During traditional GMAW welding, heat sinks or other heat protection had to be used to prevent warping of the metal; heat protection is not needed during the CMT process. CMT has a wide variety of applications in industries such as small engine, automotive, and marine.", "Automation-Control": 0.9987193942, "Qwen2": "Yes"} {"id": "15094152", "revid": "31876559", "url": "https://en.wikipedia.org/wiki?curid=15094152", "title": "Gradient method", "text": "In optimization, a gradient method is an algorithm to solve problems of the form \nwith the search directions defined by the gradient of the function at the current point. Examples of gradient methods are gradient descent and the conjugate gradient method.", "Automation-Control": 0.997441709, "Qwen2": "Yes"} {"id": "1120647", "revid": "27823944", "url": "https://en.wikipedia.org/wiki?curid=1120647", "title": "Oracle Reports", "text": "Oracle Reports is a tool for developing reports against data stored in an Oracle database. 
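The gradient method described above can be illustrated by its simplest instance, gradient descent with a fixed step size. This is a minimal sketch; the quadratic objective and step size are assumptions chosen for illustration:

```python
def gradient_descent(grad, x0, step=0.1, iters=100):
    """Iterate x <- x - step * grad(x): the search direction is the (negative) gradient."""
    x = x0
    for _ in range(iters):
        x = x - step * grad(x)
    return x

# Minimize f(x) = (x - 3)^2, whose gradient is 2 * (x - 3); the minimizer is x = 3.
x_min = gradient_descent(lambda x: 2.0 * (x - 3.0), x0=0.0)
print(round(x_min, 6))  # → 3.0
```

Each iterate moves opposite the gradient at the current point, which is exactly the "search direction defined by the gradient" in the definition above; conjugate gradient differs only in how that direction is adjusted between iterations.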
Oracle Reports consists of Oracle Reports Developer (a component of the Oracle Developer Suite) and Oracle Application Server Reports Services (a component of the Oracle Application Server).\nOutput formats.\nThe report output can be delivered directly to a printer or saved in the following formats: HTML, RTF, PDF, XML, Microsoft Excel\nHistory.\nOracle RPT.\nOracle RPT was an early, primitive predecessor to SQL*Report Writer. There was no editor or IDE provided and instead the reports were created by editing text files to control the report output.\nOracle Reports 2.5.\nRelease April 1995.\nNew Object Navigator.\nNew Toolbars.\nNew Menus.\nStill no undos.\nMore stable IDE.\nOracle Reports 6i.\nNew features added in 6i:\nOracle Reports 9i.\nNew features added in 9i:\nOracle Reports 10g.\nNew features added in 10g:", "Automation-Control": 0.9996768236, "Qwen2": "Yes"} {"id": "28307770", "revid": "1163212947", "url": "https://en.wikipedia.org/wiki?curid=28307770", "title": "Knuckle boom crane", "text": "A knuckle boom crane is a kind of standard crane whose boom articulates at the 'knuckle' near the middle, letting it fold back like a finger. This provides a compact size for storage and manoeuvring.\nKnuckle boom cranes have become very common on offshore vessels as less of the deck space is blocked by the crane. Disadvantages of this crane type are the higher power demand and increased maintenance requirement due to the increased number of moving parts.\nKnuckle boom crane arms are much lighter than boom truck cranes, and they are designed to allow for more payloads to be carried on the back of the truck that it is mounted on. The majority of them are mounted behind the cab and leave the entire bed of the truck empty.\nThe cranes come with different types of control systems, such as: stand up, control from the ground, seat control, or radio remote control. The radio remote systems now can start the crane as well as run the crane. 
Currently, they come equipped with a computer readout system that immediately indicates whether or not the crane is overloaded.", "Automation-Control": 0.7511042953, "Qwen2": "Yes"} {"id": "75485", "revid": "842485", "url": "https://en.wikipedia.org/wiki?curid=75485", "title": "Electrical discharge machining", "text": "Electrical discharge machining (EDM), also known as spark machining, spark eroding, die sinking, wire burning or wire erosion, is a metal fabrication process whereby a desired shape is obtained by using electrical discharges (sparks). Material is removed from the work piece by a series of rapidly recurring current discharges between two electrodes, separated by a dielectric liquid and subject to an electric voltage. One of the electrodes is called the tool-electrode, or simply the tool or electrode, while the other is called the workpiece-electrode, or workpiece. The process depends upon the tool and work piece not making physical contact.\nWhen the voltage between the two electrodes is increased, the intensity of the electric field in the volume between the electrodes becomes greater, causing dielectric breakdown of the liquid, and produces an electric arc. As a result, material is removed from the electrodes. Once the current stops (or is stopped, depending on the type of generator), new liquid dielectric is conveyed into the inter-electrode volume, enabling the solid particles (debris) to be carried away and the insulating properties of the dielectric to be restored. Adding new liquid dielectric in the inter-electrode volume is commonly referred to as flushing. After a current flow, the voltage between the electrodes is restored to what it was before the breakdown, so that a new liquid dielectric breakdown can occur to repeat the cycle.\nHistory.\nThe erosive effect of electrical discharges was first noted in 1770 by English physicist Joseph Priestley.\nDie-sink EDM.\nTwo Soviet scientists, B. R. Lazarenko and N. I. 
Lazarenko, were tasked in 1943 to investigate ways of preventing the erosion of tungsten electrical contacts due to sparking. They failed in this task but found that the erosion was more precisely controlled if the electrodes were immersed in a dielectric fluid. This led them to invent an EDM machine used for working difficult-to-machine materials such as tungsten. The Lazarenkos' machine is known as an R-C-type machine, after the resistor–capacitor circuit (RC circuit) used to charge the electrodes.\nSimultaneously but independently, an American team, Harold Stark, Victor Harding, and Jack Beaver, developed an EDM machine for removing broken drills and taps from aluminium castings. Initially constructing their machines from under-powered electric-etching tools, they were not very successful. But more powerful sparking units, combined with automatic spark repetition and fluid replacement with an electromagnetic interrupter arrangement produced practical machines. Stark, Harding, and Beaver's machines were able to produce 60 sparks per second. Later machines based on their design used vacuum tube circuits that were able to produce thousands of sparks per second, significantly increasing the speed of cutting.\nWire-cut EDM.\nThe wire-cut type of machine arose in the 1960s for making tools (dies) from hardened steel. The tool electrode in wire EDM is simply a wire. To avoid the erosion of the wire causing it to break, the wire is wound between two spools so that the active part of the wire is constantly changing. The earliest numerical controlled (NC) machines were conversions of punched-tape vertical milling machines. The first commercially available NC machine built as a wire-cut EDM machine was manufactured in the USSR in 1967. Machines that could optically follow lines on a master drawing were developed by David H. Dulebohn's group in the 1960s at Andrew Engineering Company for milling and grinding machines. 
Master drawings were later produced by computer numerical controlled (CNC) plotters for greater accuracy. A wire-cut EDM machine using the CNC drawing plotter and optical line follower techniques was produced in 1974. Dulebohn later used the same plotter CNC program to directly control the EDM machine, and the first CNC EDM machine was produced in 1976.\nCommercial wire EDM capability and use have advanced substantially during recent decades. Feed rates have increased and surface finish can be finely controlled.\nGeneralities.\nElectrical discharge machining is a machining method primarily used for hard metals or those that would be very difficult to machine with traditional techniques. EDM typically works with materials that are electrically conductive, although methods have also been proposed for using EDM to machine insulating ceramics. EDM can cut intricate contours or cavities in pre-hardened steel without the need for heat treatment to soften and re-harden them. This method can be used with any other metal or metal alloy such as titanium, hastelloy, kovar, and inconel. Also, applications of this process to shape polycrystalline diamond tools have been reported.\nEDM is often included in the \"non-traditional\" or \"non-conventional\" group of machining methods together with processes such as electrochemical machining (ECM), water jet cutting (WJ, AWJ) and laser cutting, as opposed to the \"conventional\" group (turning, milling, grinding, drilling and any other process whose material removal mechanism is essentially based on mechanical forces).\nIdeally, EDM can be seen as a series of breakdowns and restorations of the liquid dielectric in-between the electrodes. However, caution should be exercised in considering such a statement because it is an idealized model of the process, introduced to describe the fundamental ideas underlying the process. Yet, any practical application involves many aspects that may also need to be considered. 
For instance, the removal of the debris from the inter-electrode volume is likely to be always partial. Thus the electrical properties of the dielectric in the inter-electrode volume can be different from their nominal values and can even vary with time. The inter-electrode distance, often also referred to as spark-gap, is the result of the control algorithms of the specific machine used. The control of such a distance appears logically to be central to this process. Also, not all of the current flow through the dielectric is of the ideal type described above: the spark-gap can be short-circuited by the debris. The control system of the electrode may fail to react quickly enough to prevent the two electrodes (tool and workpiece) from coming into contact, with a consequent short circuit. This is unwanted because a short circuit contributes to material removal differently from the ideal case. The flushing action can be inadequate to restore the insulating properties of the dielectric, so that the current always occurs at the same point of the inter-electrode volume (this is referred to as arcing), with a consequent unwanted change of shape (damage) of the tool-electrode and workpiece. Ultimately, a description of this process in a way suitable for the specific purpose at hand is what makes the EDM area such a rich field for further investigation and research.\nTo obtain a specific geometry, the EDM tool is guided along the desired path very close to the work; ideally it should not touch the workpiece, although in reality this may happen due to the performance of the specific motion control in use. In this way, a large number of current discharges (colloquially also called sparks) happen, each contributing to the removal of material from both tool and workpiece, where small craters are formed. The size of the craters is a function of the technological parameters set for the specific job at hand. 
Their typical dimensions range from the nanoscale (in micro-EDM operations) to some hundreds of micrometers in roughing conditions.\nThe presence of these small craters on the tool results in the gradual erosion of the electrode. This erosion of the tool-electrode is also referred to as wear. Strategies are needed to counteract the detrimental effect of the wear on the geometry of the workpiece. One possibility is that of continuously replacing the tool-electrode during a machining operation. This is what happens if a continuously replaced wire is used as the electrode. In this case, the corresponding EDM process is also called wire EDM. The tool-electrode can also be used in such a way that only a small portion of it is actually engaged in the machining process and this portion is changed on a regular basis. This is, for instance, the case when using a rotating disk as a tool-electrode. The corresponding process is often also referred to as EDM grinding.\nA further strategy consists of using a set of electrodes with different sizes and shapes during the same EDM operation. This is often referred to as a multiple electrode strategy, and is most common when the tool electrode replicates in negative the wanted shape and is advanced towards the blank along a single direction, usually the vertical direction (i.e. z-axis). This resembles the sinking of the tool into the dielectric liquid in which the workpiece is immersed, so, not surprisingly, it is often referred to as die-sinking EDM (also called conventional EDM and ram EDM). The corresponding machines are often called sinker EDM. Usually, the electrodes of this type have quite complex forms. 
If the final geometry is obtained using a simple-shaped electrode which is moved along several directions and is possibly also subject to rotations, often the term EDM milling is used.\nIn any case, the severity of the wear is strictly dependent on the technological parameters used in the operation (for instance: polarity, maximum current, open circuit voltage). For example, in micro-EDM, also known as μ-EDM, these parameters are usually set at values which generate severe wear. Therefore, wear is a major problem in that area.\nThe problem of wear to graphite electrodes is being addressed. In one approach, a digital generator, controllable within milliseconds, reverses polarity as electro-erosion takes place. That produces an effect similar to electroplating that continuously deposits the eroded graphite back on the electrode. In another method, a so-called \"Zero Wear\" circuit reduces how often the discharge starts and stops, keeping it on for as long a time as possible.\nDefinition of the technological parameters.\nDifficulties have been encountered in the definition of the technological parameters that drive the process.\nTwo broad categories of generators, also known as power supplies, are in use on commercially available EDM machines: the group based on RC circuits and the group based on transistor controlled pulses.\nIn both categories, the primary parameters at setup are the current and frequency delivered. In RC circuits, however, little control is expected over the time duration of the discharge, which is likely to depend on the actual spark-gap conditions (size and pollution) at the moment of the discharge. Also, the open circuit voltage (i.e. the voltage between the electrodes when the dielectric is not yet broken) can be identified as the steady state voltage of the RC circuit.\nIn generators based on transistor control, the user is usually able to deliver a train of pulses of voltage to the electrodes. 
Each pulse can be controlled in shape, for instance, quasi-rectangular. In particular, the time between two consecutive pulses and the duration of each pulse can be set. The amplitude of each pulse constitutes the open circuit voltage. Thus, the maximum duration of discharge is equal to the duration of a pulse of voltage in the train. Two pulses of current are then expected not to occur for a duration equal to or longer than the time interval between two consecutive pulses of voltage.\nThe maximum current during a discharge that the generator delivers can also be controlled. Because other sorts of generators may also be used by different machine builders, the parameters that may actually be set on a particular machine will depend on the generator manufacturer. The details of the generators and control systems on their machines are not always easily available to their user. This is a barrier to describing unequivocally the technological parameters of the EDM process. Moreover, the parameters affecting the phenomena occurring between tool and electrode are also related to the controller of the motion of the electrodes.\nA framework to define and measure the electrical parameters during an EDM operation directly on the inter-electrode volume with an oscilloscope external to the machine has recently been proposed by Ferri \"et al.\" These authors conducted their research in the field of μ-EDM, but the same approach can be used in any EDM operation. This would enable the user to estimate directly the electrical parameters that affect their operations without relying upon machine manufacturer's claims. When machining different materials in the same setup conditions, the actual electrical parameters of the process are significantly different.\nMaterial removal mechanism.\nThe first serious attempt at providing a physical explanation of the material removal during electric discharge machining is perhaps that of Van Dijck. 
Van Dijck presented a thermal model together with a computational simulation to explain the phenomena between the electrodes during electric discharge machining. However, as Van Dijck himself admitted in his study, the number of assumptions made to overcome the lack of experimental data at that time was quite significant.\nFurther models of what occurs during electric discharge machining in terms of heat transfer were developed in the late eighties and early nineties. It resulted in three scholarly papers: the first presenting a thermal model of material removal on the cathode, the second presenting a thermal model for the erosion occurring on the anode and the third introducing a model describing the plasma channel formed during the passage of the discharge current through the dielectric liquid. Validation of these models is supported by experimental data provided by AGIE.\nThese models give the most authoritative support for the claim that EDM is a thermal process, removing material from the two electrodes because of melting or vaporization, along with pressure dynamics established in the spark-gap by the collapsing of the plasma channel. However, for small discharge energies the models are inadequate to explain the experimental data. All these models hinge on a number of assumptions from such disparate research areas as submarine explosions, discharges in gases, and failure of transformers, so it is not surprising that alternative models have been proposed more recently in the literature trying to explain the EDM process.\nAmong these, the model from Singh and Ghosh reconnects the removal of material from the electrode to the presence of an electrical force on the surface of the electrode that could mechanically remove material and create the craters. This would be possible because the material on the surface has altered mechanical properties due to an increased temperature caused by the passage of electric current. 
The authors' simulations showed how they might explain EDM better than a thermal model (melting or evaporation), especially for small discharge energies, which are typically used in μ-EDM and in finishing operations.\nGiven the many available models, it appears that the material removal mechanism in EDM is not yet well understood and that further investigation is necessary to clarify it, especially considering the lack of experimental scientific evidence to build and validate the current EDM models. This explains an increased current research effort in related experimental techniques.\nTypes.\nSinker EDM.\nSinker EDM, also called ram EDM, cavity type EDM or volume EDM, consists of an electrode and workpiece submerged in an insulating liquid, most typically oil or, less frequently, other dielectric fluids. The electrode and workpiece are connected to a suitable power supply. The power supply generates an electrical potential between the two parts. As the electrode approaches the workpiece, dielectric breakdown occurs in the fluid, forming a plasma channel, and a small spark jumps.\nThese sparks usually strike one at a time, because it is very unlikely that different locations in the inter-electrode space have the identical local electrical characteristics which would enable a spark to occur simultaneously in all such locations. These sparks happen in huge numbers at seemingly random locations between the electrode and the workpiece. As the base metal is eroded, and the spark gap subsequently increased, the electrode is lowered automatically by the machine so that the process can continue uninterrupted. Several hundred thousand sparks occur per second, with the actual duty cycle carefully controlled by the setup parameters. 
These controlling cycles are sometimes known as \"on time\" and \"off time\", which are more formally defined in the literature.\nThe on time setting determines the length or duration of the spark. Hence, a longer on time produces a deeper cavity from each spark, creating a rougher finish on the workpiece. The reverse is true for a shorter on time. Off time is the period of time between sparks. Although not directly affecting the machining of the part, the off time allows the flushing of dielectric fluid through a nozzle to clean out the eroded debris. Insufficient debris removal can cause repeated strikes in the same location, which can lead to a short circuit. Modern controllers monitor the characteristics of the arcs and can alter parameters in microseconds to compensate. The typical part geometry is a complex 3D shape, often with small or odd-shaped angles. Vertical, orbital, vectorial, directional, helical, conical, rotational, spin and indexing machining cycles are also used.\nWire EDM.\nIn \"wire electrical discharge machining\" (WEDM), also known as \"wire-cut EDM\" and \"wire cutting\", a thin single-strand metal wire, usually brass, is fed through the workpiece, submerged in a tank of dielectric fluid, typically deionized water. Wire-cut EDM is typically used to cut plates as thick as 300 mm and to make punches, tools, and dies from hard metals that are difficult to machine with other methods. \nThe wire, which is constantly fed from a spool, is held between upper and lower diamond guides which are centered in a water nozzle head. The guides, usually CNC-controlled, move in the \"x\"–\"y\" plane. On most machines, the upper guide can also move independently in the \"z\"–\"u\"–\"v\" axes, giving the ability to cut tapered and transitioning shapes (circle on the bottom, square at the top, for example). The upper guide can control axis movements in the GCode standard, \"x\"–\"y\"–\"u\"–\"v\"–\"i\"–\"j\"–\"k\"–\"l\"–. 
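The \"on time\"/\"off time\" cycle described above sets the spark duty cycle and the spark repetition rate. A minimal sketch of the relationship, with hypothetical parameter values chosen only for illustration:

```python
def duty_cycle(on_time_us, off_time_us):
    """Fraction of each cycle during which the spark is on."""
    return on_time_us / (on_time_us + off_time_us)

def sparks_per_second(on_time_us, off_time_us):
    """Number of on/off cycles (sparks) per second, with times in microseconds."""
    return 1_000_000 / (on_time_us + off_time_us)

# Hypothetical settings: 3 microseconds on, 7 microseconds off.
print(duty_cycle(3, 7))         # → 0.3
print(sparks_per_second(3, 7))  # → 100000.0
```

Even these illustrative microsecond-scale settings yield on the order of a hundred thousand sparks per second, consistent with the "several hundred thousand sparks per second" figure quoted above for sinker EDM.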
This allows the wire-cut EDM to be programmed to cut very intricate and delicate shapes. \nThe upper and lower diamond guides are usually accurate to , and can have a cutting path or \"kerf\" as small as using Ø wire, though the average cutting kerf that achieves the best economic cost and machining time is using Ø brass wire. The cutting width is greater than the width of the wire because sparking occurs from the sides of the wire to the work piece, causing erosion. This \"overcut\" is necessary; for many applications it is adequately predictable and can therefore be compensated for (in micro-EDM, for instance, this is not often the case). Spools of wire are long — an 8 kg spool of 0.25 mm wire is just over 19 kilometers in length. Wire diameter can be as small as and the geometry precision is not far from ± . \nThe wire-cut process uses water as its dielectric fluid, controlling its resistivity and other electrical properties with filters and PID-controlled de-ionizer units. The water flushes the cut debris away from the cutting zone. Flushing is an important factor in determining the maximum feed rate for a given material thickness.\nAlong with tighter tolerances, multi-axis EDM wire-cutting machining centers have added features such as multiple heads for cutting two parts at the same time, controls for preventing wire breakage, automatic self-threading features in case of wire breakage, and programmable machining strategies to optimize the operation.\nWire-cutting EDM is commonly used when low residual stresses are desired, because it does not require high cutting forces for removal of material. 
If the energy/power per pulse is relatively low (as in finishing operations), little change in the mechanical properties of a material is expected due to these low residual stresses, although material that hasn't been stress-relieved can distort in the machining process.\nThe work piece may undergo a significant thermal cycle, its severity depending on the technological parameters used. Such thermal cycles may cause formation of a recast layer on the part and residual tensile stresses on the work piece. If machining takes place after heat treatment, dimensional accuracy will not be affected by heat treat distortion.\nFast hole drilling EDM.\nFast hole drilling EDM was designed for producing fast, accurate, small, and deep holes. It is conceptually akin to sinker EDM but the electrode is a rotating tube conveying a pressurized jet of dielectric fluid. It can make a hole an inch deep in about a minute and is a good way to machine holes in materials too hard for twist-drill machining. This EDM drilling type is used largely in the aerospace industry, producing cooling holes into aero blades and other components. It is also used to drill holes in industrial gas turbine blades, in molds and dies, and in bearings.\nApplications.\nPrototype production.\nThe EDM process is most widely used by the mold-making, tool, and die industries, but is becoming a common method of making prototype and production parts, especially in the aerospace, automobile and electronics industries in which production quantities are relatively low. 
In sinker EDM, a graphite, copper tungsten, or pure copper electrode is machined into the desired (negative) shape and fed into the workpiece on the end of a vertical ram.\nCoinage die making.\nFor the creation of dies for producing jewelry and badges, or blanking and piercing (through use of a pancake die) by the coinage (stamping) process, the positive master may be made from sterling silver, since (with appropriate machine settings) the master is significantly eroded and is used only once. The resultant negative die is then hardened and used in a drop hammer to produce stamped flats from cutout sheet blanks of bronze, silver, or low proof gold alloy. For badges these flats may be further shaped to a curved surface by another die. This type of EDM is usually performed submerged in an oil-based dielectric. The finished object may be further refined by hard (glass) or soft (paint) enameling, or electroplated with pure gold or nickel. Softer materials such as silver may be hand engraved as a refinement.\nSmall hole drilling.\nSmall hole drilling EDM is used in a variety of applications.\nOn wire-cut EDM machines, small hole drilling EDM is used to make a through hole in a workpiece through which to thread the wire for the wire-cut EDM operation. A separate EDM head specifically for small hole drilling is mounted on a wire-cut machine and allows large hardened plates to have finished parts eroded from them as needed and without pre-drilling.\nSmall hole EDM is used to drill rows of holes into the leading and trailing edges of turbine blades used in jet engines. Gas flow through these small holes allows the engines to use higher temperatures than otherwise possible. 
The high-temperature, very hard, single crystal alloys employed in these blades make conventional machining of these high-aspect-ratio holes extremely difficult, if not impossible.\nSmall hole EDM is also used to create microscopic orifices for fuel system components, spinnerets for synthetic fibers such as rayon, and other applications.\nThere are also stand-alone small hole drilling EDM machines with an \"x\"–\"y\" axis, also known as a super drill or \"hole popper\", that can machine blind or through holes. EDM drills bore holes with a long brass or copper tube electrode that rotates in a chuck with a constant flow of distilled or deionized water flowing through the electrode as a flushing agent and dielectric. The electrode tubes operate like the wire in wire-cut EDM machines, having a spark gap and wear rate. Some small-hole drilling EDMs are able to drill through 100 mm of soft or hardened steel in less than 10 seconds, averaging a 50% to 80% wear rate. Holes of 0.3 mm to 6.1 mm can be achieved in this drilling operation. Brass electrodes are easier to machine but are not recommended for wire-cut operations because eroded brass particles cause \"brass on brass\" wire breakage; therefore copper is recommended.
The metal disintegration process removes only the center of the broken tool or fastener, leaving the hole intact and allowing a part to be reclaimed.\nClosed loop manufacturing.\nClosed loop manufacturing can improve accuracy and reduce tool costs.\nAdvantages and disadvantages.\nEDM is often compared to electrochemical machining.\nAdvantages of EDM include:\nDisadvantages of EDM include:", "Automation-Control": 0.7982319593, "Qwen2": "Yes"} {"id": "55717347", "revid": "10755432", "url": "https://en.wikipedia.org/wiki?curid=55717347", "title": "Ministry of Instrumentation, Automation and Control Systems", "text": "The Ministry of Instrument-Making, Automation Devices and Control Systems (Minpribor; ) was a government ministry in the Soviet Union.\nEstablished in 1959 as the State Committee for Automation and Machine Building, it assumed its ministerial title in 1965 and oversaw the development and integration of automated control systems into industry. The ministry developed and manufactured systems for industrial control, planning and management.\nList of ministers.\n\"Source\":", "Automation-Control": 0.9991540313, "Qwen2": "Yes"} {"id": "7011", "revid": "33011235", "url": "https://en.wikipedia.org/wiki?curid=7011", "title": "Control engineering", "text": "Control engineering or control systems engineering is an engineering discipline that deals with control systems, applying control theory to design equipment and systems with desired behaviors in control environments. The discipline of controls overlaps and is usually taught along with electrical engineering and mechanical engineering at many institutions around the world.\nThe practice uses sensors and detectors to measure the output performance of the process being controlled; these measurements are used to provide corrective feedback helping to achieve the desired performance. 
Systems designed to perform without requiring human input are called automatic control systems (such as cruise control for regulating the speed of a car). Multi-disciplinary in nature, control systems engineering activities focus on implementation of control systems mainly derived by mathematical modeling of a diverse range of systems.\nOverview.\nModern day control engineering is a relatively new field of study that gained significant attention during the 20th century with the advancement of technology. It can be broadly defined or classified as practical application of control theory. Control engineering plays an essential role in a wide range of control systems, from simple household washing machines to high-performance fighter aircraft. It seeks to understand physical systems, using mathematical modelling, in terms of inputs, outputs and various components with different behaviors; to use control system design tools to develop controllers for those systems; and to implement controllers in physical systems employing available technology. A system can be mechanical, electrical, fluid, chemical, financial or biological, and its mathematical modelling, analysis and controller design uses control theory in one or many of the time, frequency and complex-s domains, depending on the nature of the design problem.\nControl engineering is the engineering discipline that focuses on the modeling of a diverse range of dynamic systems (e.g. mechanical systems) and the design of controllers that will cause these systems to behave in the desired manner. Although such controllers need not be electrical, many are and hence control engineering is often viewed as a subfield of electrical engineering.\nElectrical circuits, digital signal processors and microcontrollers can all be used to implement control systems. 
Control engineering has a wide range of applications from the flight and propulsion systems of commercial airliners to the cruise control present in many modern automobiles.\nIn most cases, control engineers utilize feedback when designing control systems. This is often accomplished using a PID controller system. For example, in an automobile with cruise control the vehicle's speed is continuously monitored and fed back to the system, which adjusts the motor's torque accordingly. Where there is regular feedback, control theory can be used to determine how the system responds to such feedback. In practically all such systems stability is important and control theory can help ensure stability is achieved.\nAlthough feedback is an important aspect of control engineering, control engineers may also work on the control of systems without feedback. This is known as open loop control. A classic example of open loop control is a washing machine that runs through a pre-determined cycle without the use of sensors.\nHistory.\nAutomatic control systems were first developed over two thousand years ago. The first feedback control device on record is thought to be the ancient Ktesibios's water clock in Alexandria, Egypt, around the third century BCE. It kept time by regulating the water level in a vessel and, therefore, the water flow from that vessel. This certainly was a successful device as water clocks of similar design were still being made in Baghdad when the Mongols captured the city in 1258 CE. A variety of automatic devices have been used over the centuries to accomplish useful tasks or simply just to entertain. The latter includes the automata, popular in Europe in the 17th and 18th centuries, featuring dancing figures that would repeat the same task over and over again; these automata are examples of open-loop control. 
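The open-loop versus closed-loop distinction described above can be illustrated with a toy cruise-control simulation. This is a minimal sketch: the first-order car model, the gains, and every number below are illustrative assumptions, not from the source.

```python
def simulate(controller, target=25.0, steps=200, dt=0.1, drag=0.1, hill=0.5):
    """Euler-integrate a toy car model dv/dt = u - drag*v - hill."""
    v = 0.0  # speed in m/s
    for _ in range(steps):
        u = controller(v, target)        # throttle command
        v += dt * (u - drag * v - hill)  # hill is an unmodeled disturbance
    return v

# Open loop: throttle chosen for flat ground (u = drag * target); speed is never
# measured, so the unmodeled hill pulls the car below the set point.
def open_loop(v, sp):
    return 0.1 * sp

# Closed loop: the measured speed is fed back, and the error term compensates.
def closed_loop(v, sp):
    return 0.1 * sp + 2.0 * (sp - v)

print(simulate(open_loop))    # settles well below the 25 m/s set point
print(simulate(closed_loop))  # settles close to the 25 m/s set point
```

The only difference between the two controllers is the feedback term on the measured speed, which is exactly the distinction between the washing-machine-style open-loop cycle and the cruise-control-style closed loop described in the text.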
Milestones among feedback, or \"closed-loop\" automatic control devices, include the temperature regulator of a furnace attributed to Drebbel, circa 1620, and the centrifugal flyball governor used for regulating the speed of steam engines by James Watt in 1788.\nIn his 1868 paper \"On Governors\", James Clerk Maxwell was able to explain instabilities exhibited by the flyball governor using differential equations to describe the control system. This demonstrated the importance and usefulness of mathematical models and methods in understanding complex phenomena, and it signaled the beginning of mathematical control and systems theory. Elements of control theory had appeared earlier but not as dramatically and convincingly as in Maxwell's analysis.\nControl theory made significant strides over the next century. New mathematical techniques, as well as advances in electronic and computer technologies, made it possible to control significantly more complex dynamical systems than the original flyball governor could stabilize. New mathematical techniques included developments in optimal control in the 1950s and 1960s followed by progress in stochastic, robust, adaptive, nonlinear control methods in the 1970s and 1980s. Applications of control methodology have helped to make possible space travel and communication satellites, safer and more efficient aircraft, cleaner automobile engines, and cleaner and more efficient chemical processes.\nBefore it emerged as a unique discipline, control engineering was practiced as a part of mechanical engineering and control theory was studied as a part of electrical engineering since electrical circuits can often be easily described using control theory techniques. In the very first control relationships, a current output was represented by a voltage control input. However, not having adequate technology to implement electrical control systems, designers were left with the option of less efficient and slow responding mechanical systems. 
A very effective mechanical controller that is still widely used in some hydro plants is the governor. Later, before the advent of modern power electronics, process control systems for industrial applications were devised by mechanical engineers using pneumatic and hydraulic control devices, many of which are still in use today.\nEducation.\nAt many universities around the world, control engineering courses are taught primarily in electrical engineering and mechanical engineering, but some courses are also taught in mechatronics engineering and aerospace engineering. In others, control engineering is connected to computer science, as most control techniques today are implemented through computers, often as embedded systems (as in the automotive field). The field of control within chemical engineering is often known as process control. It deals primarily with the control of variables in a chemical process in a plant. It is taught as part of the undergraduate curriculum of any chemical engineering program and employs many of the same principles used in control engineering. Other engineering disciplines also overlap with control engineering, as it can be applied to any system for which a suitable model can be derived. However, specialised control engineering departments do exist; for example, in Italy there are several master's programmes in Automation & Robotics that are fully specialised in control engineering, and there are the Department of Automatic Control and Systems Engineering at the University of Sheffield, the Department of Robotics and Control Engineering at the United States Naval Academy, and the Department of Control and Automation Engineering at the Istanbul Technical University.\nControl engineering has diverse applications that include science, financial management, and even human behavior.
Students of control engineering may start with a linear control system course dealing with the time and complex-s domains, which requires a thorough background in elementary mathematics and the Laplace transform; this body of material is called classical control theory. In linear control, the student performs frequency- and time-domain analysis. Digital control and nonlinear control courses require the Z-transformation and algebra respectively, and could be said to complete a basic control education.\nCareers.\nA control engineer's career typically starts with a bachelor's degree and can continue through graduate study. Control engineering degrees pair well with an electrical or mechanical engineering degree. Control engineers usually get jobs in technical management, where they typically lead interdisciplinary projects. There are many job opportunities in aerospace companies, manufacturing companies, automobile companies, power companies, and government agencies. Companies that hire control engineers include Rockwell Automation, NASA, Ford, and Goodrich. Control engineers can earn around $66k annually at Lockheed Martin Corp. and up to $96k annually at General Motors Corporation.\nAccording to a \"Control Engineering\" survey, most respondents worked in some form of control engineering. Few careers are classified simply as \"control engineer\"; most are specific roles that bear some resemblance to the overarching discipline. A majority of the control engineers who took the survey in 2019 were system or product designers, or control or instrument engineers. Most of the jobs involve process engineering, production, or maintenance; all are some variation of control engineering.\nRecent advancement.\nOriginally, control engineering was all about continuous systems.
The development of computer control tools created a requirement for discrete control system engineering, because the communications between the computer-based digital controller and the physical system are governed by a computer clock. The equivalent of the Laplace transform in the discrete domain is the Z-transform. Today, many control systems are computer controlled, and they consist of both digital and analog components.\nTherefore, at the design stage, either digital components are mapped into the continuous domain and the design is carried out in the continuous domain, or analog components are mapped into the discrete domain and the design is carried out there. The first of these two methods is more commonly encountered in practice because many industrial systems have numerous continuous-system components, including mechanical, fluid, biological and analog electrical components, and only a few digital controllers.\nSimilarly, the design technique has progressed from paper-and-ruler manual design to computer-aided design and now to computer-automated design or CAD, which has been made possible by evolutionary computation.
CAD can be applied not just to tuning a predefined control scheme, but also to controller structure optimisation, system identification and invention of novel control systems, based purely upon a performance requirement, independent of any specific control scheme.\nResilient control systems extend the traditional focus of addressing only planned disturbances and attempt to address multiple types of unexpected disturbance; in particular, adapting and transforming the behavior of the control system in response to malicious actors, abnormal failure modes, undesirable human action, etc.", "Automation-Control": 0.9980223775, "Qwen2": "Yes"} {"id": "7039", "revid": "28481209", "url": "https://en.wikipedia.org/wiki?curid=7039", "title": "Control theory", "text": "Control theory is a field of control engineering and applied mathematics that deals with the control of dynamical systems in engineered processes and machines. The objective is to develop a model or algorithm governing the application of system inputs to drive the system to a desired state, while minimizing any \"delay\", \"overshoot\", or \"steady-state error\" and ensuring a level of control stability, often with the aim of achieving a degree of optimality.\nTo do this, a controller with the requisite corrective behavior is required. This controller monitors the controlled process variable (PV) and compares it with the reference or set point (SP). The difference between the actual and desired value of the process variable, called the \"error\" signal, or SP-PV error, is applied as feedback to generate a control action that brings the controlled process variable to the same value as the set point. Other aspects which are also studied are controllability and observability. Control theory is used in control system engineering to design automation systems that have revolutionized manufacturing, aircraft, communications and other industries, and created new fields such as robotics.
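The SP-PV error loop just described can be sketched as a minimal discrete PID controller driving a simple plant. This is an illustrative sketch only: the gains, the first-order plant, and the step size are assumptions, not values from the source.

```python
class PID:
    """Minimal textbook PID controller acting on the SP-PV error."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, sp, pv):
        error = sp - pv                                   # the SP-PV error signal
        self.integral += error * self.dt                  # accumulated error (I term)
        derivative = (error - self.prev_error) / self.dt  # error slope (D term)
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Drive a first-order plant dx/dt = -x + u toward the set point SP = 1.0.
pid = PID(kp=2.0, ki=1.0, kd=0.0, dt=0.01)
pv = 0.0
for _ in range(2000):        # 20 seconds of simulated time
    u = pid.update(1.0, pv)  # control action from the SP-PV error
    pv += 0.01 * (-pv + u)   # Euler step of the plant
```

The integral term is what removes the steady-state error that a purely proportional controller would leave against this plant.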
\nExtensive use is usually made of a diagrammatic style known as the block diagram. In it the transfer function, also known as the system function or network function, is a mathematical model of the relation between the input and output based on the differential equations describing the system.\nControl theory dates from the 19th century, when the theoretical basis for the operation of governors was first described by James Clerk Maxwell. Control theory was further advanced by Edward Routh in 1874, Charles Sturm, and, in 1895, Adolf Hurwitz, all of whom contributed to the establishment of control stability criteria; and from 1922 onwards, the development of PID control theory by Nicolas Minorsky.\nAlthough a major application of mathematical control theory is in control systems engineering, which deals with the design of process control systems for industry, other applications range far beyond this. As the general theory of feedback systems, control theory is useful wherever feedback occurs; thus control theory also has applications in life sciences, computer engineering, sociology and operations research.\nHistory.\nAlthough control systems of various types date back to antiquity, a more formal analysis of the field began with a dynamics analysis of the centrifugal governor, conducted by the physicist James Clerk Maxwell in 1868, entitled \"On Governors\". A centrifugal governor was already used to regulate the velocity of windmills. Maxwell described and analyzed the phenomenon of self-oscillation, in which lags in the system may lead to overcompensation and unstable behavior. This generated a flurry of interest in the topic, during which Maxwell's classmate, Edward John Routh, abstracted Maxwell's results for the general class of linear systems.
Independently, Adolf Hurwitz analyzed system stability using differential equations in 1877, resulting in what is now known as the Routh–Hurwitz theorem.\nA notable application of dynamic control was in the area of crewed flight. The Wright brothers made their first successful test flights on December 17, 1903, and were distinguished by their ability to control their flights for substantial periods (more so than the ability to produce lift from an airfoil, which was known). Continuous, reliable control of the airplane was necessary for flights lasting longer than a few seconds.\nBy World War II, control theory was becoming an important area of research. Irmgard Flügge-Lotz developed the theory of discontinuous automatic control systems, and applied the bang-bang principle to the development of automatic flight control equipment for aircraft. Other areas of application for discontinuous controls included fire-control systems, guidance systems and electronics.\nSometimes, mechanical methods are used to improve the stability of systems. For example, ship stabilizers are fins mounted beneath the waterline and emerging laterally. In contemporary vessels, they may be gyroscopically controlled active fins, which have the capacity to change their angle of attack to counteract roll caused by wind or waves acting on the ship.\nThe Space Race also depended on accurate spacecraft control, and control theory has also seen an increasing use in fields such as economics and artificial intelligence. Here, one might say that the goal is to find an internal model that obeys the good regulator theorem. So, for example, in economics, the more accurately a (stock or commodities) trading model represents the actions of the market, the more easily it can control that market (and extract \"useful work\" (profits) from it). In AI, an example might be a chatbot modelling the discourse state of humans: the more accurately it can model the human state (e.g. 
on a telephone voice-support hotline), the better it can manipulate the human (e.g. into performing the corrective actions to resolve the problem that caused the phone call to the help-line). These last two examples take the narrow historical interpretation of control theory as a set of differential equations modeling and regulating kinetic motion, and broaden it into a vast generalization of a regulator interacting with a plant.\nLinear and nonlinear control theory.\nThe field of control theory can be divided into two branches:\nAnalysis techniques - frequency domain and time domain.\nMathematical techniques for analyzing and designing control systems fall into two different categories:\nIn contrast to the frequency domain analysis of the classical control theory, modern control theory utilizes the time-domain state space representation, a mathematical model of a physical system as a set of input, output and state variables related by first-order differential equations. To abstract from the number of inputs, outputs, and states, the variables are expressed as vectors and the differential and algebraic equations are written in matrix form (the latter only being possible when the dynamical system is linear). The state space representation (also known as the \"time-domain approach\") provides a convenient and compact way to model and analyze systems with multiple inputs and outputs. With inputs and outputs, we would otherwise have to write down Laplace transforms to encode all the information about a system. Unlike the frequency domain approach, the use of the state-space representation is not limited to systems with linear components and zero initial conditions. \"State space\" refers to the space whose axes are the state variables. 
The state of the system can be represented as a point within that space.\nSystem interfacing - SISO & MIMO.\nControl systems can be divided into different categories depending on the number of inputs and outputs.\nClassical SISO System Design.\nThe scope of classical control theory is limited to single-input and single-output (SISO) system design, except when analyzing for disturbance rejection using a second input. The system analysis is carried out in the time domain using differential equations, in the complex-s domain with the Laplace transform, or in the frequency domain by transforming from the complex-s domain. Many systems may be assumed to have a second-order, single-variable response in the time domain. A controller designed using classical theory often requires on-site tuning due to incorrect design approximations. Yet, due to the easier physical implementation of classical controller designs as compared to systems designed using modern control theory, these controllers are preferred in most industrial applications. The most common controllers designed using classical control theory are PID controllers. A less common implementation may include a lead or lag filter, or both. The ultimate goal is to meet requirements typically provided in the time domain as the step response, or at times in the frequency domain as the open-loop response. The step response characteristics applied in a specification are typically percent overshoot, settling time, etc. The open-loop response characteristics applied in a specification are typically gain margin, phase margin, and bandwidth. These characteristics may be evaluated through simulation including a dynamic model of the system under control coupled with the compensation model.\nModern MIMO System Design.\nModern control theory is carried out in the state space, and can deal with multiple-input and multiple-output (MIMO) systems.
This overcomes the limitations of classical control theory in more sophisticated design problems, such as fighter aircraft control, with the limitation that no frequency domain analysis is possible. In modern design, a system is represented to the greatest advantage as a set of decoupled first order differential equations defined using state variables. Nonlinear, multivariable, adaptive and robust control theories come under this division. Matrix methods are significantly limited for MIMO systems where linear independence cannot be assured in the relationship between inputs and outputs. Being fairly new, modern control theory has many areas yet to be explored. Scholars such as Rudolf E. Kálmán and Aleksandr Lyapunov are among those who have shaped modern control theory.\nTopics in control theory.\nStability.\nThe \"stability\" of a general dynamical system with no input can be described with Lyapunov stability criteria.\nFor simplicity, the following descriptions focus on continuous-time and discrete-time linear systems.\nMathematically, this means that for a causal linear system to be stable all of the poles of its transfer function must have negative-real values, i.e. the real part of each pole must be less than zero. Practically speaking, stability requires that the transfer function complex poles reside\nThe difference between the two cases is simply due to the traditional method of plotting continuous time versus discrete time transfer functions. The continuous Laplace transform is in Cartesian coordinates where the formula_1 axis is the real axis and the discrete Z-transform is in circular coordinates where the formula_2 axis is the real axis.\nWhen the appropriate conditions above are satisfied a system is said to be asymptotically stable; the variables of an asymptotically stable control system always decrease from their initial value and do not show permanent oscillations.
Permanent oscillations occur when a pole has a real part exactly equal to zero (in the continuous time case) or a modulus equal to one (in the discrete time case). If a simply stable system response neither decays nor grows over time, and has no oscillations, it is marginally stable; in this case the system transfer function has non-repeated poles at the complex plane origin (i.e. their real and imaginary components are zero in the continuous time case). Oscillations are present when poles with real part equal to zero have an imaginary part not equal to zero.\nIf the system in question has an impulse response of\nthen the Z-transform (see this example) is given by\nwhich has a pole at formula_5 (zero imaginary part). This system is BIBO (asymptotically) stable since the pole is \"inside\" the unit circle.\nHowever, if the impulse response was\nthen the Z-transform is\nwhich has a pole at formula_8 and is not BIBO stable since the pole has a modulus strictly greater than one.\nNumerous tools exist for the analysis of the poles of a system. These include graphical methods such as the root locus, Bode plots and Nyquist plots.\nMechanical changes can make equipment (and control systems) more stable. Sailors add ballast to improve the stability of ships. Cruise ships use antiroll fins that extend transversely from the side of the ship for perhaps 30 feet (10 m) and are continuously rotated about their axes to develop forces that oppose the roll.\nControllability and observability.\nControllability and observability are main issues in the analysis of a system before deciding the best control strategy to be applied, or whether it is even possible to control or stabilize the system. Controllability is related to the possibility of forcing the system into a particular state by using an appropriate control signal. If a state is not controllable, then no signal will ever be able to control the state.
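Controllability of a linear system dx/dt = Ax + Bu can be tested concretely with the standard Kalman rank condition: the system is controllable when the matrix [B, AB, ..., A^(n-1)B] has full rank. A minimal sketch for a two-state system follows; the matrices are illustrative assumptions, not from the source.

```python
def controllability_matrix(A, B):
    """Kalman controllability matrix [B, AB] for a 2-state system (plain lists)."""
    AB = [A[0][0] * B[0] + A[0][1] * B[1],
          A[1][0] * B[0] + A[1][1] * B[1]]
    return [[B[0], AB[0]], [B[1], AB[1]]]

def is_controllable(A, B, tol=1e-9):
    """In the 2x2 case, full rank is equivalent to a nonzero determinant."""
    C = controllability_matrix(A, B)
    return abs(C[0][0] * C[1][1] - C[0][1] * C[1][0]) > tol

# Two decoupled stable modes driven through a single input:
A = [[-1.0, 0.0], [0.0, -2.0]]
print(is_controllable(A, [1.0, 1.0]))  # True: the input reaches both modes
print(is_controllable(A, [1.0, 0.0]))  # False: the second mode is unreachable
```

In the second case no input signal can ever move the second state, which is exactly the uncontrollable situation the text describes; since that mode is stable (eigenvalue -2), the system is still stabilizable.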
If a state is not controllable, but its dynamics are stable, then the state is termed \"stabilizable\". Observability instead is related to the possibility of \"observing\", through output measurements, the state of a system. If a state is not observable, the controller will never be able to determine the behavior of an unobservable state and hence cannot use it to stabilize the system. However, similar to the stabilizability condition above, if a state cannot be observed it might still be detectable.\nFrom a geometrical point of view, looking at the states of each variable of the system to be controlled, every \"bad\" state of these variables must be controllable and observable to ensure a good behavior in the closed-loop system. That is, if one of the eigenvalues of the system is not both controllable and observable, this part of the dynamics will remain untouched in the closed-loop system. If such an eigenvalue is not stable, the dynamics of this eigenvalue will be present in the closed-loop system which therefore will be unstable. Unobservable poles are not present in the transfer function realization of a state-space representation, which is why sometimes the latter is preferred in dynamical systems analysis.\nSolutions to problems of an uncontrollable or unobservable system include adding actuators and sensors.\nControl specification.\nSeveral different control strategies have been devised in the past years. These vary from extremely general ones (PID controller), to others devoted to very particular classes of systems (especially robotics or aircraft cruise control).\nA control problem can have several specifications. Stability, of course, is always present. The controller must ensure that the closed-loop system is stable, regardless of the open-loop stability. A poor choice of controller can even worsen the stability of the open-loop system, which must normally be avoided. Sometimes it would be desired to obtain particular dynamics in the closed loop: i.e. 
that the poles have formula_9, where formula_10 is a fixed value strictly greater than zero, instead of simply asking that formula_11.\nAnother typical specification is the rejection of a step disturbance; including an integrator in the open-loop chain (i.e. directly before the system under control) easily achieves this. Other classes of disturbances need different types of sub-systems to be included.\nOther \"classical\" control theory specifications regard the time-response of the closed-loop system. These include the rise time (the time needed by the control system to reach the desired value after a perturbation), peak overshoot (the highest value reached by the response before reaching the desired value) and others (settling time, quarter-decay). Frequency domain specifications are usually related to robustness (see below).\nModern performance assessments use some variation of integrated tracking error (IAE, ISA, CQI).\nModel identification and robustness.\nA control system must always have some robustness property. A robust controller is such that its properties do not change much if applied to a system slightly different from the mathematical one used for its synthesis. This requirement is important, as no real physical system truly behaves like the series of differential equations used to represent it mathematically. Typically a simpler mathematical model is chosen in order to simplify calculations; otherwise, the true system dynamics can be so complicated that a complete model is impossible.\nThe process of determining the equations that govern the model's dynamics is called system identification. This can be done off-line: for example, executing a series of measurements from which to calculate an approximated mathematical model, typically its transfer function or matrix. Such identification from the output, however, cannot take account of unobservable dynamics.
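Off-line identification of the kind just described can be sketched as a least-squares fit of a first-order discrete model x[k+1] = a·x[k] + b·u[k] to logged input/output data. The "true" plant coefficients and the data-generation step below are illustrative assumptions.

```python
import random

def identify_first_order(xs, us):
    """Least-squares estimate of (a, b) in x[k+1] = a*x[k] + b*u[k].

    Solves the 2x2 normal equations directly, so no linear-algebra
    library is needed.
    """
    sxx = sum(x * x for x in xs[:-1])
    sxu = sum(x * u for x, u in zip(xs[:-1], us))
    suu = sum(u * u for u in us)
    sxy = sum(x * y for x, y in zip(xs[:-1], xs[1:]))
    suy = sum(u * y for u, y in zip(us, xs[1:]))
    det = sxx * suu - sxu * sxu
    a = (sxy * suu - suy * sxu) / det
    b = (suy * sxx - sxy * sxu) / det
    return a, b

# Log 200 samples from a "true" plant x[k+1] = 0.9*x[k] + 0.5*u[k]
# excited by a random input signal.
random.seed(0)
us = [random.uniform(-1.0, 1.0) for _ in range(200)]
xs = [0.0]
for u in us:
    xs.append(0.9 * xs[-1] + 0.5 * u)

a, b = identify_first_order(xs, us)  # recovers a = 0.9, b = 0.5 on noise-free data
```

On noisy measurements the same normal equations yield an approximation rather than the exact coefficients, and, as the text notes, any dynamics not visible in the output cannot be identified this way.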
Sometimes the model is built directly starting from known physical equations; for example, in the case of a mass-spring-damper system we know that formula_12. Even assuming that a \"complete\" model is used in designing the controller, all the parameters included in these equations (called \"nominal parameters\") are never known with absolute precision; the control system will have to behave correctly even when connected to a physical system with true parameter values away from nominal.\nSome advanced control techniques include an \"on-line\" identification process (see later). The parameters of the model are calculated (\"identified\") while the controller itself is running. In this way, if a drastic variation of the parameters ensues, for example, if the robot's arm releases a weight, the controller will adjust itself accordingly in order to ensure the correct performance.\nAnalysis of the robustness of a SISO (single input single output) control system can be performed in the frequency domain, considering the system's transfer function and using Nyquist and Bode diagrams. Topics include gain (amplitude) margin and phase margin. For MIMO (multi-input multi output) and, in general, more complicated control systems, one must consider the theoretical results devised for each control technique (see next section). That is, if particular robustness qualities are needed, the engineer must choose a control technique that includes these qualities in its properties.\nA particular robustness issue is the requirement for a control system to perform properly in the presence of input and state constraints. In the physical world every signal is limited. It could happen that a controller will send control signals that cannot be followed by the physical system, for example, trying to rotate a valve at excessive speed. This can produce undesired behavior of the closed-loop system, or even damage or break actuators or other subsystems.
Specific control techniques are available to solve the problem: model predictive control (see later), and anti-windup systems. The latter consists of an additional control block that ensures that the control signal never exceeds a given threshold.\nSystem classifications.\nLinear systems control.\nFor MIMO systems, pole placement can be performed mathematically using a state space representation of the open-loop system and calculating a feedback matrix assigning poles in the desired positions. In complicated systems this can require computer-assisted calculation capabilities, and cannot always ensure robustness. Furthermore, not all system states are in general measured, so observers must be included and incorporated in pole placement design.\nNonlinear systems control.\nProcesses in industries like robotics and the aerospace industry typically have strong nonlinear dynamics. In control theory it is sometimes possible to linearize such classes of systems and apply linear techniques, but in many cases it can be necessary to devise from scratch theories permitting control of nonlinear systems. These methods, e.g., feedback linearization, backstepping, sliding mode control, and trajectory linearization control, normally take advantage of results based on Lyapunov's theory. Differential geometry has been widely used as a tool for generalizing well-known linear control concepts to the nonlinear case, as well as showing the subtleties that make it a more challenging problem. Control theory has also been used to decipher the neural mechanism that directs cognitive states.\nDecentralized systems control.\nWhen the system is controlled by multiple controllers, the problem is one of decentralized control. Decentralization is helpful in many ways; for instance, it helps control systems operate over a larger geographical area.
The agents in decentralized control systems can interact using communication channels and coordinate their actions.\nDeterministic and stochastic systems control.\nA stochastic control problem is one in which the evolution of the state variables is subjected to random shocks from outside the system. A deterministic control problem is not subject to external random shocks.\nMain control strategies.\nEvery control system must first guarantee the stability of the closed-loop behavior. For linear systems, this can be obtained by directly placing the poles. Nonlinear control systems use specific theories (normally based on Aleksandr Lyapunov's theory) to ensure stability without regard to the inner dynamics of the system. The ability to fulfill different specifications depends on the model considered and the control strategy chosen.\nPeople in systems and control.\nMany active and historical figures made significant contributions to control theory, including", "Automation-Control": 0.9955496192, "Qwen2": "Yes"} {"id": "42610100", "revid": "11795830", "url": "https://en.wikipedia.org/wiki?curid=42610100", "title": "Part program", "text": "The part program is a sequence of instructions that describe the work to be done on a part, in the form required by a computer under the control of computer numerical control (CNC) software. Part programming is the task of preparing a program sheet from a drawing sheet. All data is fed into the CNC system using a standardized format. Programming is where all the machining data are compiled and where the data are translated into a language which can be understood by the control system of the machine tool.
\nThe machining data are as follows: ", "Automation-Control": 0.9957755208, "Qwen2": "Yes"} {"id": "18894568", "revid": "84893", "url": "https://en.wikipedia.org/wiki?curid=18894568", "title": "Mathematical programming with equilibrium constraints", "text": "Mathematical programming with equilibrium constraints (MPEC) is the study of constrained optimization problems where the constraints include variational inequalities or complementarities. MPEC is related to the Stackelberg game. \nMPEC is used in the study of engineering design, economic equilibrium, and multilevel games.\nMPEC is difficult to deal with because its feasible region is not necessarily convex or even connected.", "Automation-Control": 0.9934529662, "Qwen2": "Yes"} {"id": "1860095", "revid": "44274926", "url": "https://en.wikipedia.org/wiki?curid=1860095", "title": "Turning", "text": "Turning is a machining process in which a cutting tool, typically a non-rotary tool bit, describes a helix toolpath by moving more or less linearly while the workpiece rotates.\nUsually the term \"turning\" is reserved for the generation of \"external\" surfaces by this cutting action, whereas this same essential cutting action when applied to \"internal\" surfaces (holes, of one kind or another) is called \"boring\". Thus the phrase \"turning and boring\" categorizes the larger family of processes known as lathing. The cutting of faces on the workpiece, whether with a turning or boring tool, is called \"facing\", and may be lumped into either category as a subset.\nTurning can be done manually, in a traditional form of lathe, which frequently requires continuous supervision by the operator, or by using an automated lathe which does not. Today the most common type of such automation is computer numerical control, better known as CNC.
(CNC is also commonly used with many other types of machining besides turning.)\nWhen turning, the workpiece (a piece of relatively rigid material such as wood, metal, plastic, or stone) is rotated and a cutting tool is traversed along 1, 2, or 3 axes of motion to produce precise diameters and depths. Turning can be either on the outside of the cylinder or on the inside (also known as boring) to produce tubular components to various geometries. Although now quite rare, early lathes could even be used to produce complex geometric figures, even the platonic solids; although since the advent of CNC it has become unusual to use non-computerized toolpath control for this purpose.\nThe turning processes are typically carried out on a lathe, considered to be the oldest of machine tools, and can be of different types such as \"straight turning\", \"taper turning\", \"profiling\" or \"external grooving\". Those types of turning processes can produce various shapes of materials such as \"straight\", \"conical\", \"curved\", or \"grooved\" workpieces.\nIn general, turning uses simple \"single-point cutting\" tools. Each group of workpiece materials has an optimum set of tool angles that have been developed through the years.\nThe bits of waste metal from turning operations are known as chips (North America), or swarf (Britain). In some areas they may be known as \"turnings\".\nThe tool's axes of movement may be literally a straight line, or they may be along some set of curves or angles, but they are essentially linear (in the non mathematical sense).\nA component that is subject to turning operations can be termed as a “Turned Part” or “Machined Component”. Turning operations are carried out on a lathe machine which can be manually or CNC operated.\nTurning operations.\nTurning specific operations include:\nThe general process of turning involves rotating a part while a single-point cutting tool is moved parallel to the axis of rotation. 
Turning can be done on the external surface of the part as well as the internal surface (the process known as boring). The starting material is generally a workpiece generated by other processes such as casting, forging, extrusion, or drawing.\nFacing in the context of turning work involves moving the cutting tool at right angles to the axis of rotation of the rotating workpiece. This can be performed by the operation of the cross-slide, if one is fitted, as distinct from the longitudinal feed (turning). It is frequently the first operation performed in the production of the workpiece, and often the last—hence the phrase \"ending up\".\nThis process, also called parting off or cutoff, is used to create deep grooves which will remove a completed or part-complete component from its parent stock.\nGrooving is like parting, except that grooves are cut to a specific depth instead of severing a completed/part-complete component from the stock. Grooving can be performed on internal and external surfaces, as well as on the face of the part (face grooving or trepanning).\nNon-specific operations include:\nLathes.\nA lathe is a machine tool used principally for shaping pieces of metal, wood, or other materials by causing the workpiece to be held and rotated by the lathe while a tool bit is advanced into the work causing the cutting action. Lathes can be divided into three types for easy identification: engine lathe, turret lathe, and \"special purpose lathes\". Some smaller ones are bench mounted and semi-portable. The larger lathes are floor mounted and may require special transportation if they must be moved.\nField and maintenance shops generally use a lathe that can be adapted to many operations and that is not too large to be moved from one work site to another. The engine lathe is ideally suited for this purpose. A trained operator can accomplish more machining jobs with the engine lathe than with any other machine tool. 
Turret lathes and special purpose lathes are usually used in production or job shops for mass production or specialized parts, while basic engine lathes are usually used for any type of lathe work.\nTooling.\nThe various angles, shapes, and sizes of a \"single-point cutting\" tool have direct relation to the resulting surface of a workpiece in machining operations. Different types of angle such as \"rake angle\", \"side rake angle\", \"cutting-edge angle\", \"relief angle\", \"nose radius\" exist and may be different with respect to the workpiece. Also, there are many shapes of \"single-point cutting\" tools, such as \"V-shaped\" and \"Square.\" Usually, a special toolholder is used to hold the cutting tool firmly during operation.\nDynamics of turning.\nForces.\nThe relative forces in a turning operation are important in the design of machine tools. The machine tool and its components must be able to withstand these forces without causing significant deflections, vibrations, or chatter during the operation. There are three principal forces during a turning process:\nSpeeds and feeds.\nSpeeds and feeds for turning are chosen based on cutter material, workpiece material, setup rigidity, machine tool rigidity and spindle power, coolant choice, and other factors.", "Automation-Control": 0.9522681832, "Qwen2": "Yes"} {"id": "14621035", "revid": "1083529614", "url": "https://en.wikipedia.org/wiki?curid=14621035", "title": "Similarities between Wiener and LMS", "text": "The Least mean squares filter solution converges to the Wiener filter solution, assuming that the unknown system is LTI and the noise is stationary. Both filters can be used to identify the impulse response of an unknown system, knowing only the original input signal and the output of the unknown system. 
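The speeds-and-feeds selection discussed in the turning article above rests on the standard relation between cutting speed, workpiece diameter, and spindle speed; the numeric values below are illustrative, not from the text:

```python
import math

def spindle_rpm(cutting_speed_m_min: float, diameter_mm: float) -> float:
    """Spindle speed N = 1000 * Vc / (pi * D), the standard turning relation."""
    return 1000.0 * cutting_speed_m_min / (math.pi * diameter_mm)

def feed_rate_mm_min(feed_per_rev_mm: float, rpm: float) -> float:
    """Table feed = feed per revolution * spindle speed."""
    return feed_per_rev_mm * rpm

# Illustrative values: Vc = 200 m/min on a 50 mm diameter, 0.2 mm/rev feed
rpm = spindle_rpm(200.0, 50.0)
print(round(rpm))                            # 1273 rpm
print(round(feed_rate_mm_min(0.2, rpm)))     # 255 mm/min
```

Cutter and workpiece material, rigidity, and coolant then shift the chosen Vc and feed up or down from such a starting point, as the text notes.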
By relaxing the error criterion to reduce the current sample error instead of minimizing the total error over all of n, the LMS algorithm can be derived from the Wiener filter.\nDerivation of the Wiener filter for system identification.\nGiven a known input signal formula_1, the output of an unknown LTI system formula_2 can be expressed as:\nformula_3\nwhere formula_4 are the unknown filter tap coefficients and formula_5 is noise.\nThe model system formula_6, using a Wiener filter solution of order N, can be expressed as:\nformula_7\nwhere formula_8 are the filter tap coefficients to be determined.\nThe error between the model and the unknown system can be expressed as:\nformula_9\nThe total squared error formula_10 can be expressed as:\nformula_11\nformula_12\nformula_13\nUse the Minimum mean-square error criterion over all of formula_14 by setting its gradient to zero:\nformula_15\nwhich is\nformula_16 \nfor all formula_17\nformula_18\nSubstitute the definition of formula_6:\nformula_20\nDistribute the partial derivative:\nformula_21\nUsing the definition of discrete cross-correlation:\nformula_22\nformula_23\nRearrange the terms:\nformula_24 \nfor all formula_17\nThis system of N equations with N unknowns can be solved.\nThe resulting coefficients of the Wiener filter can be determined by: formula_26, where formula_27 is the cross-correlation vector between formula_28 and formula_29.\nDerivation of the LMS algorithm.\nBy relaxing the infinite sum of the Wiener filter to just the error at time formula_14, the LMS algorithm can be derived.\nThe squared error can be expressed as:\nformula_31\nUsing the Minimum mean-square error criterion, take the gradient:\nformula_32\nApply the chain rule and substitute the definition of y[n]:\nformula_33\nformula_34\nUsing gradient descent and a step size formula_35:\nformula_36\nwhich becomes, for i = 0, 1, ..., N-1, \nformula_37\nThis is the LMS update equation.", "Automation-Control": 0.636295557, "Qwen2": "Yes"} {"id": "19455326",
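The LMS update equation derived above can be sketched numerically for system identification; the unknown system taps and the step size below are illustrative assumptions:

```python
import numpy as np

def lms_identify(x, d, order, mu):
    """Identify an FIR system from input x and desired output d using the
    LMS update w[i] <- w[i] + mu * e[n] * x[n-i] (instantaneous error only)."""
    w = np.zeros(order)
    for n in range(order, len(x)):
        window = x[n - order + 1:n + 1][::-1]   # x[n], x[n-1], ..., x[n-N+1]
        y = w @ window                          # model output
        e = d[n] - y                            # instantaneous error
        w += mu * e * window                    # gradient-descent step
    return w

rng = np.random.default_rng(0)
x = rng.standard_normal(20000)                  # known input signal
h = np.array([0.5, -0.3, 0.2])                  # unknown system taps (assumed)
d = np.convolve(x, h)[:len(x)]                  # output of the unknown system
w = lms_identify(x, d, order=3, mu=0.01)
print(np.round(w, 2))                           # ≈ [ 0.5 -0.3  0.2]
```

With stationary input and no measurement noise the taps converge to the Wiener solution, illustrating the relationship the article describes.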
"revid": "38627444", "url": "https://en.wikipedia.org/wiki?curid=19455326", "title": "Machining vibrations", "text": "In machining, vibrations, also called chatter, are the relative movements between the workpiece and the cutting tool. The vibrations result in waves on the machined surface. This affects typical machining processes, such as turning, milling and drilling, and atypical machining processes, such as grinding.\nA chatter mark is an irregular surface flaw left by a wheel that is \"out of true\" (off-center) in grinding, or regular marks left when turning a long piece on a lathe, due to machining vibrations.\nAs early as 1907, Frederick W. Taylor described machining vibrations as the most obscure and delicate of all the problems facing the machinist, an observation still true today, as shown in many publications on machining.\nThe explanation of the machine tool regenerative chatter was made by Tobias. S. A. and W. Fishwick in 1958, by modeling the feedback loop between the metal cutting process and the machine tool structure, and came with the stability lobes diagram. The structure stiffness, damping ratio and the machining process damping factor, are the main parameters that defines the limit where the machining process vibration is prone to enlarge with time. \nMathematical models make it possible to simulate machining vibration quite accurately, but in practice it is always difficult to avoid vibrations.\nAvoidance techniques.\nBasic rules for the machinist for avoiding vibrations:\nIndustrial context.\nThe use of high speed machining (HSM) has enabled an increase in productivity and the realization of workpieces that were impossible before, such as thin walled parts. Unfortunately, machine centers are less rigid because of the very high dynamic movements. In many applications, i.e. 
long tools, thin workpieces, the appearance of vibrations is the most limiting factor and compels the machinist to reduce cutting speeds and feeds well below the capacities of the machines or tools.\nVibration problems generally result in noise, bad surface quality and sometimes tool breakage. The main sources are of two types: forced vibrations and self-generated vibrations.\nForced vibrations are mainly generated by interrupted cutting (inherent to milling), runout, or vibrations from outside the machine.\nSelf-generated vibrations are related to the fact that the actual chip thickness also depends on the relative position between tool and workpiece during the previous tooth passage. Thus increasing vibrations may appear, up to levels which can seriously degrade the machined surface quality.\nLaboratory research.\nIndustrial and academic researchers have widely studied machining vibration. Specific strategies have been developed, especially for thin-walled workpieces, such as alternating small machining passes in order to avoid static and dynamic flexion of the walls. The length of the cutting edge in contact with the workpiece is also often reduced in order to limit self-generated vibrations.\nThe modeling of the cutting forces and vibrations, although not totally accurate, makes it possible to simulate problematic machining and reduce the unwanted effects of vibration.\nModels based on stability lobe theory, which make it possible to find the best spindle speed for machining, have become robust for many kinds of machining.\nTime-domain simulations compute workpiece and tool positions on very small time scales without greatly sacrificing accuracy in modeling the instability process and the machined surface. These models need more computing resources than stability lobe models, but give greater freedom (cutting laws, runout, ploughing, finite element models). 
Time-domain simulations are quite difficult to robustify, but a lot of work is being done in this direction in research laboratories.\nIn addition to stability lobe theory, the use of variable tool pitch often gives good results at a relatively low cost. Such tools are increasingly offered by tool manufacturers, although this is not really compatible with a reduction in the number of tools used. Other research leads are also promising, but often need major modifications to be practical in machining centers.\nTwo kinds of software are very promising: time-domain simulations, which do not yet give reliable predictions but should improve, and machining-vibration expert software, pragmatically based on knowledge and rules.\nIndustrial methods used to limit machining vibrations.\nThe usual method for setting up a machining process is still mainly based on historical technical know-how and on trial and error to determine the best parameters. Depending on the particular skills of a company, various parameters are studied in priority, such as depth of cut, tool path, workpiece set-up, and geometrical definition of the tool. When a vibration problem occurs, information is usually sought from the tool manufacturer or the CAM (computer-aided manufacturing) software retailer, who may suggest a better strategy for machining the workpiece. Sometimes, when vibration problems are too costly, experts can be called upon to prescribe, after measurement and calculation, spindle speeds or tool modifications.\nCompared to the industrial stakes, commercial solutions are rare. Only a few experts offer services to analyse the problems and propose solutions. Computational software for stability lobes and measurement devices are available but, in spite of widespread publicity, they remain relatively rarely used. 
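The regenerative mechanism discussed above (chip thickness depending on the relative tool-workpiece position during the previous pass) can be sketched as a one-degree-of-freedom time-domain simulation; every parameter value below is an illustrative assumption, not data from the text:

```python
import numpy as np

def regenerative_turning(b, spindle_rpm, t_end=0.3, dt=1e-5,
                         m=1.0, zeta=0.03, fn=100.0, Kc=5e8, h0=1e-4):
    """Semi-implicit Euler simulation of one-DOF regenerative chatter:
    m*y'' + c*y' + k*y = Kc*b*(h0 + y(t-T) - y(t)),
    with T the revolution period (a single cutting edge assumed) and
    b the depth of cut. All parameters are illustrative."""
    wn = 2.0 * np.pi * fn
    k, c = m * wn**2, 2.0 * zeta * m * wn
    delay = max(1, int(round((60.0 / spindle_rpm) / dt)))  # samples per revolution
    n = int(t_end / dt)
    y = np.zeros(n)
    v = 0.0
    for i in range(1, n):
        y_prev = y[i - 1 - delay] if i - 1 >= delay else 0.0
        h = h0 + y_prev - y[i - 1]            # regenerated chip thickness
        a = (Kc * b * h - c * v - k * y[i - 1]) / m
        v += a * dt
        y[i] = y[i - 1] + v * dt
    return y

y_light = regenerative_turning(b=1e-5, spindle_rpm=2700)   # shallow cut
y_deep = regenerative_turning(b=2e-4, spindle_rpm=2700)    # deep cut
print(np.abs(y_light).max() < np.abs(y_deep).max())        # True: deeper cut vibrates more
```

Sweeping the depth of cut and spindle speed in such a model and recording where the response grows is, in essence, how a stability lobes diagram is computed.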
Lastly, vibration sensors are often integrated into machining centers but they are used mainly for wear diagnosis of the tools or the spindle.\nNew-generation tool holders, especially hydraulic expansion tool holders, minimise the undesirable effects of vibration to a large extent. First, precise control of the total indicator reading to less than 3 micrometres helps reduce vibration through balanced load on the cutting edges, and the little vibration still created is largely absorbed by the oil inside the chambers of the hydraulic expansion tool holder.\nMachining vibration often comes from a tool holder with a high L/D ratio and low stiffness. Stiffening the tool holder with tungsten carbide is widely used when the tool diameter and weight are small, so the material cost of the tungsten carbide is not high. For longer reaches, with L/D ratios from about 4 to 14, a mass damper is necessary to damp out the vibration effectively with a force counteracting the motion of the tool structure. The simplest form of mass damper has a heavy weight (made of tungsten or lead) supported by rubber rings, with or without a tuning mechanism. The tuning mechanism enables the mass damper to cover a wider range of L/D ratios (and hence of vibration frequencies). More advanced mass dampers on cutting tools use viscous fluid or damping oil to improve the damping efficiency at the targeted L/D ratio (vibration frequency). The latest mass dampers on cutting tools make use of special polymers with frequency-dependent stiffness, which makes the dampers self-tuning over a wider L/D range.\nMachine tools with integrated sensors, which measure the vibration during machining and provide feedback to tune the mass damper automatically, have already been demonstrated at laboratory scale. 
The deployment of such solutions is still pending improvements in their ease of use and cost.", "Automation-Control": 0.96227175, "Qwen2": "Yes"} {"id": "19469817", "revid": "41798688", "url": "https://en.wikipedia.org/wiki?curid=19469817", "title": "OpenSTA", "text": "OpenSTA is a feature-rich GUI-based web server benchmarking utility that can perform scripted HTTP and HTTPS heavy-load tests with performance measurements. It is freely available and distributable under the open-source GNU General Public License. OpenSTA currently runs only on Microsoft Windows-based operating systems.\nScripts are recorded in a proprietary language called \"SCL\". It is a fairly simple coding language that provides support for custom functions, variable scopes, and random or sequential lists.\nOpenSTA was originally written by Cyrano, whose intention was to write commercial plug-in modules and support for OpenSTA for performance testing of non-web applications.\nNote that the most recent version posted on the OpenSTA home page is 1.4.4, released 27 October 2007.", "Automation-Control": 0.6215488911, "Qwen2": "Yes"} {"id": "28644562", "revid": "63286", "url": "https://en.wikipedia.org/wiki?curid=28644562", "title": "Cray CX1000", "text": "The Cray CX1000 is a family of high-performance computers which is manufactured by Cray Inc., and consists of two individual groups of computer systems. The first group is intended for scale-up symmetric multiprocessing (SMP), and consists of the CX1000-SM and CX1000-SC nodes. The second group is meant for scale-out cluster computing, and consists of the CX1000 Blade Enclosure, and the CX1000-HN, CX1000-C and CX1000-G nodes.\nThe CX1000 line sits between Cray's entry-level CX-1 Personal Supercomputer range and Cray's high-end XT-series supercomputers.\nCX1000 scale-up symmetric multiprocessing nodes.\nThe CX1000-SM and CX1000-SC nodes can be used for cluster computing, but they are designed for scale-up Symmetric Multi-Processing (SMP). 
When used for cluster computing, the CX1000-SM node is intended to be the \"master (service)\" node, although it can instead be a \"compute\" node. Similarly, the CX1000-SC node, when used for cluster computing, is intended to be a compute node, but can instead act as the master (service) node. Either or both the CX1000-SC and/or CX1000-SM nodes can be deployed in a HPC cluster. The CX1000-SM and CX1000-SC nodes, when used for SMP, are connected by a cache-coherency interconnect which is a built-in subassembly of the CX1000-SM and CX1000-SC nodes, rather than a standalone device, and is called the \"Drawer Interconnect Switch\" in Cray literature. The Drawer Interconnect Switch uses the Intel QuickPath Interconnect technology.\nCX1000 scale-out cluster computing nodes.\nThe CX1000 scale-out cluster computing group of systems consists of the CX1000 Blade Enclosure, CX1000-C compute Node, CX1000-G GPU Node and CX1000-HN Management Node. Unlike the CX1000-SM and CX1000-SC nodes, these nodes cannot be used for scale-up SMP, as they were designed without a cache-coherency capability. The CX1000-C and CX1000-G nodes both have blade form factors, and the CX1000-HN node is a rackmount 2U Server. The CX1000-HN is intended to act as the head (service) node in an HPC cluster, with CX1000-C and/or CX1000-G compute nodes.", "Automation-Control": 0.7755740881, "Qwen2": "Yes"} {"id": "64780771", "revid": "30584747", "url": "https://en.wikipedia.org/wiki?curid=64780771", "title": "Neural Network Intelligence", "text": "NNI (Neural Network Intelligence) is a free and open-source AutoML toolkit developed by Microsoft. 
It is used to automate feature engineering, model compression, neural architecture search, and hyper-parameter tuning.\nThe source code is licensed under the MIT License and available on GitHub.", "Automation-Control": 0.9274223447, "Qwen2": "Yes"} {"id": "55851493", "revid": "37898045", "url": "https://en.wikipedia.org/wiki?curid=55851493", "title": "IEC 61851", "text": "IEC 61851 is an international standard for electric vehicle conductive charging systems, parts of which were still under development as of 2017. IEC 61851 is one of the International Electrotechnical Commission's group of standards for electric road vehicles and electric industrial trucks and is the responsibility of IEC Technical Committee 69 (TC69).\nStandard documents.\nIEC 61851 consists of the following parts, detailed in separate IEC 61851 standard documents:\nIEC 61851-1.\nIEC 61851-1 defines four modes of charging:
Assume that the weighting functions in formula_3 are orthonormal (or are transformed to be) for formula_4. Then, the execution of the HOSVD on the core tensor formula_5 leads to:\nThen,\nthat is:\nwhere the weighting functions of formula_9 are orthonormal (as both formula_3 and formula_11 were orthonormal) and the core tensor formula_12 contains the higher-order singular values.\nDefinition.\nordering: formula_33 for all possible values of formula_34.", "Automation-Control": 0.9494918585, "Qwen2": "Yes"} {"id": "38899596", "revid": "25254441", "url": "https://en.wikipedia.org/wiki?curid=38899596", "title": "Transfer line", "text": "A transfer line is a manufacturing system which consists of a predetermined sequence of machines connected by an automated material handling system and designed for working on a very small family of parts. Parts can be moved singly because there’s no need for batching when carrying parts between process stations (as opposed to a job shop, for example). The line can be synchronous, meaning that all parts advance at the same speed, or asynchronous, meaning buffers exist between stations where parts wait to be processed. Not all transfer lines must geometrically be straight lines; circular solutions, for example, have been developed which make use of rotary tables, although using buffers then becomes almost impossible.\nA crucial problem for this production system is that of line balancing: a trade-off between increasing productivity and minimizing cost while conserving total processing time.", "Automation-Control": 0.8886153102, "Qwen2": "Yes"} {"id": "26174708", "revid": "6561336", "url": "https://en.wikipedia.org/wiki?curid=26174708", "title": "Vibrating feeder", "text": "A vibratory feeder is an instrument that uses vibration to \"feed\" material to a process or machine. Vibratory feeders use both vibration and gravity to move material. 
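The line-balancing trade-off described for transfer lines above can be illustrated with a small numeric sketch; the station times are made-up values:

```python
# Station processing times (seconds) for a hypothetical 4-station synchronous line
station_times = [42.0, 38.0, 45.0, 40.0]

# In a synchronous line all parts advance together, so the slowest station
# paces the whole line.
cycle_time = max(station_times)
throughput_per_hour = 3600.0 / cycle_time

# Balance efficiency: fraction of station capacity doing useful work.
efficiency = sum(station_times) / (len(station_times) * cycle_time)

print(cycle_time)                      # 45.0 s
print(round(throughput_per_hour, 1))   # 80.0 parts/hour
print(round(efficiency, 3))            # 0.917
```

Rebalancing work from the 45 s station toward the 38 s station raises throughput without adding processing time, which is exactly the trade-off the article names.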
Gravity is used to determine the direction, either down, or down and to a side, and then vibration is used to move the material. They are mainly used to transport a large number of smaller objects.\nA belt weigher is used only to measure the material flow rate, but a weigh feeder can both measure the material flow and control or regulate the flow rate by varying the belt conveyor speed.\nIndustries served.\nVersatile and rugged vibratory bowl feeders have been extensively used for the automatic feeding of small to large, differently shaped industrial parts. They are the oldest but still most commonly used automation machines available for aligning and feeding machine parts, electronic parts, plastic parts, chemicals, metallic parts, glass vials, pharmaceuticals, foods, miscellaneous goods etc. \nAvailable in standard and custom designs, vibratory bowl feeders have been widely purchased by various industrial sectors for automating high-speed production lines and assembly systems. Some of the industries that use this automation machine include:\nWith these easy-to-use and high-performing part-feeding machines, customers from various industrial sectors have achieved lower error rates, lower power consumption, better profits, higher efficiency and less dependency on manpower.", "Automation-Control": 0.9850776196, "Qwen2": "Yes"} {"id": "17048454", "revid": "1170157112", "url": "https://en.wikipedia.org/wiki?curid=17048454", "title": "List of Logitech Racing Wheels compatible games", "text": "This is a list of video games compatible with Logitech's GT Force, Driving Force, Driving Force Pro, Driving Force GT, G29, G923, G25, G27, Logitech Momo Racing, Logitech Speed Force Force Feedback Wheel for Gamecube and Logitech Wii Speed Force Wireless Wheel.\nCompatible games.\nYes* : works without H-pattern shifter.\nNote : The G25 has a sequential gear-shift option that makes it more compatible, so this chart is inaccurate for the G27, which does not have that feature. 
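The weigh-feeder principle above (regulating mass flow by varying belt speed) can be sketched as a minimal calculation; the speed limits and numeric values are illustrative assumptions:

```python
def weigh_feeder_speed(setpoint_kg_s: float, belt_load_kg_m: float,
                       v_min: float = 0.05, v_max: float = 2.0) -> float:
    """Belt speed needed so that flow = belt_load * speed hits the setpoint,
    clamped to the drive's speed range (limits are illustrative)."""
    if belt_load_kg_m <= 0:
        return v_max            # empty belt: run fast until material arrives
    v = setpoint_kg_s / belt_load_kg_m
    return min(max(v, v_min), v_max)

print(weigh_feeder_speed(5.0, 10.0))   # 0.5 m/s: 10 kg/m * 0.5 m/s = 5 kg/s
print(weigh_feeder_speed(5.0, 1.0))    # 2.0 m/s: clamped at the drive limit
```

In a real feeder the measured belt load comes from the weighing idler and the speed command is updated continuously, closing the loop the article describes.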
Both the G25 and G27 have an H-pattern shifter, but only the G25 allows for sequential shifting.\nNote : The G29 has new software, and many games either do not support this wheel or are only partly compatible.\nNote : Only the G29 works with the PS4; other wheels need additional adapters.\nNote : MOMO Racing was advertised as a PC wheel and isn't compatible with some games on PS2, e.g. the PS2-era \"Gran Turismo\" series and \"Initial D: Special Stage\" don't recognize the wheel when plugged in.\nNote : None of the EA games on Steam work with the G29, most likely because the Origin launcher takes priority over the Steam launcher, so the Steam community steering-wheel schemes aren't read by the games. Steam's refund policy allows up to an hour of play to test whether the wheel works before requesting a full refund.", "Automation-Control": 1.0000097752, "Qwen2": "Yes"} {"id": "17058216", "revid": "5846", "url": "https://en.wikipedia.org/wiki?curid=17058216", "title": "Stochastic control", "text": "Stochastic control or stochastic optimal control is a subfield of control theory that deals with the existence of uncertainty either in observations or in the noise that drives the evolution of the system. The system designer assumes, in a Bayesian probability-driven fashion, that random noise with a known probability distribution affects the evolution and observation of the state variables. Stochastic control aims to design the time path of the controlled variables that performs the desired control task with minimum cost, suitably defined, despite the presence of this noise. The context may be either discrete time or continuous time.\nCertainty equivalence.\nAn extremely well-studied formulation in stochastic control is that of linear quadratic Gaussian control. Here the model is linear, the objective function is the expected value of a quadratic form, and the disturbances are purely additive. 
A basic result for discrete-time centralized systems with only additive uncertainty is the certainty equivalence property: that the optimal control solution in this case is the same as would be obtained in the absence of the additive disturbances. This property is applicable to all centralized systems with linear equations of evolution, quadratic cost function, and noise entering the model only additively; the quadratic assumption allows for the optimal control laws, which follow the certainty-equivalence property, to be linear functions of the observations of the controllers.\nAny deviation from the above assumptions—a nonlinear state equation, a non-quadratic objective function, noise in the multiplicative parameters of the model, or decentralization of control—causes the certainty equivalence property not to hold. For example, its failure to hold for decentralized control was demonstrated in Witsenhausen's counterexample.\nDiscrete time.\nIn a discrete-time context, the decision-maker observes the state variable, possibly with observational noise, in each time period. The objective may be to optimize the sum of expected values of a nonlinear (possibly quadratic) objective function over all the time periods from the present to the final period of concern, or to optimize the value of the objective function as of the final period only. At each time period new observations are made, and the control variables are to be adjusted optimally. 
Finding the optimal solution for the present time may involve iterating a matrix Riccati equation backwards in time from the last period to the present period.\nIn the discrete-time case with uncertainty about the parameter values in the transition matrix (giving the effect of current values of the state variables on their own evolution) and/or the control response matrix of the state equation, but still with a linear state equation and quadratic objective function, a Riccati equation can still be obtained for iterating backward to each period's solution even though certainty equivalence does not apply. The discrete-time case of a non-quadratic loss function but only additive disturbances can also be handled, albeit with more complications.\nExample.\nA typical specification of the discrete-time stochastic linear quadratic control problem is to minimize\nwhere E1 is the expected value operator conditional on \"y\"0, superscript T indicates a matrix transpose, and \"S\" is the time horizon, subject to the state equation\nwhere \"y\" is an \"n\" × 1 vector of observable state variables, \"u\" is a \"k\" × 1 vector of control variables, \"A\"\"t\" is the time \"t\" realization of the stochastic \"n\" × \"n\" state transition matrix, \"B\"\"t\" is the time \"t\" realization of the stochastic \"n\" × \"k\" matrix of control multipliers, and \"Q\" (\"n\" × \"n\") and \"R\" (\"k\" × \"k\") are known symmetric positive definite cost matrices. We assume that each element of \"A\" and \"B\" is jointly independently and identically distributed through time, so the expected value operations need not be time-conditional.\nInduction backwards in time can be used to obtain the optimal control solution at each time,\nwith the symmetric positive definite cost-to-go matrix \"X\" evolving backwards in time from formula_4 according to\nwhich is known as the discrete-time dynamic Riccati equation of this problem. 
The only information needed regarding the unknown parameters in the \"A\" and \"B\" matrices is the expected value and variance of each element of each matrix and the covariances among elements of the same matrix and among elements across matrices.\nThe optimal control solution is unaffected if zero-mean, i.i.d. additive shocks also appear in the state equation, so long as they are uncorrelated with the parameters in the \"A\" and \"B\" matrices. But if they are so correlated, then the optimal control solution for each period contains an additional additive constant vector. If an additive constant vector appears in the state equation, then again the optimal control solution for each period contains an additional additive constant vector.\nThe steady-state characterization of \"X\" (if it exists), relevant for the infinite-horizon problem in which \"S\" goes to infinity, can be found by iterating the dynamic equation for \"X\" repeatedly until it converges; then \"X\" is characterized by removing the time subscripts from its dynamic equation.\nContinuous time.\nIf the model is in continuous time, the controller knows the state of the system at each instant of time. The objective is to maximize either an integral of, for example, a concave function of a state variable over a horizon from time zero (the present) to a terminal time \"T\", or a concave function of a state variable at some future date \"T\". As time evolves, new observations are continuously made and the control variables are continuously adjusted in optimal fashion.\nStochastic model predictive control.\nIn the literature, there are two types of MPCs for stochastic systems; Robust model predictive control and Stochastic Model Predictive Control (SMPC). Robust model predictive control is a more conservative method which considers the worst scenario in the optimization procedure. 
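The backward Riccati iteration and its steady-state limit, described above, can be sketched numerically; the system matrices below are an assumed example (deterministic A and B, so the expectation operators reduce to the matrices themselves):

```python
import numpy as np

def riccati_iterate(A, B, Q, R, steps=500):
    """Iterate the discrete-time dynamic Riccati equation
    X <- Q + A'XA - A'XB (R + B'XB)^{-1} B'XA
    backwards in time; for a stabilizable system the iterates converge to the
    steady-state cost-to-go matrix of the infinite-horizon problem."""
    X = Q.copy()
    for _ in range(steps):
        gain_den = R + B.T @ X @ B
        X = Q + A.T @ X @ A - A.T @ X @ B @ np.linalg.solve(gain_den, B.T @ X @ A)
    return X

A = np.array([[1.0, 1.0], [0.0, 1.0]])   # assumed example system, not from the text
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])

X = riccati_iterate(A, B, Q, R)
K = np.linalg.solve(R + B.T @ X @ B, B.T @ X @ A)   # optimal feedback u = -K y
print(np.round(K, 3))
```

The closed-loop matrix A - BK is then stable (spectral radius below one), and removing the time subscripts from the converged X characterizes the steady state exactly as the text describes.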
However, this method, similar to other robust controls, deteriorates the overall controller's performance and also is applicable only for systems with bounded uncertainties. The alternative method, SMPC, considers soft constraints which limit the risk of violation by a probabilistic inequality.\nIn finance.\nIn a continuous time approach in a finance context, the state variable in the stochastic differential equation is usually wealth or net worth, and the controls are the shares placed at each time in the various assets. Given the asset allocation chosen at any time, the determinants of the change in wealth are usually the stochastic returns to assets and the interest rate on the risk-free asset. The field of stochastic control has developed greatly since the 1970s, particularly in its applications to finance. Robert Merton used stochastic control to study optimal portfolios of safe and risky assets. His work and that of Black–Scholes changed the nature of the finance literature. Influential mathematical textbook treatments were by Fleming and Rishel, and by Fleming and Soner. These techniques were applied by Stein to the financial crisis of 2007–08.\nThe maximization, say of the expected logarithm of net worth at a terminal date \"T\", is subject to stochastic processes on the components of wealth. In this case, in continuous time Itô's equation is the main tool of analysis. In the case where the maximization is an integral of a concave function of utility over an horizon (0,\"T\"), dynamic programming is used. 
There is no certainty equivalence as in the older literature, because the coefficients of the control variables—that is, the returns received by the chosen shares of assets—are stochastic.", "Automation-Control": 0.9817114472, "Qwen2": "Yes"} {"id": "8050090", "revid": "38132428", "url": "https://en.wikipedia.org/wiki?curid=8050090", "title": "Java Agent Development Framework", "text": "Java Agent Development Framework, or JADE, is a software framework for the development of software agents, implemented in Java. The JADE system supports coordination between several FIPA-compliant agents and provides a standard implementation of the agent communication language FIPA-ACL, which facilitates communication between agents and allows the system's services to be discovered. JADE was originally developed by Telecom Italia and is distributed as free software.\nOverview.\nJADE is a middleware which facilitates the development of multi-agent systems under the FIPA standard; for this purpose it creates multiple agent containers, each of which can run on one or more systems. A set of containers constitutes a platform.\nJADE provides:\nHistory.\nJADE was initially developed by Telecom Italia Lab, the R&D branch of the Telecom Italia Group, which is responsible for promoting technological innovation. Telecom Italia conceived and promoted JADE, establishing the project in 2000. The latest available version dates from June 2017 (version 4.5). The first version of JADE distributed as free software has been available since February 2000 (version 1.3).\nIn March 2003 Motorola and Telecom Italia created the JADE Governing Board with the objective of promoting the development and adoption of JADE in the mobile telecommunications industry as middleware. 
The JADE Governing Board is open to any company or organization interested in the commercial use and exploitation of JADE that commits to its development and promotion.\nIn 2021, the team that had developed JADE announced that they could not continue to work on it anymore. A team of researchers forked it and now continues developing the platform.\nPlatform.\nJADE is a distributed agent platform, with a container on each host where agents are run. Additionally, the platform offers various debugging tools, mobility of agent code and content, the possibility of parallel execution of agent behaviours, and support for the definition of languages and ontologies.\nEach platform must have a main container that holds two special agents called AMS and DF.\nDF agent.\nTo access the DF agent, the class \"jade.domain.DFService\" and its static methods are used: \"register\", \"deregister\", \"modify\" and \"search\".\nAMS agent.\nWhen an agent is created, it automatically runs the \"register\" method of the AMS by default before executing its own \"setup\" method. When an agent is destroyed, it executes its \"takeDown\" method by default and automatically calls the \"deregister\" method of the AMS.\nAgent class.\nThe Agent class is a superclass which allows users to create JADE agents. To create an agent, one inherits directly from \"Agent\". Normally, each agent registers several services, which should be implemented by one or more behaviours.\nThis class provides methods to perform the basic tasks of agents, such as:\nJADE agent.\nThe life cycle of a JADE agent follows the cycle proposed by FIPA. These agents go through different states defined as:\nAgents' behaviour.\nA behaviour defines the actions an agent takes in response to a given event. 
An agent's behaviours are added in its \"setup\" method using the method \"addBehaviour\".\nThe different behaviours that the agent will adopt are derived from the abstract class Behaviour. The class Behaviour contains the abstract methods:\nA user can override the methods \"onStart\" and \"onEnd\". Additionally, there are other methods, such as \"block\" and \"restart\", used for modifying the agent's behaviour. When an agent is blocked it can be unblocked in different ways.\nACL messages.\nACL (Agent Communication Language) message passing is the basis of communication between agents. Sending messages is done with the method \"send\" of the class Agent. This method is passed an object of type ACLMessage that contains the recipient information, language, encoding and content of the message.\nThese messages are sent asynchronously; received messages are stored in a message queue. ACL messages can be received in two ways, blocking or non-blocking, using the methods \"blockingReceive\" and \"receive\" respectively. In both cases, the messages retrieved from the queue can be filtered by setting different templates.\nExtensions.\nJADE has an extension called WADE (Workflows and Agents Development Environment), a workflow system that allows processes to be created with a graphical editor named WOLF.", "Automation-Control": 0.7943972349, "Qwen2": "Yes"} {"id": "8828332", "revid": "42592374", "url": "https://en.wikipedia.org/wiki?curid=8828332", "title": "MYTV International", "text": "MYTV International Ltd is a company in the digital entertainment media industry. Its field of operations includes business music and music video services for DJs and VJs, IPTV development, and digital signage. 
The MYTV product family combines multimedia and networking technology to produce a digital media networking system that provides control and flexibility in delivering entertaining and promotional content. The products of MYTV International Ltd. are TrendSign and miXJay.\nMYTV was established in Finland in 2000. Initially, MYTV was part of UCM Group Oy; CEO Petteri Honkaniemi was the founder of the MYTV concept within UCM Group.\nIn January 2002 the shareholders of MYTV International bought the business model from the publishing company UCM Group and started to develop the MYTV concept as the core business of MYTV International and MYTV Finland Ltd. MYTV International is a privately owned company. At present, MYTV International has offices in 9 countries worldwide.", "Automation-Control": 0.9872214794, "Qwen2": "Yes"} {"id": "8847552", "revid": "5662528", "url": "https://en.wikipedia.org/wiki?curid=8847552", "title": "Aras Corp", "text": "Aras Corporation is an American developer and publisher of product development software, Aras Innovator. The product is used for product lifecycle management (PLM) and other purposes. Since 2007, Aras has been providing Aras Innovator for free as \"Enterprise open-source software\", with Aras Corp providing technical support, software updates, and other consulting as a subscription service. Aras Corp was founded in 2000 in Andover, Massachusetts by Peter Schroer.\nAras Innovator is an enterprise software suite for managing product lifecycle management business processes. The product is based on the Microsoft .NET Framework and SQL Server. The product is used for product lifecycle management (PLM), advanced product quality planning (APQP), lean product development, product quality control, collaborative product development and new product introduction (NPI).\nUntil 2007, Aras sold its product as proprietary software for enterprises.\nIn 2007, Aras began providing Aras Innovator as open-source software. 
Clients obtain the software for free, and Aras Corp provides technical support, software updates, and other consulting as a subscription service.\nIn July 2020, Aras confirmed the introduction of a new framework, Digital Twin Core, which introduces functionality for generating and handling digital twins to the Aras low-code package. Aras Cloud PLM provides a secure, cost-effective, and scalable solution for customers who need to manage their product data in the cloud.", "Automation-Control": 0.7573500276, "Qwen2": "Yes"} {"id": "19330002", "revid": "829949", "url": "https://en.wikipedia.org/wiki?curid=19330002", "title": "Smith predictor", "text": "The Smith predictor (invented by O. J. M. Smith in 1957) is a type of predictive controller designed to control systems with a significant feedback time delay. The idea can be illustrated as follows.\nSuppose the plant consists of formula_1 followed by a pure time delay formula_2. formula_3 refers to the Z-transform of the transfer function relating the inputs and outputs of the plant formula_4.\nAs a first step, suppose we only consider formula_1 (the plant without a delay) and design a controller formula_6 with a closed-loop transfer function formula_7 that we consider satisfactory.\nNext, our objective is to design a controller formula_8 for the plant formula_9 so that the closed loop transfer function formula_10 equals formula_11.\nSolving formula_12,\nwe obtain\nformula_13. The controller is implemented as shown in the following figure, where formula_1 has been changed to formula_15 to indicate that it is a model used by the controller.\nNote that there are two feedback loops. The outer control loop feeds the output back to the input, as usual. However, this loop alone would not provide satisfactory control, because of the delay; this loop is feeding back outdated information. 
Intuitively, for the k sample intervals during which no fresh information is available, the system is controlled by the inner loop which contains a predictor of what the (unobservable) output of the plant G currently is.\nTo check that this works, a re-arrangement can be made as follows:\nHere we can see that if the model used in the controller, formula_16, matches the plant formula_9 perfectly, then the outer and middle feedback loops cancel each other, and the controller generates the \"correct\" control action. In reality, however, it is impossible for the model to perfectly match the plant.", "Automation-Control": 0.9975018501, "Qwen2": "Yes"} {"id": "28077280", "revid": "196446", "url": "https://en.wikipedia.org/wiki?curid=28077280", "title": "Annealing by short circuit", "text": "Annealing by short circuit is a method of efficiently annealing copper wire which employs a controlled electrical short circuit. It can be advantageous because it does not require a temperature-regulated furnace like other methods of annealing.\nProcess.\nThe process consists of two conductive pulleys (\"step pulleys\") which the wire passes across after it is drawn. The two pulleys have an electrical potential across them, which causes the wire to form a short circuit. The Joule effect causes the temperature of the wire to rise to approximately 400 °C. This temperature is affected by the rotational speed of the pulleys, the ambient temperature, and the voltage applied. Where formula_1 is the temperature of the wire, formula_2 is a constant, formula_3 is the voltage applied, formula_4 is the number of rotations of the pulleys per minute, and formula_5 is the ambient temperature:\nThe constant formula_2 depends on the diameter of the pulleys and the resistivity of the copper.\nPurely in terms of the temperature of the copper wire, an increase in the speed with which the wire passes through the pulley system has the same effect as a decrease in resistance. 
Therefore, the speed at which the wire can be drawn through varies quadratically with the applied voltage.", "Automation-Control": 0.9992084503, "Qwen2": "Yes"} {"id": "43269516", "revid": "7903804", "url": "https://en.wikipedia.org/wiki?curid=43269516", "title": "Sample complexity", "text": "The sample complexity of a machine learning algorithm represents the number of training samples that it needs in order to successfully learn a target function.\nMore precisely, the sample complexity is the number of training samples that we need to supply to the algorithm, so that the function returned by the algorithm is within an arbitrarily small error of the best possible function, with probability arbitrarily close to 1.\nThere are two variants of sample complexity:\nThe No free lunch theorem, discussed below, proves that, in general, the strong sample complexity is infinite, i.e. that there is no algorithm that can learn the globally optimal target function using a finite number of training samples.\nHowever, if we are only interested in a particular class of target functions (e.g., only linear functions) then the sample complexity is finite, and it depends linearly on the VC dimension of the class of target functions.\nDefinition.\nLet formula_1 be a space which we call the input space, and formula_2 be a space which we call the output space, and let formula_3 denote the product formula_4. For example, in the setting of binary classification, formula_1 is typically a finite-dimensional vector space and formula_2 is the set formula_7.\nFix a hypothesis space formula_8 of functions formula_9. A learning algorithm over formula_8 is a computable map from formula_11 to formula_8. In other words, it is an algorithm that takes as input a finite sequence of training samples and outputs a function from formula_1 to formula_2. 
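This definition of a learning algorithm as a map from a finite sample to a function can be made concrete with a toy sketch; the threshold-classifier hypothesis class and the helper name `erm_threshold` below are illustrative assumptions, not from the article:

```python
import random

def erm_threshold(samples):
    """A learning algorithm as a map: takes a finite sequence of (x, y)
    training samples (y in {0, 1}) and returns a function, here encoded
    by a threshold t for the classifier h_t(x) = 1 if x >= t else 0.
    It picks the candidate threshold minimizing the empirical 0-1 loss."""
    candidates = [x for x, _ in samples] + [float("-inf")]
    def empirical_risk(t):
        return sum((1 if x >= t else 0) != y for x, y in samples) / len(samples)
    return min(candidates, key=empirical_risk)

# Toy data: labels generated noiselessly by a true threshold at 0.5.
random.seed(0)
xs = [random.random() for _ in range(50)]
data = [(x, 1 if x >= 0.5 else 0) for x in xs]
t_hat = erm_threshold(data)
```

On this noiseless sample the returned threshold classifies every training point correctly, illustrating the "sample in, function out" shape of a learning algorithm.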
Typical learning algorithms include empirical risk minimization, with or without Tikhonov regularization.\nFix a loss function formula_15, for example, the square loss formula_16, where formula_17. For a given distribution formula_18 on formula_4, the expected risk of a hypothesis (a function) formula_20 is\nIn our setting, we have formula_22, where formula_23 is a learning algorithm and formula_24 is a sequence of vectors which are all drawn independently from formula_18. Define the optimal risk formula_26. Set formula_27, for each formula_28. Note that formula_29 is a random variable and depends on the random variable formula_30, which is drawn from the distribution formula_31. The algorithm formula_23 is called consistent if formula_33 converges in probability to formula_34. In other words, for all formula_35, there exists a positive integer formula_36, such that, for all formula_37, we have\nformula_38\nThe sample complexity of formula_23 is then the minimum formula_36 for which this holds, as a function of formula_41 and formula_42. We write the sample complexity as formula_43 to emphasize that this value of formula_36 depends on formula_41 and formula_42. If formula_23 is not consistent, then we set formula_48. If there exists an algorithm for which formula_49 is finite, then we say that the hypothesis space formula_50 is learnable.\nIn other words, the sample complexity formula_49 defines the rate of consistency of the algorithm: given a desired accuracy formula_52 and confidence formula_42, one needs to sample formula_49 data points to guarantee that the risk of the output function is within formula_52 of the best possible, with probability at least formula_56.\nIn probably approximately correct (PAC) learning, one is concerned with whether the sample complexity is \"polynomial\", that is, whether formula_49 is bounded by a polynomial in formula_58 and formula_59. 
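The consistency definition above (the learned function's risk converging to the optimal risk as the sample size grows) can be checked empirically; a minimal simulation under an assumed noiseless threshold-learning setup, where all names and the choice of distribution are illustrative:

```python
import random

def erm_true_risk(n, seed):
    """Train a threshold classifier by empirical risk minimization on n
    noiseless samples from Uniform[0, 1] with true threshold 0.5, and
    return its true risk: the learned threshold is the smallest
    1-labelled point, so the risk equals its distance above 0.5."""
    rng = random.Random(seed)
    xs = [rng.random() for _ in range(n)]
    t_hat = min((x for x in xs if x >= 0.5), default=1.0)
    return t_hat - 0.5

# Average the true risk over 200 independent draws for two sample sizes.
avg_risk_10 = sum(erm_true_risk(10, s) for s in range(200)) / 200
avg_risk_1000 = sum(erm_true_risk(1000, s) for s in range(200)) / 200
```

The average excess risk shrinks as the number of samples grows, which is the behavior the sample complexity quantifies.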
If formula_49 is polynomial for some learning algorithm, then one says that the hypothesis space formula_61 is PAC-learnable. Note that this is a stronger notion than being learnable.\nUnrestricted hypothesis space: infinite sample complexity.\nOne can ask whether there exists a learning algorithm so that the sample complexity is finite in the strong sense, that is, there is a bound on the number of samples needed so that the algorithm can learn any distribution over the input-output space with a specified target error. More formally, one asks whether there exists a learning algorithm formula_23, such that, for all formula_35, there exists a positive integer formula_36 such that for all formula_37, we have\nformula_66\nwhere formula_27, with formula_24 as above. The No Free Lunch Theorem says that without restrictions on the hypothesis space formula_8, this is not the case, i.e., there always exist \"bad\" distributions for which the sample complexity is arbitrarily large.\nThus, in order to make statements about the rate of convergence of the quantity\nformula_70\none must either\nRestricted hypothesis space: finite sample-complexity.\nThe latter approach leads to concepts such as VC dimension and Rademacher complexity which control the complexity of the space formula_8. A smaller hypothesis space introduces more bias into the inference process, meaning that formula_74 may be greater than the best possible risk in a larger space. However, by restricting the complexity of the hypothesis space it becomes possible for an algorithm to produce more uniformly consistent functions. 
This trade-off leads to the concept of regularization.\nIt is a theorem from VC theory that the following three statements are equivalent for a hypothesis space formula_8:\nThis gives a way to prove that certain hypothesis spaces are PAC learnable, and by extension, learnable.\nAn example of a PAC-learnable hypothesis space.\nLet formula_79, and let formula_8 be the space of affine functions on formula_1, that is, functions of the form formula_82 for some formula_83. This is the problem of linear classification with offset. Now, note that four coplanar points in a square cannot be shattered by any affine function, since no affine function can be positive on two diagonally opposite vertices and negative on the remaining two. Thus, the VC dimension of formula_8 is formula_85, so it is finite. It follows by the above characterization of PAC-learnable classes that formula_8 is PAC-learnable, and by extension, learnable.\nSample-complexity bounds.\nSuppose formula_8 is a class of binary functions (functions to formula_88). Then, formula_8 is formula_90-PAC-learnable with a sample of size:\nformula_91\nwhere formula_92 is the VC dimension of formula_8.\nMoreover, any formula_90-PAC-learning algorithm for formula_8 must have sample complexity:\nformula_96\nThus, the sample complexity is a linear function of the VC dimension of the hypothesis space.\nSuppose formula_8 is a class of real-valued functions with range in formula_98. Then, formula_8 is formula_90-PAC-learnable with a sample of size:\nformula_101\nwhere formula_102 is Pollard's pseudo-dimension of formula_8.\nOther settings.\nIn addition to the supervised learning setting, sample complexity is relevant to semi-supervised learning problems, including active learning, where the algorithm can ask for labels for specifically chosen inputs in order to reduce the cost of obtaining many labels. The concept of sample complexity also shows up in reinforcement learning, online learning, and unsupervised algorithms, e.g. 
for dictionary learning.\nEfficiency in robotics.\nA high sample complexity means that many calculations are needed for running a Monte Carlo tree search; it is equivalent to a model-free brute-force search in the state space. In contrast, a high-efficiency algorithm has a low sample complexity. Possible techniques for reducing the sample complexity are metric learning and model-based reinforcement learning.", "Automation-Control": 0.9571600556, "Qwen2": "Yes"} {"id": "3877767", "revid": "7903804", "url": "https://en.wikipedia.org/wiki?curid=3877767", "title": "Describing function", "text": "In control systems theory, the describing function (DF) method, developed by Nikolay Mitrofanovich Krylov and Nikolay Bogoliubov in the 1930s, and extended by Ralph Kochenburger, is an approximate procedure for analyzing certain nonlinear control problems. It is based on quasi-linearization, which is the approximation of the non-linear system under investigation by a linear time-invariant (LTI) transfer function that depends on the amplitude of the input waveform. By definition, a transfer function of a true LTI system cannot depend on the amplitude of the input function because an LTI system is linear. Thus, this dependence on amplitude generates a family of linear systems that are combined in an attempt to capture salient features of the non-linear system behavior. The describing function is one of the few widely applicable methods for designing nonlinear systems, and is very widely used as a standard mathematical tool for analyzing limit cycles in closed-loop controllers, such as industrial process controls, servomechanisms, and electronic oscillators.\nThe method.\nConsider feedback around a discontinuous (but piecewise continuous) nonlinearity (e.g., an amplifier with saturation, or an element with deadband effects) cascaded with a slow stable linear system. 
The continuous region in which the feedback is presented to the nonlinearity depends on the amplitude of the output of the linear system. As the linear system's output amplitude decays, the nonlinearity may move into a different continuous region. This switching from one continuous region to another can generate periodic oscillations. The describing function method attempts to predict characteristics of those oscillations (e.g., their fundamental frequency) by assuming that the slow system acts like a low-pass or bandpass filter that concentrates all energy around a single frequency. Even if the output waveform has several modes, the method can still provide intuition about properties like frequency and possibly amplitude; in this case, the describing function method can be thought of as describing the sliding mode of the feedback system.\nUsing this low-pass assumption, the system response can be described by one of a family of sinusoidal waveforms; in this case the system would be characterized by a sine input describing function (SIDF) formula_1 giving the system response to an input consisting of a sine wave of amplitude A and frequency formula_2. This SIDF is a modification of the transfer function formula_3 used to characterize linear systems. In a quasi-linear system, when the input is a sine wave, the output will be a sine wave of the same frequency but with a scaled amplitude and shifted phase as given by formula_1. Many systems are approximately quasi-linear in the sense that although the response to a sine wave is not a pure sine wave, most of the energy in the output is indeed at the same frequency formula_2 as the input. This is because such systems may possess intrinsic low-pass or bandpass characteristics such that harmonics are naturally attenuated, or because external filters are added for this purpose. 
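The SIDF just described can be estimated numerically by extracting the fundamental Fourier component of the nonlinearity's steady-state response to a sinusoid; a minimal sketch, using a unit-slope saturation element as an illustrative (assumed) nonlinearity rather than one taken from the article:

```python
import numpy as np

def sidf(nonlinearity, A, n=4096):
    """Numerically estimate the sine-input describing function N(A):
    the complex ratio of the fundamental Fourier component of the
    nonlinearity's output, for input A*sin(theta), to the amplitude A."""
    theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    y = nonlinearity(A * np.sin(theta))
    # Fundamental Fourier coefficients: in-phase (sin) and quadrature (cos).
    b1 = 2.0 / n * np.sum(y * np.sin(theta))
    a1 = 2.0 / n * np.sum(y * np.cos(theta))
    return (b1 + 1j * a1) / A

# Unit-slope saturation at +/-1: a memoryless, amplitude-dependent element.
sat = lambda x: np.clip(x, -1.0, 1.0)
N = sidf(sat, A=2.0)
```

For a memoryless odd nonlinearity like saturation the describing function is real (no phase shift), and it decreases as the input amplitude grows, which is exactly the amplitude dependence the DF method exploits.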
An important application of the SIDF technique is to estimate the oscillation amplitude in sinusoidal electronic oscillators.\nOther types of describing functions that have been used are DFs for level inputs and for Gaussian noise inputs. Although not a complete description of the system, the DFs often suffice to answer specific questions about control and stability. DF methods are best for analyzing systems with relatively weak nonlinearities. In addition, the higher-order sinusoidal input describing functions (HOSIDFs) describe the response of a class of nonlinear systems at harmonics of the input frequency of a sinusoidal input. The HOSIDFs are an extension of the SIDF for systems where the nonlinearities are significant in the response.\nCaveats.\nAlthough the describing function method can produce reasonably accurate results for a wide class of systems, it can fail badly for others. For example, the method can fail if the system emphasizes higher harmonics of the nonlinearity. Such examples have been presented by Tzypkin for bang–bang systems. A fairly similar example is a closed-loop oscillator consisting of a non-inverting Schmitt trigger followed by an \"inverting\" integrator that feeds back its output to the Schmitt trigger's input. The output of the Schmitt trigger is going to be a square waveform, while that of the integrator (following it) is going to have a triangle waveform with peaks coinciding with the transitions in the square wave. Each of these two oscillator stages lags the signal by exactly 90 degrees (relative to its input). 
If one were to perform DF analysis on this circuit, the triangle wave at the Schmitt trigger's input would be replaced by its fundamental (sine wave), which, passing through the trigger, would cause a phase shift of less than 90 degrees (because the sine wave would trigger it sooner than the triangle wave does), so the system would appear not to oscillate in the same (simple) way.\nAlso, in the case where the conditions of Aizerman's or Kalman's conjectures are fulfilled, no periodic solutions are predicted by the describing function method, but counterexamples with hidden periodic attractors are known. Counterexamples to the describing function method can be constructed for discontinuous dynamical systems when a rest segment destroys predicted limit cycles. Therefore, the application of the describing function method requires additional justification.", "Automation-Control": 0.9897883534, "Qwen2": "Yes"} {"id": "5812333", "revid": "6727347", "url": "https://en.wikipedia.org/wiki?curid=5812333", "title": "Closed-loop pole", "text": "In systems theory, closed-loop poles are the positions of the poles (or eigenvalues) of a closed-loop transfer function in the s-plane. The open-loop transfer function is equal to the product of all transfer function blocks in the forward path in the block diagram. The closed-loop transfer function is obtained by dividing the open-loop transfer function by the sum of one and the product of all transfer function blocks throughout the negative feedback loop. The closed-loop transfer function may also be obtained by algebraic or block diagram manipulation. Once the closed-loop transfer function is obtained for the system, the closed-loop poles are obtained by solving the characteristic equation. The characteristic equation is nothing more than setting the denominator of the closed-loop transfer function to zero. 
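Numerically, solving the characteristic equation amounts to finding the roots of the closed-loop denominator polynomial; a minimal sketch with an assumed second-order plant (the plant and gain are illustrative, not from the article):

```python
import numpy as np

# Illustrative open-loop transfer function L(s) = K / (s (s + 2))
# under unity negative feedback.  Setting the closed-loop denominator
# to zero gives the characteristic equation
#   1 + L(s) = 0  =>  s^2 + 2 s + K = 0,
# whose roots are the closed-loop poles.
K = 5.0
char_poly = [1.0, 2.0, K]          # coefficients of s^2 + 2s + K
closed_loop_poles = np.roots(char_poly)
```

With K = 5 the poles are the complex pair -1 ± 2j, both in the left half-plane, so this closed loop is stable; varying K moves the poles, which is the root-locus picture discussed below.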
\nIn control theory there are two main methods of analyzing feedback systems: the transfer function (or frequency domain) method and the state space method. When the transfer function method is used, attention is focused on the locations in the s-plane where the transfer function is undefined (the \"poles\") or zero (the \"zeroes\"; see Zeroes and poles). Two different transfer functions are of interest to the designer. If the feedback loops in the system are opened (that is, prevented from operating) one speaks of the \"open-loop transfer function\", while if the feedback loops are operating normally one speaks of the \"closed-loop transfer function\". For more on the relationship between the two, see root-locus.\nClosed-loop poles in control theory.\nThe response of a linear time-invariant system to any input can be derived from its impulse response and step response. The eigenvalues of the system completely determine the natural response (unforced response). In control theory, the response to any input is a combination of a transient response and a steady-state response. Therefore, a crucial design parameter is the location of the eigenvalues, or closed-loop poles.\nIn root-locus design, the gain \"K\" is usually parameterized. Each point on the locus satisfies the angle condition and magnitude condition and corresponds to a different value of \"K\". For negative feedback systems, the closed-loop poles move along the root-locus from the open-loop poles to the open-loop zeroes as the gain is increased. For this reason, the root-locus is often used for the design of proportional control, i.e. controllers for which formula_1.\nFinding closed-loop poles.\nConsider a simple feedback system with controller formula_1, plant formula_3 and transfer function formula_4 in the feedback path. Note that a unity feedback system has formula_5 and the block is omitted. For this system, the open-loop transfer function is the product of the blocks in the forward path, formula_6. 
The product of the blocks around the entire closed loop is formula_7. Therefore, the closed-loop transfer function is\nThe closed-loop poles, or eigenvalues, are obtained by solving the characteristic equation formula_9. In general, the solution will be n complex numbers where n is the order of the characteristic polynomial.\nThe preceding is valid for single-input-single-output systems (SISO). An extension is possible for multiple input multiple output systems, that is for systems where formula_3 and formula_11 are matrices whose elements are made of transfer functions. In this case the poles are the solution of the equation", "Automation-Control": 1.0000039339, "Qwen2": "Yes"} {"id": "5814476", "revid": "35246606", "url": "https://en.wikipedia.org/wiki?curid=5814476", "title": "Tombstone (manufacturing)", "text": "A tombstone, also known as a pedestal-type fixture, tooling tower, tooling column or fixture block, is a fixture of two or more sides, onto which are mounted parts to be manufactured. 
Tombstones are typically used in automated systems; parts are loaded onto the tombstone so that robots may operate on one part, flip the tombstone, and operate on the next part until all processes are completed, then transport the entire tombstone to the next station.\nThe first tombstone type fixture was patented in 1971 by the Vereinigte Flugtechnische Werke.\nTombstones are used in agile manufacturing to facilitate quick and easy installation, scalability and reconfiguration.", "Automation-Control": 0.9989304543, "Qwen2": "Yes"} {"id": "5409095", "revid": "16809467", "url": "https://en.wikipedia.org/wiki?curid=5409095", "title": "Dead-beat control", "text": "In discrete-time control theory, the dead-beat control problem consists of finding what input signal must be applied to a system in order to bring the output to the steady state in the smallest number of time steps.\nFor an \"N\"th-order linear system it can be shown that this minimum number of steps will be at most \"N\" (depending on the initial condition), provided that the system is null controllable (that it can be brought to state zero by \"some\" input). The solution is to apply feedback such that all poles of the closed-loop transfer function are at the origin of the \"z\"-plane. This approach is straightforward for linear systems. However, when it comes to nonlinear systems, dead beat control is an open research problem.\nUsage.\nDead beat controllers are often used in process control due to their good dynamic properties. They are a classical feedback controller where the control gains are set using a table based on the plant system order and normalized natural frequency.\nThe deadbeat response has the following characteristics:\nTransfer function of dead-beat controller.\nConsider that a plant has the transfer function\nwhere \nThe transfer function of the corresponding dead-beat controller is\nwhere \"d\" is the minimum necessary system delay for controller to be realizable. 
For example, systems with two poles must have at minimum 2 step delay from controller to output, so \"d\" = 2.\nThe closed-loop transfer function is\nand has all poles at the origin. In general, a closed loop transfer function which has all of its poles at the origin is called a dead beat transfer function.", "Automation-Control": 0.9987013936, "Qwen2": "Yes"} {"id": "5410577", "revid": "29411829", "url": "https://en.wikipedia.org/wiki?curid=5410577", "title": "Linear matrix inequality", "text": "In convex optimization, a linear matrix inequality (LMI) is an expression of the form\nwhere\nThis linear matrix inequality specifies a convex constraint on \"y\".\nApplications.\nThere are efficient numerical methods to determine whether an LMI is feasible (\"e.g.\", whether there exists a vector \"y\" such that LMI(\"y\") ≥ 0), or to solve a convex optimization problem with LMI constraints.\nMany optimization problems in control theory, system identification and signal processing can be formulated using LMIs. Also LMIs find application in Polynomial Sum-Of-Squares. The prototypical primal and dual semidefinite program is a minimization of a real linear function respectively subject to the primal and dual convex cones governing this LMI.\nSolving LMIs.\nA major breakthrough in convex optimization was the introduction of interior-point methods. These methods were developed in a series of papers and became of true interest in the context of LMI problems in the work of Yurii Nesterov and Arkadi Nemirovski.", "Automation-Control": 0.6303978562, "Qwen2": "Yes"} {"id": "1670188", "revid": "28481209", "url": "https://en.wikipedia.org/wiki?curid=1670188", "title": "Backtracking line search", "text": "In (unconstrained) mathematical optimization, a backtracking line search is a line search method to determine the amount to move along a given search direction. 
Its use requires that the objective function is differentiable and that its gradient is known.\nThe method involves starting with a relatively large estimate of the step size for movement along the line search direction, and iteratively shrinking the step size (i.e., \"backtracking\") until a decrease of the objective function is observed that adequately corresponds to the amount of decrease that is expected, based on the step size and the local gradient of the objective function. A common stopping criterion is the Armijo–Goldstein condition.\nBacktracking line search is typically used for gradient descent (GD), but it can also be used in other contexts. For example, it can be used with Newton's method if the Hessian matrix is positive definite.\nMotivation.\nGiven a starting position formula_1 and a search direction formula_2, the task of a line search is to determine a step size formula_3 that adequately reduces the objective function formula_4 (assumed formula_5 i.e. continuously differentiable), i.e., to find a value of formula_6 that reduces formula_7 relative to formula_8. However, it is usually undesirable to devote substantial resources to finding a value of formula_6 to precisely minimize formula_10. This is because the computing resources needed to find a more precise minimum along one particular direction could instead be employed to identify a better search direction. Once an improved starting point has been identified by the line search, another subsequent line search will ordinarily be performed in a new direction. The goal, then, is just to identify a value of formula_6 that provides a reasonable amount of improvement in the objective function, rather than to find the actual minimizing value of formula_6.\nThe backtracking line search starts with a large estimate of formula_6 and iteratively shrinks it. 
The shrinking continues until a value is found that is small enough to provide a decrease in the objective function that adequately matches the decrease that is expected to be achieved, based on the local function gradient formula_14\nDefine the local slope of the function of formula_6 along the search direction formula_2 as formula_17 (where formula_18 denotes the dot product). It is assumed that formula_2 is a vector for which some local decrease is possible, i.e., it is assumed that formula_20.\nBased on a selected control parameter formula_21, the Armijo–Goldstein condition tests whether a step-wise movement from a current position\nformula_1 to a modified position formula_23 achieves an adequately corresponding decrease in the objective function. The condition is fulfilled, see , if formula_24\nThis condition, when used appropriately as part of a line search, can ensure that the step size is not excessively large. However, this condition is not sufficient on its own to ensure that the step size is nearly optimal, since any value of formula_25 that is sufficiently small will satisfy the condition.\nThus, the backtracking line search strategy starts with a relatively large step size, and repeatedly shrinks it by a factor formula_26 until the Armijo–Goldstein condition is fulfilled.\nThe search will terminate after a finite number of steps for any positive values of formula_27 and formula_28 that are less than 1. For example, Armijo used for both formula_27 and formula_28 in .\nAlgorithm.\nThis condition is from . 
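The shrinking loop just described is short to implement. A minimal sketch for gradient descent, where the step size is halved until the Armijo–Goldstein condition holds (the quadratic test function and the parameter values tau = c = 0.5 are our illustrative choices, not prescribed by the text):

```python
import numpy as np

def backtracking_line_search(f, grad_f, x, p, alpha0=1.0, tau=0.5, c=0.5):
    """Shrink alpha by the factor tau until the Armijo-Goldstein condition
    f(x + alpha*p) <= f(x) + c*alpha*m holds, where m = grad_f(x).p < 0
    is the local slope along the search direction p."""
    m = np.dot(grad_f(x), p)
    assert m < 0, "p must be a descent direction"
    alpha = alpha0
    while f(x + alpha * p) > f(x) + c * alpha * m:
        alpha *= tau
    return alpha

# One gradient-descent step on f(x) = x1^2 + 10*x2^2
f = lambda x: x[0]**2 + 10 * x[1]**2
grad_f = lambda x: np.array([2 * x[0], 20 * x[1]])

x = np.array([1.0, 1.0])
p = -grad_f(x)                        # steepest-descent direction
alpha = backtracking_line_search(f, grad_f, x, p)
x_new = x + alpha * p
print(alpha, f(x_new))                # accepted step; f drops below f(x) = 11
```

Note that the loop only ever shrinks the step, so the returned value is the first candidate in the sequence alpha0, tau·alpha0, tau²·alpha0, … that satisfies the condition.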
Starting with a maximum candidate step size value formula_31, using search control parameters formula_26 and formula_21, the backtracking line search algorithm can be expressed as follows:\nIn other words, reduce formula_40 by a factor of formula_41 in each iteration until the Armijo–Goldstein condition is fulfilled.\nFunction minimization using backtracking line search in practice.\nIn practice, the above algorithm is typically iterated to produce a sequence formula_42, formula_43, that converges to a minimum, provided such a minimum exists and formula_44 is selected appropriately in each step. For gradient descent, formula_44 is selected as formula_46.\nThe value of formula_39 for the formula_37 that fulfills the Armijo–Goldstein condition depends on formula_1 and formula_2, and is thus denoted below by formula_51. It also depends on formula_10, formula_40, formula_28 and formula_27 of course, although these dependencies can be left implicit if they are assumed to be fixed with respect to the optimization problem.\nThe detailed steps are thus, see , :\nTo assure good behavior, some conditions must be satisfied by formula_44. Roughly speaking, formula_44 should not be too far away from formula_65. A precise version is as follows (see e.g. ). There are constants formula_66 so that the following two conditions are satisfied:\nLower bound for learning rates.\nThis addresses the question of whether there is a systematic way to find a positive number formula_79 (depending on the function f, the point formula_1 and the descent direction formula_2) so that all learning rates formula_82 satisfy Armijo's condition. When formula_83, we can choose formula_79 on the order of formula_85, where formula_86 is a local Lipschitz constant for the gradient formula_87 near the point formula_1 (see Lipschitz continuity). If the function is formula_89, then formula_86 is close to the Hessian of the function at the point formula_1. 
See for more detail.\nUpper bound for learning rates.\nIn the same situation where formula_83, an interesting question is how large learning rates can be chosen in Armijo's condition (that is, when one has no limit on formula_40 as defined in the section \"Function minimization using backtracking line search in practice\"), since larger learning rates when formula_42 is closer to the limit point (if it exists) can make convergence faster. For example, in the Wolfe conditions, there is no mention of formula_40 but another condition called the curvature condition is introduced. \nAn upper bound for learning rates is shown to exist if one wants the constructed sequence formula_42 to converge to a non-degenerate critical point, see : The learning rates must be bounded from above roughly by formula_97. Here H is the Hessian of the function at the limit point, formula_98 is its inverse, and formula_99 is the norm of a linear operator. Thus, this result applies for example when one uses backtracking line search for Morse functions. Note that in dimension 1, formula_100 is a number and hence this upper bound is of the same size as the lower bound in the section \"Lower bound for learning rates\".\nOn the other hand, if the limit point is degenerate, then learning rates can be unbounded. For example, a modification of backtracking line search known as unbounded backtracking gradient descent (see ) allows the learning rate to be half the size of formula_101, where formula_102 is a constant. Experiments with simple functions such as formula_103 show that unbounded backtracking gradient descent converges much faster than the basic version described in the section \"Function minimization using backtracking line search in practice\".\nTime efficiency.\nAn argument against the use of backtracking line search, in particular in large-scale optimisation, is that satisfying Armijo's condition is expensive. 
There is a way around this (so-called two-way backtracking) that has good theoretical guarantees and has been tested with good results on deep neural networks, see . (There, one can also find good/stable implementations of Armijo's condition and its combination with some popular algorithms such as Momentum and NAG, on datasets such as Cifar10 and Cifar100.) One observes that if the sequence formula_61 converges (as is desired when one makes use of an iterative optimisation method), then the sequence of learning rates formula_105 should vary little when n is large enough. Therefore, in the search for formula_105, if one always starts from formula_40, one would waste a lot of time if it turns out that the sequence formula_105 stays far away from formula_40. Instead, one should search for formula_105 by starting from formula_111. The second observation is that formula_105 could be larger than formula_111, and hence one should allow the learning rate to increase (and not just decrease as in the section Algorithm). Here is the detailed algorithm for two-way backtracking: At step n\nOne can save further time by a hybrid mixture of two-way backtracking and the basic standard gradient descent algorithm. This procedure also has good theoretical guarantees and good test performance. Roughly speaking, we run two-way backtracking a few times, then use the learning rate obtained from it unchanged from then on, except if the function value increases. Here is precisely how it is done. One chooses in advance a number formula_126 and a number formula_127.\nTheoretical guarantee (for gradient descent).\nCompared with Wolfe's conditions, which are more complicated, Armijo's condition has a better theoretical guarantee. Indeed, so far backtracking line search and its modifications are the methods with the strongest theoretical guarantees among all numerical optimization algorithms concerning convergence to critical points and avoidance of saddle points, see below. 
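The two observations above (start the search for formula_105 from the previous step's rate formula_111; allow increases as well as decreases, capped at formula_40) can be sketched in code. The test function, parameter values, and the exact handling of the cap are our own reading and illustration, not a canonical implementation:

```python
import numpy as np

def two_way_backtracking(f, grad_f, x, p, alpha_prev, alpha0=1.0, tau=0.5, c=0.5):
    """Armijo search started from the previous learning rate: shrink while
    the condition fails; otherwise grow (capped at alpha0) while the
    condition still holds at the larger candidate step."""
    m = np.dot(grad_f(x), p)
    fx = f(x)
    armijo = lambda a: f(x + a * p) <= fx + c * a * m
    alpha = alpha_prev
    if not armijo(alpha):
        while not armijo(alpha):          # decrease, as in basic backtracking
            alpha *= tau
    else:
        while alpha / tau <= alpha0 and armijo(alpha / tau):
            alpha /= tau                  # increase while Armijo still holds
    return alpha

# Gradient descent driven by two-way backtracking
f = lambda x: x[0]**2 + 10 * x[1]**2
grad_f = lambda x: np.array([2 * x[0], 20 * x[1]])

x, alpha = np.array([1.0, 1.0]), 1.0
for _ in range(200):
    p = -grad_f(x)
    alpha = two_way_backtracking(f, grad_f, x, p, alpha)
    x = x + alpha * p
print(x)   # close to the minimizer (0, 0)
```

Since every accepted step still satisfies Armijo's condition, the function value decreases monotonically; the saving is that the inner loop typically runs for very few iterations once the learning rate has settled.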
\nCritical points are points where the gradient of the objective function is 0. Local minima are critical points, but there are critical points which are not local minima. Examples are saddle points. Saddle points are critical points at which there is at least one direction along which the function has a (local) maximum. Therefore, these points are far from being local minima. For example, if a function has at least one saddle point, then it cannot be convex. The relevance of saddle points to optimisation algorithms is that in large scale (i.e. high-dimensional) optimisation, one likely sees more saddle points than minima, see . Hence, a good optimisation algorithm should be able to avoid saddle points. In the setting of deep learning, saddle points are also prevalent, see . Thus, to apply these methods in deep learning, one needs results for non-convex functions.\nFor convergence to critical points: For example, if the cost function is a real analytic function, then it is shown in that convergence is guaranteed. The main idea is to use the Łojasiewicz inequality, which real analytic functions satisfy. For non-smooth functions satisfying the Łojasiewicz inequality, the above convergence guarantee is extended, see . In , there is a proof that for every sequence constructed by backtracking line search, a cluster point (i.e. the limit of one subsequence, if the subsequence converges) is a critical point. For the case of a function with at most countably many critical points (such as a Morse function) and compact sublevels, as well as with Lipschitz continuous gradient where one uses standard GD with learning rate <1/L (see the section \"Stochastic gradient descent\"), convergence is guaranteed, see for example Chapter 12 in . Here the assumption about compact sublevels is to make sure that one deals with compact sets of the Euclidean space only. 
In the general case, where formula_10 is only assumed to be formula_5 and have at most countably many critical points, convergence is guaranteed, see . In the same reference, convergence is similarly guaranteed for other modifications of backtracking line search (such as the unbounded backtracking gradient descent mentioned in the section \"Upper bound for learning rates\"), and even if the function has uncountably many critical points, one can still deduce some non-trivial facts about convergence behaviour. In the stochastic setting, under the same assumption that the gradient is Lipschitz continuous and one uses a more restrictive version (requiring in addition that the sum of learning rates is infinite and the sum of squares of learning rates is finite) of the diminishing learning rate scheme (see the section \"Stochastic gradient descent\"), and moreover the function is strictly convex, convergence is established in the well-known result , see for generalisations to less restrictive versions of a diminishing learning rate scheme. None of these results (for non-convex functions) have been proven for any other optimization algorithm so far.\nFor avoidance of saddle points: For example, if the gradient of the cost function is Lipschitz continuous and one chooses standard GD with learning rate <1/L, then with a random choice of initial point (more precisely, outside a set of Lebesgue measure zero), the sequence constructed will not converge to a non-degenerate saddle point (proven in ), and more generally it is also true that the sequence constructed will not converge to a degenerate saddle point (proven in ). 
Under the same assumption that the gradient is Lipschitz continuous and one uses a diminishing learning rate scheme (see the section \"Stochastic gradient descent\"), avoidance of saddle points is established in .\nA special case: (standard) stochastic gradient descent (SGD).\nIt is worth noting that if the gradient of a cost function is Lipschitz continuous, with Lipschitz constant L, then choosing the learning rate to be constant, of size formula_139, yields a special case of backtracking line search (for gradient descent). This has been used at least in . This scheme, however, requires a good estimate of L; otherwise, if the learning rate is too big (relative to 1/L), the scheme has no convergence guarantee. One can see what will go wrong if the cost function is a smoothing (near the point 0) of the function f(t)=|t|. Such a good estimate is, however, difficult and laborious in large dimensions. Also, if the gradient of the function is not globally Lipschitz continuous, then this scheme has no convergence guarantee. For example (similar to an exercise in ), for the cost function formula_140, whatever constant learning rate one chooses, with a random initial point the sequence constructed by this special scheme does not converge to the global minimum 0.\nIf one drops the condition that the learning rate must be bounded by 1/L, then this special scheme is much older, used at least since 1847 by Cauchy, and can be called standard GD (not to be confused with stochastic gradient descent, which is abbreviated herein as SGD). In the stochastic setting (such as in the mini-batch setting in deep learning), standard GD is called stochastic gradient descent, or SGD.\nEven if the cost function has a globally continuous gradient, a good estimate of the Lipschitz constant for cost functions in deep learning may not be feasible or desirable, given the very high dimensions of deep neural networks. 
Hence, learning rates must be fine-tuned when applying standard GD or SGD. One way is to choose many learning rates from a grid search, with the hope that some of the learning rates can give good results. (However, if the loss function does not have a globally Lipschitz continuous gradient, then the example with formula_140 above shows that grid search cannot help.) Another way is the so-called adaptive standard GD or SGD; some representatives are Adam, Adadelta, RMSProp and so on, see the article on Stochastic gradient descent. In adaptive standard GD or SGD, learning rates are allowed to vary at each iteration step n, but in a different manner from backtracking line search for gradient descent. At first sight, using backtracking line search for gradient descent seems more expensive, since one needs to do a loop search until Armijo's condition is satisfied, while adaptive standard GD or SGD needs no loop search. Most of these adaptive standard GD or SGD methods do not have the descent property formula_142, for all n, as backtracking line search for gradient descent does. Only a few have this property together with good theoretical guarantees, and they turn out to be special cases of backtracking line search or, more generally, of Armijo's condition . The first is choosing the learning rate to be a constant <1/L, as mentioned above, if one has a good estimate of L. The second is the so-called diminishing learning rate, used in the well-known paper by , if again the function has globally Lipschitz continuous gradient (but the Lipschitz constant may be unknown) and the learning rates converge to 0.\nSummary.\nIn summary, backtracking line search (and its modifications) is a method which is easy to implement, is applicable for very general functions, has very good theoretical guarantees (for both convergence to critical points and avoidance of saddle points) and works well in practice. 
Several other methods with good theoretical guarantees, such as diminishing learning rates or standard GD with learning rate <1/L (both of which require the gradient of the objective function to be Lipschitz continuous), turn out to be special cases of backtracking line search or to satisfy Armijo's condition. Even though \"a priori\" one needs the cost function to be continuously differentiable to apply this method, in practice one can apply this method successfully also for functions which are continuously differentiable on a dense open subset such as formula_143 or formula_144.", "Automation-Control": 0.855124712, "Qwen2": "Yes"} {"id": "65778642", "revid": "15104030", "url": "https://en.wikipedia.org/wiki?curid=65778642", "title": "March 1870 West Sydney colonial by-election", "text": "A by-election was held for the New South Wales Legislative Assembly electorate of West Sydney on 2 March 1870 because of the resignation of John Robertson due to financial difficulties.\nResult.\n", "Automation-Control": 0.7041546702, "Qwen2": "Yes"} {"id": "9926926", "revid": "176575", "url": "https://en.wikipedia.org/wiki?curid=9926926", "title": "Microfactory", "text": "A microfactory refers either to a capital-light facility used for the local assembly of a complex product or system, or to a small (normally automated) factory for producing small quantities of products. The term was proposed by the Mechanical Engineering Laboratory (MEL) of Japan in 1990 and has recently been used to describe the approach of manufacturers like Arrival. The microfactory's main advantages are substantial savings in space, energy, materials, time, and upfront capital costs. \nDue to their reduced dimensions, microfactories are normally highly automated. 
They might contain automatic machine tools, assembly systems, quality inspection systems, material feed systems, waste elimination systems, a system to evaluate tool deterioration and a system to replace tools.\nAt least one proposed microfactory is being designed to make many of its own parts, i.e., a partially self-replicating machine.\nA microfactory can also refer to a factory designed for flexible small batch production that can produce a wide variety of products as opposed to a single monolithic mass production type approach. Typically the manufacturing processes of microfactories take advantage of digital fabrication technology such as 3D printing and CNC machines in order to accomplish this. For example, Local Motors had microfactories in Phoenix, Ariz. and Knoxville, Tenn. The company built products, like the Rally Fighter prerunner sports car, in its microfactories.", "Automation-Control": 0.9899207354, "Qwen2": "Yes"} {"id": "11113752", "revid": "1666642", "url": "https://en.wikipedia.org/wiki?curid=11113752", "title": "Substation Configuration Language", "text": "System Configuration description Language formerly known as Substation Configuration description Language (SCL) is the language and representation format specified by IEC 61850 for the configuration of electrical substation devices. This includes representation of modeled data and communication services specified by IEC 61850–7–X standard documents. The complete SCL representation and its details are specified in IEC 61850-6 standard document. It includes data representation for substation device entities; its associated functions represented as logical nodes, communication systems and capabilities. 
Complete representation of data in SCL enables the different devices of a substation to exchange SCL files and to achieve full interoperability.\nParts of SCL files.\nAn SCL file contains the following parts:\nTypes of SCL files.\nDepending on its purpose, an SCL file is classified into one of the following types:\nThe last two file types were introduced with Edition 2.", "Automation-Control": 0.7409992218, "Qwen2": "Yes"} {"id": "60415939", "revid": "42921602", "url": "https://en.wikipedia.org/wiki?curid=60415939", "title": "Hokuyo Automatic Co., Ltd.", "text": "Hokuyo Automatic Co., Ltd. is a global manufacturer of sensor and automation technology headquartered in Osaka, Japan. \nHokuyo is known for its 2D and 3D scanning laser range finders for use in AGV, UAV, and mobile robot applications. The company also develops photoelectric switches, optical data transceivers, automatic counters, and automatic doors, primarily for use in factory and logistics automation.", "Automation-Control": 0.9984506369, "Qwen2": "Yes"} {"id": "36106122", "revid": "11009441", "url": "https://en.wikipedia.org/wiki?curid=36106122", "title": "Discontinuous filament winding machine", "text": "A discontinuous filament winding machine (DFW machine or DW machine) is a machine for laying fiberglass filament windings over a cylindrical mould or mandrel bar using a carriage that travels along the axis of that mandrel. The mandrel is fixed on a mandrel stand and is rotated by an asynchronous motor. The carriage is the set-up that holds and winds the fiberglass on the rotating mandrel. 
The difference between the continuous and discontinuous filament winding machine is the area over which the filament winding is laid out.\nDW machines are normally used for manufacturing epoxy pipes.", "Automation-Control": 0.9930307269, "Qwen2": "Yes"} {"id": "36114320", "revid": "1167059163", "url": "https://en.wikipedia.org/wiki?curid=36114320", "title": "Device Description Language", "text": "Device Description Language (DDL) is the formal language describing the service and configuration of field devices for process and factory automation.\nBackground.\nCurrent field devices for process and factory automation have a number of configuration options to customize them to their individual use case. To this end, they are equipped with a digital communication interface (HART, PROFIBUS, Fieldbus Foundation). Different software tools provide the means to control and configure the devices. In the 1990s, the DDL was developed to remove the requirement to write a new software tool for each new device type. Software can, through the interpretation of a device description (DD), configure and control many different devices. Creating a description with the DDL requires less effort than writing an entire software tool.\nThe HART Communication Foundation, PROFIBUS and Fieldbus Foundation have merged their individual dialects of the DDL. The result became the \"Electronic Device Description Language (EDDL)\", an IEC standard (IEC 61804).\nThe harmonization and enhancement of the EDDL is being undertaken in the EDDL Cooperation Team (ECT). The ECT consists of the leadership of the Fieldbus Foundation, Profibus Nutzerorganisation (PNO), Hart Communication Foundation, OPC Foundation and the FDT Group.\nStructure of the DDL.\nThe DDL describes:\nSoftware.\nA device description (DD) can be created with a plain text editor. 
But as with any other programming or description language, authoring is error-prone, so special development tools may be used to create valid and standard-conforming EDDs.\nThe following tools assist the creation of EDDs:\nThe following control and configuration tools interpret the DDL:", "Automation-Control": 0.9722961783, "Qwen2": "Yes"} {"id": "6903579", "revid": "575347", "url": "https://en.wikipedia.org/wiki?curid=6903579", "title": "Performance supervision system", "text": "A performance supervision system (PSS) is a software system used to improve the performance of a process plant. Typical process plants include oil refineries, paper mills, and chemical plants.\nThe PSS gathers real-time data from the process control system, typically a distributed control system. Using this data, the PSS can calculate performance metrics for process equipment, controls, and operations.", "Automation-Control": 0.9768458009, "Qwen2": "Yes"} {"id": "42541439", "revid": "26785110", "url": "https://en.wikipedia.org/wiki?curid=42541439", "title": "Closed-loop manufacturing", "text": "Closed-loop manufacturing (abbreviated CLM) is a closed-loop process of manufacturing and measuring (checking) in the manufacturing machine. The pre-stage to this is inspection in manufacturing. The idea is to reduce costs and improve the quality and accuracy of the produced parts.\nGeneral procedure.\nClosed-loop manufacturing can be done in different ways depending on the manufacturing technique and on the accuracy requirements.\nSuitable manufacturing techniques.\nCLM is very suitable for electrical discharge machining. Milling and turning are also suitable for CLM.\nSuitable measuring techniques.\nIn machining, measurement techniques have to fulfill special needs. In particular, optical techniques have the advantage that they do not touch the part. 
The following techniques are used in practice:\nAdvantages / Disadvantages.\nThe advantages are:\nThe disadvantages are:", "Automation-Control": 0.9993751645, "Qwen2": "Yes"} {"id": "2027920", "revid": "44562786", "url": "https://en.wikipedia.org/wiki?curid=2027920", "title": "Blow molding", "text": "Blow molding (or moulding) is a manufacturing process for forming hollow plastic parts. It is also used for forming glass bottles or other hollow shapes.\nIn general, there are three main types of blow molding: extrusion blow molding, injection blow molding, and injection stretch blow molding.\nThe blow molding process begins with softening plastic by heating a preform or parison. The parison is a tube-like piece of plastic with a hole in one end through which compressed air can enter.\nThe plastic workpiece is then clamped into a mold and air is blown into it. The air pressure inflates the plastic, which conforms to the mold. Once the plastic has cooled and hardened, the mold opens and the part is ejected. Water channels within the mold assist cooling.\nHistory.\nThe process principle comes from the idea of glassblowing. Enoch Ferngren and William Kopitke produced a blow molding machine and sold it to Hartford Empire Company in 1938. This was the beginning of the commercial blow molding process. During the 1940s the variety and number of products were still very limited and therefore blow molding did not take off until later. Once the variety and production rates went up, the number of products created soon followed.\nThe technical mechanisms needed to produce hollow-bodied workpieces using the blowing technique were established very early on. Because glass is very breakable, after the introduction of plastic, plastic was used to replace glass in some cases. The first mass production of plastic bottles was done in America in 1939. 
Germany started using this technology a little later but is currently one of the leading manufacturers of blow molding machines.\nIn the United States soft drink industry, the number of plastic containers went from zero in 1977 to ten billion pieces in 1999. Today, an even greater number of products are blown and it is expected to keep increasing.\nFor amorphous metals, also known as bulk metallic glasses, blow molding has been recently demonstrated under pressures and temperatures comparable to plastic blow molding.\nTypologies.\nExtrusion blow molding.\nIn extrusion blow molding, plastic is melted and extruded into a hollow tube, forming a tube-like piece of plastic with a hole in one end for compressed gas, known as a parison. The parison is captured by closing it into a cooled metal mold. Air is blown into the parison, inflating it into the shape of the hollow bottle, container, or part. After the plastic has cooled, the mold is opened and the part is ejected.\nStraight extrusion blow molding is a way of propelling material forward similar to injection molding whereby an Archimedean screw turns, feeding plastic material down a heated tube. Once the plastic is melted, the screw stops rotating and moves linearly to push the melt out. With the accumulator method, an accumulator gathers melted plastic and after the previous mold has cooled and enough plastic has accumulated, a rod pushes the melted plastic and forms the parison. In this case the screw may turn continuously or intermittently. With continuous extrusion the weight of the parison drags the parison and makes calibrating the wall thickness difficult. 
The accumulator head or reciprocating screw methods use hydraulic systems to push the parison out quickly reducing the effect of the weight and allowing precise control over the wall thickness by adjusting the die gap with a parison programming device.\nContinuous extrusion equipment includes rotary wheel blow molding systems and shuttle machinery, while intermittent extrusion machinery includes reciprocating screw machinery and accumulator head machinery.\nSpin trimming.\nContainers such as jars often have an excess of material due to the molding process. This is trimmed off by spinning a cutting blade around the container which separates the material. The excess plastic is then recycled to create new moldings. Spin Trimmers are used on a number of materials, such as PVC, HDPE and PE+LDPE. Different types of the materials have their own physical characteristics affecting trimming. For example, moldings produced from amorphous materials are much more difficult to trim than crystalline materials. Titanium nitride-coated blades are often used rather than standard steel to increase life by a factor of 30 times.\nInjection blow molding.\nThe process of injection blow molding (IBM) is used for the production of hollow glass and plastic objects in large quantities. In the IBM process, the polymer is injection molded onto a core pin; then the core pin is rotated to a blow molding station to be inflated and cooled. This is the least-used of the three blow molding processes, and is typically used to make small medical and single serve bottles. The process is divided into three steps: injection, blowing and ejection.\nThe injection blow molding machine is based on an extruder barrel and screw assembly which melts the polymer. The molten polymer is fed into a hot runner manifold where it is injected through nozzles into a heated cavity and core pin. The cavity mold forms the external shape and is clamped around a core rod which forms the internal shape of the preform. 
The preform consists of a fully formed bottle/jar neck with a thick tube of polymer attached, which will form the body, similar in appearance to a test tube with a threaded neck.\nThe preform mold opens and the core rod is rotated and clamped into the hollow, chilled blow mold. The end of the core rod opens and allows compressed air into the preform, which inflates it to the finished article shape.\nAfter a cooling period the blow mold opens and the core rod is rotated to the ejection position. The finished article is stripped off the core rod and as an option can be leak-tested prior to packing. The preform and blow mold can have many cavities, typically three to sixteen depending on the article size and the required output. There are three sets of core rods, which allow concurrent preform injection, blow molding and ejection.\nInjection stretch blow molding.\nInjection stretch blow molding has two main methods: the single-stage and the two-stage process. The single-stage process is then again broken down into 3-station and 4-station machines.\nSingle-Stage.\nIn the single-stage process, both preform manufacture and bottle blowing are performed in the same machine. The older 4-station method of injection, reheat, stretch blow and ejection is more costly than the 3-station machine, which eliminates the reheat stage and uses latent heat in the preform, thus saving reheating energy costs and giving a 25% reduction in tooling. The process explained: imagine the molecules as small round balls; when packed together they have large air gaps and small surface contact. By first stretching the molecules vertically and then blowing to stretch them horizontally, the biaxial stretching makes the molecules into a cross shape. These \"crosses\" fit together leaving little space as more surface area is contacted, thus making the material less porous and increasing barrier strength against permeation. 
This process also increases the strength, making it ideal for bottles that will be filled with carbonated drinks.\nTwo-stage.\nIn the two-stage injection stretch blow molding process, the plastic is first molded into a \"preform\" using the injection molding process. These preforms are produced with the necks of the bottles, including threads (the \"finish\") on one end. These preforms are packaged, and fed later (after cooling) into a reheat stretch blow molding machine. In the ISBM process, the preforms are heated (typically using infrared heaters) above their glass transition temperature, then blown using high-pressure air into bottles using metal blow molds. The preform is always stretched with a core rod as part of the process.", "Automation-Control": 0.8416734934, "Qwen2": "Yes"} {"id": "831689", "revid": "10951369", "url": "https://en.wikipedia.org/wiki?curid=831689", "title": "Pontryagin's maximum principle", "text": "Pontryagin's maximum principle is used in optimal control theory to find the best possible control for taking a dynamical system from one state to another, especially in the presence of constraints on the state or input controls. It states that it is necessary for any optimal control along with the optimal state trajectory to solve the so-called Hamiltonian system, which is a two-point boundary value problem, plus a maximum condition of the control Hamiltonian. These necessary conditions become sufficient under certain convexity conditions on the objective and constraint functions.\nThe maximum principle was formulated in 1956 by the Russian mathematician Lev Pontryagin and his students, and its initial application was to the maximization of the terminal speed of a rocket. The result was derived using ideas from the classical calculus of variations.
After a slight perturbation of the optimal control, one considers the first-order term of a Taylor expansion with respect to the perturbation; sending the perturbation to zero leads to a variational inequality from which the maximum principle follows.\nWidely regarded as a milestone in optimal control theory, the maximum principle owes its significance to the fact that maximizing the Hamiltonian is much easier than solving the original infinite-dimensional control problem; rather than maximizing over a function space, the problem is converted to a pointwise optimization. A similar logic leads to Bellman's principle of optimality, a related approach to optimal control problems which states that the optimal trajectory remains optimal at intermediate points in time. The resulting Hamilton–Jacobi–Bellman equation provides a necessary and sufficient condition for an optimum, and admits a straightforward extension to stochastic optimal control problems, whereas the maximum principle does not. However, in contrast to the Hamilton–Jacobi–Bellman equation, which needs to hold over the entire state space to be valid, Pontryagin's Maximum Principle is potentially more computationally efficient in that the conditions which it specifies only need to hold over a particular trajectory.\nNotation.\nFor a set formula_1 and functions formula_2, formula_3, formula_4, and formula_5, we use the following notation:\nFormal statement of necessary conditions for minimization problem.\nHere the necessary conditions are shown for minimization of a functional. Take formula_11 to be the state of the dynamical system with input formula_12, such that\nwhere formula_1 is the set of admissible controls and formula_15 is the terminal (i.e., final) time of the system.
The control formula_16 must be chosen for all formula_17 to minimize the objective functional formula_18 which is defined by the application and can be abstracted as\nThe constraints on the system dynamics can be adjoined to the Lagrangian formula_20 by introducing time-varying Lagrange multiplier vector formula_21, whose elements are called the costates of the system. This motivates the construction of the Hamiltonian formula_22 defined for all formula_17 by:\nwhere formula_25 is the transpose of formula_21.\nPontryagin's minimum principle states that the optimal state trajectory formula_27, optimal control formula_28, and corresponding Lagrange multiplier vector formula_29 must minimize the Hamiltonian formula_22 so that\nfor all time formula_17 and for all permissible control inputs formula_16. Additionally, the costate equation and its terminal conditions\nmust be satisfied. If the final state formula_33 is not fixed (i.e., its differential variation is not zero), it must also be that\nThese four conditions in (1)-(4) are the necessary conditions for an optimal control. Note that (4) only applies when formula_33 is free. If it is fixed, then this condition is not necessary for an optimum.", "Automation-Control": 0.9998818636, "Qwen2": "Yes"} {"id": "17974229", "revid": "8809589", "url": "https://en.wikipedia.org/wiki?curid=17974229", "title": "Comparison of agent-based modeling software", "text": "In the last few years, the agent-based modeling (ABM) community has developed several practical agent based modeling toolkits that enable individuals to develop agent-based applications. More and more such toolkits are coming into existence, and each toolkit has a variety of characteristics. Several individuals have made attempts to compare toolkits to each other (see references). 
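The four necessary conditions for Pontryagin's principle stated above can be checked on a toy problem. The sketch below is a hypothetical example, not from the source: minimize J = ∫₀ᵀ u²/2 dt subject to ẋ = u, x(0) = 1, x(T) = 0. The Hamiltonian is H = u²/2 + λu; the stationarity condition gives u* = −λ, the costate equation gives λ̇ = −∂H/∂x = 0 (so λ is constant), and the boundary conditions fix λ = 1/T.

```python
# Hypothetical worked example of Pontryagin's principle (not from the article):
# minimize J = ∫_0^T u^2/2 dt  subject to  x' = u, x(0) = 1, x(T) = 0.
# Hamiltonian: H(x, u, λ) = u^2/2 + λ u.
# Stationarity:   ∂H/∂u = u + λ = 0   →  u* = -λ.
# Costate eq.:    λ' = -∂H/∂x = 0     →  λ constant.
# Boundary cond.: x(T) = 1 - λT = 0   →  λ = 1/T, so u* = -1/T.

def pmp_solution(T, n=1000):
    """Return the PMP-optimal control and the forward-integrated final state."""
    lam = 1.0 / T          # constant costate fixed by the boundary condition
    u_star = -lam          # pointwise minimizer of the Hamiltonian in u
    dt = T / n
    x = 1.0
    for _ in range(n):     # forward-integrate x' = u* with Euler steps
        x += u_star * dt
    return u_star, x
```

Following the optimal control drives the state from x(0) = 1 to x(T) ≈ 0, confirming that the pointwise Hamiltonian minimization reproduces the boundary-value solution.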
Below is a chart intended to capture many of the features that are important to ABM toolkit users.", "Automation-Control": 0.7408652306, "Qwen2": "Yes"} {"id": "8756788", "revid": "27138126", "url": "https://en.wikipedia.org/wiki?curid=8756788", "title": "One-pass algorithm", "text": "In computing, a one-pass algorithm or single-pass algorithm is a streaming algorithm which reads its input exactly once. It does so by processing items in order, without unbounded buffering; it reads a block into an input buffer, processes it, and moves the result into an output buffer for each step in the process. A one-pass algorithm generally requires \"O\"(\"n\") (see 'big O' notation) time and less than \"O\"(\"n\") storage (typically \"O\"(1)), where \"n\" is the size of the input. An example of a one-pass algorithm is the Sondik partially observable Markov decision process.\nExample problems solvable by one-pass algorithms.\nGiven any list as an input:\nGiven a list of numbers:\nGiven a list of symbols from an alphabet of \"k\" symbols, given in advance.\nExample problems not solvable by one-pass algorithms.\nGiven any list as an input:\nGiven a list of numbers:\nThe two-pass algorithms above are still streaming algorithms but not one-pass algorithms.", "Automation-Control": 0.9960713387, "Qwen2": "Yes"} {"id": "8771473", "revid": "6727347", "url": "https://en.wikipedia.org/wiki?curid=8771473", "title": "Kernel (statistics)", "text": "The term kernel is used in statistical analysis to refer to a window function. The term \"kernel\" has several distinct meanings in different branches of statistics.\nBayesian statistics.\nIn statistics, especially in Bayesian statistics, the kernel of a probability density function (pdf) or probability mass function (pmf) is the form of the pdf or pmf in which any factors that are not functions of any of the variables in the domain are omitted. Note that such factors may well be functions of the parameters of the pdf or pmf. 
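The one-pass constraint described in the streaming-algorithm passage above (read the input exactly once, using O(1) extra storage) can be sketched as follows; the function name and statistics chosen are illustrative, not from the article:

```python
def one_pass_stats(stream):
    """Compute count, mean, min, and max of a stream in a single pass.

    Only a constant number of accumulators is kept, so storage is O(1)
    regardless of input size, and each item is read exactly once.
    """
    count, mean = 0, 0.0
    lo = hi = None
    for x in stream:
        count += 1
        mean += (x - mean) / count   # incremental (Welford-style) running mean
        lo = x if lo is None or x < lo else lo
        hi = x if hi is None or x > hi else hi
    return count, mean, lo, hi
```

Because the input may be a generator that can only be consumed once, problems such as finding the median (which requires either buffering or a second pass) cannot be solved this way, matching the article's distinction between one-pass and two-pass streaming algorithms.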
These factors form part of the normalization factor of the probability distribution, and are unnecessary in many situations. For example, in pseudo-random number sampling, most sampling algorithms ignore the normalization factor. In addition, in Bayesian analysis of conjugate prior distributions, the normalization factors are generally ignored during the calculations, and only the kernel is considered. At the end, the form of the kernel is examined, and if it matches a known distribution, the normalization factor can be reinstated. Otherwise, it may be unnecessary (for example, if the distribution only needs to be sampled from).\nFor many distributions, the kernel can be written in closed form, but not the normalization constant.\nAn example is the normal distribution. Its probability density function is\nand the associated kernel is\nNote that the factor in front of the exponential has been omitted, even though it contains the parameter formula_3, because it is not a function of the domain variable formula_4.\nPattern analysis.\nThe kernel of a reproducing kernel Hilbert space is used in the suite of techniques known as kernel methods to perform tasks such as statistical classification, regression analysis, and cluster analysis on data in an implicit space. This usage is particularly common in machine learning.\nNonparametric statistics.\nIn nonparametric statistics, a kernel is a weighting function used in non-parametric estimation techniques. Kernels are used in kernel density estimation to estimate random variables' density functions, or in kernel regression to estimate the conditional expectation of a random variable. Kernels are also used in time-series, in the use of the periodogram to estimate the spectral density where they are known as window functions.
An additional use is in the estimation of a time-varying intensity for a point process where window functions (kernels) are convolved with time-series data.\nCommonly, kernel widths must also be specified when running a non-parametric estimation.\nDefinition.\nA kernel is a non-negative real-valued integrable function \"K.\" For most applications, it is desirable to define the function to satisfy two additional requirements:\nThe first requirement ensures that the method of kernel density estimation results in a probability density function. The second requirement ensures that the average of the corresponding distribution is equal to that of the sample used.\nIf \"K\" is a kernel, then so is the function \"K\"* defined by \"K\"*(\"u\") = λ\"K\"(λ\"u\"), where λ > 0. This can be used to select a scale that is appropriate for the data.\nKernel functions in common use.\nSeveral types of kernel functions are commonly used: uniform, triangle, Epanechnikov, quartic (biweight), tricube, triweight, Gaussian, quadratic and cosine.\nIn the table below, if formula_7 is given with a bounded support, then formula_8 for values of \"u\" lying outside the support.\n(Table of kernel functions not reproduced here; it lists each kernel's formula and support, including the Gaussian, cosine, logistic, sigmoid and Silverman kernels.)", "Automation-Control": 0.7574649453, "Qwen2": "Yes"} {"id": "53737832", "revid": "45711568", "url": "https://en.wikipedia.org/wiki?curid=53737832", "title": "Thermic fluid heater", "text": "A thermic fluid heater is industrial heating equipment, used where only heat transfer is desired instead of pressure. In this equipment, a thermic fluid is circulated through the entire system to transfer heat to the desired processes. A combustion process heats the thermic fluid, and the fluid carries this heat and rejects it to the desired process fluid to complete the process.
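The kernel definition given above (non-negative, integrating to one, with zero mean) and the rescaling rule K*(u) = λK(λu) can be made concrete with the Epanechnikov kernel and a simple density estimate. The function names below are illustrative, not from the article:

```python
def epanechnikov(u):
    """Epanechnikov kernel: K(u) = 3/4 (1 - u^2) for |u| <= 1, else 0."""
    return 0.75 * (1.0 - u * u) if abs(u) <= 1.0 else 0.0

def kde(x, data, h):
    """Kernel density estimate at x: (1/nh) * sum_i K((x - x_i)/h)."""
    return sum(epanechnikov((x - xi) / h) for xi in data) / (len(data) * h)

def rescaled(K, lam):
    """K*(u) = lam * K(lam * u): the scale-selection rule quoted above."""
    return lambda u: lam * K(lam * u)
```

The first requirement (integration to one) is what makes the resulting kernel density estimate itself a probability density, and the rescaling preserves it, since the factor λ exactly compensates for the change of variable.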
After rejecting its heat, the fluid returns to the thermic fluid heater, and the cycle continues.", "Automation-Control": 0.6878314018, "Qwen2": "Yes"} {"id": "3500774", "revid": "7611264", "url": "https://en.wikipedia.org/wiki?curid=3500774", "title": "SCSI check condition", "text": "In computer terminology, a Check Condition occurs when a SCSI device needs to report an error.\nSCSI communication takes place between an initiator and a target. The initiator sends a command to the target, which then responds. SCSI commands are sent in a Command Descriptor Block (CDB). At the end of the command, the target returns a Status Code byte which is usually 00h for success, 02h for a Check Condition (error), or 08h for busy.\nWhen the target returns a Check Condition in response to a command, the initiator usually then issues a SCSI Request Sense command in order to obtain more information. During the time between the reporting of a Check Condition and the issuing of a Request Sense command, the target is in a special state called a Contingent Allegiance Condition.", "Automation-Control": 0.8173232675, "Qwen2": "Yes"} {"id": "3501298", "revid": "7328338", "url": "https://en.wikipedia.org/wiki?curid=3501298", "title": "SCSI contingent allegiance condition", "text": "On a computer SCSI connection, a contingent allegiance condition occurs while a SCSI device reports an error.\nSCSI communication takes place between an initiator and a target. The initiator sends a command to the target, which then responds. At the end of the command, the target returns a Status Code byte which is usually 00h for \"success\", 02h for a \"Check Condition\" (error), or 08h for \"busy\".\nWhen the target returns a Check Condition in response to a command, the initiator usually then issues a SCSI Request Sense command in order to obtain more information.
During the time between the reporting of a Check Condition and the issuing of the Request Sense command, the target is in the special state called the \"contingent allegiance condition\".\nDetails.\nWhile a target is in a contingent allegiance condition, it must retain the sense information that relates to the error that caused it to enter that condition. This can be a complex issue in configurations which contain more than one initiator. A well-designed target may be able to maintain sense data for one initiator while servicing commands from another initiator. If a check condition then needs to be reported to a second or third initiator, this may become prohibitively difficult. The SCSI definition of the contingent allegiance condition allows the target to use the \"busy\" response to incoming commands and to suspend servicing of any recent commands that are still in its execution queue.\nThe events that can cause a target to enter the contingent allegiance condition are\nThe events that can cause a target to exit the contingent allegiance condition are\nExtended contingent allegiance condition.\nWhen the target needs to perform a long error-recovery procedure (typically one that lasts more than one second) it can enter the \"extended contingent allegiance condition\". This may be necessary in high-performance systems or in cases where there is a danger that the initiator may reset the target after a short timeout interval, thereby aborting the error-recovery procedure.
As in the contingent allegiance condition, the target is allowed to use the \"busy\" response to incoming commands and to suspend servicing of any recent commands that are still in its execution queue.\nWhen a target enters the extended contingent allegiance condition it will send an Initiate Recovery message to the initiator.\nThe SCSI events that can cause a target to exit the extended contingent allegiance condition are", "Automation-Control": 0.9929879308, "Qwen2": "Yes"} {"id": "3510287", "revid": "27015025", "url": "https://en.wikipedia.org/wiki?curid=3510287", "title": "IEC 61499", "text": "The international standard IEC 61499, addressing the topic of function blocks for industrial process measurement and control systems, was initially published by the International Electrotechnical Commission (IEC) in 2005. The specification of IEC 61499 defines a generic model for distributed control systems and is based on the IEC 61131 standard. The concepts of IEC 61499 are also explained by Lewis and Zoitl as well as Vyatkin.\nPart 1: Architecture.\nIEC 61499-1 defines the architecture for distributed systems. In IEC 61499 the cyclic execution model of IEC 61131 is replaced by an event driven execution model. The event driven execution model allows an explicit specification of the execution order of function blocks. If necessary, periodically executed applications can be implemented by using the E_CYCLE function block for the generation of periodic events as described in Annex A of IEC 61499-1.\nIEC 61499 enables an \"application-centric\" design, in which one or more applications, defined by networks of interconnected function blocks, are created for the whole system and subsequently distributed to the available devices. All devices within a system are described within a \"device model\". The topology of the system is reflected by the \"system model\". The distribution of an application is described within the \"mapping model\". 
Therefore, applications of a system are distributable but maintained together.\nIEC 61499 is strongly influenced by Erlang, with its shared-nothing model and distribution transparency.\nLike IEC 61131-3 function blocks, IEC 61499 function block types specify both an interface and an implementation. In contrast to IEC 61131-3, an IEC 61499 interface contains \"event\" inputs and outputs in addition to \"data\" inputs and outputs. Events can be associated with data inputs and outputs by \"WITH constraints\". IEC 61499 defines several function block types, all of which can contain a behavior description in terms of service sequences:\nTo maintain the applications on a device, IEC 61499 provides a \"management model\". The \"device manager\" maintains the lifecycle of any resource and manages the communication with the software tools (e.g., configuration tool, agent) via \"management commands\". Through the interface of the software tool and the management commands, online reconfiguration of IEC 61499 applications can be realized.\nPart 2: Software tool requirements.\nIEC 61499-2 defines requirements for software tools to be compliant to IEC 61499. This includes requirements for the representation and the portability of IEC 61499 elements as well as a DTD format to exchange IEC 61499 elements between different software tools.\nThere are already some IEC 61499 compliant software tools available. Among these are commercial software tools, open-source software tools, and academic and research developments. Usually, an IEC 61499 compliant runtime environment and an IEC 61499 compliant development environment are needed.\nPart 3: Tutorial Information (2008 withdrawn).\nIEC 61499-3 was related to an early Publicly Available Specification (PAS) version of the standard and was withdrawn in 2008.
This part answered FAQs related to the IEC 61499 standard and described the use of IEC 61499 elements with examples to solve common challenges during the engineering of automation systems.\nAmong other examples, IEC 61499-3 described the use of SIFBs as communication function blocks for remote access to real-time data and parameters of function blocks; the use of adapter interfaces to implement object-oriented concepts; initialization algorithms in function block networks; and the implementation of ECCs for a simplified motor control of hypothetical VCRs.\nAdditionally, the impact of the mapping on the communication function blocks was explained, as well as the device management by management applications and its function blocks, and the principle of the device manager function block (DEV_MGR).\nPart 4: Rules for compliance profiles.\nIEC 61499-4 describes the rules that a system, device or software tool must follow to be compliant to IEC 61499. These rules are related to \"interoperability, portability\" and \"configuration\". Two devices are \"interoperable\" if they can work together to provide the functionality specified by a system configuration. Applications compliant to IEC 61499 have to be \"portable\", which means that they can be exchanged between software tools of different vendors considering the requirements for software tools described within IEC 61499-2. Devices of any vendor have to be \"configurable\" by any IEC 61499 compliant software tool.\nBesides these general rules, IEC 61499-4 also defines the structure of \"compliance profiles\". A compliance profile describes how a system conforms to the rules of the IEC 61499 standard. For example, the configurability of a device by a software tool is determined by the supported management commands.
The XML exchange format, which determines the portability of IEC 61499 compliant applications, is defined within part 2 and is completed by the compliance profile, for example by declaring the supported file name extensions for the exchange of software library elements.\nThe \"interoperability\" between devices of different vendors is defined by the layers of the OSI model. Status outputs, IP addresses and port numbers, as well as the data encoding of function blocks like PUBLISH/SUBSCRIBE and CLIENT/SERVER, which are used for the communication between devices, also have to be considered. HOLOBLOC, Inc. defines the \"IEC 61499 compliance profile for feasibility demonstrations\", which is for example supported by the IEC 61499 compliant software tools FBDK, 4diac IDE, and nxtSTUDIO.", "Automation-Control": 0.7503961325, "Qwen2": "Yes"} {"id": "35570864", "revid": "14965160", "url": "https://en.wikipedia.org/wiki?curid=35570864", "title": "Measurement-assisted assembly", "text": "Measurement-assisted assembly (MAA) is any method of assembly in which measurements are used to guide assembly processes. Such processes include:\nMeasurement-assisted assembly is typically used for large structures such as aircraft and steel fabrications. It can be used to improve production rates, reduce reworking and increase flexibility for processes where manual reworking during assembly is required to maintain assembly form and component interface conditions.
This type of approach generally offers no advantages where part-to-part interchangeability can already be achieved.", "Automation-Control": 0.9885446429, "Qwen2": "Yes"} {"id": "35575382", "revid": "20483999", "url": "https://en.wikipedia.org/wiki?curid=35575382", "title": "Schmidt–Kalman filter", "text": "The Schmidt–Kalman Filter is a modification of the Kalman filter for reducing the dimensionality of the state estimate, while still considering the effects of the additional state in the calculation of the covariance matrix and the Kalman gains. A common application is to account for the effects of nuisance parameters such as sensor biases without increasing the dimensionality of the state estimate. This ensures that the covariance matrix will accurately represent the distribution of the errors.\nThe primary advantage of utilizing the Schmidt–Kalman filter instead of increasing the dimensionality of the state space is the reduction in computational complexity. This can enable the use of filtering in real-time systems. Another usage of Schmidt–Kalman is when residual biases are unobservable; that is, the effect of the bias cannot be separated out from the measurement. In this case, Schmidt–Kalman is a robust way to not try and estimate the value of the bias, but only keep track of the effect of the bias on the true error distribution.\nFor use in non-linear systems, the observation and state transition models may be linearized around the current mean and covariance estimate in a method analogous to the extended Kalman filter.\nNaming and historical development.\nStanley F. 
Schmidt developed the Schmidt–Kalman filter as a method to account for unobservable biases while maintaining the low dimensionality required for implementation in real-time systems.", "Automation-Control": 0.8860444427, "Qwen2": "Yes"} {"id": "16336160", "revid": "7903804", "url": "https://en.wikipedia.org/wiki?curid=16336160", "title": "Distributed algorithmic mechanism design", "text": "Distributed algorithmic mechanism design (DAMD) is an extension of algorithmic mechanism design.\nDAMD differs from algorithmic mechanism design since the algorithm is computed in a distributed manner rather than by a central authority. This greatly improves computation time since the burden is shared by all agents within a network.\nOne major obstacle in DAMD is ensuring that agents reveal the true costs or preferences related to a given scenario. Often these agents would rather lie in order to improve their own utility.\nDAMD is full of new challenges since one can no longer assume an obedient networking and mechanism infrastructure where rational players control the message paths and mechanism computation.\nGame theoretic model.\nGame theory and distributed computing both deal with a system with many agents, in which the agents may possibly pursue different goals. However, they have different focuses. For instance, one of the concerns of distributed computing is to prove the correctness of algorithms that tolerate faulty agents and agents performing actions concurrently. On the other hand, in game theory the focus is on devising a strategy which leads to an equilibrium in the system.\nNash equilibrium.\nNash equilibrium is the most commonly-used notion of equilibrium in game theory. However, Nash equilibrium does not deal with faulty or unexpected behavior.
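The "consider" update described in the Schmidt–Kalman passage above can be illustrated for a scalar state x with a single considered bias b entering a measurement z = x + b + v. This is a hypothetical minimal sketch (scalar setup and function name are illustrative, not from the article): the bias estimate is never updated and its covariance Pbb is left untouched, but its effect is carried through the innovation variance, the gain, and the cross-covariance:

```python
def schmidt_kalman_update(x, Pxx, Pxb, Pbb, z, R):
    """One Schmidt-Kalman measurement update for z = x + b + v, Var(v) = R.

    The bias b is a 'considered' state: its (zero-mean) estimate is never
    updated, but its covariance Pbb and cross-covariance Pxb still shape
    the gain and the updated covariance, so the error statistics stay honest.
    """
    # Innovation variance includes the considered bias terms (H = [1, 1]).
    S = Pxx + 2.0 * Pxb + Pbb + R
    Kx = (Pxx + Pxb) / S          # gain for the estimated state only; Kb = 0
    x_new = x + Kx * (z - x)      # predicted measurement is x + E[b] = x
    # Joseph-form covariance update with the reduced gain [Kx, 0]:
    Pxx_new = (1 - Kx) ** 2 * Pxx - 2 * (1 - Kx) * Kx * Pxb + Kx ** 2 * (Pbb + R)
    Pxb_new = (1 - Kx) * Pxb - Kx * Pbb
    return x_new, Pxx_new, Pxb_new, Pbb   # Pbb untouched by design
```

Compared with simply ignoring the bias, the innovation variance S is inflated by the bias terms, so the filter is less aggressive and its reported covariance remains consistent with the true error distribution, which is the point made in the article.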
A protocol that reaches Nash equilibrium is guaranteed to execute correctly in the face of rational agents, with no agent being able to improve its utility by deviating from the protocol.\nSolution preference.\nThere is no trusted center as there is in AMD. Thus, mechanisms must be implemented by the agents themselves. The solution preference assumption requires that each agent prefers any outcome to no outcome at all: thus, agents have no incentive to disagree on an outcome or cause the algorithm to fail. In other words, as Afek et al. said, \"agents cannot gain if the algorithm fails\". As a result, though agents have preferences, they have no incentive to fail the algorithm.\nTruthfulness.\nA mechanism is considered to be truthful if the agents gain nothing by lying about their or other agents' values.\nA good example would be a leader election algorithm that selects a computation server within a network. The algorithm specifies that agents should send their total computational power to each other, after which the most powerful agent is chosen as the leader to complete the task. In this algorithm agents may lie about their true computation power because they are potentially in danger of being tasked with CPU-intensive jobs which will reduce their power to complete local jobs. This can be overcome with the help of truthful mechanisms which, without any a priori knowledge of the existing data and inputs of each agent, cause each agent to respond truthfully to requests.\nA well-known truthful mechanism in game theory is the Vickrey auction.\nClassic distributed computing problems.\nLeader election (completely connected network, synchronous case).\nLeader election is a fundamental problem in distributed computing and there are numerous protocols to solve this problem. System agents are assumed to be rational, and therefore prefer having a leader to not having one.
The agents may also have different preferences regarding who becomes the leader (an agent may prefer that he himself becomes the leader). Standard protocols may choose leaders based on the lowest or highest ID of system agents. However, since agents have an incentive to lie about their ID in order to improve their utility, such protocols are rendered useless in the setting of algorithmic mechanism design.\nA protocol for leader election in the presence of rational agents has been introduced by Ittai et al.:\nThis protocol correctly elects a leader while reaching equilibrium and is truthful since no agent can benefit by lying about its input.", "Automation-Control": 0.9042343497, "Qwen2": "Yes"} {"id": "5926426", "revid": "46317736", "url": "https://en.wikipedia.org/wiki?curid=5926426", "title": "Newton Game Dynamics", "text": "Newton Game Dynamics is an open-source physics engine for realistically simulating rigid bodies in games and other real-time applications. Its solver is deterministic and not based on traditional LCP or iterative methods.\nNewton Game Dynamics is actively developed by Julio Jerez.
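The non-truthful leader election example described above (agents announce a value and the maximum claim wins) can be sketched in miniature. This toy model is illustrative only; it is not the protocol of Ittai et al., and it exists precisely to show why the naive mechanism fails:

```python
def elect_leader(claims):
    """Naive leader election: every agent broadcasts a claimed value and the
    agent with the highest claim wins.

    This mechanism is NOT truthful: an agent that does not want the
    CPU-intensive leader role can simply under-report its power and
    change the outcome, as the second assertion below demonstrates.
    """
    return max(claims, key=claims.get)
```

A truthful mechanism would have to make honest reporting a best response regardless of the other agents' claims, which this maximum rule does not.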
A new version that will take advantage of multi-core CPUs and GPUs is currently being developed.\nGames that used Newton.\nThis is a select list of games using Newton Game Dynamics.\nEngines which incorporated Newton.\nA list of game engines using Newton Game Dynamics:", "Automation-Control": 0.9129011631, "Qwen2": "Yes"} {"id": "35713574", "revid": "1166911546", "url": "https://en.wikipedia.org/wiki?curid=35713574", "title": "S Voice", "text": "S Voice is a discontinued intelligent personal assistant and knowledge navigator which is only available as a built-in application for the Samsung Galaxy S III, S III Mini (including NFC), S4, S4 Mini, S4 Active, S5, S5 Mini, S II Plus, Note II, Note 3, Note 4, Note 10.1, Note 8.0, Stellar, Mega, Grand, Avant, Core, Ace 3, Tab 3 7.0, Tab 3 8.0, Samsung Galaxy Express 2, Tab 3 10.1, Galaxy Camera, and other 2013 or later Samsung Android devices. The application uses a natural language user interface to answer questions, make recommendations, and perform actions by delegating requests to a set of Web services. S Voice is based on the Vlingo personal assistant. For the Galaxy S5 and later Samsung Galaxy devices, S Voice runs on Nuance instead of Vlingo.\nSome of the capabilities of S Voice include making appointments, opening apps, setting alarms, updating social network websites such as Facebook or Twitter, and navigation. S Voice also offers multitasking, as well as automatic activation features, for example, when the car engine is started.\nIn a disclaimer that pops up on first opening S Voice, Samsung states that the app is provided by a third party, which it does not name.\nIn the Galaxy S8 and S8+, Bixby was announced as a major update and replacement for the S Voice of prior phones.
It was discontinued on 1 June 2020.", "Automation-Control": 1.0000064373, "Qwen2": "Yes"} {"id": "35718464", "revid": "16809467", "url": "https://en.wikipedia.org/wiki?curid=35718464", "title": "Active disturbance rejection control", "text": "Active disturbance rejection control (or ADRC) is a model-free control technique used for designing controllers for systems with unknown dynamics and external disturbances. This approach only necessitates an estimated representation of the system's behavior to design controllers that effectively counteract disturbances without causing any overshooting.\nADRC has been successfully used as an alternative to PID control in many applications, such as the control of permanent magnet synchronous motors, thermal power plants and robotics. In particular, the precise control of brushless motors for joint motion is vital in high-speed industrial robot applications. However, flexible robot structures can introduce unwanted vibrations, challenging PID controllers. ADRC offers a solution by real-time disturbance estimation and compensation, without needing a detailed model.\nDisturbance rejection.\nTo achieve robustness, ADRC is based on extension of the system model with an additional and fictitious state variable representing everything that the user does not include in the mathematical description of the base system to be controlled. This virtual state (sum of unknown part of model dynamics and external disturbances, usually denoted as a \"total disturbance\") is estimated online with an extended state observer and used in the control signal in order to decouple the system from the actual perturbation acting on the plant. This disturbance rejection feature allows users to treat the considered system with a simpler model insofar as the negative effects of modeling uncertainty are compensated in real time. 
As a result, the operator does not need a precise analytical description of the base system; one can model the unknown parts of the dynamics as internal disturbances in the base system.\nControl architecture.\nThe ADRC consists of three main components: a tracking differentiator, a non-linear state error feedback and an extended state observer. The global convergence of ADRC has been proved for a class of general multiple-input multiple-output systems.\nThe following architecture is known as the output-form structure of ADRC:\nThere also exists a special form of ADRC, known as the error-form structure, which is used for comparing ADRC with classical controllers such as PID.\nTracking differentiator.\nThe primary objective of the tracking differentiator is to follow the transient profile of the reference signal, addressing the issue of sudden changes in the set point that occur in the conventional PID controller. Moreover, the tracking differentiator also mitigates the possible noise amplification that affects the derivative term of the PID controller by using numerical integration instead of numerical differentiation.\nExtended state observer.\nAn extended state observer (ESO) keeps track of the system's states as well as external disturbances and unknown model perturbations. As a result, ADRC does not rely on any particular mathematical model of disturbance. Nonlinear ESO (NESO) is a subtype of general ESO that uses a nonlinear discontinuous function of the output estimate error. NESOs are comparable to sliding mode observers in that both use a nonlinear function of output estimation error (rather than a linear function as in linear, high gain, and extended observers). A sliding mode observer's discontinuity is at the origin, but the NESO's discontinuity is at a preset error threshold.\nNonlinear state error feedback.\nThe intuitiveness of PID control can be attributed to the simplicity of its error feedback.
ADRC extends the PID by employing a nonlinear state error feedback, and because of this, seminal works referred to ADRC as nonlinear PID. Weighted state errors can also be used as feedback in a linearization system.", "Automation-Control": 0.988363862, "Qwen2": "Yes"} {"id": "17247558", "revid": "28481209", "url": "https://en.wikipedia.org/wiki?curid=17247558", "title": "D*", "text": "D* (pronounced \"D star\") is any one of the following three related incremental search algorithms: the original D*, Focused D*, and D* Lite.\nAll three search algorithms solve the same assumption-based path planning problems, including planning with the freespace assumption, where a robot has to navigate to given goal coordinates in unknown terrain. The robot makes assumptions about the unknown part of the terrain (for example: that it contains no obstacles) and finds a shortest path from its current coordinates to the goal coordinates under these assumptions. The robot then follows the path. When it observes new map information (such as previously unknown obstacles), it adds the information to its map and, if necessary, replans a new shortest path from its current coordinates to the given goal coordinates. It repeats the process until it reaches the goal coordinates or determines that the goal coordinates cannot be reached. When traversing unknown terrain, new obstacles may be discovered frequently, so this replanning needs to be fast. Incremental (heuristic) search algorithms speed up searches for sequences of similar search problems by using experience with the previous problems to speed up the search for the current one. Assuming the goal coordinates do not change, all three search algorithms are more efficient than repeated A* searches.\nD* and its variants have been widely used for mobile robot and autonomous vehicle navigation. Current systems are typically based on D* Lite rather than the original D* or Focused D*. In fact, even Stentz's lab uses D* Lite rather than D* in some implementations.
Such navigation systems include a prototype system tested on the Mars rovers Opportunity and Spirit and the navigation system of the winning entry in the DARPA Urban Challenge, both developed at Carnegie Mellon University.\nThe original D* was introduced by Anthony Stentz in 1994. The name D* comes from the term \"Dynamic A*\", because the algorithm behaves like A* except that the arc costs can change as the algorithm runs.\nOperation.\nThe basic operation of D* is outlined below.\nLike Dijkstra's algorithm and A*, D* maintains a list of nodes to be evaluated, known as the \"OPEN list\". Nodes are marked as having one of several states:\nExpansion.\nThe algorithm works by iteratively selecting a node from the OPEN list and evaluating it. It then propagates the node's changes to all of the neighboring nodes and places them on the OPEN list. This propagation process is termed \"expansion\". In contrast to canonical A*, which follows the path from start to finish, D* begins by searching backwards from the goal node. This means that the algorithm is actually computing the A* optimal path for every possible start node. Each expanded node has a backpointer which refers to the next node leading to the target, and each node knows the exact cost to the target. When the start node is the next node to be expanded, the algorithm is done, and the path to the goal can be found by simply following the backpointers.\nObstacle handling.\nWhen an obstruction is detected along the intended path, all the points that are affected are again placed on the OPEN list, this time marked RAISE. Before a RAISED node increases in cost, however, the algorithm checks its neighbors and examines whether it can reduce the node's cost. If not, the RAISE state is propagated to all of the nodes' descendants, that is, nodes which have backpointers to it. These nodes are then evaluated, and the RAISE state is passed on, forming a wave. 
When a RAISED node can be reduced, its backpointer is updated, and it passes the LOWER state to its neighbors. These waves of RAISE and LOWER states are the heart of D*.\nBy this point, a whole series of other points are prevented from being \"touched\" by the waves. The algorithm has therefore only worked on the points which are affected by the change in cost.\nWhen another deadlock occurs, it cannot be bypassed so elegantly. None of the points in the blocked region can find a new route to the destination via a neighbor, so they continue to propagate their cost increase. Only points outside of the channel, which can lead to the destination via a viable route, can be found. Two LOWER waves then develop, which expand outward and relabel the points previously marked as unreachable with new route information.\nPseudocode.\nwhile (!openList.isEmpty) {\n    point = openList.getFirst;\n    expand(point);\n}\nExpand.\nvoid expand(currentPoint) {\n    boolean isRaise = isRaise(currentPoint);\n    double cost;\n    for each (neighbor in currentPoint.getNeighbors) {\n        if (isRaise) {\n            if (neighbor.nextPoint == currentPoint) {\n                // The neighbor routes through this point: propagate the raised cost.\n                neighbor.setNextPointAndUpdateCost(currentPoint);\n                openList.add(neighbor);\n            } else {\n                cost = neighbor.calculateCostVia(currentPoint);\n                if (cost < neighbor.getCost) {\n                    // A cheaper route via this point may exist: re-examine this point later.\n                    currentPoint.setMinimumCostToCurrentCost;\n                    openList.add(currentPoint);\n                }\n            }\n        } else {\n            cost = neighbor.calculateCostVia(currentPoint);\n            if (cost < neighbor.getCost) {\n                // LOWER state: the neighbor is cheaper when routed via this point.\n                neighbor.setNextPointAndUpdateCost(currentPoint);\n                openList.add(neighbor);\n            }\n        }\n    }\n}\nCheck for raise.\nboolean isRaise(point) {\n    double cost;\n    if (point.getCurrentCost > point.getMinimumCost) {\n        for each (neighbor in point.getNeighbors) {\n            cost = point.calculateCostVia(neighbor);\n            if (cost < point.getCurrentCost) {\n                // A neighbor offers a cheaper route: reroute through it.\n                point.setNextPointAndUpdateCost(neighbor);\n            }\n        }\n    }\n    return point.getCurrentCost > point.getMinimumCost;\n}\nVariants.\nFocused D*.\nAs its name suggests, Focused D* is an extension of D* which uses a heuristic to focus the propagation of RAISE and LOWER toward the
robot. In this way, only the states that matter are updated, in the same way that A* only computes costs for some of the nodes.\nD* Lite.\nD* Lite is not based on the original D* or Focused D*, but implements the same behavior. It is simpler to understand and can be implemented in fewer lines of code, hence the name \"D* Lite\". Performance-wise, it is as good as or better than Focused D*. D* Lite is based on Lifelong Planning A*, which was introduced by Koenig and Likhachev a few years earlier.\nMinimum cost versus current cost.\nFor D*, it is important to distinguish between current and minimum costs. The former matters only at the time a point is collected from the OpenList, while the latter is critical because it sorts the OpenList. The point returned from the OpenList always has the lowest minimum cost, since it is the first entry of the OpenList.", "Automation-Control": 0.92067945, "Qwen2": "Yes"} {"id": "28381575", "revid": "1031034881", "url": "https://en.wikipedia.org/wiki?curid=28381575", "title": "Partial stroke testing", "text": "Partial stroke testing (or PST) is a technique used in a control system to allow the user to test a percentage of the possible failure modes of a shutdown valve without the need to physically close the valve. PST is used to assist in determining that the safety function will operate on demand. PST is most often used on high-integrity emergency shutdown valves (ESDVs) in applications where closing the valve has a high cost burden, yet proving the integrity of the valve is essential to maintaining a safe facility. In addition to ESDVs, PST is also used on high integrity pressure protection systems (HIPPS).
Partial stroke testing is not a replacement for the need to fully stroke valves, as proof testing is still a mandatory requirement.\nStandards.\nPartial stroke testing is an accepted petroleum industry standard technique and is also quantified in detail by bodies such as the International Electrotechnical Commission (IEC) and the International Society of Automation (ISA). The following are the standards appropriate to these bodies.\nThese standards define the requirements for safety-related systems and describe how to quantify the performance of PST systems.\nMeasuring safety performance.\nIEC 61508 adopts a safety life cycle approach to the management of plant safety. During the design phase of a safety system's life cycle, the required safety performance level is determined using techniques such as Markov analysis, FMEA, fault tree analysis and HAZOP. These techniques allow the user to determine the potential frequency and consequence of hazardous activities and to quantify the level of risk. A common method for this quantification is the safety integrity level (SIL). This is quantified from one to four, with level four demanding the greatest risk reduction.\nOnce the SIL is determined, it specifies the required performance level of the safety systems during the operational phase of the plant. The metric for measuring the performance of a safety function is called the average probability of failure on demand (or PFDavg), and this correlates to the SIL as follows.\nOne method of calculating the PFDavg for a basic safety function with no redundancy is using the formula\nWhere:\nThe proof test coverage (PTC) is a measure of how effective the partial stroke test is, and the higher the PTC, the greater the effect of the test.
These are summarised as follows.\nSafety benefits.\nGains can be made in the following areas by the use of PST.\nProduction benefits.\nThere are a number of areas where production efficiency can be improved by the successful implementation of a PST system.\nDrawbacks.\nThe main drawback of all PST systems is the increased probability of causing an accidental activation of the safety system, and thus a plant shutdown. This is operators' primary concern with PST systems, and for this reason many PST systems remain dormant after installation. Different techniques mitigate this issue in different manners, but all systems carry an inherent risk.\nIn addition, in some cases a PST cannot be performed due to the limitations inherent in the process or the valve being used. Further, as the PST introduces a disturbance into the process or system, it may not be appropriate for some processes or systems that are sensitive to disturbances.\nFinally, a PST cannot always differentiate between different faults or failures within the valve and actuator assembly, thus limiting the diagnostic capability.\nTechniques.\nThere are a number of different techniques available for partial stroke testing, and the selection of the most appropriate technique depends on the main benefits the operator is trying to gain.\nMechanical Jammers.\nMechanical jammers are devices inserted into the valve and actuator assembly that physically prevent the valve from moving past a certain point.
These are used in cases where accidentally shutting the valve would have severe consequences, or in any application where the end user prefers a mechanical device.\nTypical benefits of this type of device are as follows:\nHowever, opinions differ on whether these devices are suitable for functional safety systems, as the safety function is offline for the duration of the test.\nModern mechanical PST devices may be automated.\nExamples of this kind of device include direct interface products that mount between the valve and the actuator and may use cams fitted to the valve stem. \nOther methods include adjustable actuator end stops.\nPneumatic valve positioners.\nThe basic principle behind partial stroke testing is that the valve is moved to a predetermined position in order to determine the performance of the shutdown valve. This led to the adaptation of pneumatic positioners used on flow control valves for use in partial stroke testing. These systems are often suitable for use on shutdown valves up to and including SIL 3.\nThe main benefit of these systems is that positioners are common equipment on plants, and thus operators are familiar with their operation; however, the primary drawback is the increased risk of a spurious trip caused by the introduction of additional control components that are not normally used on on/off valves. These systems are, however, limited to use on pneumatically actuated valves.\nElectrical relay systems.\nThese systems use an electrical switch to de-energise the solenoid valve and use an electrical relay attached to the actuator to re-energise the solenoid coil when the desired PST point is reached.\nElectronic control systems.\nElectronic control systems use a configurable electronic module that connects between the supply from the ESD system and the solenoid valve.
In order to perform a test, the timer de-energises the solenoid valve to simulate a shutdown and re-energises the solenoid when the required degree of partial stroke is reached. These systems are fundamentally a miniature PLC dedicated to the testing of the valve.\nDue to their nature, these devices do not actually form part of the safety function and are therefore 100% fail safe. With the addition of a pressure sensor and/or a position sensor for feedback, timer systems are also capable of providing intelligent diagnostics in order to diagnose the performance of all components, including the valve, actuator and solenoid valves.\nIn addition, timers are capable of operating with any type of fluid power actuator and can also be used with subsea valves where the solenoid valve is located top-side.\nIntegrated solenoid valve systems.\nAnother technique is to embed the control electronics into a solenoid valve enclosure, removing the need for additional control boxes. In addition, there is no need to change the control schematic, as no dedicated components are required.", "Automation-Control": 0.6967855692, "Qwen2": "Yes"} {"id": "28414748", "revid": "15996738", "url": "https://en.wikipedia.org/wiki?curid=28414748", "title": "Web-guiding systems", "text": "Web-guiding systems are used in the converting industry to position flat materials, known as webs, before processing. They are typically positioned just before a critical stage on a converting machine. Each type of web guiding system uses a sensor to monitor the web position for lateral tracking, and each has an actuator to shift the running web mechanically back on course whenever the sensor detects movement away from the set path. Actuators may be pneumatic or hydraulic cylinders, or some kind of electromechanical device. Because the web may be fragile — particularly at its edge — non-contact sensors are used. These sensors may be pneumatic, photoelectric, ultrasonic, or infrared.
The system’s controls must put the output signals from the sensors into a form that can drive the actuator.\nWeb guiding systems work at high speed, constantly making small adjustments to maintain the position of the material. The latest systems use digital technology and touch screen operator interfaces to simplify setup. Web guiding systems are used on slitting machines, slitter rewinders, printing presses, and coating and laminating machines.\nHistory.\nIn 1939, Irwin Fife invented the first web guide in his garage in Oklahoma City, Oklahoma, solving a newspaper owner’s challenge of keeping paper aligned in his high-speed newspaper press.\nActive Guiding Systems.\nActive guiding systems are composed of a sensor, an actuator connected to a guide mechanism, and a controller. The sensor can be any detector that can reliably pick up the edge of a web. The most common types of sensors are pneumatic (works only with nonporous webs), optical (works well with opaque webs), ultrasonic (works with most materials), or paddle (thick webs). Recent developments have introduced to the industry a new sensor technology based on light scattering and spatial filtering that allows the use of an optical sensor to detect the edge of any material. The web must be flat (free of curl) and stable (free of flutter) through the edge sensor. For this and other reasons, the sensor is often placed near a roller. If two sensors are used, the web can be guided to the front edge, back edge, or center. Common active guide systems include the steering guide (remotely pivoted guide), displacement guide (offset-pivot guide), unwind guide, and rewind guide.\nTension Adjustment Challenge.\nTension adjustment is necessary due to several mechanical factors: oscillations caused by mechanical misalignments, differing inertial response (lag) of mechanical elements during web acceleration, out-of-round unwind and tension rolls, slipping through nip rolls, and over-aggressive web-guide correction.
Several technical process and control issues also affect tension: tension setpoint changes, phase offset on driven rolls, tension bleed from one zone to another, and thermal effects (contraction/expansion) as the substrate passes through various processes. It is impossible to eliminate all factors requiring tension adjustment. Variance in any one factor in a zone necessitates changes in tension control and web speed. Consequently, with coupled tension zone control, jitter is inevitable in a continuous web where the controllers cause a feedback loop.\nPrecise control of the system is essential. If the line speed of the web is reduced, the amount of lateral displacement error that can be controlled by the steering guide system also decreases. If the input error decreases, the lateral displacement error also becomes smaller. Lateral displacement can also be caused on the transported web by air blown from the dryer, and increasing the blowing frequency can reduce this displacement.", "Automation-Control": 0.7610448599, "Qwen2": "Yes"} {"id": "13970187", "revid": "45767519", "url": "https://en.wikipedia.org/wiki?curid=13970187", "title": "Heidenhain", "text": "Dr. Johannes Heidenhain GmbH is a privately owned enterprise located in Traunreut, Germany. Heidenhain manufactures numerical controls for machine tools, as well as mechatronic measuring devices for length and angle.\nIts linear and angle encoders are built for use in automated machines and systems, particularly in machine tools.\nHistory.\nThe company began as a metal etching factory founded in Berlin by Wilhelm Heidenhain in 1889 that manufactured templates, company plaques, product labels, and scales.\nIn 1928 Heidenhain invented the Metallur process. This lead-sulfide copying process made it possible for the first time to make exact copies of an original grating on a metal surface for industrial use.
By 1943, Heidenhain was producing linear scales with an accuracy of ±15 µm and circular scale disks with an accuracy of ±3 arcseconds.\nAfter World War II, in 1948, Dr. Johannes Heidenhain, a pupil of Otto Hahn, founded the present company in Traunreut.\nIts invention of the Diadur process enabled it to apply very fine structures of chromium on suitable substrates, such as glass.\nThe Diadur process was the basis in 1952 for adding optical position measuring devices for machine tools to the product program. These were followed in 1961 by photoelectrically scanned linear and angle encoders.\nIn 1968 Heidenhain manufactured its first digital readouts.\nThe first Heidenhain numerical control was launched in 1976.\nIn 1987, a linear encoder series operating on the principle of light interference was introduced. It permitted measuring steps as fine as one nanometer.\nAccording to Heidenhain, in 2006 the company had regional sales locations in 43 countries and employed about 7,000 people, 2,600 of whom worked in the main facility in Traunreut, Germany.\nBy the end of 2006 the company had manufactured about 10.5 million linear or angle encoders, 420,000 position displays and nearly 200,000 CNC controls.\nCritique of Activities in Russia.\nAfter the 2022 Russian invasion of Ukraine, the Chief Executive Leadership Institute of Yale University in the US criticized Heidenhain for still operating in Russia through a third party that was not publicly disclosed. Heidenhain products are used in the Russian arms industry.
He is known for his work in control theory and his current research interests are in theoretical foundations for complex networks in engineering, biology, and multiscale physics.\nEducation.\nHe earned a B.S. and an M.S. in electrical engineering from the Massachusetts Institute of Technology in 1977 and a Ph.D. in Mathematics from the University of California, Berkeley in 1984 with his thesis titled \"Matrix interpolation theory and optimal control\".\nWork.\nDoyle's early work was in the mathematics of robust control, linear-quadratic-Gaussian control robustness, (structured) singular value analysis, and H-infinity methods. He has co-authored books and software toolboxes, and a control analysis tool for high performance commercial and military aerospace systems, as well as other industrial systems.\nAwards.\nDoyle earned the IEEE W.R.G. Baker Prize Paper Award (1991), the IEEE Automatic Control Transactions Axelby Award twice, and the AACC Schuck award. He also has been awarded the AACC Donald P. Eckman Award, the 2004 IEEE Control Systems Award and the Centennial Outstanding Young Engineer Award.", "Automation-Control": 0.9082943201, "Qwen2": "Yes"} {"id": "5046884", "revid": "754619", "url": "https://en.wikipedia.org/wiki?curid=5046884", "title": "Lightweight software test automation", "text": "Lightweight software test automation is the process of creating and using relatively short and simple computer programs, called lightweight test harnesses, designed to test a software system. Lightweight test automation harnesses are not tied to a particular programming language but are most often implemented with the Java, Perl, Visual Basic .NET, and C# programming languages. Lightweight test automation harnesses are generally four pages of source code or less, and are generally written in four hours or less. 
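As a concrete illustration, here is a minimal harness sketch in Python (the technique is not tied to any particular language, as noted above; the system under test, the `add` function, and the test-case data are invented for illustration, and a real harness would typically read cases from a data file and write results to a log file):

```python
def add(a, b):
    """Stand-in for the system under test (illustrative only)."""
    return a + b

# Test-case format: (case id, inputs, expected result).
test_cases = [
    ("0001", (2, 3), 5),
    ("0002", (-1, 1), 0),
    ("0003", (0, 0), 0),
]

def run_harness(cases):
    """Invoke the system under test on each case and record a verdict."""
    results = []
    for case_id, inputs, expected in cases:
        actual = add(*inputs)
        verdict = "Pass" if actual == expected else "FAIL"
        results.append((case_id, verdict))
        print(f"Case {case_id}: expected={expected} actual={actual} -> {verdict}")
    return results

results = run_harness(test_cases)
num_fail = sum(1 for _, verdict in results if verdict == "FAIL")
print(f"{len(results)} cases run, {num_fail} failures")
```

The whole harness fits well under the "four pages of source code" bound mentioned above, which is the point: the entire mechanism is visible and trivially modifiable by the test engineer.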
Lightweight test automation is often associated with Agile software development methodology.\nThe three major alternatives to the use of lightweight software test automation are commercial test automation frameworks, Open Source test automation frameworks, and heavyweight test automation. The primary disadvantage of lightweight test automation is manageability. Because lightweight automation is relatively quick and easy to implement, a test effort can be overwhelmed with harness programs, test case data files, test result files, and so on. However, lightweight test automation has significant advantages. Compared with commercial frameworks, lightweight automation is less expensive in initial cost and is more flexible. Compared with Open Source frameworks, lightweight automation is more stable because there are fewer updates and external dependencies. Compared with heavyweight test automation, lightweight automation is quicker to implement and modify. Lightweight test automation is generally used to complement, not replace these alternative approaches.\nLightweight test automation is most useful for regression testing, where the intention is to verify that new source code added to the system under test has not created any new software failures. Lightweight test automation may be used for other areas of software testing such as performance testing, stress testing, load testing, security testing, code coverage analysis, mutation testing, and so on. The most widely published proponent of the use of lightweight software test automation is Dr. James D. McCaffrey.", "Automation-Control": 0.7777224183, "Qwen2": "Yes"} {"id": "5167489", "revid": "21436738", "url": "https://en.wikipedia.org/wiki?curid=5167489", "title": "Flexible manufacturing system", "text": "A flexible manufacturing system (FMS) is a manufacturing system in which there is some amount of flexibility that allows the system to react in case of changes, whether predicted or unpredicted. 
\nThis flexibility is generally considered to fall into two categories, which both contain numerous subcategories.\nMost flexible manufacturing systems consist of three main systems:\nThe main advantage of a flexible manufacturing system is its high flexibility in managing manufacturing resources like time and effort in order to manufacture a new product.\nThe best application of a flexible manufacturing system is found in the production of small sets of products like those from a mass production.\nFlexibility.\nFlexibility in manufacturing means the ability to deal with slightly or greatly mixed parts, to allow variation in parts assembly and variations in process sequence, to change the production volume, and to change the design of a certain product being manufactured.\nIndustrial FMS communication.\nAn industrial flexible manufacturing system consists of robots, computer-controlled machines, computer numerical control (CNC) machines, instrumentation devices, computers, sensors, and other stand-alone systems such as inspection machines. The use of robots in the production segment of manufacturing industries promises a variety of benefits ranging from high utilization to high productivity. Each robotic cell or node is located along a material handling system such as a conveyor or automated guided vehicle. The production of each part or work-piece requires a different combination of manufacturing nodes. The movement of parts from one node to another is done through the material handling system. At the end of part processing, the finished parts are routed to an automatic inspection node and subsequently unloaded from the flexible manufacturing system.\nThe FMS data traffic consists of large files and short messages, and mostly comes from nodes, devices and instruments. The message size ranges from a few bytes to several hundred bytes.
Executive software and other data, for example, are large files, while messages for machining data, instrument-to-instrument communications, status monitoring, and data reporting are transmitted in small sizes.\nThere is also some variation in response time. Large program files from a main computer usually take about 60 seconds to be downloaded into each instrument or node at the beginning of FMS operation. Messages for instrument data need to be sent periodically with a deterministic time delay. Other types of messages, used for emergency reporting, are quite short and must be transmitted and received with an almost instantaneous response.\nThe demand for a reliable FMS protocol that supports all the FMS data characteristics is now urgent. The existing IEEE standard protocols do not fully satisfy the real-time communication requirements in this environment. The delay of CSMA/CD is unbounded as the number of nodes increases, due to message collisions. Token bus has a deterministic message delay, but it does not support the prioritized access scheme which is needed in FMS communications. Token Ring provides prioritized access and has a low message delay; however, its data transmission is unreliable. A single node failure, which may occur quite often in an FMS, causes transmission errors for messages passing through that node. In addition, the topology of Token Ring results in high wiring installation costs.\nAn FMS communication design that supports real-time communication with bounded message delay and reacts promptly to any emergency signal is needed. Because machine failures and malfunctions due to heat, dust, and electromagnetic interference are common, a prioritized mechanism and immediate transmission of emergency messages are needed so that a suitable recovery procedure can be applied.
A modification of standard Token Bus to implement a prioritized access scheme was proposed to allow the transmission of short and periodic messages with a lower delay than that of long messages.", "Automation-Control": 0.9744982719, "Qwen2": "Yes"}
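The prioritized access idea argued for above can be sketched with a toy transmit queue in which emergency messages always preempt periodic instrument data, which in turn precede bulk program-file transfers. The three traffic classes, their numeric priorities, and the message payloads below are illustrative assumptions, not part of any standard protocol.

```python
import heapq
import itertools

# Illustrative FMS traffic classes: lower number = higher priority.
EMERGENCY, PERIODIC, BULK = 0, 1, 2

class FmsQueue:
    """Toy prioritized transmit queue: emergency messages are dequeued
    before periodic instrument data, which precede bulk file transfers.
    FIFO order is preserved within each priority class."""

    def __init__(self):
        self._heap = []
        self._seq = itertools.count()  # tie-breaker keeps FIFO per class

    def send(self, priority, payload):
        heapq.heappush(self._heap, (priority, next(self._seq), payload))

    def next_message(self):
        return heapq.heappop(self._heap)[2]

q = FmsQueue()
q.send(BULK, "part-program download")
q.send(PERIODIC, "spindle status")
q.send(EMERGENCY, "e-stop at node 7")
q.send(PERIODIC, "tool wear report")

order = [q.next_message() for _ in range(4)]
```

Even though the bulk transfer was queued first, the emergency message is transmitted immediately, followed by the periodic messages in arrival order; this is the bounded-delay behavior for short, urgent traffic that the section above calls for.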